Prompt Engineering
22 Jan 2025
Welcome back!
Let’s discuss the whatiknow contributions!
\(\langle\) pause to view whatiknow doc \(\rangle\)
People want to help you!
Here’s the whole course on one slide! (Don’t worry that it’s too small to read; it’s pretty much useless anyway.)
Zheng et al. (2024) argue persuasively that personas do not improve performance!
The paper uses a lot of technical terms. Do you know them? Do you want to spend time on them in class or just ask Google or an LLM?
sgpt "What is perplexity?"
Perplexity is a measurement used in natural language processing (NLP) to evaluate the performance of a language model. It quantifies how well a probability distribution or model predicts a sample. Specifically, perplexity is the exponentiation of the average negative log-likelihood of a sequence. A lower perplexity indicates a better predictive model, as it suggests the model is more confident in its predictions. In simpler terms, perplexity measures how “surprised” a model is by the actual outcomes, with lower values indicating less surprise and thus better performance.
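In symbols (a standard formulation matching the definition above, not quoted from the sgpt output): for a token sequence \(x_1, \dots, x_N\) under a model \(p\),

\[
\mathrm{PPL}(x_1, \dots, x_N) = \exp\!\left( -\frac{1}{N} \sum_{i=1}^{N} \log p(x_i \mid x_{<i}) \right)
\]

Equivalently, perplexity is the geometric mean of the inverse per-token probabilities, so a perplexity of 20 means the model is, on average, about as uncertain as a uniform choice among 20 tokens.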
sgpt "What is effect size?"
Effect size is a quantitative measure of the magnitude of a phenomenon. It is used in statistics to indicate the strength of a relationship between variables or the extent of an experimental effect. Unlike \(p\)-values, which only tell you whether an effect exists, effect size provides information about the size of the effect, making it a crucial component in understanding the practical significance of research findings. Common measures of effect size include Cohen’s \(d\), Pearson’s \(r\), and odds ratios. Effect size is important for meta-analyses and helps in comparing results across different studies.
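As a concrete instance (the standard textbook formula, not taken from the sgpt output): Cohen’s \(d\) for two groups with means \(\bar{x}_1, \bar{x}_2\), sample sizes \(n_1, n_2\), and a pooled standard deviation \(s_p\) is

\[
d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
\]

By Cohen’s common rule of thumb, \(d \approx 0.2\) is a small effect, \(0.5\) medium, and \(0.8\) large.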
END
This slideshow was produced using Quarto
Fonts are Roboto Light, Roboto Bold, and JetBrains Mono Nerd Font