Prompt Engineering
01 Apr 2026
Context engineering
A skill is a package, fed to Claude as a zip file containing a folder structure. At a minimum, it contains a folder named for the skill with a file inside that folder called SKILL.md. Everything else is optional. Note that the SKILL.md file must be properly formatted as a markdown file with YAML frontmatter containing the skill’s name and description.
For example, it might contain a subfolder called scripts with Python or Bash scripts. It might contain additional context in a markdown file. It might contain data in a .csv file. None of this is read unless the prompt, matched against the skill's name or description field, triggers it, which saves tokens.
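As a sketch of what the required file looks like, a minimal SKILL.md might read as follows (the skill name, description, and script path here are illustrative, not taken from any real package):

```markdown
---
name: csv-analyzer
description: Summarize and plot CSV files. Use when the user asks to analyze tabular data.
---

# CSV Analyzer

When this skill is triggered, run scripts/summarize.py on the target
file and report the column types, ranges, and any missing values.
```

Only the name and description are matched against the prompt up front; the body and any bundled scripts are read on demand.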
| Skill | Description |
|---|---|
| /update-config | Configure Claude Code settings — hooks, permissions, env vars, and settings.json changes |
| /keybindings-help | Customize keyboard shortcuts and rebind keys in ~/.claude/keybindings.json |
| /simplify | Review changed code for reuse, quality, and efficiency, then fix issues |
| /loop | Run a prompt or slash command on a recurring interval (e.g., /loop 5m /foo) |
| /schedule | Create and manage scheduled remote agents that run on a cron schedule |
I wanted more prepackaged skills to try, so I cloned the Anthropic Skills Repo: https://github.com/anthropics/skills
I asked Claude where to put it, and it suggested ~/.anthropic/skills/.
I checked the Claude documentation, which instead suggested ~/.claude/skills/.
It offered to make the directory and put the files there, by the way. This raises a common trade-off with LLMs: a one-off task is often faster to do yourself than to delegate to the model. Where LLMs (and Skills in particular) excel is automating repeated tasks.
Skill Creator is perhaps the most popular of Anthropic's packaged skills, but I wanted something related to homework D, so I looked for a different repo in the following directory of skills.
A list of (allegedly) curated skills can be found here: https://github.com/ComposioHQ/awesome-claude-skills?tab=readme-ov-file
I selected the CSV skill to try, since the data for hw D is in CSV format.
First I tried a built-in example, but it was difficult to assess the results because I didn't know the data.
The graphics it generated looked ridiculous on the face of it, but I needed to look at the data to be sure.
Next, I tried looking at the data with VisiData, a CLI tool I typically use for examining CSV files. Claude was helpful here: the data had a three-letter month and a four-digit year in separate columns, and Claude wrote a Python script to convert those into standard dates (the first of each month).
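The conversion itself is a few lines of Python. This is a sketch of the idea rather than Claude's actual script, and the column names in the usage note are assumptions:

```python
from datetime import datetime


def to_first_of_month(month_abbr: str, year: int):
    """Convert e.g. ("Mar", 2024) to the date 2024-03-01.

    %b parses a three-letter month abbreviation, and strptime
    defaults the day to 1, i.e. the first of the month.
    """
    return datetime.strptime(f"{month_abbr} {year}", "%b %Y").date()
```

Applied column-wise with pandas, this would look something like `df["date"] = df.apply(lambda r: to_first_of_month(r["Month"], r["Year"]), axis=1)`, where `Month` and `Year` stand in for whatever the file actually calls those columns.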
\(\langle\) Pause to view Visidata \(\rangle\)
Next I tried the Ames data.
It wrote a fairly good Python script to analyze the data but again produced ridiculous graphics.
\(\langle\) Pause to view Ames analysis \(\rangle\)
Can we fix these? Yes, if we know some Python, we can tell it what graphics to use. Even without knowing Python, we could limit the correlation heatmap to, say, the four most and least correlated variables.
On the other hand, if we know Python, it may be better (not faster as a one-off but better for routine use) to create our own Skill and Python script.
This Hacker News thread discusses Claude Code in general, with an emphasis on Claude Skills: https://news.ycombinator.com/item?id=45786738
Points from that discussion:

- Claude Code uses head and tail to look at the beginnings and ends of files rather than loading entire files.
- It can treat tests/test_util.py differently from src/core_logic/test_util.py without having to examine the actual files.
- For tasks involving tens of minutes to hours of continuous work, several techniques exist to cope with the fact that context windows will definitely be exceeded.
This slideshow was produced using Quarto.
Fonts are Roboto, Roboto Light, and Victor Mono Nerd Font