Working with large language models can leave you feeling fatigued.
---
### Summary
The article, dated **15 Mar 2026**, recounts the author's experience working with large language models such as Claude and Codex over long sessions. The writer identifies several frustrations:
- *“Dumbing down”* the model to save money,
- **Context rot** (the model losing earlier prompts), and
- A bloated **Codex/Claude harness** that slows progress.
Mental fatigue also degrades the author's own prompts; once roughly 30% of the context is consumed, key details can drop out. Debugging large files creates a **slow feedback loop**: each tweak requires re-parsing and takes about **10 minutes**, with only ~2% compaction achieved by the end.
The author warns against “doom‑loop psychosis”: when prompts become rushed or interrupted, it’s a signal to take a break rather than continue. A *clear, confident* prompt that evokes excitement (“the AI will crush it”) is more likely to succeed; uncertainty and impatience predict failure.
To accelerate feedback, the writer now asks the LLM to reproduce specific **failure cases** within a **sub‑5‑minute loop**, effectively applying a test‑driven development (TDD) style with the model as a test harness. By giving explicit success criteria (“reproduce failure in < 5 min, optimize code path, omit unnecessary pieces”), the LLM can generate faster feedback cycles and consume less context.
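The TDD-style loop described above can be sketched as a small script: isolate the failure as a single fast test, then assert both that the failure reproduces and that the repro stays well under the five-minute budget. The function names and the bug itself are hypothetical stand-ins, not from the article.

```python
import time

def normalize_scores(scores):
    """Hypothetical stand-in for the code path under debug."""
    total = sum(scores)
    return [s / total for s in scores]  # known bug: ZeroDivisionError when total == 0

def reproduce_failure():
    """Reproduce the known failure case in an isolated, timed loop."""
    start = time.monotonic()
    try:
        normalize_scores([0, 0, 0])
        failed = False
    except ZeroDivisionError:
        failed = True
    elapsed = time.monotonic() - start
    return failed, elapsed

if __name__ == "__main__":
    failed, elapsed = reproduce_failure()
    assert failed, "expected the known failure to reproduce"
    assert elapsed < 300, "repro loop must stay under 5 minutes"
    print(f"failure reproduced in {elapsed:.3f}s")
```

Once the repro is this cheap, each LLM-proposed fix can be verified in seconds instead of re-parsing a large file, which is the point of the sub-5-minute loop.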
The author frames exhaustion from working with LLMs as a “skill issue” and highlights the risk of **cognitive outsourcing**—letting the AI fill undefined requirements without proper human thought. When progress stalls or context grows too quickly, the solution is to **re‑iterate faster**, reduce context usage, and save debugging time.