When English Became a Programming Language: The Truth About Prompting and the End of Prompt Engineers
Introduction: The Era When English Became a Programming Language
“The best programming language right now is English.”
This phrase circulates like a joke in tech circles, but it is more than humor: it captures something essential about our era. In July 2025, Google’s Gemini Deep Think winning a gold medal at the International Mathematical Olympiad (IMO) made the point concrete. Gemini produced complete proofs written in natural language, rather than in a formal proof language, within the 4.5-hour time limit.
Traditional programming demanded strict grammar and structure. Python’s indentation, JavaScript’s braces, SQL’s syntax rules—all were formal languages for communicating with computers. But now we can give commands to AI in natural language and have it perform complex tasks. This is the revolution of prompting.
Yet an interesting paradox emerges amid these changes. While prompting gains attention as a core technology of the AI era, the profession of prompt engineer is rapidly disappearing. What does this mean?
This post comprehensively analyzes why prompting is called a new language, various techniques, and the realistic limitations and possibilities of this technology.
Definition and Evolution of Prompting
Prompting: Grammar of a New Language
Prompting is the technique of designing and optimizing inputs to communicate effectively with large language models (LLMs). It’s not simply asking questions, but providing structured commands so AI can think and respond in desired directions.
If English is the new programming language, prompting is its grammar and rhetoric. Our ability to structure thoughts in English, provide context, and give clear instructions determines the efficiency of AI collaboration. It’s like asking a librarian for the right book in a vast library—how you construct your question completely changes the quality of answers you receive.
The implications of this change are profound. Coding is no longer the exclusive domain of specific groups. Anyone who can think clearly and express themselves in English can perform complex tasks through AI. In this context, prompting has become more than a simple technology—it’s a new literacy.
Historical Evolution of Prompting: From Simple Dialogue to Advanced Reasoning
Prompting’s development parallels the evolution of AI models. Early GPT models supported only simple question-and-answer exchanges; the introduction of Chain-of-Thought (CoT) prompting in 2022 gave models the ability to think step-by-step.
2024 saw more sophisticated techniques emerge. Role Prompting, Self-Consistency, and ReAct prompting were developed to maximize AI’s reasoning capabilities.
Then in July 2025 came what could be called the culmination of these techniques: Google’s Gemini Deep Think won gold at the International Mathematical Olympiad (IMO). This was not just a display of AI’s mathematical prowess, but a historic event demonstrating how powerful sophisticated prompting can be.
Now prompting has evolved beyond a single technology to become a new language between AI and humans. Companies report that organizations with systematic prompting processes show 20-30% better performance than those relying on ad-hoc optimization. But simultaneously, an interesting paradox emerges: as prompting’s importance grows, the need for specialized prompt engineers actually decreases.
So how is the grammar of this new language structured? What techniques should we know for effective prompting?
In-Depth Analysis of Major Prompting Techniques
1. Zero-Shot vs Few-Shot vs One-Shot Prompting
Zero-Shot Prompting is a technique that performs tasks without any examples. It’s the most basic yet powerful method, relying solely on the model’s inherent knowledge.
Advantages:
- Fast and intuitive
- No need to prepare examples
- Applicable to a wide range of tasks
Disadvantages:
- Lack of accuracy in complex tasks
- Difficulty reflecting domain-specific requirements
- Inconsistent results
Few-Shot Prompting provides two to five examples from which the model learns the pattern. This is very effective at helping the model understand context and generate the desired results.
Advantages:
- High accuracy and consistency
- Domain-specific specialization possible
- Excellent performance even in complex tasks
Disadvantages:
- Time required to prepare examples
- Wrong examples can cause bias
- Increased token usage
One-Shot Prompting provides only one example as a compromise, balancing efficiency and performance.
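The structural difference between these variants comes down to how many labeled examples precede the actual query. Below is a minimal sketch in Python; the sentiment-labeling task and the `build_prompt` helper are illustrative assumptions, not part of any specific API.

```python
# Sketch: how zero-shot, one-shot, and few-shot prompts differ in structure.
# The task (sentiment labeling) and build_prompt are illustrative only.

def build_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt: instruction, optional labeled examples, then the query."""
    parts = [instruction]
    for text, label in examples:          # zero examples -> zero-shot
        parts.append(f"Text: {text}\nSentiment: {label}")
    parts.append(f"Text: {query}\nSentiment:")
    return "\n\n".join(parts)

instruction = "Classify the sentiment of the text as Positive or Negative."
examples = [
    ("The battery life is fantastic.", "Positive"),
    ("The screen cracked after one day.", "Negative"),
]

zero_shot = build_prompt(instruction, [], "I love this keyboard.")
one_shot = build_prompt(instruction, examples[:1], "I love this keyboard.")
few_shot = build_prompt(instruction, examples, "I love this keyboard.")

print(few_shot)
```

Note the trade-off the lists above describe playing out directly: each added example makes the expected output format less ambiguous, but also lengthens the prompt and therefore increases token usage.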
2. Chain-of-Thought (CoT) Prompting
CoT prompting is a technique introduced by Google Research in 2022 that guides AI to think step-by-step. It has limitations, but remains useful in many situations.
Effective Application Areas:
- Mathematical problem solving
- Tasks requiring logical reasoning
- Complex analytical tasks
Limitations:
- Performance degradation in pattern-based learning
- Side effects from increased contextual distance
- Over-dependence on specific domains
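In practice, CoT is triggered either by including a worked example whose answer spells out intermediate reasoning, or by appending a cue like “Let’s think step by step.” A minimal sketch, with an illustrative arithmetic task:

```python
# Sketch: a standard few-shot prompt vs. a Chain-of-Thought prompt.
# The arithmetic examples are illustrative only.

standard_prompt = (
    "Q: A shop has 23 apples and sells 9. How many are left?\n"
    "A: 14\n\n"
    "Q: A train has 120 seats and 45 are taken. How many are free?\n"
    "A:"
)

cot_prompt = (
    "Q: A shop has 23 apples and sells 9. How many are left?\n"
    # The worked example exposes intermediate reasoning, not just the answer.
    "A: The shop starts with 23 apples. Selling 9 leaves 23 - 9 = 14. "
    "The answer is 14.\n\n"
    "Q: A train has 120 seats and 45 are taken. How many are free?\n"
    # Zero-shot CoT cue, nudging the model to reason before answering.
    "A: Let's think step by step."
)

print(cot_prompt)
```

The only difference is in the prompt text; no model or API changes are required, which is part of why the technique spread so quickly.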
3. Role-Based Prompting (Role/Persona Prompting)
This technique assigns specific roles or personas to AI. When you specify a role like “You are an experienced marketing expert,” it provides answers from that perspective.
According to recent research, role-based prompting is a double-edged sword. It improves performance in certain tasks but can degrade performance in others. The key is matching appropriate roles with task characteristics.
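Mechanically, role prompting usually means placing the persona in the system message of the chat-style message list most LLM APIs accept. A minimal sketch; the persona wording and question are illustrative assumptions:

```python
# Sketch: role/persona prompting via a system message in the common
# chat-style message format. Persona text and question are illustrative.

def with_persona(persona: str, question: str) -> list[dict]:
    """Prepend a persona-setting system message to the user's question."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ]

messages = with_persona(
    "You are an experienced marketing expert. Answer from that perspective, "
    "noting concrete campaign trade-offs where relevant.",
    "How should a small bakery promote a new product line?",
)

print(messages[0]["content"])
```

Swapping the persona while holding the question fixed is also a cheap way to test the “double-edged sword” effect described above on your own tasks.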
4. Self-Consistency Prompting
This technique generates multiple reasoning paths for the same problem, then selects the most consistent answer. This significantly improves AI reliability.
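The aggregation step is simple majority voting over the final answers of several sampled reasoning paths. A minimal sketch, with the model calls stubbed out by a fixed list of sampled answers:

```python
# Sketch: self-consistency as majority voting over sampled answers.
# In practice each sample would come from a separate LLM call at
# temperature > 0; here they are stubbed with a fixed list.
from collections import Counter

def majority_answer(sampled_answers: list[str]) -> str:
    """Return the most frequent final answer across sampled reasoning paths."""
    counts = Counter(sampled_answers)
    return counts.most_common(1)[0][0]

# Stubbed final answers extracted from five sampled reasoning paths.
samples = ["14", "14", "15", "14", "13"]
final = majority_answer(samples)
print(final)
```

The intuition: individual reasoning paths can go wrong in different ways, but correct paths tend to converge on the same answer, so the mode is more reliable than any single sample.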
5. ReAct (Reasoning + Acting) Prompting
A technique combining reasoning and action, allowing AI to solve problems step-by-step while utilizing external tools. It shows particularly strong performance in research or complex analytical tasks.
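The core of ReAct is a loop of Thought → Action → Observation: the model emits a thought and an action, an external tool executes the action, and the observation is fed back until the model produces a final answer. In the sketch below everything is stubbed (the lookup tool, the scripted “model” turns, and the stop condition are illustrative assumptions, not a real agent framework):

```python
# Sketch: the ReAct Thought/Action/Observation loop with all parts stubbed.
# A real system would replace scripted_turns with live LLM outputs and
# lookup() with an actual tool such as a search API.

KNOWLEDGE = {"capital of France": "Paris"}  # stub external tool data

def lookup(query: str) -> str:
    """Stub tool: answer a query from the toy knowledge base."""
    return KNOWLEDGE.get(query, "no result")

# Scripted turns standing in for real model outputs.
scripted_turns = [
    "Thought: I need to find the capital of France.\nAction: lookup[capital of France]",
    "Thought: The observation gives the answer.\nFinal Answer: Paris",
]

transcript = []
for turn in scripted_turns:
    transcript.append(turn)
    if "Action: lookup[" in turn:
        # Parse the action, run the tool, and feed the observation back.
        query = turn.split("Action: lookup[")[1].rstrip("]")
        transcript.append(f"Observation: {lookup(query)}")

print("\n".join(transcript))
```

The interleaving is what distinguishes ReAct from plain CoT: reasoning alone cannot fetch facts the model lacks, while tool calls alone lack a plan for which facts to fetch.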
While these various prompting techniques are developing, an important question arises. As these advanced prompting techniques evolve, are specialized prompt engineers still necessary? If AI is finding optimal prompting on its own, as shown by Gemini’s IMO gold medal, what is the role of human prompt engineers?
The Paradox of Prompt Engineers: A Profession Disappearing as Technology Advances
From Golden Age to Decline: 18 Months of Drama
As we’ve seen, prompting techniques are becoming increasingly sophisticated and powerful. Gemini’s IMO gold medal was a pinnacle example. Yet paradoxically, as prompting technology advances, the profession of prompt engineer is disappearing.
Prompt engineering was a highly sought-after profession through 2024. According to Glassdoor, the average salary was $136,141, with Google offering up to $279,000 and Meta up to $296,000. It was perceived as a high-income profession accessible without a technical background.
But by 2025, the situation had changed dramatically. According to Hannah Calhoon, a VP at Indeed, prompt engineer job postings decreased significantly. Related searches, which surged from 2 per million in January 2023 to 144 per million in April, have since plateaued at around 20-30 per million. A dramatic change in just 18 months.
Causes of Professional Decline
1. Automated Prompt Optimization by AI. AI systems gained the ability to optimize prompts themselves, reducing the need for human intervention. TalentGenius CEO Malcolm Frank explained: “AI systems now self-optimize prompts, reducing the need for specialized human input.”
2. Integration into Existing Roles. Prompting skills have been absorbed into existing jobs such as software developer, data analyst, and marketer. Rather than a separate specialty, prompting became basic literacy required across many roles.
3. Increased Model Maturity. The latest LLMs understand far more intuitive, natural instructions and can generate excellent results without complex prompting techniques.
Critical Analysis: Generalists vs. Specialists
Critically analyzing the sustainability of prompt engineering as a profession reveals several key problems.
Problem 1: Temporary Nature of the Skill. Prompting is likely a temporary solution compensating for current AI limitations. As AI advances further, natural communication will become possible without complex prompting.
Problem 2: Importance of Domain Knowledge. Effective prompting requires a deep understanding of the relevant field. Knowing prompting techniques alone rarely yields meaningful results: to solve math problems, you need to understand mathematics; to code, you need programming knowledge.
Problem 3: Limited Scope for Specialization. Prompting skills alone cannot create independent value, much as “Google search expert” does not exist as a profession.
Realistic Alternative: Integrated Capability Development
While prompting is certainly a valuable skill, it’s better to combine it with other capabilities rather than make it a standalone specialty.
Effective Approaches:
- Domain Expertise + Prompting: Marketers learning prompting to utilize AI tools
- Technical Background + Prompting: Developers building AI-integrated systems
- Research Capabilities + Prompting: Researchers using AI to improve analytical efficiency
This integrated approach is the sustainable and practical way to create value.
Fundamental Reasons Why Prompting Cannot Be an Independent Profession
The Paradox of Universal Skills
Prompting’s greatest strength—its universality—is paradoxically why it can’t become an independent specialized profession. As Gemini’s IMO performance showed, the same prompting principles can apply to various models, suggesting this technology is closer to basic literacy than special expertise.
It’s like how “Google search expert” or “email writing expert” don’t exist as professions: important and useful skills, but hard to build independent value on.
Limitations and Ethical Considerations of Prompting
Technical Limitations
As the Chain-of-Thought discussion above showed, prompting is not a universal remedy. It shows limitations especially in pattern-based learning and tasks requiring generalization.
Key Limitations:
- Over-optimization Risk: Excessive dependence on specific prompts
- Lack of Consistency: Same prompts can produce different results
- Bias Amplification: Wrong prompts can strengthen AI biases
Ethical Considerations
There are also ethical issues to consider in prompting processes.
Core Issues:
- Information Manipulation: Intentionally guiding biased responses through prompting
- Responsibility: Accountability for incorrect information generated by AI
- Transparency: Whether to disclose prompting processes
Ethical guidelines for prompting use are needed to address these issues.
So how will prompting technology develop in the future? And what impact will it have on our society?
Future Prospects and Development Directions
Technical Development Directions
1. Automated Prompting. AI that generates optimal prompts on its own is under active development, allowing effective results with minimal human intervention.
2. Multimodal Prompting. Prompting that uses not just text but images, voice, and video is advancing, enabling richer and more precise communication.
3. Personalized Prompting. Systems that learn user preferences and work patterns to tailor prompting are emerging.
Industry Application Prospects
Education:
- Personalized learning support
- Automated assessment and feedback systems
- Creative thinking facilitation tools
Healthcare:
- Diagnostic assistance systems
- Medical professional education and training
- Patient consultation support
Research:
- Automated literature reviews
- Experiment design optimization
- Data analysis and interpretation
Social Impact
The spread of prompting technology will bring significant changes to society as a whole.
Positive Impacts:
- Improved knowledge accessibility
- Democratization of creative work
- Increased work efficiency
Negative Impacts:
- Deepening digital divide
- Degradation of uniquely human capabilities
- Job displacement risks
Social discussion and institutional preparation are needed to prepare for these changes.
Conclusion: English Has Become a Programming Language, But Prompting is Transitional
Starting with “The best programming language right now is English,” we’ve reached an interesting paradox. Prompting is certainly a core competency of the current AI era. Gemini’s IMO gold medal case shows how powerful sophisticated prompting can be.
But simultaneously, the rapid decline of prompt engineering as a profession sends a clear message. Prompting is likely a temporary bridge between AI and humans. As AI advances further, natural communication will become possible without complex prompting.
Therefore, the wise approach is not making prompting a standalone specialty, but combining it with your domain expertise. Marketers should combine marketing knowledge with prompting, developers should integrate development capabilities with prompting, and researchers should merge research methodologies with prompting.
English has indeed become a new programming language. But that doesn’t mean we need prompting specialists, just as Google search is an important skill yet “Google search expert” doesn’t exist as a profession.
Prompting is certainly a valuable skill at this point. But rather than viewing it as independent expertise, it’s better to recognize it as a tool amplifying other capabilities. Regardless of how AI communication methods change, what ultimately matters is what you create. Prompting is just a means to help that process, and we must remember that true value still comes from human ideas, creativity, and domain expertise.
References
- Coursera. (2025). “Prompt Engineering Salary: A 2025 Guide.”
- Fast Company. (2025). “‘AI Is Already Eating Its Own’: Prompt Engineering Is Quickly Going Extinct.”
- Google DeepMind. (2025). “Advanced version of Gemini with Deep Think officially achieves gold-medal standard at the International Mathematical Olympiad.”
- Indeed Hiring Lab. (2025). “AI Job Trends Report.”
- Refontelearning. (2025). “Prompt Engineer Salary Guide 2025: How to Earn $95K-$270K+ in AI Prompt Roles.”
- SolutionsArchitecture. (2025). “Few Shot Prompting AI Architecture: A Comprehensive Guide.”