I listen to startup podcasts so you don’t have to.

Prompting That Works in 2025 and Beyond

Strategy
June 19, 2025
Sander Schulhoff reveals top 2025 AI prompting tactics and why old strategies fail.
Topics discussed in the episode:
-
Is AI alignment a significant concern for future AI development?
-
How can providing additional context improve AI model performance?
-
What is ensemble prompting and how can it improve AI outputs?
-
Can prompt injection be fully solved in AI products?
-
What defense strategies against prompt injection don't work?
-
How does prompt injection pose challenges for AI products?
-
Should you still use chain-of-thought prompting with modern AI models?
-
Does role prompting improve AI performance on accuracy-based tasks?
-
What is the most effective prompt engineering technique for improving AI model output?
-
Is prompt engineering still relevant for building AI products?

Is AI alignment a significant concern for future AI development?

Opening: Sander shares his perspective on AI alignment and the potential risks associated with misaligned AI agents. Quote:

"But more recently, I have become a believer in this misalignment problem... and that is one of the things that has me super concerned."

Takeaway:
  • AI alignment is a critical challenge.
  • Misaligned AI agents can pose significant risks.
  • Consider alignment issues in AI product development.

How can providing additional context improve AI model performance?

Opening: Sander emphasizes the importance of including additional information in prompts to improve AI outputs. Quote:

"Including a lot of information just in general about your task is often very helpful."

Takeaway:
  • Provide detailed context in your prompts.
  • Additional information aids AI understanding.
  • Improves accuracy and relevance of outputs.

What is ensemble prompting and how can it improve AI outputs?

Opening: Sander introduces ensemble prompting as an advanced technique to enhance AI performance. Quote:

"Ensembling techniques will take a problem, and then you'll have like multiple different prompts that go and solve the exact same problem... and then you take the answer that comes back most commonly."

Takeaway:
  • Use multiple prompts or models on the same task.
  • Combine outputs to improve accuracy.
  • Ensemble prompting can enhance AI product reliability.

Can prompt injection be fully solved in AI products?

Opening: Sander discusses the limitations in completely preventing prompt injection attacks. Quote:

"It is not a solvable problem... It has to be the AI research labs... It has to be innovations in model architectures."

Takeaway:
  • Prompt injection can't be fully eliminated.
  • Solutions require innovations from AI research labs.
  • Focus on mitigation and ongoing vigilance.

What defense strategies against prompt injection don't work?

Opening: Sander highlights common but ineffective defenses against prompt injection in AI products. Quote:

"The most common technique by far that is used to try to prevent prompt injection is improving your prompt... This does not work, this does not work at all."

Takeaway:
  • Simply refining prompts doesn't prevent prompt injection.
  • Guardrails and keyword blocking are insufficient.
  • AI teams need more robust security measures.

How does prompt injection pose challenges for AI products?

Opening: Sander explains the risks of prompt injection and its impact on AI product security. Quote:

"Getting AI's to do or say bad things... It is not a solvable problem. That's one of the things that makes it so different from classical security."

Takeaway:
  • Prompt injection can lead AI to produce harmful outputs.
  • This challenge can't be fully eliminated.
  • AI teams must continuously monitor and mitigate risks.

Should you still use chain-of-thought prompting with modern AI models?

Opening: Sander discusses the relevance of chain-of-thought prompting with newer reasoning models. Quote:

"Generally not so useful anymore, because as you just said, there's these reasoning models that have come out, and they by default, do that reasoning."

Takeaway:
  • Chain-of-thought prompting may be unnecessary with modern models.
  • New AI models often reason by default.
  • Consider if this technique benefits your AI application.

Does role prompting improve AI performance on accuracy-based tasks?

Opening: Sander debunks the misconception that role prompting enhances AI performance on accuracy tasks. Quote:

"But from my perspective, roles do not help with any accuracy-based tasks whatsoever."

Takeaway:
  • Role prompting doesn't enhance accuracy in AI outputs.
  • Avoid relying on roles for improving performance in AI products.
  • Focus on more effective prompt engineering techniques.

What is the most effective prompt engineering technique for improving AI model output?

Opening: Sander shares his top prompt engineering technique that significantly boosts AI model performance. Quote:

"If there were one technique that I could recommend people, it is few shot prompting, which is just giving the AI examples of what you want it to do."

Takeaway:
  • Implement few-shot prompting by providing examples.
  • Examples guide AI models to produce better outputs.
  • Enhance your AI product's performance using this technique.

Is prompt engineering still relevant for building AI products?

Opening: Sander discusses the ongoing relevance of prompt engineering in AI development, despite claims that it's becoming obsolete. Quote:

"People will kind of always be saying it's dead or it's going to be dead with the next model version, but then it comes out and it's not."

Takeaway:
  • Prompt engineering remains crucial in AI product development.
  • Continuous experimentation with prompts enhances AI performance.
  • Don't assume newer models eliminate the need for prompt optimization.