Our Blogs

Insights on AI control, trends, and more. New posts twice weekly. See our services and FAQ for more.

Interesting Industry Reads

Curated external resources and research papers on AI trends and business applications.

State of AI in Business 2025 Report

MIT Project NANDA & MLQ.AI

This report reveals a stark "GenAI Divide" in enterprise AI: despite $30-40 billion in investment, 95% of organizations report zero return on their GenAI initiatives. While tools like ChatGPT boost individual productivity, enterprise-grade systems struggle with adoption: only 5% reach production, held back by brittle workflows and a lack of contextual learning. The key finding is that success is determined not by model quality or regulation but by implementation approach. Organizations that prioritize process-specific customization, demand systems that learn and adapt over time, and partner externally achieve twice the success rate. The highest performers report measurable value through reduced business process outsourcing (BPO) costs, improved customer retention, and selective workforce optimization in support and engineering roles.

Prompt Politeness Research: Two Contrasting Perspectives

"Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance" (Yin et al., 2024)
"Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy" (Dobariya & Kumar, 2024)

Two recent studies examine how prompt politeness affects LLM performance and arrive at nuanced, seemingly divergent conclusions. Yin et al.'s cross-lingual study across English, Chinese, and Japanese finds that impolite prompts often degrade performance while overly polite language provides no guarantees; the effect of politeness varies significantly with cultural and linguistic context. Their research emphasizes that LLMs mirror human communication traits, suggesting that culturally aware prompting strategies matter. Conversely, Dobariya and Kumar's controlled experiment, using 250 prompts across mathematics, science, and history, found that impolite prompts (84.8% accuracy) consistently outperformed polite ones (80.8%) on ChatGPT-4o, a counterintuitive result that challenges conventional assumptions about human-AI interaction.

The apparent contradiction highlights critical implementation factors: newer model architectures may process tone differently than legacy systems, task domain influences politeness sensitivity (creative versus analytical), and cultural context remains paramount in multilingual deployments.

For practitioners, these findings suggest that prompt engineering should prioritize clarity and directness over social politeness conventions, while remaining mindful of cultural variables in global applications; a small self-test sketch follows below. The research underscores that effective AI interaction requires moving beyond anthropomorphic assumptions: what works in human communication may not optimize machine performance. Both studies agree on one principle: blindly applying human social norms to LLM interactions can undermine accuracy and effectiveness.
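To make the practitioner takeaway concrete, here is a minimal sketch of how you might A/B-test prompt tone on your own task set, loosely mirroring the Dobariya & Kumar setup (identical questions rephrased at several politeness levels and scored against known answers). The wrapper phrasings, the toy questions, and the `ask` callable are all illustrative placeholders, not material from either paper; swap in your own LLM client.

```python
from typing import Callable

# Hypothetical tone variants; the paper tested five levels, from
# "very polite" to "very rude".
TONE_WRAPPERS = {
    "polite": "Would you kindly answer the following question? {q}",
    "neutral": "{q}",
    "direct": "Answer the following question. {q}",
}

# Tiny question set for illustration only; the paper used
# 50 questions x 5 tones = 250 prompts.
QUESTIONS = [
    ("What is 17 * 3?", "51"),
    ("Which planet is closest to the Sun?", "Mercury"),
]

def tone_ab_test(ask: Callable[[str], str]) -> dict[str, float]:
    """Score each tone variant. `ask` is your own LLM call,
    e.g. a thin wrapper around whichever chat API you use."""
    scores = {}
    for tone, wrapper in TONE_WRAPPERS.items():
        correct = sum(
            answer.lower() in ask(wrapper.format(q=question)).lower()
            for question, answer in QUESTIONS
        )
        scores[tone] = correct / len(QUESTIONS)
    return scores

if __name__ == "__main__":
    # Stand-in "model" so the sketch runs end to end; replace with a real call.
    fake_llm = lambda prompt: "51" if "17" in prompt else "Mercury"
    print(tone_ab_test(fake_llm))  # e.g. {'polite': 1.0, 'neutral': 1.0, 'direct': 1.0}
```

The exact-substring scoring here is deliberately crude; a real evaluation would need a much larger question set and a stricter grader before drawing any conclusions about tone on your workload.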