These are my personal notes from watching the talk “How to Lead Your Organisation’s AI-Transformation” by Rasmus Lystrøm at YOW! 2024.
Common Ineffective AI Strategies
1. The “Wait It Out” Strategy
- Attempting to block AI access (like ChatGPT) to prevent data leaks
- Reality: Employees find workarounds, using personal devices to access AI tools when blocked on company devices
- Creates shadow IT rather than solving the problem
2. The “Copycat” Strategy
- Building internal versions of ChatGPT (“Company Name GPT”)
- Results in shallow copies that can’t match the innovation pace of specialized AI companies
- Users find these inferior and revert to external tools
3. The “New Silo” Strategy
- Hiring a Chief AI Officer (CAIO) and creating an AI Center of Excellence
- Creates excessive guardrails that limit productivity
- Similar to unsuccessful “cloud centers of excellence”
- Results in platform building that’s perpetually “80% done”
4. The “Paid Service” Strategy
- Enabling AI services for everyone without organizational changes
- Disappointing adoption rates and productivity gains
The Reality of AI Productivity
- Research shows moderate AI users gain only around a 4% productivity boost, not the promised 30-60%
- Only 1% of AI-generated code is committed without significant rework
- The DORA report indicates minimal quality improvements alongside decreased delivery stability
- The productivity illusion: generating large volumes of content that requires substantial revision
- Writing code was never the productivity bottleneck; meetings and coordination are
True Transformation Requirements
- AI transformation requires changes to business processes, organizational structures, and employee roles
- Conway’s Law (1967): “Organizations develop systems that mirror their communication structures”
- Strategies must be driven by business outcomes, not just technology
- Empowered teams need complete skills, authority, and autonomy to deliver value
Effective Approaches
- Avoid empowerment theater: Teams aren’t empowered if they require permission from security, platform teams, or change advisory boards
- Cross-functional teams with AI engineers, data scientists and business stakeholders
- Avoid premature platform creation: “Building a platform to build a product before you build a product is premature optimization”
- Build small things that work and get them into production
- Per John Gall (1975): “A complex system that works is invariably found to have evolved from a simple system that worked”
A Philosophical Question About AI’s Direction
For decades, we have been told that AI would take over the menial work, freeing us humans to do the more creative work that is more fun.
But it seems that AI has instead moved into the creative space, leaving the more menial and mind-numbing work for humans to do.
Why do we need AI-generated videos, for example? Why not leave those to somebody creative, and use AI for the menial frame-by-frame touch-up and remastering of old videos?
Shouldn’t AI and automation be doing the menial tasks on the production line, freeing up humans to do something more interesting instead?
Is it because these menial tasks are also uninteresting to program for and hard to get 100% accurate? It is much more fun to get the AI to draw some squiggly lines: you can see it is doing something quickly, and it comes up with pretty pictures. It is much more fun to get it to write an article or a story than to get it to recognise a widget on a production line and reject those with defects.
Is AI pandering to the programmers of the AI and their funders, rather than to the potential users where it makes the most sense? We people want that short, sharp endorphin high, the instant gratification, not the hard slog of actually doing something useful.
Source: https://www.thurrott.com/forums/general-discussion/a-i/thread/why-is-ai-going-in-the-wrong-direction
This talk challenges organizations to look beyond the hype of AI tools and focus on the genuine organizational transformation needed to leverage AI effectively.