2 Comments
Renato Siljeg

Ruslan,

This is an excellent and well-structured analysis. As someone who deals with these specific challenges daily, I can fully appreciate the nuanced understanding you've brought to this discussion.

The symbiosis between humans and a highly optimized AI co-pilot is indeed critical. While the vision of AI as the primary decision-maker is compelling, I believe the ideal balance lies in dynamic collaboration. A well-optimized co-pilot not only enhances efficiency but also ensures that the system benefits from the contextual judgment and strategic insight that humans uniquely provide.

Your emphasis on robust data differentiation strategies resonates strongly. The challenge isn't just cleaning the data but ensuring it is strategically curated to align with organizational goals. Additionally, I agree that AI systems must be designed to engage humans strategically—leveraging their expertise only when gaps arise.

Thank you for sharing this insightful perspective!

Igor Mameshin

The challenge of knowledge gaps is real. Getting tacit knowledge out of human heads and training AI systems on it is a huge obstacle: we are talking about the knowledge of accountants, customer service managers, hiring managers, stockbrokers, doctors, and so on.

"Rather than AI serving as a co-pilot to human decision-makers, it should take the role of the pilot." I completely agree, but for humans to cooperate, it has to be presented a bit nicer. Human in the loop approach ensures that AI keeps learning from mistakes, while humans have complete ability to provide feedback on every AI output, as well as update the policy. Approve high-risk decisions. Keep AI on track by managing the business rules/policy and retraining AI systems based on human expert feedback.

I am currently seeing that people hesitate to collaborate with AI engineering teams for fear of AI replacing their jobs or disrupting business processes. They will sometimes give negative feedback but resist explaining how to improve the AI's outputs; "I don't like the answer" is not very helpful. When you redefine their jobs as managers of AI agents, with each person overseeing several AI assistants, they are far more likely to work with AI engineers on improving the systems. When people see their jobs evolving into more interesting, higher-impact roles, they begin to see a brighter future for themselves as managers of an agentic AI workforce. If they see themselves as temporary gap-fillers and knowledge sources, they are less likely to share that knowledge sincerely.
