AI Awakens: Can Machines Think? — Redefining Human-Machine Collaboration

Prasun Mishra
4 min read · Apr 5, 2024


The rapid development and success of GenAI have significantly accelerated the adoption of AI across various industries. We’re witnessing a proliferation of AI agents deployed for diverse use cases, such as code development (Devin, Devika, Copilot, Cloud9), customer service (Liveperson, Ada), and sales intelligence (Chorus.ai), to name a few.

While some argue that current AI agents lack true independence or perception of their working environment, there’s a growing consensus that they will soon be able to perform tasks with human-level efficiency. By “human level,” we refer specifically to the ability to perform a given business task as a human would have performed it. Additionally, users interacting with these agents may find it difficult to distinguish between a human and an AI agent in the context of a particular business interaction.

So far, AI agents have worked under human oversight, with a human reviewing their output at points in the workflow. Increasingly, however, AI agents will have their own agency and self-oversight. They will become independent members of the team rather than mere productivity assistants to human workers. In essence, AI agents are becoming equal and integral parts of the modern team.
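To make the oversight-versus-agency distinction concrete, here is a minimal Python sketch (all names, such as `run_with_oversight` and `AgentAction`, are hypothetical, not from any particular framework) of a workflow gate that starts with a human reviewing low-confidence actions and can be dialed toward full autonomy:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    task: str
    proposed_output: str

def run_with_oversight(
    action: AgentAction,
    review: Callable[[AgentAction], bool],
    autonomy_threshold: float,
    agent_confidence: float,
) -> str:
    """Run one agent action, routing low-confidence work to a human.

    autonomy_threshold = 1.0 means every action is human-reviewed
    (pure oversight); lowering it grants the agent increasing agency.
    """
    if agent_confidence > autonomy_threshold:
        return action.proposed_output      # agent acts autonomously
    if review(action):                     # human-in-the-loop gate
        return action.proposed_output
    return "escalated to a human worker"

# A strict deployment: anything at or below 0.95 confidence is reviewed.
action = AgentAction("draft refund email", "Dear customer, ...")
print(run_with_oversight(action, review=lambda a: True,
                         autonomy_threshold=0.95, agent_confidence=0.80))
```

Raising or lowering the single threshold is what moves an agent along the spectrum from supervised assistant to independent team member.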

The Consciousness Debate and AI Evolution

The concept of AI consciousness is a subject of intense debate with no definitive conclusions. While there is no universally accepted definition of consciousness, philosophers distinguish between types such as self-monitoring, access consciousness, and phenomenal consciousness (p-consciousness). Phenomenal consciousness (qualia) is the subjective experience of being conscious ("what it feels like" to experience pain or emotion), while access consciousness is the ability to process information and use it to think, feel, and act.

In a way, qualia constitute a higher level of consciousness that allows humans to reflect on their own thoughts or state of mind. Yet they remain scientifically challenging to explain because of the gap between the brain's physical properties and conscious experience; some experts argue this gap means science may never fully explain human consciousness.

Image courtesy of turing.com

How Are LLM Capabilities Benchmarked?

The Turing test has traditionally served as the benchmark for whether a machine or AI performs at a human level of intelligence. Recently, Anthropic released Claude, which has demonstrated significant improvement, approaching near-human performance on several measures. There are also more modern and focused benchmarks for evaluating LLMs, such as MMLU, GPQA, GSM8K, MATH, HumanEval, and HellaSwag. You can find more information on these benchmarks and their evaluations here.
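For a flavor of how such benchmarks work under the hood, here is a minimal sketch of MMLU-style multiple-choice scoring. `ask_model` is a hypothetical stand-in for a real LLM call, and the sample items are illustrative, not drawn from the actual benchmark:

```python
def ask_model(question: str, choices: list[str]) -> str:
    """Placeholder: return the letter (A/B/C/D) the model picks."""
    return "A"  # stub; wire this up to a real LLM API

def evaluate(items: list[dict]) -> float:
    """Accuracy: the fraction of items the model answers correctly."""
    correct = sum(
        ask_model(item["question"], item["choices"]) == item["answer"]
        for item in items
    )
    return correct / len(items)

items = [
    {"question": "2 + 2 = ?",
     "choices": ["4", "3", "5", "22"], "answer": "A"},
    {"question": "Which planet is largest?",
     "choices": ["Jupiter", "Mars", "Earth", "Venus"], "answer": "A"},
]
print(f"accuracy: {evaluate(items):.0%}")  # 100% with the stubbed model
```

Benchmarks like GSM8K or HumanEval swap in different scoring rules (exact-match arithmetic answers, unit tests on generated code), but the loop is the same: pose many held-out tasks, score the responses, report an aggregate.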

The Question of Subjective Experience

To begin with, let's assume an AI agent has been trained on sufficient data for a particular task. If both a human and the AI agent are given that task, they should then perform in a similar manner. The AI agent will also learn incrementally during each task, much as a human would. But since an AI agent lacks biological consciousness (P-consciousness), it does not undergo subjective experiences such as pain or emotion while learning from recent events.

Now consider humans: can human consciousness be updated by new subjective experiences? If so, a human's behavior on a given task could differ after each new subjective experience; if not, the actions of humans and AI agents might converge. This analogy, while simplified, captures the core point. If we view human consciousness and decision-making as a function of many experiences not tied to any specific event (similar to latent learning), then only an accumulation of subjective experiences should alter human consciousness, with the latest one having minimal impact.
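One toy model of my own (an illustration, not anything from the consciousness literature) makes this intuition concrete: if a disposition is the running mean of all past experiences, the nth experience shifts it by only 1/n of its "surprise," so each additional experience matters less and less.

```python
# Toy illustration: a disposition modeled as the running mean of all
# experiences. The n-th experience shifts the state by only 1/n of the
# surprise, so later experiences have diminishing individual impact.

def update(state: float, experience: float, n: int) -> float:
    """Incremental running-mean update after the n-th experience."""
    return state + (experience - state) / n

state = 0.0
for n, experience in enumerate([1.0, 1.0, 1.0, 10.0], start=1):
    previous = state
    state = update(state, experience, n)
    print(f"experience {n}: shift = {state - previous:+.2f}, "
          f"state = {state:.2f}")

# The outlier experience (10.0) arriving fourth moves the state by only
# a quarter of its surprise; arriving first, it would have set the
# state outright.
```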

AI Agents Achieving Consciousness

In my view, AI agents possess a form of 'functional' consciousness because they continuously learn and adapt from new data and environments. It is crucial to remember that their knowledge stems from data generated by human actions (the training data). Within the scope of given tasks, they do not necessarily require biological consciousness (P-consciousness). AI consciousness may therefore never reach the level of human consciousness, but within the scope of a well-defined business transaction, it does not need to.

User Expectations and Legal Implications

End users will expect the same level of "consciousness" from an agent regardless of whether it is human or AI. Although AI agents are not biological entities, users may assume and expect them to possess moral values, rights, agency, and oversight. Businesses deploying such agents will ultimately bear legal liability for their actions. The focus, therefore, should be on adopting checks and balances that ensure ethical, unbiased, and (wherever possible) explainable AI, as the sketch below suggests.
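Here is a minimal sketch of what such checks and balances could look like in code. The policy list and names (`audited_response`, `BLOCKED_TERMS`) are illustrative, not any industry standard:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent_audit")

BLOCKED_TERMS = {"guarantee", "legal advice"}  # illustrative policy list

def audited_response(user_id: str, draft: str) -> str:
    """Gate an agent's draft through a policy check and an audit log.

    Every decision is logged with a rationale so behavior stays
    reviewable (a step toward explainability), and drafts that trip
    a policy rule are escalated instead of sent.
    """
    violations = [t for t in BLOCKED_TERMS if t in draft.lower()]
    if violations:
        log.info("user=%s blocked, rule hits: %s", user_id, violations)
        return "This request needs review by a human specialist."
    log.info("user=%s approved draft of %d chars", user_id, len(draft))
    return draft

print(audited_response("u42", "We guarantee a full refund."))    # escalated
print(audited_response("u42", "Your refund request was filed.")) # sent
```

Real deployments would layer on bias testing, model cards, and human escalation paths, but the principle is the same: every agent decision passes through an auditable checkpoint the business controls.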

Conclusion

The growth of sophisticated AI agents hints at a future marked by close collaboration between humans and machines. While debates continue over AI's cognitive capabilities, it is clear that businesses must prepare for a world where human and machine intelligence merge. AI agents are poised to reach parity with humans, needing less supervision while still contributing significantly (agency vs. oversight). Businesses must therefore adapt, treating AI agents as vital partners in shaping future endeavors.

What do you think?

#AI #AGI #MachineLearning #HumanMachineCollaboration #ExplainableAI #LegalImplicationsOfAI #FutureOfAI



Prasun Mishra

Hands-on ML practitioner. AWS Certified ML Specialist. Kaggle expert. BIPOC DS Mentor. Working on interesting NLP use cases!