Navigating the AI Transition? A good first step seems to be to distinguish the human from the machine
As we take our first steps into this new era of AI transition, let's first set the boundaries between what's human and what's machine.
TL;DR
This article highlights the importance of setting clear boundaries between the human side and the AI side of things.
To that end, I have created a framework in the form of a simple Glossary/Taxonomy here: https://github.com/sindoc/knowyourai-framework
Official home of the framework is here: https://sindoc.github.io/website/#/page/knowyourai
I have made the glossary available for free in RDF/SKOS format, with a bit of an extension of my own in OWL. There’s also a PDF version, as well as a version in CSV, which you should be able to import easily into the Collibra platform or your Enterprise Glossary of choice. See release notes for more information.
Use Cases
Automatic mapping of AI use cases to risk profiles
By mapping each AI Use Case to one of the terms in the Human-AI Relationships glossary, you will be able to automatically derive the risk profiles associated with a particular use case.
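As a minimal sketch of what this derivation could look like in code (the relationship terms and risk levels below are illustrative placeholders, not the framework's actual vocabulary):

```python
# Illustrative mapping from Human-AI Relationship terms to risk levels.
# Term and risk names here are placeholders, not the framework's vocabulary.
RELATIONSHIP_RISK = {
    "human-in-the-loop": "limited",
    "human-on-the-loop": "moderate",
    "fully-automated": "high",
}

def risk_profile(use_case: dict) -> str:
    """Derive a use case's risk profile from its mapped relationship term."""
    term = use_case["relationship"]
    return RELATIONSHIP_RISK.get(term, "unclassified")

cv_screening = {"name": "CV screening", "relationship": "fully-automated"}
print(risk_profile(cv_screening))  # high
```

Once every use case carries a relationship term, the risk profile falls out of the mapping automatically, rather than being assessed ad hoc per use case.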
Regulatory Compliance
For the full rationale, feel free to read further down through this rather long article.
I use this framework actively in my work in Data & AI Governance and I hope that you too will find it useful. Please share your comments below or on Hacker News.
I thought I would share my line of thinking, in case you are asking yourself the same questions I am: how to build a solid, future-proof roadmap in these uncertain times, which amount to a transition period for humanity as a whole.
Several kinds of emotions can arise when we try to understand the role of AI in our lives.
An individual may be completely unaware of the fact that their life is being impacted negatively by AI. What if they find out too late? And if they do, what resources do they have available to overcome their fears?
An individual may feel helpless in the face of AI, questioning their own intelligence, due to their lack of understanding of technology and what's really at stake here.
An individual may look to harness the value of AI, while mitigating its risks. This is of course the attitude we try to nurture. We want individuals to feel empowered by AI, but also understand its risks.
The AI revolution has taken a new turn now. It has already caused a paradigm shift, the same way smartphones did, the same way telephones did, the same way Dirac and Einstein did, the same way trains did, and the list goes on, all the way back to fire, in my humble opinion.
I for one don’t want AI to become another ‘fire’
Fire was a technology that reshaped our species by radically modifying consciousness⁴. We still haven't quite tamed fire, although we have come a long way. We don't want AI to cause as much pain as fire did. Could it be as bad as fire? I would say yes, but it's unlikely. At least, that's what I want to believe, because I believe it's a challenge we can overcome. This too shall pass, if we all do our homework, I guess.
AI could hurt us, but for the most part, it's supposed to be a good thing: the nature of its impact will depend on large-scale Data Citizen collaboration
I'm trying to be optimistic here, even though the risks are real. I'll start with a personal anecdote, which should also help you understand why I started writing this article in the first place.
I never gave up on philosophy and history while practicing as a technology leader in the field of Data Governance for thirteen years, and before that, as a young coder. I suppose that part of me knew that this day would come. I don't claim at all that I could predict the details. And I don't think that anyone could have, really. But somehow, those that knew, they knew. Take Bill Joy⁵ for instance.
I read Bill Joy's article when I was 18, thanks to my brother and lifetime mentor, who shared it with me as he knew that I was showing interest in entering the world of Computer Science and Artificial Intelligence. I could only truly understand the content of Bill Joy's article and its implications after taking my first Machine Learning and Artificial Intelligence courses in college. Somehow, I was still surprised when the last AI wave hit with OpenAI's ChatGPT. I just didn't expect it to become so widespread so fast. As technologists, we have learned the hard way not to talk about technology too much, especially if you share your life with people who are not tech-savvy.
But now, everyone around me knows about AI. And this is exactly why I think AI has made its entrance into the real world, and no one can do anything to undo it. There's no going back from this. It's out there. All we can do now is try to understand it and adapt accordingly. See it as something that has the potential to disrupt your life in ways that you may or may not see coming.
The clearest example is the job hunt. When you apply for a job, chances are that your CV is initially rated and ranked by AI algorithms. If you don't adapt to that, you might never get selected for an interview, or you will at least greatly reduce your chances of getting one. There are many other examples, some scarier than others. But you get the point. AI is here. Any sane individual, especially those raising children in this world, should pay close attention to what's happening and, most importantly, to how to take AI into account in day-to-day decision making.
One way to look at the last wave of the AI revolution in this Information Age, is to see it as a new technology that has commoditised intelligence. Every person now walks around with direct access to generalists in just about any field. This is huge.
I can imagine that people will react to something like AI in different ways. But it is no longer acceptable not to understand it. I truly believe that everyone should build solid knowledge of AI and its implications for their lives.
As someone who has been in this field for a long time, I now feel that it's my responsibility to share my own fears and strategies to overcome those fears with my fellow humans, in hopes that together, we can overcome the challenges ahead, in the face of this first phase of the AI revolution.
Clear delineation is required between the people side of things and the AI side of things
As we embark on this journey through the rapidly evolving world of artificial intelligence, one question stands at the forefront: how do we navigate the delicate balance between the unique capabilities of humans and the growing power of machine-driven solutions? The line between the two is becoming increasingly blurred, with AI systems now tackling tasks once thought to be the exclusive domain of human intellect. As we venture deeper into this new frontier, it becomes more important than ever to clearly define what remains inherently human—and what can be entrusted to machines.
In this era of AI transition, understanding the boundaries between human intuition, creativity, and emotional intelligence, versus the efficiency, precision, and data processing power of machines, is essential. This is more than an intellectual challenge; it’s the foundation of how we shape the future. By distinguishing these domains, we ensure that human values and agency remain central to technological progress, guiding us toward a future where both humans and machines can thrive in harmony.
When creating a clear delineation between the “people” side and the “AI” side of things, it’s helpful to consider their roles and responsibilities in various contexts. Here’s a breakdown:
1. Human (People) Side:
• Creativity and Emotional Intelligence: Humans excel in complex, creative tasks that require emotional depth, empathy, and cultural sensitivity.
• Strategic Decision-Making: Humans are responsible for making high-level decisions, formulating long-term goals, and adapting to unpredictable environments.
• Ethics and Values: Human oversight ensures that AI operates within ethical boundaries, considering moral and societal impacts.
• Complex Problem Solving: People are better at abstract reasoning, innovation, and resolving ambiguity in ways machines can’t replicate.
• Personal Interaction: Humans manage social relationships, customer service, and negotiation processes that require human touch and understanding.
2. AI (Artificial Intelligence) Side:
• Data Analysis and Pattern Recognition: AI excels at processing vast amounts of data, identifying patterns and trends, and making predictions based on data analysis.
• Task Automation: AI can automate repetitive tasks efficiently, from manufacturing to data entry, allowing humans to focus on higher-order tasks.
• 24/7 Availability and Scalability: AI can operate continuously without fatigue, handling large-scale processes and tasks at speed and volume beyond human capacity.
• Precision and Consistency: AI performs tasks with high accuracy and without the variability introduced by human fatigue, error, or bias.
• Support in Decision-Making: AI provides data-driven insights, helping humans make informed decisions but not making the final judgment in complex moral or strategic decisions.
Clear Delineation:
Humans remain at the center of decision-making, creativity, and ethical oversight, while AI is a powerful tool that augments human capability in data-driven, repetitive, or scalable tasks. Both work in tandem, but the distinction lies in AI’s role as an enhancer and not a replacement for human insight, judgment, or values.
Why is it important to clearly distinguish the human from AI?
To prevent AI from becoming an independent entity on its own
Contrary to popular belief, there's no such thing as AI as an independent entity (yet). And I hope that day never comes. There's no point in going into the details of what that world would look like and how we could potentially get there. I refuse to entertain that idea in this article. If I have to, I will in a separate piece. But of course, that article would be fiction, whereas this one is non-fiction.
To make sure AI stays a tool, under control of humans
By no means, do I mean here that humans are perfect. I understand that different individuals hold varying levels of trust in humanity, depending on many factors, which we won’t get into.
Nevertheless, against the threats posed by AI, we've only got each other to rely on, so I see no other way than to stand together. "Together we stand, divided we fall," as Roger Waters of Pink Floyd put it.
"This too, shall pass." We are in a transition period, and at the end of this period, we will have mastered AI. I for one don't want to live in a society that is ruled by machines.
Some practical tools and resources
Framework Formalising the Human-AI Relationships in a Simple Way
To that end, I have created a framework in the form of a simple Glossary/Taxonomy here: https://github.com/sindoc/knowyourai-framework
I have made the glossary available for free in RDF/SKOS format, with a bit of an extension of my own in OWL. There’s also a PDF version, as well as a version in CSV, which you should be able to import easily into the Collibra platform or your Enterprise Glossary of choice.
You can start by mapping your AI Use Cases to the terms proposed in this simple framework. This way, your AI use cases will have a risk profile associated with them.
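For illustration, a small script could emit such a mapping as a CSV ready for bulk import. The column headers and values below are assumptions on my part, not a documented Collibra schema; adapt them to your glossary's import template:

```python
import csv
import io

# Hypothetical use-case-to-term mappings; the columns are illustrative and
# should be renamed to match your platform's bulk-import template.
rows = [
    {"use_case": "CV screening",
     "relationship_term": "fully-automated",
     "risk_profile": "high"},
    {"use_case": "Chat support drafting",
     "relationship_term": "human-in-the-loop",
     "risk_profile": "limited"},
]

# Write the mapping to an in-memory CSV buffer (swap for a file on disk).
buffer = io.StringIO()
writer = csv.DictWriter(
    buffer, fieldnames=["use_case", "relationship_term", "risk_profile"]
)
writer.writeheader()
writer.writerows(rows)
print(buffer.getvalue())
```

The resulting file has one row per use case, so each import run keeps the glossary mappings, and hence the derived risk profiles, in sync with your use-case inventory.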
European Commission. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence. Official Journal of the European Union. Available at: https://eur-lex.europa.eu/eli/reg/2024/1689/oj.
Source: https://www.technologyreview.com/2024/01/17/1086704/china-ai-regulation-changes-2024/
Cyberspace Administration of China. (2023). Interim Measures for the Management of Generative Artificial Intelligence Services. Available at: https://www.cac.gov.cn/2023-04/11/c_1682854275475410.htm.
The term ESG (Environmental, Social, and Governance) refers to the three central factors in measuring the sustainability and societal impact of an investment in a company or business. It has become a key metric for investors seeking responsible and ethical investing practices. For more information, see the United Nations’ Principles for Responsible Investment (PRI)
United Nations. (2006). Principles for Responsible Investment. Available at: https://www.unpri.org/.
Meta note: the content of this footnote was generated by GPT-4 on Sep 8, 2024.
To support the claim that fire was a technology that reshaped our species by radically modifying consciousness, you can draw from a variety of interdisciplinary studies, including anthropology, evolutionary biology, psychology, and cognitive science. Below are references and sources that you could explore:
1. Richard Wrangham’s “Catching Fire: How Cooking Made Us Human” (2009)
• Key points: Wrangham, an anthropologist, argues that the control of fire and the advent of cooking fundamentally altered human evolution. He suggests that cooking allowed for more efficient digestion, leading to increased brain size and changes in social behavior, ultimately impacting human cognition and consciousness.
• Reference: Wrangham, R. W. (2009). Catching Fire: How Cooking Made Us Human. Basic Books.
This book explores how the mastery of fire and cooking helped shape early human societies by freeing up time and energy, which could be devoted to other cognitive tasks, thus impacting consciousness.
2. Terrence Deacon’s “The Symbolic Species: The Co-evolution of Language and the Brain” (1997)
• Key points: Although Deacon focuses on language evolution, he touches on how early technologies like fire enabled the development of symbolic thought. Fire extended the day by providing light, creating space for social and cognitive activities that fostered more abstract thinking and consciousness.
• Reference: Deacon, T. (1997). The Symbolic Species: The Co-evolution of Language and the Brain. W. W. Norton & Company.
3. Paleoanthropology Studies on Cognitive Evolution and Fire
• Key points: Several studies highlight how fire may have contributed to social learning and the development of shared cultural experiences. The communal aspect of gathering around a fire might have fostered storytelling, the sharing of knowledge, and a heightened sense of group identity, all of which can be linked to the evolution of consciousness.
• Reference: Gowlett, J. A. J. (2016). “The discovery of fire by humans: A long and convoluted process”. Philosophical Transactions of the Royal Society B: Biological Sciences, 371(1696), 20150164.
This paper suggests fire contributed to cultural evolution, which has direct links to cognitive and social consciousness shifts in early humans.
4. David Lewis-Williams’ “The Mind in the Cave” (2002)
• Key points: Lewis-Williams, an archaeologist, examines how the discovery and control of fire could have influenced early human consciousness, especially in relation to art and symbolic thinking. He argues that fire, by illuminating dark caves, might have played a role in the creation of early art, which is seen as evidence of abstract thought and higher consciousness.
• Reference: Lewis-Williams, D. (2002). The Mind in the Cave: Consciousness and the Origins of Art. Thames & Hudson.
5. Studies on Circadian Rhythms and Fire’s Impact on Human Consciousness
• Key points: The use of fire to extend daylight would have altered human circadian rhythms, allowing for more evening social interaction and cognitive activities. This extension of time for learning, storytelling, and reflective thought would have influenced the development of complex consciousness.
• Reference: Samson, D. R., & Nunn, C. L. (2015). “Sleep intensity and the evolution of human cognition”. Evolutionary Anthropology: Issues, News, and Reviews, 24(6), 225-237.
6. Fire’s Role in Human Social and Cognitive Development
• Key points: Fire altered early human social structures by promoting cooperation and shared tasks, such as tending to a fire. The cognitive demands of these tasks and the social frameworks they created would have reshaped the brain and consciousness.
• Reference: Dunbar, R. I. M. (2014). “The social brain: Psychological underpinnings and implications for the structure of organizations”. Current Directions in Psychological Science, 23(2), 109-114.
Bill Joy, the co-founder of Sun Microsystems and an influential computer scientist, wrote a famous article titled “Why the Future Doesn’t Need Us,” published in the April 2000 issue of Wired magazine. In this article, Joy expressed profound concerns about the rapid advancements in genetic engineering, nanotechnology, and robotics (GNR). He argued that these technologies could pose existential risks to humanity, particularly because they have the potential to self-replicate and evolve in ways that might become uncontrollable.