Humanity in the Intelligent Age: An Interview with OEB’s Conference Chair

We are entering the intelligent age – an era shaped by AI, automation, and digital transformation. Technology is no longer just a tool; it influences how we think, learn, and make decisions. But as we embrace innovation, we must ask: What does it mean to remain truly human?

AI is redefining intelligence, but human intelligence is more than computation – it is creative, emotional, and deeply relational. As technology reshapes work and learning, will it empower or displace? Will it amplify bias or challenge it? Will it divide us further or strengthen human connection? How do we cultivate critical thinking, ethical leadership, and emotional resilience? How do we uphold empathy, privacy, responsibility, and the duty of care?

We asked Rebecca Stromeyer, conference chair & co-founder of OEB Global, about her thoughts and ideas behind this year’s main theme.


OEB Insights: The theme of this year’s OEB conference focuses on ‘Humanity in the Intelligent Age.’ We hear a great deal about AI, automation, and digital transformation, but why is it important to frame this discussion around human values such as empathy, responsibility, and the duty of care? 

RS:  Because, ultimately, technology is not just about efficiency or optimisation – it’s about us. We are at an inflection point where the systems we create are shaping not only the way we work and learn, but also how we relate to one another, how we form knowledge, and even how we define ourselves as human beings. If we treat technology as purely a functional tool and ignore its impact on human behaviour, we risk creating a world that values speed over reflection, scale over depth, and automation over genuine understanding. 

A key part of this discussion is curiosity and critical inquiry. Societies don’t progress by simply accepting new technologies. They progress by questioning them, by exploring possibilities, by challenging assumptions. If we allow technology to shape our choices without engaging with it critically, we lose something essential: the ability to think independently and explore new ideas. 

This is also where the duty of care becomes essential. While often understood as a legal term, we have chosen to frame it more broadly – not just as a professional obligation, but as a shared responsibility for shaping a future rooted in human values. It is about recognising our duty to one another, ensuring that technological progress does not come at the expense of empathy, trust, or ethical decision-making. In an increasingly digital world, where interactions are often mediated by algorithms and automation, this collective responsibility has never been more important.

OEB Insights: The theme suggests that technology is not neutral, and that it reflects the values of those who wield it. What does that mean in practical terms for the future of learning?

RS:  It means we must move beyond the illusion that technology develops in a vacuum. Every algorithm, AI model, and dataset reflects human choices: what to prioritise, what to exclude, and who benefits. In education and training, this plays out in everything from biased recruitment algorithms that favour certain demographic groups to AI-driven learning platforms that subtly reinforce existing knowledge hierarchies.

If we want technology to be a force for equity rather than division, we must be intentional in how we design and implement it. That means engaging actively with the systems we build by evaluating datasets critically, ensuring diversity in the teams that develop AI, and equipping learners with the skills to question rather than accept information passively. AI literacy is becoming just as important as digital literacy – it’s not enough to know how to use AI; we need to understand how it works, what shapes its decisions, and where its limitations lie. AI can personalise education in ways we have never seen before, but it also raises important questions, such as “What is being prioritised? Whose perspectives are included? How is knowledge being framed?”

This is why questioning is so important. Education should never be just about delivering content – it should be about discovery, debate, and critical engagement. If we teach people to challenge ideas, to ask why things are the way they are, and to explore different perspectives, we give them the tools to navigate an increasingly complex world with confidence and insight. And crucially, human educators must remain central to this process – because learning is not just about acquiring knowledge; it’s about interpretation, discussion, and the kind of critical thinking that machines cannot replicate.

OEB Insights: One of the more provocative questions raised by the theme is whether AI will entrench bias or help us challenge it. What’s your view? 

RS:  It can do both. It depends entirely on how we use it. AI is often presented as an objective tool, but in reality, it mirrors the biases of its creators and the societies in which it operates. If we don’t challenge its design and application, it will almost certainly reinforce existing inequalities. AI learns from historical data, and that data often reflects biases in society. Left unchecked, it will replicate and even amplify those biases, whether in hiring decisions, grading systems, or access to learning opportunities. 

But AI also has the potential to highlight biases and help us rethink traditional learning models. It can identify where certain groups are being disadvantaged, reveal gaps in how knowledge is structured, and even personalise education in ways that support a more diverse range of learners. 

What is crucial is that we engage with AI critically, rather than accepting its outputs as objective truth. If people are taught to ask questions such as “Why did the system generate this response?” or “What is missing from this perspective?”, then AI becomes a tool for discovery and understanding rather than a force for entrenching old patterns.

OEB Insights: The theme highlights the challenges of disinformation and political division. What role does learning play in addressing these challenges? 

RS:  Learning, in all its forms, is our first and last defence against manipulation. The sheer volume of information we encounter today means that developing the ability to discern truth from distortion is no longer just an intellectual exercise – it is a civic necessity. If we are serious about safeguarding democracy and social cohesion in this age, we must rethink how we cultivate digital literacy and critical thinking. This means not only teaching people how to evaluate sources but also helping them to understand the psychological, political, and economic forces that drive disinformation. It means developing emotional resilience alongside intellectual rigour – because when people feel disempowered or alienated, they are more susceptible to misinformation and ideological manipulation.

One of the most effective ways to do this is by encouraging curiosity and intellectual humility. If people are used to questioning things, exploring different viewpoints, and weighing evidence, they are much less likely to be misled by false or manipulative narratives. Misinformation often spreads not because people lack intelligence, but because they gravitate towards information that confirms their existing beliefs.

At the same time, it is not just about logic; it is also about emotional intelligence. People don’t change their minds just because they are presented with facts. They change their minds when they feel heard, when they trust the source of information, and when they see the value in questioning their own perspectives. So we need to integrate both critical thinking and emotional awareness into education if we want to build resilience against disinformation.

OEB Insights: The theme asks how we can ensure that work itself is being redefined in a way that empowers rather than displaces people. What does that mean for learning? 

RS:  The biggest risk in this transition is treating learning as a narrow, transactional process, as if it is just about acquiring technical skills for specific job roles. The reality is that jobs are changing too quickly for that approach to work. What will really matter in the long run is the ability to adapt, to think critically, and to solve problems creatively. 

That is why lifelong learning also needs to focus on curiosity and adaptability. If someone understands how to ask good questions, explore new fields, and challenge their own thinking, they will always be able to develop new skills. In contrast, if learning is seen as a one-time process – something that happens only in a learning institution – people will struggle to keep up with a rapidly changing world.

Interdisciplinary thinking is also crucial. Many of the biggest breakthroughs happen when people combine insights from different fields, for example where AI meets ethics to tackle bias in decision-making, where psychology meets data science to understand human behaviour in digital spaces, or where literature meets neuroscience to explore how storytelling affects brain function.

We see this in the intersection of medicine and engineering, where robotic prosthetics are being developed to respond to neural signals, or in biology and computer science, where AI is accelerating drug discovery and genetic research. In education, cognitive science and pedagogy are coming together to reshape how we approach learning, using insights from brain research to design more effective teaching methods. These connections matter because the challenges we face today – such as climate change, misinformation, and social inequality – don’t fit into just one discipline. In my view, encouraging people to think across fields, to borrow tools and perspectives from different domains, is what will drive the most innovative and impactful solutions in the intelligent age.

OEB Insights: Finally, this year’s OEB theme challenges us to consider how we can ensure that the intelligent age remains a deeply human one. What is the most important thing we can do to achieve this?

RS:  We must refuse to surrender our agency. There is a tendency to talk about technology as though it has an inevitability of its own, as if AI is something that is ‘happening to us.’ But the reality is that we are shaping this future – through our policies, our values, and our choices. The intelligent age will be as human as we choose to make it. 

That means placing human well-being at the heart of digital transformation. It means designing systems that prioritise wisdom over efficiency, curiosity over conformity, and ethics over expedience. It means nurturing leaders who understand that responsibility and power must go hand in hand. And it requires us to see intelligence in its fullest sense – not just as the ability to process data, but as the capacity for empathy, creativity, and meaningful human connection.

In fact, AI is forcing us to rethink what intelligence truly means. While machines are becoming more powerful in computation and pattern recognition, they also highlight the sheer flexibility, depth, and creativity of human intelligence – our ability to imagine, to adapt, to question, to innovate, and to create entirely new possibilities. If anything, the rise of AI should remind us to take a step back and marvel at what humans can do. I hope that participants at OEB will be inspired to reflect not just on the future of technology, but on the extraordinary potential of human ingenuity itself.

So, the question is not just whether we can make AI more human, but whether we can make our societies more humane in the process. That, in the end, is the real challenge of the intelligent age.

This interview was conducted for OEB Global 2025. Thank you for your time, Rebecca Stromeyer!
