{"id":7639,"date":"2017-05-11T16:59:05","date_gmt":"2017-05-11T14:59:05","guid":{"rendered":"http:\/\/newinsights.oeb.global\/?p=7639"},"modified":"2017-05-30T14:18:48","modified_gmt":"2017-05-30T12:18:48","slug":"ethics-in-artificial-intelligence-what-the-future-holds-speaking-to-inge-de-waard","status":"publish","type":"post","link":"https:\/\/oeb.global\/oeb-insights\/ethics-in-artificial-intelligence-what-the-future-holds-speaking-to-inge-de-waard\/","title":{"rendered":"Ethics in Artificial Intelligence: what the future holds &#8211; speaking to Inge de Waard"},"content":{"rendered":"<p><a href=\"https:\/\/oeb.global\/oeb-insights\/wp-content\/uploads\/2017\/05\/AI_Inge-de-Waard-1.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-7642 alignleft\" src=\"https:\/\/oeb.global\/oeb-insights\/wp-content\/uploads\/2017\/05\/AI_Inge-de-Waard-1.jpg\" alt=\"\" width=\"250\" height=\"140\" \/><\/a><a href=\"http:\/\/www.oebmidsummit.com\/speaker\/inge+de+waard\">Dr Inge de Waard<\/a> works as a strategic instructional designer for InnoEnergy Europe and is an avid enthusiast for open science. Her focus and knowledge spans many areas in the digital learning sphere: from MOOCs to individual online learning types, self-directed learning to questions of ethics in Artificial Intelligence. At <a href=\"http:\/\/www.oebmidsummit.com\/about\">OEB MidSummit<\/a> (June 8 &#8211; 9) Inge will discuss the risk of leaving out ethics in machine intelligence development in her interactive session \u201c<a href=\"http:\/\/www.oebmidsummit.com\/programme\/2017-june-8\/32\">Society reclaims Ethics for Education<\/a>\u201d. She has already given us a taste of what she thinks about the use of Artificial Intelligence in education from its potential to the risks.<\/p>\n<p><strong>\u00a0<\/strong><\/p>\n<p><strong>You promote using AI in education, but not blindly or at any cost. What are your concerns? 
Is there a price learners or educators may unknowingly be paying as they use AI? <\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>I would not say that I promote using Artificial Intelligence (AI) in education; rather, I accept the fact that it is used, and ever more so. However, I have some critical thoughts accompanying this evolution.<br \/>\nAn algorithm is just a self-contained set of rules to be followed. AI is made up of a complex network of algorithms that aims to mimic human thought. The algorithms are rules translated into programming code so they can be embedded into larger pieces of software, which in turn can communicate with other large software components that also embed algorithms, together making up AI.<\/p>\n<p>&nbsp;<\/p>\n<p>Taking the above into account, my concern with AI is two-fold. First, the brain \u2013 to me \u2013 is not reverse-engineerable. It is something bigger than its parts, just like all emergent complex systems. This also means it is multi-faceted and does not always follow one set of rules linearly or logically. Second, only a niche group of people construct the algorithms that make up AI. Their cultural assumptions are inevitably translated into the rules that make up those algorithms.<br \/>\nThis so-called effect of the programmer was first seen in filter bubbles. <a href=\"https:\/\/en.wikipedia.org\/wiki\/Eli_Pariser\">Eli Pariser<\/a> coined the term filter bubble and highlighted the unintentional lock-in that can happen when algorithms filter what they are programmed to filter for you. Filter bubbles are a result of web personalization, programmed on the basis of prior preferences, usage, location, interests, etc. A well-known example compares search results for \u201cprofessional\u201d with \u201cunprofessional\u201d hairstyles and reveals a racial bias. 
(<a href=\"https:\/\/www.theguardian.com\/technology\/2016\/apr\/08\/does-google-unprofessional-hair-results-prove-algorithms-racist-\">https:\/\/www.theguardian.com\/technology\/2016\/apr\/08\/does-google-unprofessional-hair-results-prove-algorithms-racist-<\/a>)<\/p>\n<p>&nbsp;<\/p>\n<p>In a way, we are building an external set of logic rules outside of our brains. However, the crux is that those rules come out of our human brains. To be precise, they come from a selected number of brains, which do not necessarily represent the complexity of human thought out there. For example, what are difficult activities to translate into logical thought? I\u2019d say, creativity and art. Modelling the universe is easier than modelling what makes art. To me this means we have a clear gap. Although AI is promoted as the new utopia, there only seems to be a handful of people willing to make AIs that resemble Mahatma Ghandi, Shei Shonagon, Cindy Sherman \u2026 So, which brain is AI interested in? The use of AI is currently more about production, efficiency and constructing what some think the brain should be like, than actually providing an additional benefit to the whole of society.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Could a focus on a code of ethics for the use of AI offer a solution to address possible anxieties? Which issues need to be taken into consideration, especially in the education sector?<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>Looking at AI from an educational perspective reveals potential benefits. AI can help us to understand what works and why. For example, we can analyse big data from learners to find out more about learning efficiency. AI also makes it possible to personalize learning easier.<\/p>\n<p>&nbsp;<\/p>\n<p>However, at the same time AI will increase automation, raising the risk of job elimination, especially less complex jobs. This is something that might scare specific groups of learners. I think offering some sort of ethical framework is a great idea. 
The immediate downside of any ethical framework is that it is built upon cultural norms. This means it is very difficult to construct an ethical framework that appeals to multiple societies, since what decreases anxiety for one group of people can raise it for other groups. Nevertheless, if we \u2013 as a society \u2013 consider well-being to be at the core of life, a code of ethics related to AI might decrease anxiety by offering predictions on the effect of AI on our overall well-being. Not every student is helped by \u2018improving learning\u2019, and certainly not by a loss of income due to job loss. A code of ethics might also include financial, emotional, and social factors that society wants to uphold, or even \u2013 ultimately \u2013 achieve.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>The tech giants are already working to build safeguards into their AI technologies. Which bodies, institutions or associations should be involved in coming up with an ethical framework for the development of machine intelligence?<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>Both big and small companies are using algorithms to support their (software-based) applications. With the increasingly pervasive way in which AI interacts with our lives, ethics becomes more important. One only has to look at the Internet of Things and its expansion into different segments of our professional and personal lives to acknowledge the impact of those algorithms on various aspects of our lives, including education. An important term used frequently is \u2018smart\u2019: smart buildings, smart technologies\u2026 But as we have seen with the professional\/unprofessional hairstyle searches, algorithms can produce unexpected outcomes. 
Therefore, I\u2019m not sure how good an idea it is that Facebook answers fake news by creating new algorithms (<a href=\"http:\/\/www.independent.co.uk\/life-style\/gadgets-and-tech\/news\/facebook-fake-news-feature-uk-election-2017-a7720506.html\">http:\/\/www.independent.co.uk\/life-style\/gadgets-and-tech\/news\/facebook-fake-news-feature-uk-election-2017-a7720506.html<\/a>).<br \/>\nAs algorithms affect all of us, this inevitably means that all layers of society should be involved in creating early indicators that address the effects of AI. This includes civil society \u2013 for instance, Fairness, Accountability, and Transparency in Machine Learning (<a href=\"http:\/\/www.fatml.org\/\">http:\/\/www.fatml.org\/<\/a>) or the Electronic Privacy Information Center (<a href=\"https:\/\/epic.org\/privacy\/consumer\/\">https:\/\/epic.org\/privacy\/consumer\/<\/a>) \u2013 as well as governments and industry. All of these, and other organisations, can support policy standards to make the monitoring of the effects of AI possible.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>How do you assess the chances of a valid and globally applicable code of ethics for AI?<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>Personally, I think implementing an ethical layer is possible but will take an enormous amount of interdisciplinary research (including the Arts) and effort. We can see this in simple \u2013 though admittedly negative \u2013 examples such as internet censorship (e.g. location-based restriction of access to online resources).<br \/>\nBut there are multiple difficulties to overcome in order to get to a global ethics code, if this is possible at all. The DeepMind ethics board gives an indication of how difficult it is to achieve transparency when it comes to ethics and AI. 
Many AI companies have ethics boards, yet very little is shared (<a href=\"https:\/\/www.theguardian.com\/technology\/2017\/jan\/26\/google-deepmind-ai-ethics-board\">https:\/\/www.theguardian.com\/technology\/2017\/jan\/26\/google-deepmind-ai-ethics-board<\/a>).<br \/>\nSo, I think it is possible to create an ethical layer, but it takes more than an industry-led initiative; it involves opening up AI and its options to broader society. If we look at management structures, or institutions like the United Nations, we can see that adding a top layer that looks at \u2018vision and future\u2019 might be possible &#8211; at least as a theoretical idea. And while working on the implementation of this theoretical idea, debates can be organized to fine-tune the code.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>What is your view on taking some decisions aimed at making the world more \u201cmoral\u201d out of human hands? Can machines help humankind in this effort? <\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>Morality is a personal compass for right and wrong; therefore it is an internal, individual process, unlike ethics, which can be expressed as a set of external rules. Looking at whether machines can help humankind to be more ethical\u2026 for the sake of argument, I would say yes.<\/p>\n<p>&nbsp;<\/p>\n<p>This might expose me as a believer (in constructed, technology-based thinking) or a fatalist &#8211; if you consider the low degree of confidence I have in humanity. My belief is based on the ability to construct complex societal models from complex, recalibrated algorithms that can estimate the outcome of certain innovations or actions and see their effect on, for example, the earth\u2019s sustainable ecosystem or on mental health.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Do you think we also need to consider establishing guidelines for \u201crights\u201d to be granted to AI? 
<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>Writing such a set of rules still seems far from possible, as even explainable AI (XAI) has yet to be realized. One of the first steps towards understanding why an AI comes up with a specific result is turning AI into XAI, so it can explain its own functioning. So, I think the first hurdle to take on the way to a set of AI rights (at present) is a code of ethics for the people coding the current algorithms, as chances are that their sets of rules will get written into the AI anyway.<br \/>\nThis idea is, of course, a long-standing sci-fi challenge. The best-known example is probably the three laws of robotics, which Asimov listed in the process of providing \u2018rights\u2019 to AI. But the definitions in these three laws are open to an array of interpretations. Essentially, based on these laws, it would be perfectly normal to put humanity in one specific location (a zoo) and keep it sedated while offering augmented reality (think of the pod scene from The Matrix), thus keeping humanity safe from harm. It might prove to be a conundrum.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>What are you looking forward to at the MidSummit conference?<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>I\u2019m definitely looking forward to meeting the network of leaders involved in the online learning and training field. In this quickly evolving line of work, you can no longer wait until a reference book comes out or a set of best practices is established. In a way, we as professionals live in a constant beta world. Change is constant, rapid, and impactful. As a professional you need to be adaptive, which means you need to keep on top of what is happening, who is doing what, with which effect, and why. This is why I\u2019m looking forward to the MidSummit. 
I\u2019m certain it will provide me with additional, sometimes conflicting ideas that will ignite new knowledge.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Dr Inge de Waard works as a strategic instructional designer for InnoEnergy Europe and is an avid enthusiast for open science. Her focus and knowledge spans many areas in the digital learning sphere: from MOOCs to individual online learning types, self-directed learning to questions of ethics in Artificial Intelligence. At OEB MidSummit (June 8 &#8211; [&hellip;]<\/p>\n","protected":false},"author":26,"featured_media":7641,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","_links_to":"","_links_to_target":""},"categories":[209],"tags":[514,54,671,649],"class_list":["post-7639","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","tag-artificial-intelligence","tag-education","tag-ethics-and-technology","tag-oeb-midsummit"],"_links":{"self":[{"href":"https:\/\/oeb.global\/oeb-insights\/wp-json\/wp\/v2\/posts\/7639","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/oeb.global\/oeb-insights\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/oeb.global\/oeb-insights\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/oeb.global\/oeb-insights\/wp-json\/wp\/v2\/users\/26"}],"replies":[{"embeddable":true,"href":"https:\/\/oeb.global\/oeb-insights\/wp-json\/wp\/v2\/comments?post=7639"}],"version-history":[{"count":9,"href":"https:\/\/oeb.global\/oeb-insights\/wp-json\/wp\/v2\/posts\/7639\/revisions"}],"predecessor-version":[{"id":7650,"href":"https:\/\/oeb.global\/oeb-insights\/wp-json\/wp\/v2\/posts\/7639\/revisions\/7650"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/oeb.global\/oeb-insights\/wp-json\/wp\/v2\/media\/7641"}],"wp:attachment":[{"href":"https:\/\/oeb.global\/oeb-insights\/wp-json\/wp\/v2\/media?parent=7639"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/oeb.global\/oeb-insights\/wp-json\/wp\/v2\/categories?post=7639"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/oeb.global\/oeb-insights\/wp-json\/wp\/v2\/tags?post=7639"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}