<?xml version="1.0" encoding="UTF-8" ?><!-- generator=Zoho Sites --><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><atom:link href="https://www.nathean.com/blogs/feed" rel="self" type="application/rss+xml"/><title>Nathean Analytics - Blog</title><description>Nathean Analytics - Blog</description><link>https://www.nathean.com/blogs</link><lastBuildDate>Wed, 11 Feb 2026 04:29:51 +0100</lastBuildDate><generator>http://zoho.com/sites/</generator><item><title><![CDATA[Do Robots Dream of an Ethical Future?]]></title><link>https://www.nathean.com/blogs/post/do-robots-dream-of-an-ethical-future</link><description><![CDATA[<img align="left" hspace="5" src="https://www.nathean.com/images/ai.jpeg"/>Science fiction writers write about dystopian futures as warnings to us and include glimpses of what the future might look like socially and technologically. The work being done in AI today will shape tomorrow, and so too will the legislation, regulations, and legal frameworks we set down now...]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_jgTXCdkoSWy1Bs3k3Ms3Pg" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_m15rCw7SQwCWaF6vgZB-7Q" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_YaoUuwrcQrCW_W4Y9i_PPg" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"> [data-element-id="elm_YaoUuwrcQrCW_W4Y9i_PPg"].zpelem-col{ border-radius:1px; } </style><div data-element-id="elm_-Z-Vv51XTbm9vXf9g8hgpg" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_-Z-Vv51XTbm9vXf9g8hgpg"].zpelem-text { border-radius:1px; } </style><div 
class="zptext zptext-align-center " data-editor="true"><div style="color:inherit;text-align:left;"><span style="font-style:italic;">Article by Maurice Lynch (CEO – Nathean) for HCAIM<br><br></span></div>
<div style="text-align:left;color:inherit;"><div style="color:inherit;"><p style="margin-bottom:9.5px;font-size:15px;"><span style="font-weight:700;"><em>“</em><em>Creating specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of “electronic persons” responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently;”</em></span><em></em><em><br></em><em>– &nbsp;EU Parliament resolution 2017 with recommendations to the Commission on Civil Law Rules on Robotics.</em><em><span style="font-size:11.25px;">[1]</span></em></p><p style="margin-bottom:9.5px;font-size:15px;">In 1948 Harold S. Osborne<span style="font-size:11.25px;">[2]</span>, a senior engineer at AT&amp;T and specialist in wireless technology and telephony, was asked what he thought of the newly invented transistor and its implications for the future – he speculated that “<em>..whenever a baby is born anywhere in the world, they will be given, at birth, a number which will be their telephone number for life. As soon as they can talk, they will be given a watch-like device with ten little buttons on one side and a screen on the other. Thus equipped, at any time when they wish to talk with anyone in the world, they will pull out the device and punch on the keys the number of his friend. Then turning the device over, they will hear the voice of their friend and see their face on the screen, in colour and in three dimensions…”</em>. Quite a prediction and it could be argued that children are now given a “phone number for life” at the age of 13 or younger. The only thing missing in that prophecy from today’s world is the 3D projection on your wrist – but maybe with VR goggles or smart glasses, it is here in some form. 
What is remarkable is the level of precision, as future predictions are notoriously way off in terms of accuracy, impact, and timeline.</p><p style="margin-bottom:9.5px;font-size:15px;">In a 1972 talk<span style="font-size:11.25px;">[3]</span>, science fiction writer Philip K. Dick (PKD) dismissed Osborne’s prediction – “it’s not going to happen” – citing that kids at the time were hacking and bypassing the telephone companies, negating the need for such a device. What of PKD’s own predictions of future technologies? Four years previously, PKD had written the novel “Do Androids Dream of Electric Sheep?”, better known by its film title “Blade Runner”. It is set in 2021, where androids (or “replicants”) are manufactured to such a high degree of technical sophistication that it is near impossible to distinguish them from humans in almost every aspect. Specific tests are developed to gauge the replicants’ emotional response to certain questions involving empathy, monitoring the&nbsp;<em>“capillary dilation of the so-called blush response…”</em>. One of the more interesting themes is how androids feel and view the world. Are they even aware they are androids?</p><p style="margin-bottom:9.5px;font-size:15px;">When you consider the current state of robotics and general AI in comparison to the androids in Blade Runner, we are barely at the primordial-soup stage, and yet ethics and robotics are entering discussions now, albeit from the perspective of the ethical use of robots and AI. The EU Parliament resolution<span style="font-size:11.25px;">[1]</span>&nbsp;with recommendations to the Commission on Civil Law Rules on Robotics recommends creating a specific status for robots as ‘electronic persons’, with specific rights and obligations, and applying it to cases where robots make decisions or interact with third parties.
According to the EU Parliament, attributing electronic personhood to robots would resolve legal issues where robots perform functions such as managing operations, delegating tasks, resolving complex issues and making decisions in real time. This approach has received a backlash from AI experts and others, with an open letter<span style="font-size:11.25px;">[4]</span>&nbsp;to the EU stating that&nbsp;<em>“From a technical perspective, this statement [</em>creating a specific status for robots as ‘electronic persons’]<em>&nbsp;offers many biases based on an overvaluation of the actual capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capacities and, a robot perception distorted by science-fiction and a few recent sensational press announcements. The legal status for a robot can’t derive from the Legal Entity model, since it implies the existence of human persons behind the legal person to represent and direct it. And this is not the case for a robot”.</em> Ironically, to sign the open letter you must prove you are not a robot. These are debates at the embryonic phase of AI’s evolution, with popular press articles on robot rights appearing under headlines such as <em>“2020: The Year of Robot Rights”, “Why We Should Show Machines Some Respect” and “Giving Robots Rights is a Dangerous Idea”</em>&nbsp;and so on<span style="font-size:11.25px;">[5]</span>. You would wonder if there will be a “Magna Carta” moment in the future, with a decree that “the&nbsp;<em>robot</em>&nbsp;is not above the law” – implying that the robot has legal status and is subject to the law – or does this remain solely in the realm of science fiction?</p><p style="margin-bottom:9.5px;font-size:15px;">More pressing, however, are the ethical issues around the use of AI and its impact on people, organisations and society. Therefore, talking about robot rights seems premature, if not distracting. 
There are real concerns about how the pervasiveness of AI will impact people’s lives and rights on a daily basis. Initiatives such as the Human Centred AI Masters programme (<a href="https://humancentered-ai.eu/">HCAIM</a>)<span style="font-size:11.25px;">[6]</span>&nbsp;strive to ensure that human values are central to how AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights.&nbsp;</p><p style="margin-bottom:9.5px;font-size:15px;">Science fiction writers write about dystopian futures as warnings to us and include glimpses of what the future might look like socially and technologically.&nbsp;The work being done in AI today will shape tomorrow, and so too will the legislation, regulations, and legal frameworks we set down now.</p><div style="font-size:15px;"><div><p style="margin-bottom:9.5px;"><span style="font-weight:700;">About the Author.</span></p><p style="margin-bottom:9.5px;">Maurice Lynch is CEO of&nbsp;<a href="http://www.nathean.com/">Nathean Analytics</a>, a company which specialises in the development of analytics software with a focus on LifeSciences and Healthcare. An experienced CEO, board member and technical leader, Maurice drives the strategic direction of the company and oversees its business operations while playing an active role in the company’s product direction. Nathean is a founding industry member of CeADAR – Ireland’s Centre for Applied AI – and served on its board for 5 years.</p><p style="margin-bottom:9.5px;">Maurice holds a B.Sc. in Computer Science from Dublin City University and has completed the Leadership4Growth program at Stanford University.</p></div>
<div><figure><br></figure></div></div><hr style="margin-bottom:19px;font-size:15px;"><p style="margin-bottom:9.5px;font-size:15px;">[1] EU Parliament recommendations to the Commission on Civil Law Rules on Robotics: –&nbsp;<a href="https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html">https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html<br></a>–<a href="https://www.europarl.europa.eu/committees/en/report-with-recommendations-to-the-commi/product-details/20170202CDT01121">https://www.europarl.europa.eu/committees/en/report-with-recommendations-to-the-commi/product-details/20170202CDT01121<br></a><a href="https://www.etui.org/sites/default/files/Foresight_Brief_02_EN.pdf">https://www.etui.org/sites/default/files/Foresight_Brief_02_EN.pdf<br></a>[2] Harold S. Osborne&nbsp; –<a href="https://ethw.org/Harold_Osborne">&nbsp;https://ethw.org/Harold_Osborne<br></a>[3] The Android and the Human&nbsp;<a href="https://genius.com/Philip-k-dick-the-android-and-the-human-annotated">https://genius.com/Philip-k-dick-the-android-and-the-human-annotated<br></a>[4] Open letter to EU<a href="http://www.robotics-openletter.eu/">&nbsp;http://www.robotics-openletter.eu/<br></a>[5] Human rights for robots? A literature review<a href="https://link.springer.com/article/10.1007/s43681-021-00050-7#Sec15">&nbsp;https://link.springer.com/article/10.1007/s43681-021-00050-7#Sec15<br></a>[6] HCAIM –<a href="https://humancentered-ai.eu/">&nbsp;https://humancentered-ai.eu/</a></p></div>
</div></div></div><div data-element-id="elm_vRrphnSfTdmv-RjE-xfKSw" data-element-type="button" class="zpelement zpelem-button "><style> [data-element-id="elm_vRrphnSfTdmv-RjE-xfKSw"].zpelem-button{ border-radius:1px; } </style><div class="zpbutton-container zpbutton-align-center "><style type="text/css"></style><a class="zpbutton-wrapper zpbutton zpbutton-type-primary zpbutton-size-md zpbutton-style-none " href="/blogs"><span class="zpbutton-content">Read More Articles</span></a></div>
</div></div></div></div></div></div>]]></content:encoded><pubDate>Tue, 20 Dec 2022 10:57:00 +0000</pubDate></item><item><title><![CDATA[The Acceleration of Ethics and Governance for Artificial Intelligence]]></title><link>https://www.nathean.com/blogs/post/the-acceleration-of-ethics-and-governance-for-artificial-intelligence</link><description><![CDATA[<img align="left" hspace="5" src="https://www.nathean.com/images/gce3979dd3bf26e8b4fea4caa8fe251c9b250484a65f0f500c7cd1797832f1bfecc45516aed77172115e7d0229177929b928fc400a3a573822c01b697c9675384_1280.jpg"/>Ethics and regulatory compliance are being pushed to the fore for AI such as the EU AI Act, the EU Medical Device Regulation and the UK AI Policy Paper all of which aim to bring a harmonised approach to governance and accountability for AI.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_jgTXCdkoSWy1Bs3k3Ms3Pg" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_m15rCw7SQwCWaF6vgZB-7Q" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_YaoUuwrcQrCW_W4Y9i_PPg" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"> [data-element-id="elm_YaoUuwrcQrCW_W4Y9i_PPg"].zpelem-col{ border-radius:1px; } </style><div data-element-id="elm_-Z-Vv51XTbm9vXf9g8hgpg" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_-Z-Vv51XTbm9vXf9g8hgpg"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-center " data-editor="true"><div><div style="color:inherit;text-align:left;"><span style="font-style:italic;">Article by Maurice Lynch (CEO – Nathean) and Dr Alireza Dehghani (Technical Program Manager – CeADAR)</span></div>
<div style="text-align:left;"><br></div><div style="text-align:left;color:inherit;"><span style="color:inherit;font-weight:bold;">Regulation</span><br></div>
<div style="text-align:left;"><br></div><div style="text-align:left;color:inherit;"> Ethics and regulatory compliance are being pushed to the fore for AI, with instruments such as the EU AI Act (2021) [1], the EU Medical Device Regulation (2017) [2] and the UK AI Policy Paper [3], all of which aim to bring a harmonised approach to governance and accountability for AI. </div>
<div style="text-align:left;"><br></div><div style="text-align:left;color:inherit;"> Regulation is needed to provide ethical, legal and technical frameworks for dealing with situations where decisions made by AI (either directly or indirectly) affect an individual, an organisation or society in impactful ways. </div>
<div style="text-align:left;"><br></div><div style="text-align:left;color:inherit;"><span style="font-weight:bold;">Ethics</span></div>
<div style="text-align:left;"><br></div><div style="text-align:left;color:inherit;"> Consider an extreme example in healthcare, where an AI-based application (a Class III Software as a Medical Device under EU MDR[2]) recommends the wrong course of treatment, inadvertently causing death or an irreversible deterioration of a patient’s state of health. Other examples, less extreme but impactful nonetheless for the individual, include the declined loan application, the rejected job application, and being falsely identified through face recognition software, leading to arrest. All of this accentuates the need for greater clarity, explainability, accountability, trust, fairness and regulation. </div>
<div style="text-align:left;"><br></div><div style="text-align:left;color:inherit;"><span style="font-weight:bold;">Risks</span></div>
<div style="text-align:left;"><br></div><div style="text-align:left;color:inherit;"> With such regulations coming into play across the globe, and as AI continues along its maturity curve, companies and organisations large and small need to take a holistic view of their use of AI, whether in their own AI end-user products or in internal AI tools for employees. More stakeholders need to be actively aware of the potential risks and rewards of AI, from the data scientist to the CEO, who is ultimately responsible for the liabilities and reputation of the company. Financial penalties can be significant, with fines of up to 6% of turnover or €30M[4] under the EU AI Act. Bias within models is also a major concern, and a minefield for companies, as in the case of Amazon’s AI-based recruitment tool, which was scrapped due to its bias against female candidates[5]. </div>
<div style="text-align:left;"><br></div><div style="text-align:left;color:inherit;"><span style="color:inherit;font-weight:bold;">Governance</span><br></div>
<div style="text-align:left;"><br></div><div style="text-align:left;color:inherit;"> Meeting regulatory requirements and addressing ethical concerns is complex and requires a multi-disciplinary approach in going from concept to releasing models into production. One of the critical components of developing robust production models is the quality of the data used to train them in the first place. Ethics plays a key role in the development of training datasets, with new tools and techniques being researched and developed, such as Privacy-Preserving Machine Learning (PPML) – an umbrella term for Privacy Enhancing Technologies (PETs) that can protect individuals’ data privacy in data analysis. With vast amounts of data being collected from online and offline sources, significant challenges in preserving privacy have emerged, and both industry and academia are trying to catch up. </div>
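One of the best-known PETs under the PPML umbrella is differential privacy. As a purely illustrative sketch (the function names and parameters below are my own, not from any specific library or from the programme mentioned above), the classic Laplace mechanism adds calibrated noise to a query result so that the presence or absence of any one individual in the dataset cannot be reliably inferred:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample zero-mean Laplace noise via the inverse-CDF transform."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(values, predicate, epsilon: float) -> float:
    """Answer a counting query with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: how many patients have a reading above 10?
readings = [12, 3, 45, 7, 30]
noisy_answer = private_count(readings, lambda v: v > 10, epsilon=0.5)
```

A smaller epsilon means more noise and stronger privacy; production systems would use a vetted library rather than hand-rolled noise, but the principle is the same.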
<div style="text-align:left;"><br></div><div style="text-align:left;color:inherit;"><span style="color:inherit;font-weight:bold;">Summary</span><br></div>
<div style="text-align:left;"><br></div><div style="text-align:left;color:inherit;"> The new regulations around AI have yet to be tested in the courts. As with GDPR, these acts and regulations ultimately aim to protect individuals and to provide recourse to action when they are adversely affected. There is a balancing act between maintaining the momentum of innovation, protecting the end-user of AI and keeping an ethical focus. </div>
<div style="text-align:left;"><br></div><div style="text-align:left;color:inherit;"> Academia and industry play a vital role in shaping the future of ethical AI. One such EU initiative is the Human Centred AI Masters programme (HCAIM) – a consortium which follows the definition of the AI HLEG (the European Commission’s High-Level Expert Group on Artificial Intelligence): “The human-centric approach to AI strives to ensure that human values are central to how AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights”. It puts human values, rights and privacy at the centre of the AI development lifecycle, and these aspects, including risks, must be carefully considered and assessed at every stage of AI development. </div>
</div><div style="text-align:left;color:inherit;"><br></div><div style="text-align:left;color:inherit;"> ........................................................... </div>
<div style="text-align:left;color:inherit;"><br></div><div style="text-align:left;color:inherit;"><div style="color:inherit;"><ol><li><span style="font-size:12px;">EU – Regulation of The European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence<span style="font-weight:700;">&nbsp;</span>(2021). Available online at:&nbsp;<a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206">https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206</a></span></li><li><span style="font-size:12px;">EU – Regulation 2017/745 of the European Parliament and of the Council on Medical Devices (2017)&nbsp;Available online at:&nbsp;<a href="https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX%3A32017R0745&amp;from=EN">https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32017R0745&amp;from=EN</a></span></li><li><span style="font-size:12px;">UK – Establishing a Pro-innovation Approach to Regulating AI (2022). Available online at:&nbsp;<a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1092630/_CP_728__-_Establishing_a_pro-innovation_approach_to_regulating_AI.pdf">https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1092630/_CP_728__-_Establishing_a_pro-innovation_approach_to_regulating_AI.pdf</a></span></li><li><span style="font-size:12px;">EU – AI fines to 6% of turnover (2021). Available online at:&nbsp;<a href="https://www.reuters.com/technology/eu-set-ratchet-up-ai-fines-6-turnover-eu-document-2021-04-20/">https://www.reuters.com/technology/eu-set-ratchet-up-ai-fines-6-turnover-eu-document-2021-04-20/</a></span></li><li><span style="font-size:12px;">Amazon scraps secret AI recruiting tool that showed bias against women (2018). 
Available online at:&nbsp;<a href="https://www.rte.ie/news/business/2018/1010/1002144-amazon-ai-bias/">https://www.rte.ie/news/business/2018/1010/1002144-amazon-ai-bias/</a></span></li></ol></div>
</div></div></div><div data-element-id="elm_442m6QV1fKXW1UymavRszQ" data-element-type="button" class="zpelement zpelem-button "><style> [data-element-id="elm_442m6QV1fKXW1UymavRszQ"].zpelem-button{ border-radius:1px; } </style><div class="zpbutton-container zpbutton-align-center "><style type="text/css"></style><a class="zpbutton-wrapper zpbutton zpbutton-type-primary zpbutton-size-md zpbutton-style-none " href="/blogs"><span class="zpbutton-content">Read More Articles</span></a></div>
</div></div></div></div></div></div>]]></content:encoded><pubDate>Thu, 24 Nov 2022 16:03:00 +0000</pubDate></item><item><title><![CDATA[The Designed-In Dangers of AI]]></title><link>https://www.nathean.com/blogs/post/the-designed-in-dangers-of-ai</link><description><![CDATA[<img align="left" hspace="5" src="https://www.nathean.com/images/gc73d4f4565b4711d887ca87ad44cc8bbe19ec4882644b8185b5e007fcb9e71bedf7ae4c4543aa299ef48a9118db0fe88c5f45bd362a9a702c48af1d72a9c8ce5_1280.jpg"/>To address risks with AI, it is important to design AI systems with safety, transparency, and accountability in mind, and to implement appropriate safeguards and regulations to ensure their responsible use.]]></description><content:encoded><![CDATA[<div class="zpcontent-container blogpost-container "><div data-element-id="elm_jgTXCdkoSWy1Bs3k3Ms3Pg" data-element-type="section" class="zpsection "><style type="text/css"></style><div class="zpcontainer-fluid zpcontainer"><div data-element-id="elm_m15rCw7SQwCWaF6vgZB-7Q" data-element-type="row" class="zprow zprow-container zpalign-items- zpjustify-content- " data-equal-column=""><style type="text/css"></style><div data-element-id="elm_YaoUuwrcQrCW_W4Y9i_PPg" data-element-type="column" class="zpelem-col zpcol-12 zpcol-md-12 zpcol-sm-12 zpalign-self- "><style type="text/css"> [data-element-id="elm_YaoUuwrcQrCW_W4Y9i_PPg"].zpelem-col{ border-radius:1px; } </style><div data-element-id="elm_-Z-Vv51XTbm9vXf9g8hgpg" data-element-type="text" class="zpelement zpelem-text "><style> [data-element-id="elm_-Z-Vv51XTbm9vXf9g8hgpg"].zpelem-text { border-radius:1px; } </style><div class="zptext zptext-align-center " data-editor="true"><div style="color:inherit;text-align:left;"><span style="font-style:italic;">Article by Maurice Lynch (CEO – Nathean) for HCAIM</span></div>
<div style="text-align:left;"><br></div><div style="text-align:left;color:inherit;"><div style="color:inherit;"><p style="margin-bottom:9.5px;font-size:15px;"><span style="font-weight:700;"><em>To address risks with AI, it is important to design AI systems with safety, transparency, and accountability in mind, and to implement appropriate safeguards and regulations to ensure their responsible use.</em></span></p><h2 style="margin-bottom:9.5px;font-size:32px;"><span style="font-weight:700;">Introduction</span></h2><p style="margin-bottom:9.5px;font-size:15px;">Henry Ford did not invent the internal combustion engine, gasoline, or steel, but he was able to use these existing technologies to build the first mass-market motor car, the Model-T, and launch a new industry. He also perfected assembly line manufacturing and maximised existing supply chains to execute on his vision of a people’s car. This phenomenon is characteristic of innovators who are in the right environment, with the right vision and the will to succeed. The car industry rapidly expanded with little or no regulation around safety, the priority being affordability and profit. It took around 40 years for safety to be taken seriously, with Volvo’s introduction of the 3-point safety belt in 1959.</p><p style="margin-bottom:9.5px;text-align:center;font-size:15px;">Source:&nbsp;<a href="https://www.weforum.org/agenda/2015/04/how-can-we-improve-road-safety-in-our-cities/">https://www.weforum.org/agenda/2015/04/how-can-we-improve-road-safety-in-our-cities/</a></p><p style="margin-bottom:9.5px;font-size:15px;">In 1965, Ralph Nader’s book&nbsp;<em>“Unsafe at Any Speed: The Designed-In Dangers of the American Automobile”</em><span style="font-size:11.25px;">[1]</span>&nbsp;became a bestseller and played a significant role in highlighting the dangers in some American cars, such as the Chevrolet Corvair, which had a tendency for the suspension to ‘tuck in’ under the car in particular circumstances. 
The book and the subsequent public debate led to the establishment of the United States Department of Transportation in 1966. By 1968, seat belts, padded dashboards, and other safety features were mandatory in cars. Interestingly, the death rate per million miles travelled was already decreasing before tighter regulations were put in place, a point argued by industry observers at the time.&nbsp;</p><p style="margin-bottom:9.5px;font-size:15px;">The car industry has evolved to the point where regulation and safety are aligned but can fall out of alignment with the advent of new innovations as was the case with airbags which are now standard.</p><h2 style="margin-bottom:9.5px;font-size:32px;"><span style="font-weight:700;">Designed-In Dangers of AI</span></h2><p style="margin-bottom:9.5px;font-size:15px;">AI has seen rapid growth over the past decade, but it has also raised concerns about the potential dangers of AI systems. To address these concerns, some have proposed safety measures, such as the EU AI Act (2021)<span style="font-size:11.25px;">[2]</span>, which aims to regulate the use of AI in Europe.</p><p style="margin-bottom:9.5px;font-size:15px;">There are several designed-in dangers associated with AI that may prevent it from being ethical, transparent, and trustworthy. For example, AI systems can be prone to bias if they are not trained on diverse and representative data, or they may not be explainable, making it difficult to understand how they reached a particular decision. Additionally, AI systems can be vulnerable to hacking or other forms of malicious attacks, which could compromise their integrity and reliability. 
To address these risks, it is important to design AI systems with safety, transparency, and accountability in mind, and to implement appropriate safeguards and regulations to ensure their responsible use.</p><ol><li><span style="font-weight:700;">Bias</span></li></ol><p style="margin-bottom:9.5px;font-size:15px;">An example of bias can be demonstrated using the open-source PULSE system<span style="font-size:11.25px;">[3]</span>, a generative model that takes low-resolution images as input and searches for high-resolution images that are perceptually realistic. This system has been shown to generate images with a bias towards features that appear ethnically white. For instance, when given a low-resolution input image of President Barack Obama, the PULSE system tends to generate outputs that depict him with white features, as seen in the image below.</p><p style="margin-bottom:9.5px;text-align:center;font-size:15px;"><img alt="Chart Description automatically generated" src="https://lh6.googleusercontent.com/RukW971ZPp10LO64ZE1A07-ZFWtq-C2-JudNO3_eed0-fMqkuOxoXlwvf42HfRLvI5cLAZ8pjVl-P4r8gZY3fKjgR3j9AvaEEt9P3OPOocWv3o3BZcAnwyUWpe_Cr9zW8-H9keYzePKZkDep0nHi9EtvxNa1lpwpUElQkyRh6Qtbfn5b6zUpAa7RLS5xvd03Uko" width="446" height="233"><span style="font-weight:700;"><br></span>Source<span style="font-weight:700;">:&nbsp;</span><a href="https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias">https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias</a></p><p style="margin-bottom:9.5px;font-size:15px;">It is common for minority groups to be underrepresented in data sets compared to the wider population. This can lead to bias in AI systems that are trained on such data, as the systems may not have enough information about the minority groups to accurately represent them. 
This is a fundamental issue with AI technology, as the quality and diversity of the data used to train the system can have a significant impact on its performance and fairness. To avoid bias and ensure that AI systems are inclusive and equitable, it is important to use diverse and representative data when training these systems.&nbsp;</p><p style="margin-bottom:9.5px;font-size:15px;">With face recognition, even if the probability of an incorrect face match is low, ethical concerns around AI systems that use facial recognition technology remain. In such cases, individuals must have the right to seek recourse through the legal system to protect their rights and interests. This highlights the need for AI systems that use facial recognition technology to be designed and implemented in a way that respects individuals’ privacy and rights.</p><p style="margin-bottom:9.5px;font-size:15px;">Over time, the level of bias in AI systems can be reduced through the use of better source data, improved algorithms, human feedback, industry input<span style="font-size:11.25px;">[4]</span>, and supporting legislation.</p><ol start="2"><li><span style="font-weight:700;">Trust</span></li></ol><p style="margin-bottom:9.5px;font-size:15px;">The question of how the human concept of trustworthiness can be applied to AI systems is a contentious issue in the field of AI. The European Commission’s High-level Expert Group on AI (HLEG) has proposed that a relationship of trust with AI should be built and that we should strive to create trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI<span style="font-size:11.25px;">[5]</span>). 
However, in his paper&nbsp;<em>“In AI We Trust: Ethics, Artificial Intelligence, and Reliability”</em><span style="font-size:11.25px;">[6]</span>, Mark Ryan argues that AI cannot be considered trustworthy because it is simply a set of software development techniques, and trust is a uniquely human trait. Overall, he proposes that&nbsp;<em>“proponents of AI ethics should abandon the ‘trustworthy AI’ paradigm as it is too fraught with problems, replacing it with the reliable AI approach, instead. The field should instead place a greater emphasis on ensuring that organisations using AI, and individuals within those organisations, are trustworthy”.&nbsp;</em></p><ol start="3"><li><span style="font-weight:700;">Fairness, Transparency and Accountability&nbsp;</span></li></ol><p style="margin-bottom:9.5px;font-size:15px;">Individuals have the right to privacy, and the&nbsp;<a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02016R0679-20160504&amp;qid=1532348683434">EU GDPR Regulation</a>&nbsp;helps protect this right by giving people control over their personal data and how it is used by others. Technical methods like Differential Privacy<span style="font-size:11.25px;">[7]</span>&nbsp;can be used to safeguard individuals’ privacy while still allowing their data to be used for research purposes. However, not everyone may be equally concerned about the privacy aspect of their data. They may be more concerned about how their data is being used and whether it is being used for good and in a fair manner.&nbsp;</p><p style="margin-bottom:9.5px;font-size:15px;">One challenge with using AI in certain contexts is the potential for adverse consequences that may affect the willingness of individuals or groups to share their data. 
For example, farmers who share their data with a central data aggregator may not be willing to do so if that data is subsequently used in a way that harms their income (which may be difficult to prove). Similarly, patients who consent to the use of their data for clinical research may only do so if the data is used for the general good of all patients, and not just to develop expensive drugs that only wealthy patients can afford. This highlights the need for transparency and accountability in the use of AI, to ensure that the data is used in a responsible and ethical manner.</p><ol start="4"><li><span style="font-weight:700;">The Alignment Problem</span></li></ol><p style="margin-bottom:9.5px;font-size:15px;">Human beings have a tendency to anthropomorphise things, including non-living objects and non-human creatures. We are drawn to designing robots that look like humans, even though this is not always necessary or functional. This tendency to anthropomorphise can be an interesting psychological phenomenon, but it is important to consider its practical implications and limitations when designing and using AI systems, especially in terms of the expectation that AI can reflect human values.</p><p style="margin-bottom:9.5px;font-size:15px;">In his book&nbsp;<em>“The Alignment Problem: Machine Learning and Human Values”</em><span style="font-size:11.25px;">[8]</span>, Brian Christian explores the mismatch between human goals and behaviours and those of data-trained automated AI systems, complete with their biases and blind spots. The Alignment Problem is a crucial issue, as advanced AI systems can make decisions and take actions that can have significant impacts on our lives. Therefore, it is essential that we ensure these systems align with our goals and values. 
This can involve carefully designing and training AI systems to reflect our values and goals, as well as incorporating human oversight and feedback into the decision-making process of AI systems. Aligning AI systems with our values is essential if they are to be safe, effective, and beneficial for society.</p><h2 style="margin-bottom:9.5px;font-size:32px;"><span style="font-weight:700;">Conclusion</span></h2><p style="margin-bottom:9.5px;font-size:15px;">AI has the potential to bring significant benefits, but it also carries inherent risks and dangers that must be addressed. Just as the car industry has had to adapt to safety regulations, the use of AI will need to be subject to a growing set of rules and standards that balance the need for innovation with the need for safety. These regulations should define safety measures for AI (the seat belts and airbags, as it were) that would be mandatory and essential for all AI systems. These safety measures should focus on protecting humans from harm, rather than protecting the AI itself. By addressing the designed-in dangers of AI and implementing appropriate safeguards, we can ensure that AI technology is used in a responsible and ethical manner.</p><p style="margin-bottom:9.5px;font-size:15px;">As AI is still in its infancy, there are many initiatives and programmes underway to promote the ethical use of AI. 
The Human Centred AI Masters programme (<a href="https://humancentered-ai.eu/">HCAIM</a>)<span style="font-size:11.25px;">[9]</span>&nbsp;strives to ensure that human values are at the core of how AI systems are developed, deployed, used, and monitored.&nbsp;</p><div style="font-size:15px;"><div><p style="margin-bottom:9.5px;"><span style="font-weight:700;">About the Author.</span></p><p style="margin-bottom:9.5px;">Maurice Lynch is CEO of&nbsp;<a href="http://www.nathean.com/">Nathean Analytics</a>, a company which specialises in the development of analytics software with a focus on Life Sciences and Healthcare. An experienced CEO, board member, and technical leader, Maurice drives the strategic direction of the company and oversees its business operations while playing an active role in the company’s product direction. Nathean is a founding industry member of CeADAR – Ireland’s Centre for Applied AI – and served on its board for 5 years.</p><p style="margin-bottom:9.5px;">Maurice holds a B.Sc. in Computer Science from Dublin City University and has completed the Leadership4Growth program at Stanford University.</p></div>
</div><hr style="margin-bottom:19px;font-size:15px;"><p style="margin-bottom:9.5px;font-size:15px;">[1] New York Times (2015) “50 Years Ago, ‘Unsafe at Any Speed’ Shook the Auto World”<br><a href="https://www.nytimes.com/2015/11/27/automobiles/50-years-ago-unsafe-at-any-speed-shook-the-auto-world.html">https://www.nytimes.com/2015/11/27/automobiles/50-years-ago-unsafe-at-any-speed-shook-the-auto-world.html</a></p><p style="margin-bottom:9.5px;font-size:15px;">[2] EU – Regulation of The European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (2021)<br><a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206">https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206</a></p><p style="margin-bottom:9.5px;font-size:15px;">[3] PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models<br><a href="https://github.com/adamian98/pulse#what-does-it-do">https://github.com/adamian98/pulse#what-does-it-do</a><br>The Verge (2020) – What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias. Source:&nbsp;<a href="https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias">https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias</a></p><p style="margin-bottom:9.5px;font-size:15px;">[4] IBM – Mitigating Bias in AI Models (2018). 
Source:&nbsp;<a href="https://www.ibm.com/blogs/research/2018/02/mitigating-bias-ai-models/">https://www.ibm.com/blogs/research/2018/02/mitigating-bias-ai-models/</a></p><p style="margin-bottom:9.5px;font-size:15px;">[5] EU – Ethics Guidelines for Trustworthy AI (2019)<br><a href="https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf">https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf</a></p><p style="margin-bottom:9.5px;font-size:15px;">[6] Ryan, M. In AI We Trust: Ethics, Artificial Intelligence, and Reliability.&nbsp;Sci Eng Ethics&nbsp;26, 2749–2767 (2020).&nbsp;<a href="https://doi.org/10.1007/s11948-020-00228-y">https://doi.org/10.1007/s11948-020-00228-y</a></p><p style="margin-bottom:9.5px;font-size:15px;">[7] Stanford University / Apple (2019) – Element Level Differential Privacy: The Right Granularity of Privacy. Source:&nbsp;<a href="https://arxiv.org/abs/1912.04042">https://arxiv.org/abs/1912.04042</a></p><p style="margin-bottom:9.5px;font-size:15px;">[8] Brian Christian, “The Alignment Problem: Machine Learning and Human Values”, W. W. Norton &amp; Company, 2020. Source:&nbsp;<a href="https://brianchristian.org/the-alignment-problem/">https://brianchristian.org/the-alignment-problem/</a></p><p style="margin-bottom:9.5px;font-size:15px;">[9] Human Centred AI Masters programme –<a href="https://humancentered-ai.eu/">&nbsp;https://humancentered-ai.eu/</a></p></div>
</div></div></div><div data-element-id="elm_2UVN4wj7WynOXRUNzDBz3w" data-element-type="button" class="zpelement zpelem-button "><style> [data-element-id="elm_2UVN4wj7WynOXRUNzDBz3w"].zpelem-button{ border-radius:1px; } </style><div class="zpbutton-container zpbutton-align-center "><style type="text/css"></style><a class="zpbutton-wrapper zpbutton zpbutton-type-primary zpbutton-size-md zpbutton-style-none " href="/blogs"><span class="zpbutton-content">Read More Articles</span></a></div>
</div></div></div></div></div></div>]]></content:encoded><pubDate>Tue, 21 Dec 2021 10:54:00 +0000</pubDate></item></channel></rss>