31.01.2024
FAL

Artificial Intelligence (AI) is rapidly being integrated into many industries, from streaming platforms and social media to finance, healthcare and transportation. Recent development trends include AI that learns by observing humans, AI-assisted diagnosis of X-rays, and the growing use of AI in smartphone apps. The technology is driving gains in productivity and efficiency across many fields, with some even suggesting AI could outperform humans at certain tasks.

However, we must also consider the significant philosophical implications for the nature of human intelligence and creativity. There is a very real concern that AI systems will progress to the point of sentience and so be able to act beyond human control. A survey found that 42% of CEOs believe AI has the potential to destroy humanity within five to ten years, highlighting how seriously the risks associated with AI are being taken [1].

Risks Posed by AI

Lack of human intervention, leading to safety risks

In 1979, a tragic incident at a Ford Motor Company plant in Michigan resulted in the death of a 25-year-old worker, Robert Williams, who was struck by an industrial robot arm while retrieving parts alongside the robot [2]. The lack of safety measures, including the absence of an alarm to warn workers of the robot's movement, was cited as a factor in the accident. Legal liability was attributed to Unit Handling Systems, a division of Litton Industries, which had designed the robot, and Williams' family was later awarded $10 million in damages. The case underscores the importance of stringent safety measures and regulations in the development and use of AI and industrial robots to protect human workers.

Interrupting an AI process, leading to human error

A 22-year-old contractor was tragically killed at a Volkswagen plant in Germany when a robotic arm he was helping to set up grabbed him and crushed him against a metal plate. The incident, which occurred on 1 July 2015, highlighted the potential dangers of human-robot collaboration and the need for stringent risk assessments before new equipment is introduced into the workplace [3]. Volkswagen stated that human error, rather than a fault in the robot itself, was the likely cause, and prosecutors were considering whether to bring charges. The case emphasises the importance of proper safety measures and thorough risk assessments when working with industrial robots, and it raises legal questions about liability in such incidents.

Generating misinformation

AI has been used to create deepfake videos and generate misinformation, undermining social trust and potentially causing harm to individuals and society [4]. In 2019, the CEO of an energy firm received a call from someone who sounded exactly like his boss, the chief executive of the firm's parent company. The "boss" ordered him to transfer $243,000 to a Hungarian supplier, which he did, because the voice's tone and "melody" sounded legitimate, even capturing the executive's subtle German accent [5]. It was only when the fraudster called repeatedly requesting more money, and the CEO noticed the calls were coming from an Austrian number, that he began to have doubts.

Need for Laws and Regulations

Ethical considerations loom large as AI becomes an integral part of society, touching many aspects of our lives. The Australian Government's June 2023 'Safe and Responsible AI in Australia' discussion paper underscores the nation's proactive stance. Existing regulation relevant to AI already covers data protection and privacy, the Australian Consumer Law, competition law, and online safety.

As the world grapples with the ethical dimensions of AI, Australia's legal framework serves as a foundation. However, the evolving nature of technology demands ongoing vigilance. The delicate balance between innovation and safeguarding societal values requires constant reassessment. In this uncharted terrain, policymakers must remain agile, ready to adapt legal mechanisms to ensure a harmonious coexistence between evolving AI technologies and the ethical principles that define us as humans.

In the realm of AI and robotics, the age-old philosophical concepts of consciousness and human identity face a formidable challenge. René Descartes' proclamation, "I think, therefore I am," takes on new dimensions as AI begins to exhibit capabilities that emulate human thought processes. The profound implications of this technological evolution necessitate a recalibration of our understanding of what it means to be human.

Conclusion

Effectively navigating the intricate realm of AI demands a collective commitment to keeping up with regulatory frameworks and safety protocols. As the pace of AI development accelerates, it becomes increasingly crucial for society to engage proactively with the ethical and safety dimensions of this transformative technology. Recognising the potential pitfalls while appreciating the vast opportunities AI presents ensures a holistic and informed approach to its integration into our daily lives.

Ultimately, embracing AI as a valuable tool for the future requires a nuanced understanding of its potential impact, coupled with a steadfast commitment to establishing robust regulations that safeguard against potential harm. By doing so, we can harness the full potential of AI while responsibly addressing the challenges that come with its continued advancement.


[1] https://edition.cnn.com/2023/06/14/business/artificial-intelligence-ceos-warning/index.html

[2] https://www.theatlantic.com/technology/archive/2023/09/robot-safety-standards-regulation-human-fatalities/675231/

[3] https://www.washingtonpost.com/news/morning-mix/wp/2015/07/02/robot-grabs-man-kills-him-in-german-car-factory/

[4] https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/?sh=4dddf08f2706

[5]

Interested to find out more? Feel free to contact us today.