Is Artificial Intelligence safe to use?

Kishor Keshav
10 min read · Dec 18, 2022

Hello readers!

Today, the popularity of Artificial Intelligence is increasing across many areas of human life, as its human-like intelligence can solve many of today's challenging problems. This article gives a brief introduction to the field of Artificial Intelligence, discusses the risks and issues associated with it, and throws some light on whether AI is safe to use.

1 Introduction

Definition: Artificial Intelligence (AI) is a branch of computer science that deals with the development of computer systems that are capable of performing tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. AI technology is used in many areas, including robotics, medical diagnosis, marketing, self-driving cars, game playing, chatbots, etc.

Need for AI: As the human population continues to grow, artificial intelligence (AI) is becoming increasingly relevant. AI can be used to solve a variety of problems in healthcare, transportation, energy, and other areas. It can help improve the efficiency of processes, reduce costs, and make life easier for people. It can also be used to automate and streamline tasks, allowing humans to focus on more complex work. As the population grows, the demand for AI-powered services and products is likely to increase, creating more opportunities for businesses and individuals to leverage AI for their own benefit.

AI is required in today's world to help improve efficiency and accuracy in many aspects of our lives. AI can help automate tedious tasks, improve decision-making, and provide more accurate predictions, while also allowing us to better understand and analyse large data sets. AI can now be found in home assistant devices, such as Amazon Alexa and Google Home. In healthcare, AI is being used to identify diseases, diagnose illnesses, and develop treatments. In finance, AI is being used to identify fraud, optimize investments, and automate transactions. In retail, AI is being used to personalize customer experiences, recommend products, optimize inventory management, and automate customer service. In education, AI is being used to develop personalized learning plans for students, identify learning gaps in curriculums, and automate grading. AI is helping organizations become more efficient, improve customer service, and make better decisions. As AI technology continues to advance, more and more areas of our lives will benefit from its capabilities.

Explainable AI: It is a type of AI that is designed to explain its decisions, results, and processes to humans in a way that they can understand. This type of AI is commonly used in settings where transparency is key, such as in healthcare, financial services, and the public sector. Explainable AI is designed to provide insights that are easy to interpret and understand, and to serve as a bridge between humans and AI-powered systems.
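As a toy illustration of this idea (not any particular explainability library), one simple way to make a model's decision interpretable is to break a linear score into per-feature contributions that a human can inspect. The feature names, weights, and values below are entirely hypothetical:

```python
# Hypothetical example: explaining a linear risk score by per-feature
# contributions. All names and numbers are made up for illustration.

features = {"age": 45, "blood_pressure": 130, "cholesterol": 210}
weights = {"age": 0.02, "blood_pressure": 0.01, "cholesterol": 0.005}
bias = -2.0

# Each feature's contribution is weight * value; the score is their sum plus a bias.
contributions = {name: weights[name] * value for name, value in features.items()}
score = bias + sum(contributions.values())

# Rank features by how strongly they influenced the score, so a human
# can see *why* the system produced this result.
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"total score: {score:.2f}")
```

Real explainable-AI tooling is far more sophisticated, but the goal is the same: turn an opaque output into a ranked, human-readable account of what drove the decision.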

Responsible AI: Responsible AI is a philosophy and set of principles that promotes the use of AI in an ethical manner. Responsible AI seeks to ensure that AI is used in a way that respects human rights; maintains fairness, transparency, and accountability; and avoids potential harm to individuals, communities, and the environment. Responsible AI is also concerned with making sure that AI-powered systems are fair and accessible to all, and that they are designed to be as safe and secure as possible.

Principles of responsible AI:

Transparency — AI systems should be transparent, meaning that the decisions they make and the data they use to make these decisions should be clear and available to all stakeholders. This will allow for better accountability and trust in AI systems.

Privacy — User privacy is an important consideration when using AI systems. This includes protecting user data and ensuring that AI systems do not make decisions that could be used to discriminate against certain people.

Security — AI systems should be designed with security in mind. This includes taking steps to prevent unauthorized access to data, as well as protecting against malicious attacks.

Fairness — AI systems should be designed to make decisions that are fair and unbiased. Techniques such as data validation and data scrubbing can help ensure that AI systems are not making decisions based on biased or incomplete data.

Human Control — AI systems should be designed to be used in conjunction with humans, rather than replacing them. This will ensure that humans remain in control of the decision-making process.

Responsibility — Ultimately, the responsibility for any decisions made by AI systems should be taken by the people creating and using them. This includes taking responsibility for any mistakes made by the AI, as well as using the technology responsibly.
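The fairness principle above can be made concrete with a minimal, hypothetical data-validation step: before training a model, compare outcome rates across groups in the training data. The records and the 0.2 threshold below are invented for illustration:

```python
# Hypothetical fairness check on training data: compare positive-outcome
# rates across groups before any model is trained on the data.

records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(rows, group):
    subset = [r for r in rows if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_a = approval_rate(records, "A")  # 2 of 3 approved
rate_b = approval_rate(records, "B")  # 1 of 3 approved

# A large gap suggests the data (or the process that produced it) may be
# biased and should be investigated before training proceeds.
if abs(rate_a - rate_b) > 0.2:
    print(f"warning: approval rates differ ({rate_a:.2f} vs {rate_b:.2f})")
```

This is only one crude signal; real fairness auditing uses richer metrics and domain judgment, but the principle of validating data before trusting it is the same.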

2 Types of AI

There are several types of Artificial Intelligence (AI), which can be broadly categorized into the following categories:

Reactive Machines: These are the most basic form of AI, and they can only react to the environment based on their current state. They do not have the ability to form memories or use past experiences to inform their decisions. IBM’s Deep Blue, which defeated chess grandmaster Garry Kasparov in 1997, is an example of a reactive machine.

Limited Memory: These AI systems have a limited memory, and they can use past experiences to inform their current decisions. Autonomous vehicles that use cameras and sensors to detect and respond to their environment are an example of limited memory AI.

Theory of Mind: These AI systems would have the ability to understand and represent the mental states of others, such as beliefs, desires, and intentions. Such systems have not yet been developed, but researchers are working on them.

Self-Aware: These AI systems would have a sense of self-awareness and consciousness, understanding themselves and their place in the world. They have not yet been developed, and whether they can ever be achieved is a subject of debate among experts.

Symbolic AI: These AI systems use symbols, logic, and reasoning to make decisions and solve problems. They are based on a set of predefined rules and knowledge. Expert systems and natural language processing systems are examples of symbolic AI.

Statistical AI or machine learning: This is based on mathematical models and algorithms, which are used to learn patterns and relationships in data. It is more flexible and adaptable, but it can struggle with tasks that involve logical reasoning or symbolic manipulation.
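As a minimal sketch of what "learning patterns and relationships in data" means, the following fits a straight line to noisy points by ordinary least squares, using only the Python standard library. The data points are invented for illustration:

```python
# Minimal sketch of statistical learning: fit y = slope*x + intercept
# by ordinary least squares, using only the standard library.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.1, 4.9, 7.2, 8.8, 11.1]  # roughly y = 2x + 1 with a little noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates for slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(f"learned model: y = {slope:.2f}x + {intercept:.2f}")
```

The "model" here is just two numbers recovered from data rather than programmed by hand; modern machine learning scales this same idea to millions of parameters.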

Sub-Symbolic AI/Connectionist AI: These AI systems fall under machine learning techniques (statistical AI). Connectionist AI is a branch of artificial intelligence that focuses on creating intelligent systems that can learn from data and make decisions based on patterns and relationships in that data. It is inspired by the structure and function of the human brain and is based on the idea that intelligence can emerge from the interactions between simple, connected elements.
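The connectionist idea can be illustrated with a single artificial neuron: a few weighted connections whose strengths are adjusted repeatedly until the unit's behaviour matches the data. This is only a toy sketch (here, learning the logical OR function), not a production neural network:

```python
import math
import random

# One logistic "neuron" learning the OR function by gradient descent:
# intelligence emerging from adjusting simple connection strengths.

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # connection weights
b = 0.0                                             # bias term

def predict(x):
    # Weighted sum of inputs squashed through a sigmoid to (0, 1).
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

for _ in range(5000):           # many small weight updates
    for x, target in data:
        err = predict(x) - target   # gradient of the cross-entropy loss
        for i in range(2):
            w[i] -= 0.5 * err * x[i]
        b -= 0.5 * err

# After training, rounding the neuron's output reproduces OR.
for x, target in data:
    print(x, round(predict(x)))
```

No rule "output 1 if either input is 1" was ever written down; the behaviour emerged from the training loop, which is exactly the contrast with the predefined rules of symbolic AI.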

3 AI applications, risks and issues

Over the last decade, AI success stories have continued to excite all of us with their impressive capabilities to mimic human intelligence in various domains of human existence. At the same time, it is important to understand the risks associated with AI-based applications that solve critical real-world problems, for example in healthcare and the military. In the next sections we discuss the risks of using AI across different applications.

Judiciary: AI has the potential to revolutionize the judicial system, but it also poses unique risks. First, there is a risk of bias in AI. AI systems are trained on data sets, and if the data that is used to train the AI is biased, the AI itself will be biased. This could lead to decisions that are not based on justice and fairness, but on the bias of the data set. Second, AI systems are not perfect. Even if the data used to train them is unbiased, AI systems can still make mistakes or misinterpret data. In the case of the judiciary, this could lead to wrongful convictions or unjust decisions that would not be accepted in a traditional court system.

Autonomous/Self-driving Vehicles: AI systems are inherently unpredictable and their behaviour can be difficult to predict. This can lead to unexpected outcomes, such as the AI system making decisions that are not in line with human expectations. Additionally, autonomous vehicles are operating in a complex, dynamic environment and must be able to handle unexpected situations, such as sudden changes in road conditions or other drivers on the road. This can make it difficult for AI systems to accurately evaluate the situation and make decisions that are safe and appropriate. Additionally, AI systems are vulnerable to hacking and other malicious attacks, which can cause serious safety issues for the passengers and other drivers on the road. Finally, the use of AI in driving autonomous vehicles can raise ethical and legal questions, such as who is held responsible for any accidents that occur due to the AI system’s decisions.

Industrial IoT and Process automation: The use of Artificial Intelligence (AI) in Industrial Internet of Things (IIoT) offers a tremendous potential for automating and optimizing the industrial processes. However, it also poses some risks. One of the major risks is related to security. Industrial IoT networks are exposed to malicious attacks and data theft due to the large number of connected devices. AI algorithms can be used to detect and identify malicious activity, but they can also be exploited to launch attacks. Another risk is related to the accuracy of AI algorithms. As AI algorithms become more sophisticated, they can become too complex and difficult to understand, making it hard to predict their behaviour. Additionally, AI algorithms can be biased if they are trained on data sets that contain bias. This can lead to incorrect predictions and decisions which can have serious consequences.

The risk associated with using AI in process automation is that it can lead to errors due to its lack of human judgment. AI is programmed to complete tasks in the most efficient way possible, but it can make mistakes if it is not properly trained to recognize errors in data. Additionally, AI can be vulnerable to malicious attacks, as it can be used to manipulate data and create automated processes that are not in line with a company’s objectives. Finally, AI-powered automation can cause job displacement as machines can take over certain tasks that were traditionally done by human workers.

Robotics: The use of AI in robotics carries several risks. Firstly, AI-enabled robots are vulnerable to hacking, which could lead to a range of security issues such as the loss of data or the manipulation of the system. Secondly, AI-enabled robots could be used to create physical harm or damage, either intentionally or unintentionally. Thirdly, AI-enabled robots might be used to carry out activities that are socially or morally wrong due to their lack of understanding of the complexities of human behaviour. Lastly, AI-enabled robots could lead to economic disruption due to the cost of production and development, as well as the potential for job displacement.

Healthcare: Using AI in healthcare can present a number of risks. AI systems may not be able to detect all the medical conditions, leading to missed diagnoses or inaccurate treatments. AI algorithms may be biased if not properly calibrated, leading to different treatments for different patients based on their characteristics. AI systems may be vulnerable to cyber-attacks, which could compromise the privacy and security of patient data. AI-based systems may be difficult to interpret, making it difficult to explain decisions made by the system or to correct mistakes. Finally, AI systems may require a great deal of data to achieve accuracy, which may not be available in certain contexts or could be too expensive to collect in the healthcare domain.

Military: Artificial intelligence is playing a significant role in military warfare. Several AI applications are already being developed by the US and other countries for various military uses. Some of the risks involved in using AI for military applications are presented here.

AI systems are capable of making decisions independently, and unpredictable behaviour may occur when these decisions are made. This could lead to serious consequences in a military context. Autonomous weapons could be used to target and attack without human intervention. This raises serious ethical and legal questions about the use of AI in military applications. AI-powered systems can be vulnerable to cyber-attacks, which could compromise security and cause significant damage to military operations. AI systems will require access to large amounts of data, which could include sensitive personal information. This could lead to privacy concerns if the data is not safeguarded appropriately. AI-powered systems could cause unintended damage if they malfunction or misfire. This could lead to harm to civilians and property, as well as harm to military personnel.

Business: AI systems are vulnerable to security threats and cyber-attacks, which can result in the theft or destruction of sensitive customer data or proprietary business information. AI systems can make decisions that have legal implications for businesses. Companies can be held liable for the decisions made by their AI systems. As AI technology advances, governments may create new regulations to address potential risks. Companies must stay up-to-date on relevant regulations and be prepared to comply with them. AI systems can generate inaccurate results that can damage a company’s reputation. AI systems can be difficult to maintain and can cause unexpected operational delays. AI systems can make biased decisions or be used to promote unethical practices. Companies must be mindful of the ethical implications of their AI decisions.

4 Summary and conclusion

To summarize, AI is about mimicking human intelligence in a particular domain using complex computer programs. Although the development in AI over the past decade is truly mesmerizing, with advancements in machine learning and deep learning, developers and users should be aware of the risks in using it to solve real-world problems.

To conclude, the use of AI for mankind will be safe only if AI developers follow ethical practices and take utmost care in selecting the data on which a model is trained. An AI model must also be thoroughly tested for any lapses before it is deployed for use by human beings. In the future, governments across the globe need to devise regulations and safety standards for AI-based applications and gadgets, and also impose some checks and controls on the developers.

Thank you for reading this article. I hope this article has thrown some light on this aspect.

Thanks to OpenAI: This article has been written with the help of AI itself, using the text-davinci-003 model (with a temperature of 0.7) in the OpenAI Playground, to show its capabilities to the readers of Medium. You may play with this tool at https://beta.openai.com/playground
