AI Ethics: Are ethics needed to process data? Aditya Abeysinghe
Artificial Intelligence (AI) has become a common buzzword in many digital products. With AI, human behavior can be simulated, processes can be sped up, and dangerous or risky tasks can be automated. Over the past few years, AI has changed how users interact with products and services. It has been used for both good and bad, and by people who understand it as well as by those who do not. Given how widely AI is used today, when and where it should be used is a vital question.
What are AI ethics?
Ethics are the principles people apply when judging whether something is good or bad. Similarly, AI ethics are rules people should follow so that products using AI function properly. In AI and machine learning, data are usually the basis on which all models are built and tested. Therefore, AI ethics should cover ethics in data, in models built using AI, and in decisions made with those models.
Bias is the main concern when considering the ethics of AI models. Bias causes errors in model training, in model processing, and in the overall output. Bias in AI can arise for various reasons, such as errors in data preparation, errors in processing models, or intentional errors introduced to produce biased decisions from data. With data being the building block of many small- and large-scale products and services in tech, bias during data gathering or processing may cause unintended events and inaccurate output.
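One simple form of bias introduced during data preparation is class imbalance, where one outcome dominates the training data. The sketch below is a minimal illustration of checking for this; the loan-approval labels and the 20% threshold are hypothetical, chosen only for the example.

```python
from collections import Counter

def label_balance(labels):
    """Return each label's share of the dataset."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

# Hypothetical loan-approval labels, heavily skewed toward "approved".
labels = ["approved"] * 90 + ["denied"] * 10
shares = label_balance(labels)
print(shares)  # {'approved': 0.9, 'denied': 0.1}

# Flag labels that fall below a chosen threshold (assumption: 20%).
underrepresented = [label for label, share in shares.items() if share < 0.2]
print(underrepresented)  # ['denied']
```

A model trained on such data can reach high accuracy simply by predicting the majority class, which is why a balance check belongs in data preparation rather than after deployment.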
Why are AI ethics useful?
Abusive content generated using AI is a current trend. By combining AI with other techniques such as image editing and content modification, fake content has been generated to mislead people. This trend is not limited to misleading people: it is also used to spread rumors, generate profit, and even enable theft and other forms of illegal transactions. Applying ethics in AI could therefore help organizations and users identify such issues and reduce abuse of AI and related techniques.
When the data or models used in AI are less accurate, the output is also less accurate. AI models used for critical tasks such as fraud identification, cyber security, and disease detection need to be highly accurate in producing their output. Errors in either the data or the AI model could cause inappropriate detections and health risks for people. Applying AI ethics means that issues at either end can be reduced by sharing findings with other users who may reuse the same data or models.
How can AI ethics be enforced?
AI models are typically trained on datasets that are either publicly available or collected specifically for the model. If a publicly available dataset is used, bias is often less of an issue, as the dataset has been used in other models and any bias has been reported by other users. Users can also analyze such data for issues with data analysis methods before using it for training and testing. If data are collected specifically for the model, they can be checked for issues by measuring their quality. Therefore, analyzing data during preprocessing, before model training, and again after deployment is a common way to enforce the ethics of AI models.
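The quality checks mentioned above can be sketched as a few basic measurements: missing required fields, duplicate records, and implausible values. The fields, rows, and valid ranges below are hypothetical, chosen only to keep the example self-contained.

```python
def quality_report(rows, required, ranges):
    """Count missing fields, duplicate rows, and out-of-range values."""
    report = {"missing": 0, "duplicates": 0, "out_of_range": 0}
    seen = set()
    for row in rows:
        key = tuple(sorted(row.items()))
        if key in seen:
            report["duplicates"] += 1
        seen.add(key)
        for field in required:
            if row.get(field) is None:
                report["missing"] += 1
        for field, (low, high) in ranges.items():
            value = row.get(field)
            if value is not None and not (low <= value <= high):
                report["out_of_range"] += 1
    return report

# Hypothetical survey rows with deliberate defects.
rows = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 48000},   # missing age
    {"age": 34, "income": 52000},     # exact duplicate of the first row
    {"age": 210, "income": 61000},    # implausible age
]
report = quality_report(rows, required=["age", "income"], ranges={"age": (0, 120)})
print(report)  # {'missing': 1, 'duplicates': 1, 'out_of_range': 1}
```

Running such a report both before training and again on data collected after deployment matches the two-stage analysis the paragraph describes.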
Another drawback of many AI algorithms is the inability to explain their inner workings. AI models built from multiple layers are often opaque both to the users who use them and to the users who train them. It is difficult to explain why the inner processes of such models produced a specific accuracy or a biased output. As a result, users of these models often report that outputs are biased and do not produce the desired outcome. Techniques such as Explainable AI can be used to explain the outputs of such hidden functions.
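One widely used Explainable AI technique is permutation importance: shuffle one input feature and measure how much the model's accuracy drops, treating the model itself as a black box. The sketch below uses a hand-written rule in place of a trained model (an assumption, so the example stays self-contained); in practice `model` would be any opaque trained model.

```python
import random

# Stand-in for an opaque trained model (assumption for this sketch):
# approves when income > 50000 and age >= 21, ignores postcode entirely.
def model(row):
    income, age, postcode = row
    return 1 if income > 50000 and age >= 21 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, seed=0):
    """Shuffle one feature column; return the resulting drop in accuracy."""
    rng = random.Random(seed)
    shuffled_col = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled_col)
    shuffled_rows = [
        tuple(shuffled_col[j] if i == feature_idx else v for i, v in enumerate(r))
        for j, r in enumerate(rows)
    ]
    return accuracy(rows, labels) - accuracy(shuffled_rows, labels)

# Hypothetical (income, age, postcode) rows, labeled by the model itself.
rows = [(60000, 30, 100), (20000, 40, 200), (80000, 19, 300), (55000, 25, 400)]
labels = [model(r) for r in rows]

for idx, name in enumerate(["income", "age", "postcode"]):
    print(name, permutation_importance(rows, labels, idx))
```

Because the rule never reads the postcode, shuffling that column never changes predictions and its importance is exactly zero, which is the kind of evidence that helps users judge whether a model relies on a feature it should not.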
Image Courtesy: https://www.cxotoday.com/