Ethical software development - Challenge artificial intelligence
12 March, 2021 by manaTec GmbH, Sophia Grünig

In the last blog post on ethical software development, we looked at the relationship between software, its development, and ethics. Given the rapid integration of technology and software into all areas of our society, it quickly became apparent that the ethical dimension of development cannot be neglected. To reduce the risk of misuse, fraud, or other negative effects, clear ethical guidelines are needed to guide software developers so that they are not left carrying the full responsibility that comes with developing software. Although some codes of conduct already exist in the IT world, the topic still requires thorough discussion and integration in society, business, and law.

Artificial intelligence and robots are trend-setting technologies.

The development of artificial intelligence (AI) poses a particularly great challenge with regard to integrating ethical values and principles. AI is concerned with the automation of intelligent behavior and with machine learning - methods that enable a computer to solve tasks that would require intelligence if solved by humans. Although AI is a branch of computer science, its development draws on knowledge from biology, computational linguistics, mechanical engineering, psychology, and neuroscience.

The use of AI brings many benefits - it promises companies greater efficiency in work processes and thus cost savings, and individuals a higher quality of life and more free time, as robots and machines take over monotonous and repetitive tasks. AI is already being applied in medicine, where robots assist in complex operations as well as in the evaluation of medical results, for example in cancer detection. Robots also handle a large part of manufacturing in the automotive industry, contributing to error-free and more efficient workflows. Furthermore, AI-based systems can already pre-screen candidates in application processes, categorize customer inquiries and in some cases carry out consultations, or even determine which person will receive a loan and which will not.

The possibilities offered by the use and further development of AI hold great potential, which is far from exhausted and is viewed positively by many. Nevertheless, there remain concerns, uncertainty, and a lack of confidence in this forward-looking technology.

Some studies show that acceptance and social trust in AI are not particularly well developed worldwide. A survey commissioned by the World Economic Forum, in which more than 20,000 people in Germany, China, the USA, Saudi Arabia and a further 23 countries took part, illustrates this: The question "Do you worry about the use of artificial intelligence?" was answered in the affirmative by 41 percent of respondents, according to a dpa report on Zeit Online. As many as 19 percent of respondents were in favor of banning the use of artificial intelligence.

But what concerns lead to such a high rejection of AI? A major factor is the loss of control by users that comes with increasingly autonomous action by AI systems, along with the resulting question of responsibility in the event of damage. There are also risks regarding data security and privacy - training self-learning algorithms requires large amounts of data, which in turn opens the door to data misuse.

Above all, however, the limited transparency of how AI systems reach their results draws heavy criticism. Because new technologies based on machine learning are often a kind of black box, their decisions are frequently difficult to understand. Artificial intelligence merely recognizes patterns and structures within data, which do not necessarily stand in a causal relationship, and critics therefore see an increased risk of discrimination. An example would be unequal treatment when assessing a person's creditworthiness, because the data the AI is trained on might show that white men have a higher income on average than women, people with disabilities, or dark-skinned people. Similarly, erroneous and/or outdated data forming the basis for decision-making could have far-reaching and serious consequences.
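To make this concrete, the following is a minimal, hypothetical sketch (not taken from the post) of how a model that only recognizes statistical patterns can reproduce a historical skew in its training data, even though the protected attribute is never passed to it. All feature names, thresholds, and figures below are invented purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical protected attribute (group membership) - never shown to the model.
group = rng.integers(0, 2, n)

# Historically skewed training data: group 1 earns more on average (invented numbers).
income = rng.normal(loc=40_000 + 15_000 * group, scale=10_000, size=n)

# Past credit decisions correlate with income - and therefore, indirectly, with the group.
approved = (income + rng.normal(0, 5_000, n) > 50_000).astype(int)

# The model sees only income, yet it learns the historical pattern.
model = LogisticRegression(max_iter=1_000).fit(income.reshape(-1, 1), approved)
pred = model.predict(income.reshape(-1, 1))

for g in (0, 1):
    print(f"predicted approval rate for group {g}: {pred[group == g].mean():.2f}")

Running this prints clearly different approval rates for the two groups, even though the model was never told which group a person belongs to - the correlation in the historical data is enough, which is exactly the kind of non-causal pattern critics point to.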

When developing artificial intelligence, it is important to weigh risks and opportunities.

Due to their special characteristics, AI systems pose a challenge from an ethical perspective and are frequently criticized. To increase acceptance and trust in society, it is especially important and necessary in the case of artificial intelligence to define and implement ethical guidelines. These must not only be observed within the company and presented transparently to the outside world, but also be discussed across society as a whole.

Principles such as quality assurance of the data processed, protection of the privacy of those affected, reliability of the technologies used, traceability and verifiability of results, and clearly defined responsibilities must be anchored in a suitable framework so that self-learning systems such as AI can contribute to a value-oriented development of society.


Sources: www.plattform-lernende-systeme.de, www.wikipedia.org, www.wirtschaftslexikon.gabler.de, www.srd-rechtsanwaelte.de, www.zeit.de, www.informatik-aktuell.de, www.retresco.de

 