What is the current problem with AI?

Tihana Rajnović
/ 23 Apr 2024




    After the launch of ChatGPT in November of 2022, AI has emerged as a new technological milestone on the global stage.


    As new models rapidly develop, the use of AI is expanding daily across every conceivable industry.

    [Figure: Google search trend for "AI" over time]

    This widespread application, combined with the haste in development and implementation, has left little room for addressing the challenges that current AI models face.


    In this blog, we will explore the current main problems with artificial intelligence.


    Ethics and artificial intelligence


    Ethical concerns with AI are becoming more relevant and a more frequent topic of discussion, largely due to problems arising from unregulated use and deployment.


    Numerous ethical issues have been identified by global experts, who caution against the social issues of artificial intelligence and its potential for harm.


    The most prevalent ethical concerns in AI include:


    Job Displacement

    A rapid, unregulated shift towards replacing human workers with AI could lead to significant job loss and other unforeseen consequences.


    Bias and Discrimination

    AI models inherit biases present in the training data, potentially leading to discriminatory outcomes.


    Transparency and Lack of Informed Consent

    Issues arise not only with disclosing the use of AI in products and services but also from a general lack of understanding about how it is applied.


    Exploitation of Intellectual Property

    Various digital media, including text and images, have been used to train large language models (LLMs), with artists particularly vocal about their work being used without permission.


    Ethical Auditing

    There are no universally accepted standards for auditing AI systems, complicating the process of identifying and correcting ethical issues.


    Accuracy and Precision

    AI-generated content depends entirely on the quality of the training data. Since that data cannot be fully vetted or its quality guaranteed, the accuracy of responses is always in question.


    Accountability

    When an AI makes an autonomous decision that causes damage or a mishap, determining responsibility remains a challenge.


    Privacy and Misuse of Personal Data

    With AI processing vast data sets, there is a higher risk of privacy breaches and unauthorized data use.


    Security

    Weak security measures, a lack of transparency, and the possibility of AI being tricked into revealing sensitive information can pose serious security threats to organizations and governments.


    Use in the Health Sector

    We have all heard how AI can help diagnose cancer, but unregulated use of AI could have serious health implications.


    Use in Law

    Not only is AI-generated content a hot-button issue in courtrooms, but its use by legal professionals can complicate court proceedings.


    Military Use and Autonomous Weapons

    The deployment of AI in military settings raises significant ethical concerns, from creating power imbalances to the catastrophic consequences of software errors.

    AI bias examples


    Bias has long been a challenge in research, science, and broader society.


    Since artificial intelligence systems are trained on human-generated data, they often inherit biases that can distort the text and images they produce.


    There have been quite a few examples over the last decade, but we will name just a few.


    Amazon’s hiring algorithm penalized the resumes of female applicants, widening the gender gap in technical fields. Due to these issues, Amazon modified the algorithm and eventually discontinued its use in 2017.


    Mortgage-approval algorithms unfairly discriminated against non-white applicants, wrongly considering race as a factor in lending decisions.


    Such biases in AI can lock individuals and entire groups out of job opportunities and mortgages, significantly impacting their lives.
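
    To make the idea of biased outcomes more concrete, here is a minimal, hypothetical Python sketch (not taken from any of the systems mentioned above) of one common check: comparing selection rates between groups and computing the disparate impact ratio, where a value below 0.8 is often treated as a warning sign. The groups, data, and threshold are illustrative only.

```python
# Hypothetical example of a simple bias check on model decisions.
# The groups, data, and threshold below are illustrative only.

from collections import defaultdict

# (group, selected_by_model) pairs from an imaginary screening model
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)     # applications per group
selected = defaultdict(int)   # positive decisions per group
for group, was_selected in decisions:
    totals[group] += 1
    if was_selected:
        selected[group] += 1

# Selection rate per group
rates = {g: selected[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# Disparate impact ratio: lowest selection rate divided by the highest.
# Under the informal "four-fifths rule", a ratio below 0.8 is a red flag.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f} -> {'potential bias' if ratio < 0.8 else 'ok'}")
```

    Auditing a real hiring or lending model involves far more than a single ratio, but even a simple check like this can surface the kind of disparity described above.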

    What are AI Hallucinations?


    AI hallucination is a phenomenon that happens when an AI model incorrectly identifies a pattern or object, generating a nonsensical or inaccurate response.


    Essentially, the chatbot or computer vision tool hallucinates a response, fabricating information and images that users might mistakenly accept as fact.

    What is AI Overfitting?


    AI overfitting is a phenomenon that happens when an algorithm generates text or images that are too close to, or identical to, the data it was trained on.


    AI models memorizing certain training data and essentially recreating it raises concerns about privacy, security, and copyright infringement.
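
    As a rough illustration of how such memorization can be spotted, the hypothetical Python sketch below checks how many long n-grams in a model's output also appear verbatim in a piece of training text; a high overlap suggests recreation rather than generalization. The function names and example sentences are invented for illustration.

```python
# Hypothetical sketch: measure verbatim overlap between a model's output
# and its training text using long n-grams. Names and texts are invented.

def ngrams(text: str, n: int) -> set:
    """Return the set of whitespace-tokenized n-grams in the text."""
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def memorization_overlap(output: str, training_text: str, n: int = 8) -> float:
    """Fraction of the output's n-grams that also appear verbatim in the training text."""
    out_grams = ngrams(output, n)
    if not out_grams:
        return 0.0
    return len(out_grams & ngrams(training_text, n)) / len(out_grams)

# The "training" sentence and a model "output" that nearly reproduces it
training_text = "The quick brown fox jumps over the lazy dog near the quiet river bank at dawn"
model_output = "The quick brown fox jumps over the lazy dog near the quiet river bank at dusk"

print(f"Verbatim overlap: {memorization_overlap(model_output, training_text):.0%}")
# A high overlap suggests the model is recreating training data rather than generalizing.
```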

    What are Deepfakes?


    Deepfakes are pictures, videos, and audio recordings that have been digitally manipulated to convincingly replicate the appearance and voice of real or fictional individuals.


    It is now so easy to create convincing likenesses of politicians, celebrities, or even ordinary individuals that deepfakes have been cited as a global security threat by the US and other nations.

    [Image: a deepfake of Barack Obama]

    How does AI cause misinformation and disinformation?


    AI can mislead users through poor training data, hallucinations, and deepfakes.


    Even more concerning, individuals, including trolls, scammers, and various organizations, can use AI to spread disinformation more efficiently than ever before.

    AI security issues


    This all brings us to the issue of security.


    There have already been incidents where Nvidia’s AI software was manipulated into leaking data and ChatGPT was tricked into disclosing personal data from its training set.


    The ability to manipulate and trick current AI models into providing access to sensitive information raises a question: how safe is our data in the hands of the companies and governments that use AI?


    Discussion of AI issues, concerns, and ethical considerations will become more common as the technology rapidly develops.


    Hopefully, we can find a more ethical approach to AI development, continually improving models that have endless potential for good if used properly.