
Navigating the AI Startup Maze: Legal Risks and Regulatory Challenges

There are reportedly around 67,200 artificial intelligence (AI) companies in the world, and about 25% of them are based in the United States.

Also, according to PwC's Global Artificial Intelligence Study, AI could contribute up to $15.7 trillion to the global economy by 2030. As a result, many investors are eager to tap into the potential of AI and support innovative start-ups and scale-ups that are developing and deploying AI solutions.

However, as PwC notes,

AI is a complex and dynamic field that raises various ethical, social, and legal issues, such as data protection, privacy, security, bias, accountability, transparency, and human rights. These issues are attracting increasing attention and scrutiny from regulators, policymakers, civil society and the public, who are demanding more responsible and trustworthy AI.

Therefore, investors need to be aware of the existing and emerging AI regulations that might affect their portfolio companies, as well as the potential liabilities and penalties that might arise from non-compliance or misconduct. Moreover, investors need to conduct thorough legal due diligence and regulatory risk management, both before and after making their investments, to ensure that they are not exposed to unforeseen or unacceptable risks.

Companies don't need much proprietary IP to enter the AI business. They can just use a generative AI (GAI) tool like ChatGPT, Scribe, or DALL-E 2 to build a new application.

GAI definitions vary, but the EU AI Act defines GAI as "foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video."

For example, GAI can "write" an article, essay, poem, or movie review. It can "compose" a song in the style of any musical artist or do a mash-up of several. It can "create" artwork in the style of any artist, although it often adds extra fingers and a creepy number of teeth to its human subjects.

With barriers to entry so low, many entrepreneurs are rushing into the AI field with little knowledge of IP law and other legal issues involved.
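To illustrate just how low that barrier can be, here is a minimal sketch of a "new AI application" built entirely on a third-party GAI service. It assumes the openai Python client is installed and an OPENAI_API_KEY environment variable is set; the model name is illustrative, not a recommendation.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_movie(title: str) -> str:
    # Ask a hosted model to "write" a short movie review. All of the
    # generative capability comes from the provider's model, not from
    # any proprietary IP owned by the developer.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user",
                   "content": f"Write a three-sentence review of '{title}'."}],
    )
    return response.choices[0].message.content

print(review_movie("Casablanca"))

A few lines of glue code like this is, in practice, the entire technical "moat" of many new AI ventures.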

This blog briefly introduces some of those issues, focusing on GAI.

AI can process vast quantities of data and, with little meaningful human intervention, transform it into an AI-generated output. The debate over how to treat intellectual property rights in both the materials used to train the AI (input) and the results the AI creates (output) is still in its early days.

We've written several blogs about litigation and licensing involving GAI, including:

CALIFORNIA COURT RULES ON MOTION TO DISMISS IN AI TRAINING CASE

REDDIT AND GOOGLE ENTER INTO AI CONTENT LICENSING AGREEMENT

NEW YORK TIMES SUES OPENAI AND MICROSOFT FOR COPYRIGHT INFRINGEMENT

As of December 2023, 17 states had enacted 29 bills since 2019 focused on regulating the design, development, and use of artificial intelligence. These bills primarily address two regulatory concerns: data privacy and accountability. Legislatures in California, Colorado, and Virginia have led the way in establishing regulatory and compliance frameworks for AI systems.

A variety of federal AI legislation is also in the works.

Other countries, states, and governmental entities are likely to follow suit.

In this environment, AI-related business activities that are legal (or of unknown legality) today may be illegal or give rise to IP infringement liability or other causes of action tomorrow, next month, or next year.

AI hallucination is a phenomenon wherein a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.

Generally, when a user makes a request of a generative AI tool, they expect an output that appropriately addresses the prompt (e.g., a correct answer to a question). However, AI algorithms sometimes produce outputs that are not based on training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. In other words, the model "hallucinates" the response.
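One common (though imperfect) mitigation is to constrain the model to answer only from supplied context. The sketch below assumes the same openai client and illustrative model name as above; the prompt wording and the sample policy text are our own.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CONTEXT = "Our return policy allows refunds within 30 days of purchase."

def grounded_answer(question: str) -> str:
    # Instruct the model to answer only from the supplied context,
    # which reduces (but does not eliminate) hallucinated answers.
    prompt = (
        "Answer using ONLY the context below. If the answer is not in "
        "the context, reply exactly: I don't know.\n\n"
        f"Context: {CONTEXT}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(grounded_answer("Can I get a refund after 45 days?"))

Even with guardrails like this, models can and do produce confident, fluent, and wrong answers.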

The New York Times reported that Google's latest AI search feature "has erroneously told users to eat glue and rocks, provoking a backlash among users."

As the Times notes,

The incorrect answers in the AI Overview feature have undermined trust in a search engine that more than two billion people turn to for authoritative information. While other A.I. chatbots tell lies and act weirdly, the backlash demonstrated that Google is under more pressure to safely incorporate A.I. into its search engine.

What if someone ACTS on bad AI-generated advice and gets hurt (or killed) as a result?

As The Wall Street Journal reports, every company that uses GAI not only faces a risk to its reputation but also could be liable under laws that govern defective products or speech that introduces bias in hiring, gives terrible advice, or makes up information that might inflict financial damage on someone.

In short, while AI might seem like a potential gold mine to some, there's also the risk of legal Balrogs down in those caverns.
