Sonali Mishra

“AI: Surrounding Legal Issues and Tech Companies’ Role in Creating Responsible Use of AI”



We have lived through an era in which the content industry emerged as one of the most flourishing of all. It soon became clear that creativity and talent can pay off in more ways, and on many more platforms, than just an opportunity on the big screen: technologies such as OTT, social media, and digital media created a lucrative space for content creators around the world. Just as content creators were at their peak and relishing it, the launch of generative Artificial Intelligence (AI) made us leap years ahead and left the world fascinated by the quality and speed of its creations in a variety of fields (including content) compared with the average human being's performance.


The capabilities of AI tools promise the world an economic revolution across all kinds of industries. This, however, will only be possible if regulatory authorities manage to create a strong foundation of policies for the use of AI applications by the general public, for both commercial and non-commercial purposes. The first step would be to settle the various questions of legality around AI: who owns the output created by AI; whether output generated by AI in the particular style of an artist violates that artist's right of publicity; and, most importantly, whether the scraping of data by tech companies, without the authors' permission, to train AI constitutes copyright infringement.


Recently, the United States Copyright Office reconsidered the registration of the work “Zarya of the Dawn” and determined that the arrangement of original text combined with images produced with the assistance of AI is protectable as a compilation. The individual images, however, being visual representations of human-authored prompts, are not themselves eligible for copyright protection. For the time being, this decision has put to rest the question of copyright registration of AI-created products, but it may not hold going forward: if AI creates an original work without reproducing any expressive content from the training data or the input, that work could pass for a new original work capable of deriving monetary value, and given that nobody can own such a work, challenges will arise in reaping its economic benefits in the market. In view of this decision, it has also become important to define the author's liability in the writer's agreement, covering the consequences if the original work is found to have been created by AI or aided by AI.


The heart of the debate, i.e., whether the processing of copyrighted works to train AI is fair use, remains to be adjudicated. One case in which this question is at issue is the lawsuit for copyright and trademark infringement filed by Getty Images against Stability AI, the creator of Stable Diffusion, alleging that Stability AI copied millions of photos from Getty Images without a license or compensation and generated output by altering Getty Images' content, some of which even reflects a distorted Getty Images watermark.


As the legal battles over the use of data for training AI continue, the issue will be analyzed through the lens of the fair use doctrine under the U.S. Code (17 U.S.C. § 107). A strict application of all the fair use factors would suggest that it is unlawful to train AI on an unauthorized set of data. The predominant factor in the determination of fair use, however, is likely to be transformative use: it must be determined whether the new work generated by AI serves a different purpose from the original work that was used as training data.


In this regard, OpenAI, the developer of ChatGPT, has issued comments stating that the use of data (including copyrighted works) for training AI is fair use, as the “…purpose of the process is to develop useful generative AI system…”, which is different from the object of the original work, i.e., human consumption, and that the output is also highly transformative compared with the original work. A transformative purpose can mean altering the original work and/or adding new elements, meaning, and expression to the point that the new work serves a different function from the original work (Campbell v. Acuff-Rose Music). This difference in character or purpose has to be weighed against other factors, such as commercialism and whether the new work acts as a market substitute for the original. To determine fair use, the courts will also have to evaluate whether an altered version of the original work constitutes a derivative work or passing off, based on the substantial similarity between the input and the output, among other factors.


Amid the unresolved issues around AI, from legal and ethical questions to privacy concerns, leading tech companies are taking strides to ensure the responsible use of AI and are cooperating with policymakers to create regulatory mechanisms for it. Some examples include:

  • Stability AI has taken the initiative by championing an open-model approach to AI, aiming to promote public scrutiny of the technology for quality, fairness, and bias, and to drive national competitiveness and transparency. It has also outlined its perspective on various challenges of the underlying technology and the factors to be considered in developing a future oversight mechanism for AI.

  • Stability AI is also in the process of implementing standards for content authenticity: content created on its hosted servers will include metadata indicating that it was created with the assistance of AI. Because the origin of the content becomes identifiable, this will help address pressing issues such as the dissemination of misleading information and the passing off of content on social media platforms.

  • Stability AI has also solicited creators' preferences and given them the option to opt out of AI training, with a machine-readable opt-out function to follow. It would therefore be prudent for organizations to set out a policy on automated data collection from their websites so that their data does not become part of AI training sets (a minimal sketch of such a machine-readable opt-out check appears after this list).

  • Shutterstock, a stock content platform, has taken the initiative to compensate creators whose work is included in new content produced by Shutterstock AI, by setting up a contributor fund mechanism.
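
Below is a minimal sketch, in Python, of how a training-data crawler might honor such a machine-readable opt-out if it were expressed through a site's robots.txt. The crawler name "ExampleAIBot" and the URLs are hypothetical placeholders, not a published standard of Stability AI or anyone else; actual opt-out mechanisms (metadata tags, registries, and the like) may take a different form.

    # Minimal sketch: check a hypothetical robots.txt-style opt-out before
    # adding a page to an AI training set. "ExampleAIBot" and the URLs are
    # placeholders, not a published standard.
    from urllib import robotparser

    TRAINING_CRAWLER_UA = "ExampleAIBot"  # hypothetical crawler user-agent

    def may_collect_for_training(page_url: str, robots_url: str) -> bool:
        """Return True only if the site's robots.txt permits this crawler."""
        parser = robotparser.RobotFileParser()
        parser.set_url(robots_url)
        parser.read()  # fetches and parses the site's robots.txt
        return parser.can_fetch(TRAINING_CRAWLER_UA, page_url)

    if __name__ == "__main__":
        # A site opting out would list, in its robots.txt:
        #     User-agent: ExampleAIBot
        #     Disallow: /
        page = "https://example.com/articles/some-post"
        robots = "https://example.com/robots.txt"
        if may_collect_for_training(page, robots):
            print("No opt-out found; the page could be collected.")
        else:
            print("The site has opted out; skip it for AI training.")

The point of the sketch is simply that once an opt-out is machine readable, respecting it becomes a one-line check in the data-collection pipeline rather than a manual, request-by-request exercise.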

Given that tech companies are making efforts to mitigate the associated risks, prevent misuse, create accountability, and cooperate in the development of regulatory policies, the full potential of AI can soon be realized once the courts resolve the questions of legality around its use, leading to financial benefits and increased human productivity and opening up many economic avenues. The benefits of AI, however, will remain appealing only for as long as it stays an extension of human creativity and not a replacement.


On a lighter note, reports claim that training AI consumes gallons of water on account of its massive data processing. It has me wondering: since we humans also consume a large amount of information through different forms of media, perhaps proper hydration is just as important for us to keep up in the age of tech. Just a thought.

 


 


 
