Technology has transformed practically every aspect of our lives. Today, every basic need can be fulfilled at the click of a button. With the increase in the number of transactions happening online, the amount of data collected through those transactions has also grown.
Since quintillions of bytes of data are generated every day, there is also a need for programmes that can process this data.
This is where Artificial Intelligence comes into play. Artificial Intelligence (hereafter “AI”) refers to a group of technologies that allow machines to detect, understand, act and learn on their own. AI can process information in far less time than humans, which is why it is steadily gaining popularity.
An example of the use of AI in our daily life can be seen in the product suggestions of e-commerce websites like Amazon or the advertisements we see on various social media platforms, based on the interactions and searches we make on other apps.
Like every coin, AI too has a flip side: it raises its own concerns and disadvantages from the perspective of a user. The manner in which AI stores and processes data can breach the privacy of users. This article analyses the methods of AI-based data collection tools and their impact on the privacy of users.
WHAT IS AN AI BASED DATA COLLECTION SYSTEM?
Data collection refers to gathering information from multiple sources and storing it in a single place for processing or other purposes. In order to save time and costs, several entities have developed AI-driven data collection systems, which use algorithms to collect data automatically.
These systems can perform tasks without any supervision. AI-based data collection systems can either collect data from users directly, for example while signing up on an app, or collect it through other means such as tracking and identification, voice and facial recognition, prediction, profiling, etc.
AI can track a user’s personal data across multiple devices and gather it into a large data set maintained by the company monitoring the user. In recent times, users have become more aware of the ‘prediction and profiling’ techniques used by several companies and data giants.
Companies use algorithms to infer or predict sensitive information from keyboard typing patterns or the searches a user makes. This data can be collected and stored to classify, evaluate and rank people, without the consent of those being categorised.
This kind of data collection raises privacy concerns for users of apps and platforms, who do not consent to their private information being collected and processed for unauthorised purposes.
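To make the ‘prediction and profiling’ idea concrete, here is a toy Python sketch, not any company’s actual system: the timing data, threshold and labels are all invented. It shows how even something as innocuous as keystroke timing can be turned into a profile attribute without the user’s knowledge:

```python
# Toy illustration of profiling from keystroke timing.
# All thresholds and labels here are invented for demonstration.

def mean_interval(key_times):
    """Average gap (in seconds) between successive keystrokes."""
    gaps = [b - a for a, b in zip(key_times, key_times[1:])]
    return sum(gaps) / len(gaps)

def profile_typist(key_times, fast_threshold=0.15):
    """Label a user 'fast' or 'slow' from raw keystroke timestamps."""
    return "fast" if mean_interval(key_times) < fast_threshold else "slow"

# Timestamps of the kind a script could capture client-side,
# often without the user realising it
session = [0.00, 0.09, 0.21, 0.30, 0.41]
print(profile_typist(session))  # mean gap ≈ 0.10 s → "fast"
```

Real profiling systems combine many such inferred attributes into a ranked profile; the privacy concern is precisely that none of this requires the user’s active disclosure.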
BREACH OF PRIVACY THROUGH AI BASED SYSTEMS
The following incidents, which played out in the public domain, help illustrate the risks that such AI-based data processing systems pose to users:
1. The Cambridge Analytica- Facebook Case
It was discovered in 2018 that Cambridge Analytica had been using the information and data of Facebook users, such as their likes, to target them with advertisement campaigns for the 2016 United States Presidential Election.
A developer had created a quiz app, ‘This Is Your Digital Life’, that took advantage of a loophole in the Facebook Application Programming Interface (API) to collect the information and data of users and of their friends as well. The developer later sold this data to Cambridge Analytica.
2. Clearview Face Recognition Case
Clearview AI is a facial recognition programme designed to help police officers identify criminals. However, in 2020, a New York Times report revealed that the software had scraped around three billion images from Facebook, Twitter, Instagram, YouTube and Venmo to build its AI systems, breaching the privacy of billions of users.
Later, Google, Facebook, YouTube and Twitter sent ‘cease and desist’ letters to Clearview to prevent it from scraping photos from their platforms.
STEPS TO PREVENT THE MISUSE OF DATA
Now that users have become aware of the threat that companies and AI-based technologies pose to their personal data, we must assess possible solutions to limit and restrict the information collected online, and its misuse.
Currently, the Indian regime does not have a specialised law on data protection. Thus, we continue to be governed by the provisions of the Information Technology Act, 2000 (IT Act), which deals with cybercrime. Section 43A of the IT Act provides that a body corporate or organisation that fails to protect the data of users shall be liable to pay compensation.
The Personal Data Protection Bill, 2019 (the Bill) was introduced in the Lok Sabha in 2019. The Bill aims to provide protection of personal data of individuals and also establishes a Data Protection Authority to carry on this task.
The Bill requires organisations to keep the processing of individuals’ data to a minimum. It also classifies certain types of data as ‘sensitive personal data’, which cannot be processed by Data Fiduciaries without the consent of Data Principals, i.e. the users. In addition, the Bill recognises protection of data from state as well as non-state actors, thus extending its scope to Government agencies.
The Handbook on Data Protection and Privacy for Developers of Artificial Intelligence (AI) in India: Practical Guidelines for Responsible Development of AI states that the Bill is based on the following principles:
- Personal data collected for a particular purpose should not be used for another new and incompatible purpose.
- Consent of the Data Principals is absolutely essential before collection of personal data by the Data Fiduciaries.
- Organisations are to be held responsible for unlawful processing of data.
- The data collected should not be stored for longer than what is needed.
- Collection of data must be restricted to only that which is adequate, relevant and necessary.
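Purely as an illustration, the purpose-limitation, consent and data-minimisation principles above could be sketched in code roughly as follows. The purposes, field names and function are hypothetical and not drawn from the Bill’s text:

```python
# Hypothetical sketch of purpose limitation, consent and data minimisation.
# The purposes and field names below are invented for illustration.

ALLOWED_FIELDS = {
    "order_fulfilment": {"name", "address", "phone"},
    "analytics": {"device_type"},
}

def collect(purpose, submitted, consented):
    """Keep only fields that are (a) necessary for the stated purpose
    and (b) covered by the Data Principal's consent."""
    if not consented:
        raise PermissionError("consent required before collection")
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in submitted.items() if k in allowed}

data = {"name": "A", "address": "X", "phone": "1", "religion": "..."}
print(collect("order_fulfilment", data, consented=True))
# 'religion' (sensitive data, unnecessary for delivery) is dropped
```

The point of the sketch is that compliance can be enforced at the point of collection: data outside the declared purpose never enters storage at all, which is a stronger guarantee than deleting it later.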
The growth of AI-based software and programmes that collect data is inevitable, given increasing digitalisation and the need to process big data. However, there is also a need to develop an accountability mechanism to hold organisations responsible for breaching the privacy of citizens.
The Apex Court has already recognised the Right to Privacy as a fundamental right, in the judgement delivered in Justice K.S. Puttaswamy (Retd.) v. UOI (2018) 1 SCC 809. Thus, having a policy in place to govern the use and processing of data without the consent of users will play a significant role in preventing the misuse of citizens’ personal data.
Moreover, there is a need to increase awareness among citizens regarding the unlawful collection of data. Users must consciously read a platform’s terms before submitting their personal and sensitive data, and in cases of unlawful collection of such data, must raise complaints with the appropriate authorities.
With the benefits attached to the AI ecosystem, it is nearly impossible to completely ban the use of AI. However, a more reasonable approach would be to strengthen the regulatory system in our country, to balance privacy and utility of data.
Placing limitations on the collection of data, and mandating that such data be used only for the stipulated purposes, will help strengthen data security. Moreover, using AI to detect data leaks, or creating algorithms to prevent security breaches, can also assist in achieving this goal.
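As a rough illustration of using automation to detect data leaks, a minimal anomaly check over daily record-access counts might look like the sketch below. The counts and threshold are invented, and real detection systems are far more sophisticated:

```python
# Illustrative only: flag days whose record-access counts deviate sharply
# from the norm, one simple signal of a possible bulk export or leak.
import statistics

def flag_anomalies(daily_reads, z_cutoff=2.0):
    """Return indices of days whose read counts lie more than
    z_cutoff standard deviations from the mean."""
    mu = statistics.mean(daily_reads)
    sigma = statistics.pstdev(daily_reads)
    return [i for i, n in enumerate(daily_reads)
            if sigma and abs(n - mu) / sigma > z_cutoff]

reads = [120, 130, 125, 118, 122, 5000, 127]  # day 5: possible bulk export
print(flag_anomalies(reads))  # → [5]
```

Even a crude check like this, run continuously, can surface a breach far sooner than a manual audit would.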
Thus, the future course for companies is to balance use of AI with the privacy of users to avoid repercussions from the State.
-Team AMLEGALS assisted by Ms. Gahna Rajani (Intern)
For any query or feedback, please feel free to connect with email@example.com or firstname.lastname@example.org.