Last week The New York Times published an article focused on facial recognition technology. The piece examined Clearview AI, a company which, whilst virtually unknown, is, according to the Times, already being used by more than 600 law enforcement agencies internationally. The company is reported to have scraped over 3 billion images from across the internet.
A draft Bill intended to deal with the expected influx of facial recognition software has now been presented here in Australia. The Identity-Matching Services Bill 2019 seeks to create broad rights enabling mass government surveillance through the use of facial recognition technology.
Clearview AI's still-relatively-new facial recognition technology allows government authorities to upload a photograph of a target, which the software then matches against its extensive database of public images. Users are provided with links to any websites on which the matching photographs appear.
Whilst facial recognition has been utilised by law enforcement agencies for some time, the technology has often been seen as faulty, and has generally been limited to government data (e.g. mugshots or passport photos). Now, private companies are scraping photos from platforms such as Facebook, LinkedIn and YouTube, and are gaining an increased ability to identify subjects from unclear images.
Despite considerable public outcry, police have emphasised the program's benefits, with the technology having already helped authorities solve a wide range of cases, from shoplifting to child abuse. In February 2019, Indiana State Police were able to utilise Clearview AI technology to identify a killer who had been caught on video shooting a man in the stomach.
To the technology's credit, the killer's identity was uncovered within just 20 minutes of the Clearview AI search taking place.
Should we be concerned - does the law need to step in?
On one hand, this technology is just a faster and more convenient way to undertake tasks already being done manually. Comparing photographs, using newspapers and school pictures, and having victims work through books of 'mug shots' are all tactics which have been utilised by law enforcement authorities for decades.
So, if new technology is simply expediting and streamlining processes already in place, and if the technology is of notable assistance in terms of crimes being solved and guilty parties being identified, need we be concerned?
On the other hand, government use of technologies such as Clearview AI does mean that a private company is being trusted with highly sensitive information provided by government agencies. Such information is stored on private servers, giving the company an inside look into confidential government investigations, as well as potential access to high-level private data.
Whilst these companies do state that they can implement measures limiting access to sensitive information, they say nothing about how they will prevent abuse.
Additionally, it is prudent to question whether the information received from the program can actually be trusted. In the past, facial recognition programs have had difficulty identifying and differentiating people with darker skin tones - and there have been similar issues with two-dimensional images (photographs) being used to mislead the technology.
With this in mind, we wonder whether independent organisations should be set up to review facial recognition results, and provide some level of accountability for those collecting and using this facial recognition technology.
Recently, the American Civil Liberties Union ('ACLU') rejected Clearview AI's claim that the technology is 100% accurate. This claim was originally made by an 'Independent Review Panel', whose report was included in a pitch to the Miami Police Department. As reported by BuzzFeed News, the ACLU stated that:
"The report is absurd on many levels and further demonstrates that Clearview simply does not understand the harms of its technology in law enforcement hands."
Beyond issues linked to inaccurate results, these companies are private businesses with private interests - so, as with any organisation, what prevents them from using their data to protect their supporters or harm their opponents? Similarly, is there anything stopping these commercial entities from allowing willing consumers to purchase access to the data they hold for political, commercial and/or personal reasons?
Few laws currently exist to regulate such advanced facial recognition software, and the international community has mixed views on what should be done.
As noted above, Australia's Identity-Matching Services Bill 2019 seeks to create broad rights enabling mass government surveillance through facial recognition technology.
If passed, the Bill will:
- Authorise the sharing of facial images and other identifiable data between government agencies, and in some cases, private organisations; and
- Provide a legislative basis for the 'secure, automated and accountable' exchange of identity information to help prevent identity crime and promote a range of law enforcement, community safety and service delivery objectives.
At the same time, the Bill authorises the Department of Foreign Affairs and Trade to disclose information in order to participate in identity-matching services, and provides for computerised decision-making.
Australian Human Rights Commissioner Edward Santow and the Parliamentary Joint Committee on Intelligence and Security have recommended that the Bill be withdrawn due to concerns relating to human rights and privacy. Whilst the Bill has not yet been withdrawn, it had not been formally debated as of the date of this article's publication.
It should be noted that no part of the Bill responds to fears over private use or sharing of such technology.
It has already been reported that Google, YouTube and now Facebook have sent cease-and-desist letters to Clearview AI after the company scraped billions of images from their platforms without permission. In response to the letters, Clearview AI's CEO Hoan Ton-That used a recent CBS television interview to argue that the company has a "First Amendment right" to access data in the public domain.
In addition to the cease-and-desist letters sent by these tech giants, Clearview AI is now being sued over facial recognition privacy concerns. As reported by CNET:
"Clearview AI...is facing another lawsuit. The class action, filed Feb. 13 and spotted earlier Thursday by BuzzFeed, is seeking $5 million in damages, alleging willful, reckless or negligent violations of biometrics laws in Illinois by Clearview and CDW. It's fighting Clearview's collection, storage and use of biometric information without written consent, which it alleges is illegal.
...The lawsuit also takes aim at CDW, which it alleges has licensed the Clearview app to law enforcement agencies including the Chicago Police Department. That department paid almost $50,000 on Jan. 1, 2020, for a two-year contract to use the facial recognition software, the lawsuit alleges, following a free trial of Clearview."
In light of the widespread concern surrounding facial recognition technology, a new Bill has recently been introduced to the US Senate, seeking to temporarily ban law enforcement agencies from using facial recognition technology until relevant guidelines and limitations are set.
It will be interesting to see how the matter is dealt with moving forward. Will personal and human rights thwart the technology and stop the move toward greater surveillance of the population - or will the assertions that this type of facial recognition technology promotes public safety, and is needed to ensure that law enforcement can work efficiently in an increasingly complex world, prevail?
For now, it remains a case of 'watch this space'.