Monday, October 28, 2019

Facebook builds an artificial intelligence tool to fool facial recognition systems

The "de-identification" system, which also works in live video, uses machine learning to modify the main facial features of a subject in a video.
This de-identification technology previously worked primarily for still images.

San Francisco: Facebook's artificial intelligence (AI) research team has developed a tool that tricks facial recognition systems into misidentifying a person in a video, the media reported.

The "de-identification" system, which also works in live video, uses machine learning to change the facial characteristics of a subject in a video, according to a report published in VentureBeat on Friday.

"Face recognition can lead to a loss of privacy and face replacement technology can be misused to create misleading videos," reads an article explaining the company's approach, cited by VentureBeat.

This de-identification technology previously worked primarily for still images, The Verge reported.

"Recent global developments in the advancement and abuse of facial recognition technology point to the need to understand de-identification methods. Our contribution is the only one that is suitable for video, including live video, and has a quality that far surpasses the methods in the literature," the paper said.

The work is expected to be presented at the International Conference on Computer Vision (ICCV) in Seoul, South Korea next week.

The development comes as Facebook faces a $35 billion class-action lawsuit over the alleged misuse of facial recognition data in Illinois. A US court rejected Facebook's request to dismiss the lawsuit.

A three-judge panel of the US Ninth Circuit Court of Appeals in San Francisco issued the ruling, so the case will now go to trial unless the Supreme Court intervenes, TechCrunch reported last week.

This story was published from a news agency feed without changes to the text. Only the headline has been changed.

Friday, October 4, 2019

Decoding artificial intelligence: 10 steps to protect human rights

The Council of Europe Commissioner for Human Rights published a very interesting document a few months ago entitled Decoding Artificial Intelligence: 10 Steps to Protect Human Rights. It is a recommendation on AI and human rights that puts forward 10 sets of measures. Here is a look back at this document.

The effects of artificial intelligence on human rights are among the most important factors that will define the period in which we live. Technologies using AI are increasingly present in everyone's life, through home automation or social networks, for example.

They are also increasingly used by the public authorities to assess the personality or skills of individuals, to allocate resources and to make other decisions that can have serious and concrete consequences for human rights. As the Commissioner for Human Rights pointed out in an article in the Human Rights Journal, it is therefore urgent to find the right balance between technological progress and the protection of human rights.

As the document states, "AI offers new opportunities but also risks; human rights should not be weakened but reinforced by AI. This Recommendation on AI and Human Rights provides guidance on how the negative effects of AI systems on human rights can be avoided or mitigated by distinguishing 10 main areas of action."

This recommendation builds on the Council of Europe's work in this field, in particular the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems, the Guidelines on Artificial Intelligence and Data Protection, the Declaration of the Committee of Ministers on the manipulative capabilities of algorithmic processes, and the study on the human rights dimensions of automated data processing techniques and their possible regulatory implications. It also draws on the report in which the UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression examines the impact of artificial intelligence technologies on human rights in cyberspace.

It is grounded in the international human rights system, which constitutes a universal and binding framework, and in particular in the Council of Europe's human rights instruments. The recommendation is addressed to the Member States, but its principles should concern anyone who has a significant influence, direct or indirect, on the development, implementation, or effects of an AI system. AI developed in the private sector should be subject to the same standards as AI developed in the public sector when there is an intention to work with public agencies or services.

Server management systems

Enterprises receive the services and functions they need (databases, e-mail, website hosting, work applications, etc.) for their corporate I...