Computer vision has advanced to the point where machines using artificial intelligence outperform humans, in both speed and accuracy, on many vision-related tasks. For example, travellers are now often processed through customs on the basis of face recognition software alone, and the average Australian is photographed by CCTV cameras around 75 times per day. Commercial applications of face recognition technology include Microsoft's Face Application Programming Interface (API), which can classify face images by gender, age and mood, and can recognise the same face in other images. Each person is thus identified, and their image is stored against an individual identifier. Legal implications for computer vision and automation arise from their use in smart cities, home life, security, public and personalised advertising, solving or predicting crime, and law enforcement. There are also enormous privacy concerns about how this data might be used in the future. How should regulators respond to the range of legal and regulatory risks arising in these circumstances?
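The Face API mentioned above is, at bottom, an authenticated HTTP service. As a rough illustration only (a minimal sketch, not Microsoft's official SDK: the endpoint host and key below are placeholders, and the version path and parameter names reflect the historical REST interface, so treat them as assumptions), a client might compose a detection request asking for the gender, age and emotion attributes like this:

```python
# Sketch of composing a Face API "detect" request URL.
# ENDPOINT and API_KEY are placeholders, not working credentials.
from urllib.parse import urlencode

ENDPOINT = "https://example-region.api.cognitive.microsoft.com"  # placeholder host
API_KEY = "YOUR-SUBSCRIPTION-KEY"                                # placeholder key

def build_detect_url(attributes: list[str]) -> str:
    """Compose a detect URL requesting the given face attributes,
    plus a persistent face ID so the same face can be matched
    across other images (the identifier discussed above)."""
    query = urlencode({
        "returnFaceId": "true",
        "returnFaceAttributes": ",".join(attributes),
    })
    return f"{ENDPOINT}/face/v1.0/detect?{query}"

url = build_detect_url(["age", "gender", "emotion"])
print(url)
```

The point of the sketch is how little the caller needs: one URL, one key, and a list of attribute names is enough to extract demographic and emotional inferences from a photograph, which is precisely what gives rise to the privacy and regulatory questions this research addresses.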
This research will investigate a possible framework for risk analysis specific to the range of risks presented by artificial intelligence (AI), automation, and computer vision.
Please contact the Research Services Team at firstname.lastname@example.org for more information. Topic and supervisor availability are subject to change without notice.