Vector Institute-Affiliated AI Trust And Safety Experts Available For Commentary Related To The AI Global Forum In Seoul, Republic Of Korea, The Week Of May 20, 2024


TORONTO, May 17, 2024 (GLOBE NEWSWIRE) -- The second AI Global Forum will take place in South Korea next week, gathering government officials, corporate leaders, civil society representatives, and academics from around the world to discuss the future of AI.

The Vector Institute is affiliated with a significant number of world-leading researchers working on AI Trust and Safety who are available to provide comment in the lead-up to and during the AI Global Forum.

In addition, Vector's President and CEO, Tony Gaffney, will participate in person at the Forum in South Korea on Wednesday, May 22, 2024, and will be available for comment while on site.

Media availability:

Experts will be available for commentary on AI Trust and Safety in the lead-up to and during the AI Global Forum.

Vector Institute-Affiliated AI Trust and Safety Experts:

Jeff Clune

Jeff's research focuses on deep learning, including deep reinforcement learning. His work also addresses AI Safety, including regulatory recommendations and improving the interpretability of agents.

Roger Grosse

Roger's research examines training dynamics in deep learning. He applies this expertise to AI alignment, working to ensure that progress in AI remains aligned with human values. Some of his recent work has focused on better understanding how large language models work in order to head off potential risks in their deployment.

Rahul G. Krishnan

Rahul's research focuses on building robust and generalizable machine learning algorithms to advance computational medicine. His recent work has developed new algorithms for causal decision making, built risk scores for patients on the transplant waitlist, and created automated guardrails for predictive models deployed in high-risk settings.

Xiaoxiao Li

Xiaoxiao specializes in the interdisciplinary field of deep learning and biomedical data analysis. Her primary mission is to make AI more reliable, especially when it comes to sensitive areas like healthcare.

Nicolas Papernot

Nicolas's work focuses on privacy-preserving techniques in deep learning and on advancing more secure and trusted machine learning models.

About the Vector Institute
Launched in 2017, the Vector Institute works with industry, institutions, startups, and governments to build AI talent and drive research excellence in AI, developing and sustaining AI-based innovation that fosters economic growth and improves the lives of Canadians. Vector aims to advance AI research; increase adoption in industry and health through programs for talent, commercialization, and application; and lead Canada toward the responsible use of AI. Programs for industry, led by top AI practitioners, offer foundations for applications in products and processes, company-specific guidance, training for professionals, and connections to workforce-ready talent. Vector is funded by the Province of Ontario, the Government of Canada through the Pan-Canadian AI Strategy, and leading industry sponsors from across multiple sectors of Canadian industry.

This availability is for media only.

For more information or to speak with an AI expert, contact: ...
