Entries

Showing entries from September, 2019

2019-09 Contributing Data to Deepfake Detection Research

Deep learning has given rise to technologies that would have been thought impossible only a handful of years ago. Modern generative models are one example of these, capable of synthesizing hyperrealistic images, speech, music, and even video. These models have found use in a wide variety of applications, including making the world more accessible through text-to-speech, and helping generate training data for medical imaging. https://ai.googleblog.com/2019/09/contributing-data-to-deepfake-detection.html
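
The announcement treats detection as the counterpart of synthesis: given labeled examples of real and machine-generated video, a classifier learns to tell them apart. Below is a minimal frame-level sketch of that setup in PyTorch, assuming frames have already been extracted into frames/real and frames/fake folders; the layout, backbone, and hyperparameters are illustrative assumptions, not the dataset's actual packaging or Google's method.

    # Frame-level real-vs-fake classification sketch (illustrative only).
    # Assumes extracted frames live in frames/real and frames/fake.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # ImageFolder maps each subdirectory (real/, fake/) to a class label.
    train_set = datasets.ImageFolder("frames", transform=preprocess)
    loader = DataLoader(train_set, batch_size=32, shuffle=True)

    # Fine-tune a small pretrained backbone for the binary decision.
    model = models.resnet18(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, 2)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:  # one pass over the extracted frames
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()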

AI can’t protect us from deepfakes, argues new report

A new report from Data and Society raises doubts about automated solutions for videos altered to deceive. Relying on AI could even make things worse by concentrating more data and power in the hands of private corporations. https://www.theverge.com/2019/9/18/20872084/ai-deepfakes-solution-report-data-society-video-altered Adapted by Aniceto Pérez y Madrid, Philosopher of Technologies and Editor of Actualidad Deep Learning (@forodeeplearn)

2019-09 NHS trusts sign first deals with Google

Five National Health Service trusts have signed partnerships with Google to process sensitive patient records, in what are believed to be the first deals of their kind.  The deals came after DeepMind, the London-based artificial intelligence company, transferred control of its health division to its Californian parent. DeepMind had contracts to process medical data from six NHS trusts in Britain to develop its Streams app, which alerts doctors and nurses when patients are at risk of acute kidney injury, and to conduct artificial intelligence research. https://www.ft.com/content/641e0d84-da21-11e9-8f9b-77216ebe1f17

2019-09 Amazon Releases Data Set of Annotated Conversations to Aid Development of Socialbots

Today I am happy to announce the public release of the Topical Chat Dataset, a text-based collection of more than 235,000 utterances (over 4,700,000 words) that will help support high-quality, repeatable research in the field of dialogue systems.  The goal of Topical Chat is to enable innovative research in knowledge-grounded neural response-generation systems by tackling hard challenges that are not addressed by other publicly available datasets. Those challenges, which we have seen universities begin to tackle in the Alexa Prize Socialbot Grand Challenge, include transitioning between topics in a natural manner, knowledge selection and enrichment, and integration of fact and opinion into dialogue. https://developer.amazon.com/es/blogs/alexa/post/885ec615-314f-425f-a396-5bcffd33dd76/amazon-releases-data-set-of-annotated-conversations-to-aid-development-of-socialbots
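
For orientation, a short sketch of loading and counting the corpus follows; it assumes the release is a JSON file of conversations with "content", "message", and "agent" fields, which is an assumption about the packaging rather than a documented schema.

    # Inspect a knowledge-grounded dialogue corpus such as Topical Chat.
    # Field names ("content", "message", "agent") are assumed, not guaranteed.
    import json

    with open("train.json") as f:  # hypothetical path to one data split
        conversations = json.load(f)

    utterances, words = 0, 0
    for conv_id, conv in conversations.items():
        for turn in conv.get("content", []):
            utterances += 1
            words += len(turn.get("message", "").split())
            speaker = turn.get("agent")  # which of the two speakers produced the turn

    print(f"{len(conversations)} conversations, {utterances} utterances, {words} words")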

2019-09 Andrew Ng at Amazon re:MARS 2019

In eras of technological disruption, leadership matters. Andrew Ng speaks about the progress of AI, how to accelerate AI adoption, and what’s around the corner for AI at Amazon re:MARS 2019 in Las Vegas, Nevada. https://www.deeplearning.ai/blog/andrew-ng-at-amazon-remars-2019/?utm_campaign=BlogAndrewReMarsSeptember12019&utm_content=100648184&utm_medium=social&utm_source=linkedin&hss_channel=lcp-18246783 *****, AI for Good, AI FATE (fairness accuracy transparency ethics), AI Forecast, AI Technology advance, AI Training, Business, Data, PersonOfInterest,

Facebook expands use of face recognition

https://nakedsecurity.sophos.com/2019/09/06/facebook-expands-use-of-face-recognition/ AI FATE (fairness accuracy transparency ethics), AI Technology advance, Company, Eth Privacy,

2019-09 Announcing Two New Natural Language Dialog Datasets

Today’s digital assistants are expected to complete tasks and return personalized results across many subjects, such as movie listings, restaurant reservations and travel plans. However, despite tremendous progress in recent years, they have not yet reached human-level understanding. This is due, in part, to the lack of quality training data that accurately reflects the way people express their needs and preferences to a digital assistant. This is because the limitations of such systems bias what we say—we want to be understood, and so tailor our words to what we expect a digital assistant to understand. In other words, the conversations we might observe with today’s digital assistants don’t reach the level of dialog complexity we need to model human-level understanding. https://ai.googleblog.com/2019/09/announcing-two-new-natural-language.html
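
To make the data-quality point concrete, the sketch below shows the kind of annotated turn a task-oriented dialogue corpus might contain: a user request in natural phrasing, labeled with an intent and slots. The schema is purely hypothetical and is not the format of the released datasets.

    # Hypothetical annotated dialogue turn (illustrative schema only).
    dialog_turn = {
        "speaker": "user",
        "utterance": "Book a table for two at an Italian place near downtown tomorrow at 7pm",
        "intent": "restaurant_reservation",
        "slots": {
            "party_size": "two",
            "cuisine": "Italian",
            "location": "near downtown",
            "date": "tomorrow",
            "time": "7pm",
        },
    }

    # Data like this, written in the user's own words rather than assistant-shaped
    # phrasing, is what the announcement argues is missing.
    for slot, value in dialog_turn["slots"].items():
        print(f"{slot}: {value}")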