Posts

Showing posts tagged Redes Adversarias GAN

Reinforcement Learning is vulnerable

It has been discovered that RL is vulnerable to adversarial attacks, which could lead models to learn erroneous behavior. https://www.technologyreview.com/s/615299/reinforcement-learning-adversarial-attack-gaming-ai-deepmind-alphazero-selfdriving-cars/ Adapted by Aniceto Pérez y Madrid, Philosopher of Technology and Editor of Actualidad Deep Learning (@forodeeplearn)
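
The linked article discusses attacks on trained agents; one simple way to see the underlying fragility is to perturb the observations a policy receives. The sketch below is a minimal illustration only, assuming a small PyTorch policy network; the `Policy` class, layer sizes, and the one-step FGSM-style perturbation are assumptions for illustration, not the specific attack studied in the linked work.

```python
# Minimal sketch: an FGSM-style perturbation of an RL agent's observation.
# Assumes PyTorch; the Policy network and epsilon budget are illustrative,
# not the adversarial attack described in the linked article.
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Tiny policy network mapping an observation to action logits."""
    def __init__(self, obs_dim: int = 8, n_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def adversarial_observation(policy: Policy, obs: torch.Tensor,
                            epsilon: float = 0.05) -> torch.Tensor:
    """Perturb the observation to reduce the probability of the action the
    policy would otherwise prefer (one FGSM-style gradient step)."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    preferred = logits.argmax(dim=-1)
    # Loss the attacker maximizes: cross-entropy against the preferred action.
    loss = nn.functional.cross_entropy(logits, preferred)
    loss.backward()
    # Step in the direction that increases the loss, within an epsilon budget.
    return (obs + epsilon * obs.grad.sign()).detach()

if __name__ == "__main__":
    policy = Policy()
    clean_obs = torch.randn(1, 8)
    noisy_obs = adversarial_observation(policy, clean_obs)
    print("action before:", policy(clean_obs).argmax(dim=-1).item())
    print("action after: ", policy(noisy_obs).argmax(dim=-1).item())
```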

Artificial intelligence can now create fabricated videos from audio

La herramienta creada por SenseTime, el gigante tecnológico de Hong Kong, detecta emociones en el audio y las asocia a expresiones faciales que representa en un vídeo. https://retina.elpais.com/retina/2020/01/22/innovacion/1579693307_555687.html Adaptado por Aniceto Pérez y Madrid, Filósofo de las Tecnologías y Editor de Actualidad Deep Learning (@forodeeplearn)
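
The article only describes the pipeline at a high level: detect an emotion in the audio, then map it to facial expressions frame by frame. As a toy illustration of that mapping step only (not SenseTime's actual system), the sketch below blends assumed per-emotion facial parameter templates using the emotion probabilities an audio classifier might output; the emotion labels and blendshape names are hypothetical.

```python
# Toy sketch of the audio-emotion -> facial-expression mapping step described
# in the article. The emotion labels, blendshape names and weights are
# illustrative assumptions; this is not SenseTime's pipeline.
from typing import Dict

# Hypothetical facial "blendshape" targets per emotion (activations in 0..1).
EXPRESSION_TEMPLATES: Dict[str, Dict[str, float]] = {
    "happy":   {"mouth_smile": 0.9, "eye_squint": 0.4, "brow_raise": 0.2},
    "sad":     {"mouth_frown": 0.8, "brow_inner_up": 0.6},
    "angry":   {"brow_lower": 0.9, "jaw_clench": 0.5},
    "neutral": {},
}

def blend_expression(emotion_probs: Dict[str, float]) -> Dict[str, float]:
    """Blend per-emotion templates into one set of facial parameters,
    weighting each template by the probability from the audio model."""
    blended: Dict[str, float] = {}
    for emotion, prob in emotion_probs.items():
        for shape, value in EXPRESSION_TEMPLATES.get(emotion, {}).items():
            blended[shape] = blended.get(shape, 0.0) + prob * value
    return blended

if __name__ == "__main__":
    # E.g. the audio classifier is fairly sure the speaker sounds happy.
    probs = {"happy": 0.7, "neutral": 0.2, "sad": 0.1}
    print(blend_expression(probs))
```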

COR-GAN: Correlation-Capturing Convolutional Neural Networks for Generating Synthetic Healthcare Records

https://arxiv.org/abs/2001.09346 Deep learning models have demonstrated high-quality performance in areas such as image classification and speech processing. However, creating a deep learning model using electronic health record (EHR) data requires addressing particular privacy challenges that are unique to researchers in this domain. This concern focuses attention on generating realistic synthetic data while ensuring privacy. In this paper, we propose a novel framework called correlation-capturing Generative Adversarial Network (corGAN) to generate synthetic healthcare records. In corGAN, we utilize Convolutional Neural Networks to capture the correlations between adjacent medical features in the data representation space by combining Convolutional Generative Adversarial Networks and Convolutional Autoencoders. To demonstrate the model fidelity, we show that corGAN generates synthetic data with performance similar to that of real data in various Machine Learning settings such as cl...
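
The abstract describes the overall architecture: a convolutional autoencoder bridges discrete EHR records and a continuous space, and a GAN with 1-D convolutional components generates samples that the decoder maps back to record space. The following is a minimal sketch of that idea, assuming PyTorch; the feature count, latent sizes, and layer configuration are assumptions, not the authors' implementation.

```python
# Minimal sketch of the architecture described in the corGAN abstract:
# a 1-D convolutional autoencoder over binary EHR feature vectors, plus a
# GAN whose generator produces samples that the decoder maps back to the
# record space. Dimensions and layer choices are assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn

FEATURES = 1071   # number of medical codes per record (assumed)
LATENT = 128      # autoencoder bottleneck / generator output size (assumed)
NOISE = 100       # generator noise dimension (assumed)

class ConvAutoencoder(nn.Module):
    """Maps discrete (binary) records to a continuous code and back."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, LATENT),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, FEATURES), nn.Sigmoid(),  # per-code probabilities
        )

    def forward(self, x):                    # x: (batch, FEATURES) in {0, 1}
        z = self.encoder(x.unsqueeze(1))     # continuous code: (batch, LATENT)
        return self.decoder(z)               # reconstructed record probabilities

class Generator(nn.Module):
    """Generates continuous codes; the decoder turns them into records."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE, 256), nn.ReLU(),
            nn.Linear(256, LATENT),
        )

    def forward(self, noise):
        return self.net(noise)

class Discriminator(nn.Module):
    """1-D convolutional critic over real or synthetic record vectors."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, stride=2, padding=2), nn.LeakyReLU(0.2),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, x):                    # x: (batch, FEATURES)
        return self.net(x.unsqueeze(1))      # real/fake score per record

if __name__ == "__main__":
    ae, gen, disc = ConvAutoencoder(), Generator(), Discriminator()
    noise = torch.randn(4, NOISE)
    fake_records = ae.decoder(gen(noise))    # synthetic records via the decoder
    print(fake_records.shape, disc(fake_records).shape)  # (4, 1071) (4, 1)
```

In this sketch the autoencoder would be trained first to reconstruct real records, and the GAN would then be trained in the continuous code space, with the frozen decoder producing the synthetic records the discriminator judges.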