<?xml version="1.0" encoding="utf-8" ?><rss version="2.0" xml:base="https://www.talp.upc.edu/academy/term/76/all" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:og="http://ogp.me/ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:sioc="http://rdfs.org/sioc/ns#" xmlns:sioct="http://rdfs.org/sioc/types#" xmlns:skos="http://www.w3.org/2004/02/skos/core#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <channel>
    <title>Opinion articles</title>
    <link>https://www.talp.upc.edu/academy/term/76/all</link>
    <description></description>
    <language>en</language>
     <atom:link href="https://www.talp.upc.edu/taxonomy/term/76/all/feed" rel="self" type="application/rss+xml" />
      <item>
    <title>El País Economía - CaixaBank extends the use of artificial intelligence to all its branches, ATMs and operations</title>
    <link>https://www.talp.upc.edu/content/el-pais-economia-caixabank-extiende-el-uso-de-la-inteligencia-artificial-todas-sus-oficinas</link>
    <description>&lt;div class=&quot;field-item even&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; src=&quot;https://www.talp.upc.edu/sites/default/files/1571855444_355505_1571855747_noticia_normal_recorte1.jpg&quot; width=&quot;1024&quot; height=&quot;475&quot; title=&quot;Gonzalo Gortázar, consejero delegado de CaixaBank EFE&quot; /&gt;&lt;/div&gt;&lt;p&gt;ÁNGELES GONZALO ALCONADA&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Personaliza así las previsibles necesidades comerciales de cada cliente y logra Identificar al cliente que quiere dejar el banco&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;La banca busca nuevas fórmulas para reducir costes y aumentar ingresos, que compensen la cada vez mayor presión de los negativos tipos de interés. La reducción de los riesgos es otra vía a destacar. CaixaBank ha comenzado a aplicar la inteligencia artificial en todo su proceso bancario como una herramienta destinada, precisamente, a reducir el riesgo con sus clientes.&lt;/p&gt;
&lt;p&gt;La nueva herramienta que ha puesto en marcha está diseñada para identificar las necesidades financieras de cada cliente y personalizar tanto la atención de los gestores de la entidad, como la información comercial que reciben los usuarios a través de los diversos canales.&lt;/p&gt;
&lt;p&gt;Se trata de un sistema pensado para un uso integrado en todas las oficinas de CaixaBank, que incluye el servicio de gestores personales a distancia inTouch, el canal de banca online CaixaBankNow, e incluso sus cajeros.&lt;/p&gt;
&lt;p&gt;La función de la aplicación de las técnicas de inteligencia artificial “es que se puede ajustar la información y la oferta de productos y servicios a lo que el cliente necesita en ese momento. Ello facilita a los gestores identificar a las personas que demandan un determinado producto, aunque todavía no se hayan dirigido directamente a ellos”, explican desde la entidad financiera.&lt;/p&gt;
&lt;p&gt;Las mismas fuentes señalan que también permite racionalizar la información comercial que el cliente recibe por sus canales digitales. El lanzamiento del proyecto ha empezado por inTouch, donde la herramienta de inteligencia artificial ya se encuentra en pleno funcionamiento para algo más de 600 gestores del servicio. CaixaBank también ha iniciado el despliegue en modo piloto en más de 300 oficinas y su puesta en marcha en los procesos y campañas de los canales de banca online, banca móvil y cajeros.&lt;/p&gt;
&lt;p&gt;Durante los próximos meses, se completará su implementación en el resto de la red. La entidad financiera creó en 2016 una filial, CaixaBank Business Intelligence, exclusivamente dedicada a la gestión de la información desde un punto de vista científico para dar apoyo a la actividad comercial. Esta filial cuenta con un equipo cercano los 100 especialistas, todos ellos especializados en las últimas técnicas de big data, IA y machine learning: predicción con GBM, xGBoosty Random Forest; Word2Vec para text mining; código de APIs de Google, cookies de geolocalización; programación en R, ODM, Python y SAS, entre otras.&lt;/p&gt;
&lt;p&gt;La sociedad, presidida por Cristina Lázaro, está integrada en la dirección general de negocio, dirigida por Juan Antonio Alcaraz.&lt;/p&gt;
&lt;p&gt;Esta herramienta de inteligencia artificial se basa en la potencia del data pool de la entidad, con más de 900 terabytes de información y capaz de gestionar 12.000 transacciones por segundo en hora punta, explican fuentes de CaixaBank.&lt;/p&gt;
&lt;p&gt;A partir de los modelos realizados con la herramienta, la entidad diseña el plan de acción comercial centrado en el perfil de cada cliente y en los canales que más emplea. “Así se consigue identificar la mejor manera de informar al cliente y se evita saturarle con la repetición de ofertas”, afirma la entidad. &lt;/p&gt;
&lt;p&gt;Con la nueva herramienta CaixaBank identifica los tres productos más relevantes para el cliente de todo el catálogo de la entidad y los alinea con las distintas campañas comerciales. De esta forma, los gestores pueden recibir cada día un listado “inteligente”, capaz de detectar los clientes más afines a los productos que se están comercializando en ese momento.&lt;/p&gt;
&lt;p&gt;El listado también recomienda a los gestores contactar con usuarios con los que no ha habido interacción en los últimos tres meses. La información que reciben los gestores se completa con una segunda herramienta predictiva, que a partir de un modelo basado en 1.300 variables, identifica a los clientes que han iniciado el proceso de abandonar la entidad. &lt;/p&gt;
</description>
     <pubDate>Wed, 30 Oct 2019 07:14:49 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1085 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>The New York Times - Warnings of a Dark Side to A.I. in Health Care</title>
    <link>https://www.talp.upc.edu/content/new-york-times-warnings-dark-side-ai-health-care</link>
    <description>&lt;div class=&quot;field-item even&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; src=&quot;https://www.talp.upc.edu/sites/default/files/22adversarial-jumbo.jpg&quot; width=&quot;1024&quot; height=&quot;683&quot; title=&quot;Scientists worry that with just tiny tweaks to data, neural networks can be fooled into committing “adversarial attacks” that mislead rather than help.Credit...Joan Cros/NurPhoto, via Getty Images&quot; /&gt;&lt;/div&gt;&lt;p&gt;By Cade Metz and Craig S. Smith - March 21, 2019&lt;/p&gt;
&lt;p&gt;Last year, the Food and Drug Administration approved a device that can capture an image of your retina and automatically detect signs of diabetic blindness.&lt;/p&gt;
&lt;p&gt;This new breed of artificial intelligence technology is rapidly spreading across the medical field, as scientists develop systems that can identify signs of illness and disease in a wide variety of images, from X-rays of the lungs to C.A.T. scans of the brain. These systems promise to help doctors evaluate patients more efficiently, and less expensively, than in the past.&lt;/p&gt;
&lt;p&gt;Similar forms of artificial intelligence are likely to move beyond hospitals into the computer systems used by health care regulators, billing companies and insurance providers. Just as A.I. will help doctors check your eyes, lungs and other organs, it will help insurance providers determine reimbursement payments and policy fees.&lt;/p&gt;
&lt;p&gt;Ideally, such systems would improve the efficiency of the health care system. But they may carry unintended consequences, a group of researchers at Harvard and M.I.T. warns.&lt;/p&gt;
&lt;p&gt;In a paper published on Thursday in the journal Science, the researchers raise the prospect of “adversarial attacks” — manipulations that can change the behavior of A.I. systems using tiny pieces of digital data. By changing a few pixels on a lung scan, for instance, someone could fool an A.I. system into seeing an illness that is not really there, or not seeing one that is.&lt;/p&gt;
&lt;p&gt;Software developers and regulators must consider such scenarios, as they build and evaluate A.I. technologies in the years to come, the authors argue. The concern is less that hackers might cause patients to be misdiagnosed, although that potential exists. More likely is that doctors, hospitals and other organizations could manipulate the A.I. in billing or insurance software in an effort to maximize the money coming their way.&lt;/p&gt;
&lt;p&gt;Samuel Finlayson, a researcher at Harvard Medical School and M.I.T. and one of the authors of the paper, warned that because so much money changes hands across the health care industry, stakeholders are already bilking the system by subtly changing billing codes and other data in computer systems that track health care visits. A.I. could exacerbate the problem.&lt;/p&gt;
&lt;p&gt;“The inherent ambiguity in medical information, coupled with often-competing financial incentives, allows for high-stakes decisions to swing on very subtle bits of information,” he said.&lt;/p&gt;
&lt;p&gt;The new paper adds to a growing sense of concern about the possibility of such attacks, which could be aimed at everything from face recognition services and driverless cars to iris scanners and fingerprint readers.&lt;/p&gt;
&lt;p&gt;An adversarial attack exploits a fundamental aspect of the way many A.I. systems are designed and built. Increasingly, A.I. is driven by neural networks, complex mathematical systems that learn tasks largely on their own by analyzing vast amounts of data.&lt;/p&gt;
&lt;p&gt;By analyzing thousands of eye scans, for instance, a neural network can learn to detect signs of diabetic blindness. This “machine learning” happens on such an enormous scale — human behavior is defined by countless disparate pieces of data — that it can produce unexpected behavior of its own.&lt;/p&gt;
&lt;p&gt;In 2016, a team at Carnegie Mellon used patterns printed on eyeglass frames to fool face-recognition systems into thinking the wearers were celebrities. When the researchers wore the frames, the systems mistook them for famous people, including Milla Jovovich and John Malkovich.&lt;/p&gt;
&lt;p&gt;A group of Chinese researchers pulled a similar trick by projecting infrared light from the underside of a hat brim onto the face of whoever wore the hat. The light was invisible to the wearer, but it could trick a face-recognition system into thinking the wearer was, say, the musician Moby, who is Caucasian, rather than an Asian scientist.&lt;/p&gt;
&lt;p&gt;Researchers have also warned that adversarial attacks could fool self-driving cars into seeing things that are not there. By making small changes to street signs, they have duped cars into detecting a yield sign instead of a stop sign.&lt;/p&gt;
&lt;p&gt;Late last year, a team at N.Y.U.’s Tandon School of Engineering created virtual fingerprints capable of fooling fingerprint readers 22 percent of the time. In other words, 22 percent of all phones or PCs that used such readers potentially could be unlocked.&lt;/p&gt;
&lt;p&gt;The implications are profound, given the increasing prevalence of biometric security devices and other A.I. systems. India has implemented the world’s largest fingerprint-based identity system, to distribute government stipends and services. Banks are introducing face-recognition access to A.T.M.s. Companies such as Waymo, which is owned by the same parent company as Google, are testing self-driving cars on public roads.&lt;/p&gt;
&lt;p&gt;Now, Mr. Finlayson and his colleagues have raised the same alarm in the medical field: As regulators, insurance providers and billing companies begin using A.I. in their software systems, businesses can learn to game the underlying algorithms.&lt;/p&gt;
&lt;p&gt;If an insurance company uses A.I. to evaluate medical scans, for instance, a hospital could manipulate scans in an effort to boost payouts. If regulators build A.I. systems to evaluate new technology, device makers could alter images and other data in an effort to trick the system into granting regulatory approval.&lt;/p&gt;
&lt;p&gt;In their paper, the researchers demonstrated that, by changing a small number of pixels in an image of a benign skin lesion, a diagnostic A.I. system could be tricked into identifying the lesion as malignant. Simply rotating the image could also have the same effect, they found.&lt;/p&gt;
&lt;p&gt;Small changes to written descriptions of a patient’s condition also could alter an A.I. diagnosis: “Alcohol abuse” could produce a different diagnosis than “alcohol dependence,” and “lumbago” could produce a different diagnosis than “back pain.”&lt;/p&gt;
&lt;p&gt;In turn, changing such diagnoses one way or another could readily benefit the insurers and health care agencies that ultimately profit from them. Once A.I. is deeply rooted in the health care system, the researchers argue, business will gradually adopt behavior that brings in the most money.&lt;/p&gt;
&lt;p&gt;The end result could harm patients, Mr. Finlayson said. Changes that doctors make to medical scans or other patient data in an effort to satisfy the A.I. used by insurance companies could end up on a patient’s permanent record and affect decisions down the road.&lt;/p&gt;
&lt;p&gt;Already, doctors, hospitals and other organizations sometimes manipulate the software systems that control the billions of dollars moving across the industry. Doctors, for instance, have subtly changed billing codes — describing a simple X-ray as a more complicated scan — in an effort to boost payouts.&lt;/p&gt;
</description>
     <pubDate>Tue, 29 Oct 2019 08:31:41 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1081 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>El País Economía RETINA - &#039;Machine learning&#039;: Spanish is still a foreign language for artificial intelligence</title>
    <link>https://www.talp.upc.edu/content/el-pais-economia-retina-machine-learning-el-espanol-sigue-siendo-una-lengua-extranjera-para</link>
    <description>&lt;div class=&quot;field-item even&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; src=&quot;https://www.talp.upc.edu/sites/default/files/1571218870_674350_1571220766_noticia_normal_recorte1.jpg&quot; width=&quot;1024&quot; height=&quot;510&quot; /&gt;&lt;/div&gt;&lt;p&gt;&lt;strong&gt;Por José Ángel Plaza López&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Crear y entrenar algoritmos con datos en nuestro idioma abriría las puertas a un mercado global de 580 millones de hispanohablantes. El español aún representa menos del 30% del mercado mundial de las tecnologías de procesamiento de lenguaje natural.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Siri, Cortana, Alexa y el asistente de Google hablan nuestro idioma. Pero no es su lengua materna. Como si compartieran aula con los 22 millones de alumnos que actualmente estudian español como idioma extranjero, según el Instituto Cervantes, a los asistentes digitales se le atragantan partes de esta asignatura. “A las máquinas les cuesta entender los acentos de las diferentes partes del país y las variedades del español en América, mientras que funcionan mejor en inglés porque esa es la lengua de la mayor parte de los ensayos, investigaciones y publicaciones científicas”, comenta Elena González-Blanco, directora general de Coverwallet en Europa y experta en inteligencia artificial (IA) y tecnología lingüística.&lt;/p&gt;
&lt;p&gt;En su opinión, responder cuestiones que implican subjetividad y un conocimiento previo del contexto es uno de los principales escollos de la IA que reconoce, interpreta e imita la voz humana. “Un idioma no se aprende únicamente con clases de gramática, sino que además se requiere saber usarlo en determinados contextos y registros”, asegura.&lt;/p&gt;
&lt;p&gt;Pero si a los robots se les aporta mucha información y casuística con baterías ingentes de preguntas y sus posibles respuestas, aunque no sean capaces de emular emocionalmente un contexto, sí tendrán información para al menos reproducir situaciones similares. “La cosa mejora a medida que la información contextual se suple con entrenamiento, pero para ello necesitamos una cantidad enorme de datos, sobre todo si existen distintos registros, dialectos y variedades lingüísticas”, apunta González-Blanco.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Sobresaliente en inglés y chino&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Roberto Carreras, fundador y director de Voikers, compañía especializada en tecnología de voz, recuerda que, además del inglés, otro de los principales idiomas de la IA es el chino, debido a “su capacidad de penetración, la apuesta del Gobierno por el desarrollo de esta tecnología y el impacto en millones de habitantes”. No en vano, se trata del idioma más hablado en el planeta.&lt;/p&gt;
&lt;p&gt;¿Pero qué ocurre con el español, la segunda lengua materna del mundo por número de hablantes? “Los conjuntos de datos en español usados para entrenar a la IA son todavía pequeños y cuando desarrollamos proyectos que requieren una cantidad ingente de información debemos tirar del inglés”, apunta Carreras.&lt;/p&gt;
&lt;p&gt;Por eso, no es extraño que, según las cifras que maneja González-Blanco, el español aún represente menos del 30% del mercado mundial de las tecnologías PLN (Procesamiento del Lenguaje Natural), que según la consultora Credence Research crecerá a un ritmo anual cercano al 12% entre 2018 y 2026, año en el que alcanzará los 28.600 millones de dólares.&lt;/p&gt;
&lt;p&gt;Así las cosas, González-Blanco asegura que nuestra lengua puede convertirse en uno de los “catalizadores” de la competitividad de España en el ámbito de la inteligencia artificial, ya que empresas de todos los sectores cuentan con muchísima información histórica en español con la que pueden entrenar a las máquinas. “El reto es usar correctamente esos datos con fines de investigación para crear y mejorar nuestros propios algoritmos y después comercializarlos en un mercado potencial con 580 millones de hispanohablantes”, puntualiza la experta, que vaticina un mayor éxito de aquellos proyectos “de nicho” específicamente destinados a cubrir una necesidad muy concreta, ya que la IA solo funciona bien cuando tiene una finalidad delimitada a partir de una información muy controlada.&lt;/p&gt;
&lt;p&gt;Se trata de una opinión compartida por Carreras, que subraya los “grandes esfuerzos” que se están haciendo en la actualidad por destacar la importancia del español en el futuro de la IA. Como ejemplo, el director de Voikers saca a relucir el Plan de Impulso de las Tecnologías del Lenguaje, una iniciativa de la Secretaría de Estado para el Avance Digital: “Sin lugar a dudas, es una de las grandes apuestas de nuestro país por conectar el mundo universitario de la investigación en tecnologías del habla con el mundo corporativo, que adopta a velocidad de vértigo estas soluciones en procesos internos y externos”.&lt;/p&gt;
&lt;p&gt;Según Carreras, todos los sectores de actividad podrán verse beneficiados por la implementación de las tecnologías de voz, que supone un nuevo contexto de relación entre las instituciones y sus comunidades, las empresas y sus clientes y las compañías y sus diferentes grupos de interés. En su opinión, las bondades de aplicar la IA a la voz ya son palpables en la salud, la banca, la automoción, los seguros, la educación, el turismo o la accesibilidad a la tecnología de colectivos como discapacitados, mayores y niños.&lt;/p&gt;
&lt;p&gt;“Pero hay que estar preparados para lo que viene. En los próximos años veremos cómo los asistentes virtuales y los asistentes personales que emplean la voz como interfaz modificarán la forma que entendemos hasta hoy de construir marcas, de crear relaciones en un entorno conversacional, de generar experiencias y contenidos o de vender y atender a los clientes”, concluye Carreras.&lt;/p&gt;
</description>
     <pubDate>Thu, 24 Oct 2019 09:27:13 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1078 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>La Vanguardia - The new tools with which Alexa has learned US Spanish and Brazilian Portuguese</title>
    <link>https://www.talp.upc.edu/content/la-vanguardia-las-nuevas-herramientas-con-las-que-alexa-ha-aprendido-espanol-de-eeuu-y</link>
    <description>&lt;p&gt;REDACCIÓN  14/10/2019 10:22&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Alexa ha introducido recientemente el portugués de Brasil y el español de Estados Unidos a la lista de idiomas que puede hablar, después de seguir un proceso de aprendizaje que no depende de las interacciones de los usuarios con el asistente, sino que emplea técnicas como la inducción gramatical o el muestreo guiado.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;MADRID, 14 (Portaltic/EP) Alexa ha introducido recientemente el portugués de Brasil y el español de Estados Unidos a la lista de idiomas que puede hablar, después de seguir un proceso de aprendizaje que no depende de las interacciones de los usuarios con el asistente, sino que emplea técnicas como la inducción gramatical o el muestreo guiado. &quot;Cuando una versión de Alexa en un nuevo idioma está en desarrollo, los datos de capacitación para sus sistemas de comprensión de lenguaje natural (NLU) son escasos&quot;, explica la compañía en una entrada en su blog oficial. En estos casos, el equipo de desarrollo de Alexa emplea lo que se denomina enunciados dorados (&#039;golden utterances&#039;), comandos propuestos por los desarrolladores a modo de ejemplo, y trabaja con ellos siguiendo distintas técnicas para potenciar el entrenamiento. Así, los desarrolladores siguen una técnica denominada inducción gramatical, que, como explican, &quot;analiza un puñado de enunciados dorados para aprender patrones sintácticos y semánticos&quot;, con los que es capaz de generar &quot;miles de nuevas oraciones similares&quot;. Esta técnica lo que hace es acelerar el aprendizaje de un nuevo idioma cuando no hay un gran número de ejemplos procedentes de usuarios con los que trabajar. Así, &quot;dada una lista de 50 enunciados dorados, un lingüista computacional podría generar una gramática representativa en un día, y podría ser operacionalizada al final del siguiente día&quot;, señalan en el blog de Alexa. Los desarrolladores de Alexa mencionan también el enfoque conocido como fusión de modelo bayesiano, que &quot;identifica patrones lingüísticos en listas de enunciados dorados y los usa para generar reglas candidatas para diferentes plantillas de oraciones&quot;, con la que pueden establecer si, en contextos definidos, dos palabras son intercambiables, por ejemplo. 
Otra técnica, muestreo guiado, permite generar nuevas oraciones mediante la &quot;combinación de palabras y frases de los ejemplos disponibles en los datos&quot; y está enfocada a &quot;maximizar la precisión de los modelos NLU resultantes&quot;. Esta técnica emplea también datos de entrenamiento disponibles, tanto de otros idiomas disponibles en Alexa como de fuentes de medios digitales, para dotar de contexto.&lt;/p&gt;
</description>
     <pubDate>Wed, 23 Oct 2019 06:16:02 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1075 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>El País Economía-Retina - Spain embraces facial recognition and AI-powered surveillance</title>
    <link>https://www.talp.upc.edu/content/el-pais-economia-retina-espana-se-apunta-al-reconocimiento-facial-y-la-vigilancia-con</link>
    <description>&lt;div class=&quot;field-item even&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; src=&quot;https://www.talp.upc.edu/sites/default/files/1568965024_919905_1568965263_noticia_normal_recorte1.jpg&quot; width=&quot;1024&quot; height=&quot;683&quot; title=&quot;Getty Images&quot; /&gt;&lt;/div&gt;&lt;p&gt;Guillermo Vega   23 SEP 2019 - 07:33 CEST&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Vigilar a sus ciudadanos con inteligencia artificial no es algo exclusivamente chino. Un informe cifra en al menos 75 los países que están utilizando activamente herramientas de IA como el reconocimiento facial para la vigilancia. España está ente ellos.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Uno oye vigilancia mediante inteligencia artificial y piensa en China. Las recientes protestas en Hong Kong son buena muestra de ello. El uso de esta tecnología (Gran Hermano, según el tópico) es sin embargo un asunto cada vez más global y cada vez más países siguen la estela china en el despliegue de inteligencia artificial para rastrear a los ciudadanos, según un informe del grupo de investigación Carnegie Endowment for International Peace.&lt;/p&gt;
&lt;p&gt;La organización cifra en al menos 75 los países que están utilizando activamente herramientas de IA como el reconocimiento facial para la vigilancia. España está ente ellos, así como Estados Unidos o aliados como Francia o Alemania.&lt;/p&gt;
&lt;p&gt;Carnegie Endowment for International Peace es un think tank con sede en Washington D.C. creado en 1910 por el filántropo y empresario Andrew Carnegie. Es el editor de la prestigiosa revista Foreign Policy. CEIP se ha basado en registros públicos e informes de los medios en 176 países para llevar a cabo su investigación, y recalca que no realiza distinción alguna entre usos legítimos e ilegítimos de la inteligencia artificial.&lt;/p&gt;
&lt;p&gt;La institución se plantea estas preguntas.&lt;/p&gt;
&lt;p&gt;¿Qué países están adoptando tecnología de vigilancia basada en IA?&lt;/p&gt;
&lt;p&gt;¿Qué tipos específicos de tecnología de vigilancia basada en IA están desplegando los gobiernos?&lt;/p&gt;
&lt;p&gt;¿Qué países están desplegando esta tecnología?&lt;/p&gt;
&lt;p&gt;En el caso de España, CEIP detecta que el país usa tanto tecnología china y de EE UU en ámbitos como el reconocimiento facial, vigilancia inteligente y en seguridad de las ciudades. “Resulta cuántos casos de estudio de vigilancia en municipios de Alemania, Italia, Holanda y España se citan en la página web de Huawei”, asegura el informe.&lt;/p&gt;
&lt;p&gt;La compañía china, de hecho, es uno de los principales proveedores de tecnología de inteligencia artificial del Estado español, junto con las españolas Herta Security y SICE y la estadounidense IBM.&lt;/p&gt;
&lt;p&gt;A nivel global, el informe sostiene que son las chinas lideradas por Huawei y Hikvision las que están suministrando gran parte de la tecnología de vigilancia de IA a países de todo el mundo. Otras compañías, como NEC de Japón, además de IBM, Palantir y Cisco de EE UU. Ninguna de estas compañías ha querido comentar el informe con la agencia Associated Press.&lt;/p&gt;
&lt;p&gt;Porque introducir inteligencia artificial para vigilar a los ciudadanos no es exclusiva de regímenes autoritarios o semiautoritarios. De hecho, las democracias liberales como la española son los principales usuarios de la vigilancia de la IA. El 51% de estos países implementan sistemas de vigilancia de IA.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Preguntas&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&quot;Espero que los ciudadanos hagan preguntas más difíciles sobre cómo se usa este tipo de tecnología y qué tipo de impactos tendrá&quot;, asegura el autor del informe, Steven Feldstein, miembro de Carnegie Endowment y profesor asociado en la Universidad Estatal de Boise.&lt;/p&gt;
&lt;p&gt;Muchos de los proyectos citados en el informe de Feldstein son sistemas de &quot;ciudades inteligentes&quot; en los que un gobierno municipal instala una serie de sensores, cámaras y otros dispositivos conectados a Internet para recopilar información y comunicarse entre sí. Estos sistemas se suelen usar para administrar el tráfico o ahorrar energía, pero que también tienen utilidades en tareas de vigilancia y seguridad públicas, sostiene el propio científico.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;El caso chino&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Evidentemente, China es el líder absoluto en este sentido, y cuenta no solo con las citadas Huawei y Hikvision, sino con otras muchas compañías de menor tamaño que cuentan con algoritmos capaces de detectar rostros y enviar la información al instante a la policía. Es el caso de Dragonfly Eye, el algoritmo de inteligencia artificial (IA) que la empresa china Yitu ha desarrollado para detectar a malhechores a través de un sofisticado sistema de reconocimiento facial. “La IA es una revolución mayor y más rápida que la industrial”, aseguró recientemente a EL PAÍS Retina el cofundador de Yitu, Zhu Long. “La gente está enfrascada en un debate sobre si es algo real o una burbuja, pero los avances en reconocimiento facial confirman su enorme potencial”.&lt;/p&gt;
</description>
     <pubDate>Mon, 30 Sep 2019 07:23:04 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1072 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>The New York Times - When the A.I. Professor Leaves, Students Suffer, Study Says</title>
    <link>https://www.talp.upc.edu/content/new-york-times-when-ai-professor-leaves-students-suffer-study-says</link>
    <description>&lt;div class=&quot;field-item even&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; src=&quot;https://www.talp.upc.edu/sites/default/files/merlin_160248942_c93f5794-e5dd-4a7d-85c5-c358d286ed6a-jumbo.jpg&quot; width=&quot;1024&quot; height=&quot;683&quot; title=&quot;Dimitris Konstantinidis, left, Ellen Cappo and Arjav Desai placing batteries into drones at the Robust Adaptive Systems Lab, part of Carnegie Mellon University’s Robotics Institute.CreditCreditKristian Thacker for The New York Times&quot; /&gt;&lt;/div&gt;&lt;p&gt;By Cade Metz  Sept. 6, 2019&lt;/p&gt;
&lt;p&gt;SAN FRANCISCO — For years, big tech companies have used huge salaries, bonuses and stock packages to lure artificial intelligence experts out of academia. Now, a study released on Friday says that migration has hurt the post-college prospects of students.&lt;/p&gt;
&lt;p&gt;The study, the first of its kind, was conducted by researchers at the University of Rochester. They found that over the last 15 years, 153 artificial intelligence professors in North American universities left their posts for industry. An additional 68 moved into industry while retaining part-time roles with their universities.&lt;/p&gt;
&lt;p&gt;This migration has greatly increased in recent years, the study said. From 2004 to 2009, 26 university professors moved into industry. In 2018 alone, 41 professors made the move. The steep rise in departures over the last decade and a half indicates that the trend will continue.&lt;/p&gt;
&lt;p&gt;The talent shift could accelerate the development of artificial intelligence inside tech giants like Google, Microsoft, Amazon and Apple.&lt;/p&gt;
&lt;p&gt;But at the universities the professors left, graduating students were less likely to create new A.I. companies. When they did, they attracted smaller amounts of funding, according to the study. The effect was most pronounced in the field of “deep learning,” a technology that has become a crucial part of new A.I. systems.&lt;/p&gt;
&lt;p&gt;In time, the brain drain from academia could hamper innovation and growth across the economy, the study argued. “The knowledge transfer is lost, and because of that, so is innovation,” said Michael Gofman, a finance professor at the University of Rochester and one of the authors of the study.&lt;/p&gt;
&lt;p&gt;Deep learning is driven by “neural networks,” complex mathematical systems that can learn tasks by analyzing vast amounts of data. By pinpointing patterns in thousands of dog photos, for instance, a neural network can learn to recognize a dog.&lt;/p&gt;
&lt;p&gt;Until 2010, the tech industry largely ignored the idea. But as the internet generated more data and new computer chips reduced the time needed to analyze it, the technique started to make more sense for the companies that had all that information.&lt;/p&gt;
&lt;p&gt;The technique helps A.I. systems perform tasks like recognizing photos, identifying spoken words and translating languages. It also helps self-driving cars recognize objects and make decisions.&lt;/p&gt;
&lt;p&gt;Big tech companies have hired many of the academics who specialized in the technique. Three longtime academics recently won the Turing Award — often called the Nobel Prize of computing — for their work on neural networks. Two have moved into industry, one to Google and the other to Facebook.&lt;/p&gt;
&lt;p&gt;The tech and automobile industries have aggressively pursued the idea of a driverless car, drawing another wave of academics out of the universities. In 2015, Uber hired 40 people from a Carnegie Mellon robotics lab, including research professors.&lt;/p&gt;
&lt;p&gt;Since then, industry interest in artificial intelligence of all kinds has increased, according to the study. Google and DeepMind, both owned by Alphabet, have hired 23 professors. Amazon has hired 17, Microsoft has hired 13, and Uber, Nvidia and Facebook have each hired seven.&lt;/p&gt;
&lt;p&gt;Tech companies disagree with the notion that they are plundering academia. A Google spokesman, for example, said the company was an enthusiastic supporter of academic research.&lt;/p&gt;
&lt;p&gt;“We’ve given over $250 million to academic research since 2005, and every year we host over 30 visiting faculty, dozens of Ph.D. students and thousands of interns,” said the spokesman, Jason Freidenfelds. He said many professors went to work at Google and returned to their university positions.&lt;/p&gt;
&lt;p&gt;The study found that students most affected by the departures were those who graduated four to six years later, meaning they probably had little direct interaction with the departing professors. At any given university, a significant increase in the number of departing professors reduced the number of A.I. entrepreneurs by 13 percent.&lt;/p&gt;
&lt;p&gt;The impact was particularly notable at top 10 universities and with entrepreneurs who completed Ph.D.s.&lt;/p&gt;
&lt;p&gt;Some experts worry that as top professors move into industry, the education of the next generation of students will suffer.&lt;/p&gt;
&lt;p&gt;“If industry keeps hiring the cutting-edge scholars, who will train the next generation of innovators in artificial intelligence?” Ariel Procaccia, a computer science professor at Carnegie Mellon, said in a recent op-ed piece for Bloomberg.&lt;/p&gt;
&lt;p&gt;At Carnegie Mellon, 17 professors, all of them tenured, have moved into industry, the study showed. Some were part of that mass hire by Uber. The University of Washington and the University of California, Berkeley, each saw 11 make the move.&lt;/p&gt;
&lt;p&gt;The Rochester study did not directly analyze the quality of the training students received after these professors departed. Instead, it focused on the start-up economy, showing that departures led to fewer student start-ups. When professors left universities and were replaced by professors from lower-ranked schools, students were even less likely to create a start-up, the study said.&lt;/p&gt;
&lt;p&gt;Experts are split on whether a decline in the start-up economy will harm the progress of A.I.&lt;/p&gt;
&lt;p&gt;“Just because students are not starting their own firms does not mean they are not going to work in A.I.,” said Joshua Graff Zivin, a professor of economics at the University of California, San Diego. “They just may be doing this in other ways.”&lt;/p&gt;
&lt;p&gt;But many agree that university funding should be increased to ensure that the next generation is properly educated.&lt;/p&gt;
&lt;p&gt;“Machine learning and artificial intelligence is one of the most exciting and rapidly expanding fields in science,” said Scott Stern, a professor at the M.I.T. Sloan School of Management. “We need to make sure we are investing in the seed corn.”&lt;/p&gt;
</description>
     <pubDate>Thu, 26 Sep 2019 07:46:09 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1069 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>TechExplore - Japan roboticists predict rise of the machines</title>
    <link>https://www.talp.upc.edu/content/techexplore-japan-roboticists-predict-rise-machines</link>
    <description>&lt;div class=&quot;field-item even&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; src=&quot;https://www.talp.upc.edu/sites/default/files/howcomfortab.jpg&quot; width=&quot;800&quot; height=&quot;480&quot; title=&quot;How comfortable will we feel surrounded by autonomous humanoids?&quot; /&gt;&lt;/div&gt;&lt;p&gt;by Alastair Himmer SEPTEMBER 23, 2019&lt;/p&gt;
&lt;p&gt;Set in 2019, cult 80s movie &quot;Blade Runner&quot; envisaged a neon-stained landscape of bionic &quot;replicants&quot; genetically engineered to look just like humans.&lt;/p&gt;
&lt;p&gt;So far that has failed to materialise, but at a secretive research institute in western Japan, wild-haired roboticist Hiroshi Ishiguro is fine-tuning technology that could blur the line between man and machine.&lt;/p&gt;
&lt;p&gt;Highly intelligent, self-aware and helpful around the house—the robots of the future could look and act just like humans and even become their friends, Ishiguro and his team predict.&lt;/p&gt;
&lt;p&gt;&quot;I don&#039;t know when a &#039;Blade Runner&#039; future will happen, but I believe it will,&quot; the Osaka University professor told AFP.&lt;/p&gt;
&lt;p&gt;&quot;Every year we&#039;re developing new technology—like deep learning, which has improved the performance of pattern recognition,&quot; he added.&lt;/p&gt;
&lt;p&gt;&quot;Now we&#039;re focusing on intention and desire, and if we implement them into robots whether they become more human-like.&quot;&lt;/p&gt;
&lt;p&gt;Robots are already widely used in Japan—from cooking noodles to helping patients with physiotherapy.&lt;/p&gt;
&lt;p&gt;Marketed as the world&#039;s first &quot;cyborg-type&quot; robot, HAL (hybrid assistive limb)—developed by Tsukuba University and Japanese company Cyberdyne—is helping people in wheelchairs walk again using sensors connected to the unit&#039;s control system.&lt;/p&gt;
&lt;p&gt;Scientists believe service robots will one day help us with household chores, from taking out the garbage to making the perfect slice of toast.&lt;/p&gt;
&lt;p&gt;Stockbrokers in Japan and around the world are already deploying AI bots to forecast stock market trends, and science fiction&#039;s rapid advance towards science fact owes much to the likes of Ishiguro.&lt;/p&gt;
&lt;p&gt;He previously created an android copy of himself—using complex moving parts, electronics, silicone skin and his own hair—that he sends on business trips in his place.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&#039;Wake up, time to die&#039;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;But Ishiguro believes recent breakthroughs in robotics and artificial intelligence will accelerate the synthesis of man and machine.&lt;/p&gt;
&lt;p&gt;&quot;As a scientist, I hope to develop self-conscious robots like you see in &#039;Blade Runner&#039; to help me understand what it is to be human,&quot; he said. &quot;That&#039;s my motivation.&quot;&lt;/p&gt;
&lt;p&gt;The point at which that line between humans and machines converges has long been a source of anxiety for some, as depicted in popular culture.&lt;/p&gt;
&lt;p&gt;In &quot;Blade Runner&quot;, Harrison Ford plays a police officer who tracks down and kills replicants that have escaped and are living among the population in Los Angeles.&lt;/p&gt;
&lt;p&gt;The &quot;Terminator&quot; series starring Arnold Schwarzenegger centres on a self-aware computer network which initiates a nuclear holocaust and, through autonomous military machines, wages war against human survivors.&lt;/p&gt;
&lt;p&gt;&quot;I can&#039;t understand why Hollywood wants to destroy robots,&quot; shrugged Ishiguro, who in 2007 was named one of the top 100 living geniuses by global consultants firm Synectics.&lt;/p&gt;
&lt;p&gt;&quot;Look at Japanese cartoons and animations—robots are always friendly. We have a totally different cultural background,&quot; noted the professor.&lt;/p&gt;
&lt;p&gt;It&#039;s not just Hollywood that has concerns over AI.&lt;/p&gt;
&lt;p&gt;Tesla&#039;s Elon Musk has called for a global ban on killer robots, warning technological advances could revolutionise warfare and create new &quot;weapons of terror&quot; that target innocent people.&lt;/p&gt;
&lt;p&gt;But Ishiguro insists there is no inherent danger in machines becoming self-aware or surpassing human intelligence.&lt;/p&gt;
&lt;p&gt;&quot;We don&#039;t need to fear AI or robots, the risk is controllable,&quot; he said. &quot;My basic idea is that there is no difference between humans and robots.&quot;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&#039;Uncanny valley&#039;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The ultimate goal, according to Ishiguro&#039;s colleague Takashi Minato, is &quot;to bring robots into society as human companions—it&#039;s possible for robots to become our friends.&quot;&lt;/p&gt;
&lt;p&gt;But will they look like us, as Ishiguro believes, and how comfortable will we feel surrounded by autonomous humanoids?&lt;/p&gt;
&lt;p&gt;Japanese roboticist Masahiro Mori suggested in 1970 that the more robots resemble people, the creepier we find them—a phenomenon he called the &quot;uncanny valley&quot;.&lt;/p&gt;
&lt;p&gt;Ishiguro&#039;s first attempt at creating an android clone was based on his daughter and its &quot;jerky movements&quot; reduced her to tears.&lt;/p&gt;
&lt;p&gt;He has since perfected the template, including a creation he claimed was the world&#039;s first news-reading android and a robot priest at a Kyoto temple unveiled earlier this year.&lt;/p&gt;
&lt;p&gt;Minato shares his boss&#039;s visionary ideas.&lt;/p&gt;
&lt;p&gt;&quot;Hopefully remote-control technology will develop to allow our alter egos to lead regular lives,&quot; he said.&lt;/p&gt;
&lt;p&gt;&quot;Like in the movie &#039;Surrogates&#039;— that would make life more convenient,&quot; he added, referencing the sci-fi Bruce Willis hit in which people cocooned at home experience lives through robotic avatars.&lt;/p&gt;
&lt;p&gt;While he won&#039;t put a date on a real-life &quot;Blade Runner&quot; future, Ishiguro claims the rise of the machines has already begun.&lt;/p&gt;
&lt;p&gt;&quot;Already computers are more powerful than humans in some cases,&quot; he said. &quot;Technology is just another means of evolution. We are changing the definition of what it is to be human.&quot;&lt;/p&gt;
</description>
     <pubDate>Mon, 23 Sep 2019 07:01:18 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1063 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>Medium - Artificial Intelligence - The arrival of the artificial employee</title>
    <link>https://www.talp.upc.edu/content/medium-artificial-inteligence-arrival-artificial-employee</link>
    <description>&lt;p&gt;Kate Darmody&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The inception of the artificial employee is on the horizon and workplace diversity will soon include artificial intelligence.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The rise of AI in all areas of life is inevitable, and it is set to reshape the way we think about consciousness and our personal identity. In our current economic climate, it is no longer a futuristic dream for workplaces to include artificial intelligence (AI) as part of their workforce. From The Matrix to Black Mirror, we have seen freakishly smart, artificially intelligent robots bleed into popular media over the last two decades. While terrifying, these films have captivated and inspired our collective consciousness, shaping the way we see a future influenced by AI.&lt;/p&gt;
&lt;p&gt;In a world where we can barely imagine what our lives will look like in 5 years&#039; time, it is useless to try to predict exactly how robots are going to change the way we live. But given our fascination with technology, it is fair to assume AI is here to create change. With the support and investment of large companies such as Google, Amazon, Apple, Uber and Microsoft, all committed to the development of AI, the sci-fi future we always saw in films is much closer than we once predicted.&lt;/p&gt;
&lt;p&gt;As a creative, an AI enthusiast and a product designer at Solstice, I have had the opportunity to work with emerging technologies across various sectors. Over the last couple of years I have witnessed the exciting, and sometimes scary, landscape of technology change and I believe the realities of AI will not simply impact the products that we build but our entire existence. Living and working with AI in the future will compel us to reassess our ethics as a society as well as our personal morals and sense of self.&lt;/p&gt;
&lt;p&gt;Take a moment now and imagine walking into your office building for a meeting; as you enter the foyer, you are greeted by an AI robot that behaves like a human.&lt;br /&gt;
	This is already part of my everyday routine, and this is the future. The simple fact that we will be interacting with the appearance of consciousness in things that are clearly not biological will be enough for us to (at least unconsciously) revise what we think consciousness is.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;So what is AI?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;AI, or artificial intelligence, is a field of computer science that also underpins disciplines such as robotics. AI is an automated process that imitates the way the human brain learns by creating what are called artificial neural networks.&lt;br /&gt;
	For example, when teaching AI a new skill such as visual recognition, it is provided with an image and then tasked with finding images with the same subject matter. It trawls through vast numbers of images to find other photos that share similarities. Each correct answer reinforces the neural pathways, so, just as humans do, it learns from experience and positive reinforcement. As the algorithm learns and evolves, its accuracy increases. We see this type of AI in government agencies, cyber security, search engines and much more, as it can efficiently filter through large amounts of data.&lt;/p&gt;
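&lt;p&gt;The retrieval task just described (given one image, find others with the same subject) can be sketched as a similarity search over feature vectors. Everything below is hypothetical: real systems derive the vectors with a trained neural network rather than hard-coding them.&lt;/p&gt;

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented feature vectors; a real system would extract these from pixels.
library = {
    "dog_1": (0.9, 0.1, 0.2),
    "dog_2": (0.8, 0.2, 0.1),
    "car_1": (0.1, 0.9, 0.3),
    "tree_1": (0.2, 0.3, 0.9),
}

def most_similar(query, library):
    """Return the library image whose features best match the query."""
    return max(library, key=lambda name: cosine(library[name], query))

match = most_similar((0.85, 0.15, 0.15), library)  # a dog-like query
```

&lt;p&gt;Here the dog-like query retrieves one of the dog images; the reinforcement the article mentions corresponds to adjusting the feature extractor whenever a retrieval is marked correct or incorrect.&lt;/p&gt;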
&lt;p&gt;While a future with humanistic AI robots may no longer be inconceivable, it is important to acknowledge that AI and humans are very different. First, the decision-making processes of humans and AI differ significantly.&lt;br /&gt;
	While humans rely heavily on intuition, emotion and instinct, AI can be programmed to rapidly filter through enormous numbers of possibilities to calculate the most accurate answer. While this computational data-crunching can automate many otherwise-mundane tasks, the processing power required is substantial. This can become problematic in the advancement of AI and has historically confined AI systems to a single specialized function.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;How will this impact our lives?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;While we will continue to see media perpetuating the image of sophisticated AI minds, humans won’t be dismissed as evolutionary landfill just yet. Until we can better understand our own brains, an emotionally intelligent robot remains a very distant possibility.&lt;br /&gt;
	We can find comfort in knowing a robot won’t be replacing our colleagues in the immediate future, though AI has been shifting into various industries since the 1990s. From agriculture to vehicles, finance, advertising, science and medical technology, AI is constantly advancing and becoming more sophisticated. Moreover, with each improvement, we are finding people to be more open to it and more dependent upon it.&lt;/p&gt;
&lt;p&gt;Over the last two years we have seen drastic growth in the evolution of artificial intelligence. Today we have virtual assistants, smart vehicles, energy-efficient devices, automated supply chains and microchips that mirror the neural structure of the brain. Personalized digital entertainment experiences now provide movie or music recommendations, and automated customer support services are among many AI services that we increasingly take for granted. As practical as neural networks might be for efficiency, automation, data interpretation and pattern recognition, they lack long-term memory.&lt;/p&gt;
&lt;p&gt;Neuroscience has informed ongoing research that explores the relationship between human thinking, artificial intelligence and society. Having spent extensive time working within brain injury I have gained insight into the way our brains handle acute trauma and learning. I believe it is essential to understand human cognition in order to drive forward AI advancement — and vice versa.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Where does AI fail us?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;AI succeeds best when tasked with finding black-and-white answers. Before we can program AI to understand how to make the correct decision for us, we need to understand human consciousness. We have to ask what drives people’s decision-making processes and what impact their decisions have on others, the economy, the environment and policy. Taken further, the lack of empathy in AI means humans will always be better at making complex decisions, as they can better account for unforeseen variables.&lt;/p&gt;
&lt;p&gt;Beyond the computational power of AI, which is immensely beneficial to society, the potential of a future in which we interact with AI personalities regularly is a rather exciting prospect. It is particularly thrilling to consider the diversity of viewpoints this implies. AI, by nature, will have opinions, whether these were developed consciously or unconsciously. Ultimately, AI gives answers that we believe are correct; however, bias can cause those answers to vary across different groups of people.&lt;/p&gt;
&lt;p&gt;Looking through the lens of the workplace, diversity is a theme we see more and more commonly. In the future we can expect diversity to account for AI too. Furthermore, the evolution of our workforce will prompt a review of major societal infrastructure, such as tax collection procedures.&lt;br /&gt;
	We can expect that AI will change the workforce, but it won’t just be another cog in the machine. AI is already being used in law and medicine, not only to read and assess documents but to make recommendations, while advances in robotics are allowing doctors to perform surgeries remotely. Soon, AI may even be able to perform simple surgeries.&lt;/p&gt;
&lt;p&gt;So while the pervading fear that technology will innovate entire careers out of existence remains, we can take comfort in knowing that humans are never going to entirely design themselves out of the picture. Those who are open to learning and who are adaptable will remain in high demand, and it is exciting to think of the dynamic workforce we will be among. I believe the biggest differentiator for those who intend to remain employed in the future will be the willingness to be agile within the roles they take.&lt;/p&gt;
</description>
     <pubDate>Thu, 12 Sep 2019 10:34:39 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1057 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>Catalunya Press - Artificial intelligence and its biases</title>
    <link>https://www.talp.upc.edu/content/catalunya-press-la-inteligencia-artificial-y-sus-prejuicios</link>
    <description>&lt;p&gt;Pablo Rodríguez Canfranc&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Artificial intelligence algorithms are ever more present in our lives. &lt;/strong&gt;Many processes involving people are being automated, such as shortlisting candidates for a job or granting bank loans, to give just two examples. A computer program receives all the information about each individual and, based on the parameters with which it has been programmed, carries out the evaluation.&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt;The problem is that, depending on what they are used for, these algorithms can be responsible for making important decisions that affect people&#039;s lives. They can determine whether or not we get a job, whether or not we can study at the school or university we applied to, and even, as in the COMPAS case mentioned above, whether we are granted provisional release. Automating these processes aims to make each evaluation much more objective by eliminating the prejudices inherent in humans, but, paradoxically, biases and flaws appear in the algorithms that lead them to discriminate against certain people and groups.&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt;That is why Cathy O&#039;Neil, an expert in intelligent systems, calls them weapons of math destruction in her book of the same name. These are programs that can do a great deal of harm to a great many people. Worst of all, the victims of their decisions do not know by what criteria they have been evaluated, since the workings of the algorithms are too complex and are known only to the technicians who design them.&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Algorithms do not develop discriminatory biases on their own; they reproduce the prejudices of their educators. As with a child, their learning depends largely on the teacher they have and the textbook they use.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In this metaphor, conceived by MIT researcher Rahul Bhargava, the textbook for artificial intelligence is the data it is trained on so that it learns to make decisions: large volumes of data made up of examples of situations that were resolved to our satisfaction, or that correctly answer the question we want the algorithm to learn to answer.&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt;For example, if we are training a system to estimate how likely loan applicants are to repay their debt on time, we will feed the artificial intelligence information about loans that were repaid correctly, so that it can extract from them a pattern describing the borrower most likely to meet their obligations and, in this way, rank applicants by their risk of default.&lt;/p&gt;
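&lt;p&gt;A minimal sketch of this training idea, with invented income bands and repayment records (real systems use far richer features and models):&lt;/p&gt;

```python
# Hypothetical loan records: (income_band, repaid) pairs.
history = [
    ("high", True), ("high", True), ("high", False),
    ("low", True), ("low", False), ("low", False),
]

def repayment_rate(history, band):
    """Fraction of past loans in this band that were repaid."""
    outcomes = [repaid for b, repaid in history if b == band]
    return sum(outcomes) / len(outcomes)

def risk_score(history, band):
    """Estimated default risk: 1 minus the observed repayment rate."""
    return 1.0 - repayment_rate(history, band)
```

&lt;p&gt;With this toy history, the estimated risk for the low-income band is twice that of the high-income band: the pattern the model extracts is only as fair as the examples it was fed.&lt;/p&gt;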
&lt;p&gt; &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The second element involved in the machine&#039;s learning is the teacher&lt;/strong&gt;, that is, the person who asks the questions and determines which set of data the algorithm should consider in order to produce an answer. In the example above, the system can be told to take into account applicant data such as the amount requested, the repayment period or the applicant&#039;s income level, but other attributes can also be included, such as family situation, gender or race.&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt;In this way, the discriminatory biases that algorithms exhibit are reflections of our own prejudices, since they depend on the data we feed the system and on the questions we ask it.&lt;/p&gt;
&lt;p&gt;Imagine, continuing with the same example, that we feed the algorithm built to assess the creditworthiness of loan applicants with credit histories drawn mostly from white people. We then tell it to use the applicant&#039;s ethnicity as one of the parameters for its decisions.&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt;The pattern the system generates of what a creditworthy person looks like could conclude that Black borrowers are not creditworthy, since it does not find, in the information it has received, enough examples of dark-skinned citizens paying off their debts and, worse still, its teacher has told it that race is an important factor in forming a judgment. As a result, it assigns a high risk to lending funds to applicants who are not white.&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Fortunately, some of the leading institutions in the field of artificial intelligence are becoming aware of the flaws in the algorithms they develop&lt;/strong&gt; and, as a result, are trying to put means in place to correct them. Last year IBM announced the launch of a tool that analyzes in real time how and why algorithms make decisions. Called AI Fairness 360, it can track signs that a system is producing biased results and even recommend adjustments to correct them. Google, for its part, has presented the What-If Tool, an application that explores how artificial intelligence systems behave over time.&lt;/p&gt;
</description>
     <pubDate>Thu, 05 Sep 2019 07:17:52 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1054 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>The conversation - Artificial intelligence in medicine raises legal and ethical concerns</title>
    <link>https://www.talp.upc.edu/content/conversation-artificial-intelligence-medicine-raises-legal-and-ethical-concerns</link>
    <description>&lt;div class=&quot;field-item even&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; src=&quot;https://www.talp.upc.edu/sites/default/files/file-20190830-166001-1mzagz8.jpg&quot; width=&quot;600&quot; height=&quot;395&quot; title=&quot;The hope is that AI will be able to read radiological images more efficiently than a human.  AP Photo/David Goldman&quot; /&gt;&lt;/div&gt;&lt;p&gt;Sharona Hoffman - Professor of Health Law and Bioethics, Case Western Reserve University.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The use of artificial intelligence in medicine is generating great excitement and hope for treatment advances.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;AI generally refers to computers’ ability to mimic human intelligence and to learn. For example, by using machine learning, scientists are working to develop algorithms that will help them make decisions about cancer treatment. They hope that computers will be able to analyze radiological images and discern which cancerous tumors will respond well to chemotherapy and which will not.&lt;/p&gt;
&lt;p&gt;But AI in medicine also raises significant legal and ethical challenges, including concerns about privacy, discrimination, psychological harm and the physician-patient relationship. In a forthcoming article, I argue that policymakers should establish a number of safeguards around AI, much as they did when genetic testing became commonplace.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Potential for discrimination&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;AI involves the analysis of very large amounts of data to discern patterns, which are then used to predict the likelihood of future occurrences. In medicine, the data sets can come from electronic health records and health insurance claims but also from several surprising sources. AI can draw upon purchasing records, income data, criminal records and even social media for information about an individual’s health.&lt;/p&gt;
&lt;p&gt;Researchers are already using AI to predict a multitude of medical conditions. These include heart disease, stroke, diabetes, cognitive decline, future opioid abuse and even suicide. As one example, Facebook employs an algorithm that makes suicide predictions based on posts with phrases such as “Are you okay?” paired with “Goodbye” and “Please don’t do this.”&lt;/p&gt;
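&lt;p&gt;The phrase-matching signal attributed to that algorithm can be illustrated with a deliberately crude sketch. This is not Facebook&#039;s actual system, whose details are not public; the phrase lists and the co-occurrence rule are invented for illustration:&lt;/p&gt;

```python
# Flag a post when a distress-check phrase from commenters co-occurs
# with a farewell phrase, mirroring the signal described in the article.
CONCERN_PHRASES = ("are you ok", "are you okay", "please don't do this")
FAREWELL_PHRASES = ("goodbye", "bye forever")

def flag_post(post_text, comments):
    """Return True when both phrase families appear in the thread."""
    text = " ".join([post_text] + list(comments)).lower()
    has_concern = any(p in text for p in CONCERN_PHRASES)
    has_farewell = any(p in text for p in FAREWELL_PHRASES)
    return has_concern and has_farewell
```

&lt;p&gt;Real predictive models weigh many such signals statistically instead of applying a single hand-written rule.&lt;/p&gt;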
&lt;p&gt;This predictive capability of AI raises significant ethical concerns in health care. If AI generates predictions about your health, I believe that information could one day be included in your electronic health records.&lt;/p&gt;
&lt;p&gt;Anyone with access to your health records could then see predictions about cognitive decline or opioid abuse. Patients’ medical records are seen by dozens or even hundreds of clinicians and administrators in the course of medical treatment. Additionally, patients themselves often authorize others to access their records: for example, when they apply for employment or life insurance.&lt;/p&gt;
&lt;p&gt;Data broker industry giants such as LexisNexis and Acxiom are also mining personal data and engaging in AI activities. They could then sell medical predictions to any interested third parties, including marketers, employers, lenders, life insurers and others. Because these businesses are not health care providers or insurers, the HIPAA Privacy Rule does not apply to them. Therefore, they do not have to ask patients for permission to obtain their information and can freely disclose it.&lt;/p&gt;
&lt;p&gt;Such disclosures can lead to discrimination. Employers, for instance, are interested in workers who will be healthy and productive, with few absences and low medical costs. If they believe certain applicants will develop diseases in the future, they will likely reject them. Lenders, landlords, life insurers and others might likewise make adverse decisions about individuals based on AI predictions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lack of protections&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The Americans with Disabilities Act does not prohibit discrimination based on future medical problems. It applies only to current and past ailments. In response to genetic testing, Congress enacted the Genetic Information Nondiscrimination Act. This law prohibits employers and health insurers from considering genetic information and making decisions based on related assumptions about people’s future health conditions. No law imposes a similar prohibition with respect to nongenetic predictive data.&lt;/p&gt;
&lt;p&gt;AI health prediction can also lead to psychological harm. For example, many people could be traumatized if they learn that they will likely suffer cognitive decline later in life. It is even possible that individuals will obtain health forecasts directly from commercial entities that bought their data. Imagine obtaining the news that you are at risk of dementia through an electronic advertisement urging you to buy memory-enhancing products.&lt;/p&gt;
&lt;p&gt;When it comes to genetic testing, patients are advised to seek genetic counseling so that they can thoughtfully decide whether to be tested and better understand test results. By contrast, we do not have AI counselors who provide similar services to patients.&lt;/p&gt;
&lt;p&gt;Yet another concern relates to the doctor-patient relationship. Will AI diminish the role of doctors? Will computers be the ones to make predictions, diagnoses and treatment suggestions, so that doctors simply implement the computers’ instructions? How will patients feel about their doctors if computers have a greater say in making medical determinations?&lt;/p&gt;
&lt;p&gt;These concerns are exacerbated by the fact that AI predictions are far from infallible. Many factors can contribute to errors. If the data used to develop an algorithm are flawed – for instance, if they use medical records that contain errors – the algorithm’s output will be incorrect. Therefore, patients may suffer discrimination or psychological harm when in fact they are not at risk of the predicted ailments.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;A call for caution&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;What can be done to protect the American public? I have argued in past work for the expansion of the HIPAA Privacy Rule so that it covers anyone who handles health information for business purposes. Privacy protections should apply not only to health care providers and insurers, but also to commercial enterprises. I have also argued that Congress should amend the Americans with Disabilities Act to prohibit discrimination based on forecasts of future diseases.&lt;/p&gt;
&lt;p&gt;Physicians who provide patients with AI predictions should ensure that they are thoroughly educated about the pros and cons of such forecasts. Experts should counsel patients about AI just as trained professionals do about genetic testing.&lt;/p&gt;
&lt;p&gt;The prospect of AI can over-awe people. Yet, to ensure that AI truly promotes patient welfare, physicians, researchers and policymakers must recognize its risks and proceed with caution.&lt;/p&gt;
</description>
     <pubDate>Thu, 05 Sep 2019 06:58:39 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1051 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>Artificial intelligence and bionic eyes help contain raging wildfires</title>
    <link>https://www.talp.upc.edu/content/artificial-intelligence-and-bionic-eyes-help-contain-raging-wildfires</link>
    <description>&lt;div class=&quot;field-item even&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; src=&quot;https://www.talp.upc.edu/sites/default/files/edremf0x4aacr7a.jpg&quot; width=&quot;150&quot; height=&quot;150&quot; /&gt;&lt;/div&gt;&lt;p&gt;BY BRIAN K. SULLIVAN&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The technology is helping companies and weather services better predict natural disasters due to climate change.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;On a tower in the Brazilian rain forest, a sentinel scans the horizon for the first signs of fire.&lt;/p&gt;
&lt;p&gt;Only these eyes aren’t human. They don’t blink or take breaks, and guided by artificial intelligence they can tell the difference between a dust cloud, an insect swarm and a plume of smoke that demands quick attention. In Brazil, the devices help keep mining giant Vale working, and protect trees for pulp and paper producer Suzano.&lt;/p&gt;
&lt;p&gt;The equipment includes optical and thermal cameras, as well as spectrometric systems that identify the chemical makeup of substances. By linking them to artificial intelligence, a small Portugal-based company working with IBM Corp. believes it can help tame the often unpredictable effects of climate change. Others are using AI to predict dangerous hail storms, and studying how it can help find victims in bad weather.&lt;/p&gt;
&lt;p&gt;“Climate change is dramatically changing the way we look at this problem of wildfires,” said Vasco Correia, chief business officer at Compta Emerging Business Solutions, which builds the devices. “Two years ago we started looking to artificial intelligence and machine learning because we believe those can be game changers.”&lt;/p&gt;
&lt;p&gt;In 2019, weather and climate events killed more than 4,000 people worldwide, and caused around $42 billion in insured losses, according to the insurer Munich Re. Compta’s goal is to limit losses with warnings that can help keep a small blaze from becoming a conflagration.&lt;/p&gt;
&lt;p&gt;The Compta system was first used in Brazil in a pilot program designed to test its effectiveness, but now is “available globally and operational,” Correia said by telephone.&lt;/p&gt;
&lt;p&gt;The company recently opened an office in California, which had its deadliest wildfire last year when a blaze blamed on PG&amp;amp;E Corp. power lines consumed the town of Paradise, killing 85. Liabilities from that fire helped push the utility into bankruptcy in January.&lt;/p&gt;
&lt;p&gt;Along with analyzing the data from on site, the device’s artificial intelligence will weigh similar events captured by the system over time. It will also use IBM’s Watson supercomputer to visually evaluate what it sees and forecasts from its Weather Company to predict how the fire might spread.&lt;/p&gt;
&lt;p&gt;“The landscape of wildfires and how they ignite and how they evolve today is much different from what we had in past decades,” Correia said. “The fire seasons are two months longer than they were in the past decades, and wild fires today are burning six times the land area than they did before and lasting five times longer.”&lt;/p&gt;
&lt;p&gt;Wildfires are just one of many threats as extreme weather costs keep rising.&lt;/p&gt;
&lt;p&gt;The trail of destruction through the first half of this year included flooded farm fields across the Midwest and carnage along the African coast from Cyclone Idai, which killed 1,013 people, according to Munich Re.&lt;/p&gt;
&lt;p&gt;One threat artificial intelligence could help humans better prepare for is hail, said David Gagne, a machine learning scientist at the National Center for Atmospheric Research in Boulder, Colorado. Since 2008, damage from hail and severe thunderstorms has jumped to $19 billion a year, adjusted for inflation, and has stayed at that level.&lt;/p&gt;
&lt;p&gt;Gagne said the group is researching a tool the National Weather Service can use to help meteorologists better predict where and when hail will occur a day in advance. So far the hail program has improved accuracy by 10 percent, and work is underway to test it in real time within the weather service.&lt;/p&gt;
&lt;p&gt;A surprising outcome is that the program has taught itself to distinguish supercell storms, which produce a lot of hail, from linear and pulse storms, which are less destructive. “You get a storm type classifier for free,” Gagne said. “It just kind of figured out storm type by looking at all the features of the storms. It is a really neat result.”&lt;/p&gt;
&lt;p&gt;Artificial intelligence and machine learning programs are being used in other weather and climate related applications as well. For instance, defense and aerospace giant Raytheon has begun to explore AI’s use to coordinate rescues in the aftermath of disasters, said Todd Probert, a vice president with the company.&lt;/p&gt;
&lt;p&gt;“Imagine that the National Guard could quickly pinpoint survivors hidden inside literally the terabyte satellite data that is coming down,” Probert said. “What if those fire responders could make sense out of literally the thousands of texts and tweets and calls desperately seeking emergency services and vector into where the most critical need was.”&lt;/p&gt;
&lt;p&gt;Humans do this by poring over images. Imagine a computerized tool that searches for flooding, he said, adding, “That’s a pretty good first order.”&lt;/p&gt;
&lt;p&gt;Raytheon has been working with the National Guard to pull data off of Facebook to pinpoint people who need help in the aftermath of several recent storms. But these have been episodic and there aren’t machine learning tools on the back end to create a product rescue crews could use in all situations.&lt;/p&gt;
&lt;p&gt;“The whole game here is about putting the data to work for us,” Probert said.&lt;/p&gt;
</description>
     <pubDate>Thu, 05 Sep 2019 06:30:06 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1048 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>Il Sole 24 Ore - Interacting with virtual assistants? A habit for one in ten Italians</title>
    <link>https://www.talp.upc.edu/content/il-sole-24-ore-interagire-con-gli-assistenti-virtuali-unabitudine-un-italiano-su-dieci</link>
    <description>&lt;div class=&quot;field-item even&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; src=&quot;https://www.talp.upc.edu/sites/default/files/co-kdec-1020x533ilsole24ore-web.jpg&quot; width=&quot;1020&quot; height=&quot;533&quot; /&gt;&lt;/div&gt;&lt;p&gt;by Gianni Rusconi&lt;/p&gt;
&lt;p&gt;An unprecedented snapshot, nearly unique of its kind in Italy, of the spread of smart speakers. The study by Celi, a company wholly owned by H-Farm (since 2017) and active for years in voice-recognition technologies for the automotive world, carried out with the research institute Kkienn, shows how far the relationship has developed between Italians and smart devices that can recognize spoken commands and act on them. Perhaps the most important finding of the survey, conducted in the second half of June and previewed by Il Sole 24 Ore, is this: 13% of the online panel of 700 consumers sampled has a smart speaker at home. That is already a significant penetration, the study’s authors say, on a par with wearable gadgets and above that of electric bikes, though still far from the adoption rates of smartphones and personal computers (95% and 88% respectively) and of other smart products such as TVs (53%) or wireless headphones (32%). The profile of the average user is well defined: people aged 25 to 45, well-off, often holding a university degree and employed full time at medium-to-large companies.&lt;br /&gt;
	The applications are “basic”&lt;/p&gt;
&lt;p&gt;Seventy percent of those who own a “talking” device use it once or more a day, and cross-device use of voice assistants is also widespread, with 79% of the sample interacting with them directly via smartphone. For now, and this is a known trend, the applications built on these virtual butlers are very simple: topping the requests made to Siri, Alexa or Google Assistant are music playback (64% of cases) and weather forecasts (54%), with Internet radio and reminder notifications just as popular. Controlling the other connected devices in the home is the preserve of only 26% of users, while just 12% of the sample relies on voice assistants for online shopping. What emerges from the survey is, in short, a picture of great curiosity about this technology, the certainty of a repeat purchase (confirmed by a third of users), a good level of satisfaction and an awareness that the tool’s quality can improve further.&lt;br /&gt;
	Apple the pioneer&lt;br /&gt;
	“Today we stand on a ridge, ready to cross a frontier: on one side we have voice-driven menu-navigation systems, on the other a technology able to satisfy a need. Bridging those two worlds are now consolidated, working text-to-speech and speech-to-text tools. The challenge is well known: to reach solutions with integrated semantic components able to understand commands, contextualize them and interpret their meaning in order to answer the need they express.” This analysis by Vittorio Di Tomaso, President and CEO of Celi, whom we met at the presentation of the study, traces a clear direction for the future of voice-based human-machine interaction. If the first traces of text-to-speech on a computer date back to 1984, with the Apple Macintosh, the acceleration came more recently: in 2008, with Google’s voice-search services on mobile devices, continuing with Siri’s voice commands for iPhone and iPad (in 2011) and arriving at the interactions with connected appliances introduced by Amazon in 2014 with Alexa.&lt;/p&gt;
&lt;p&gt;Verbalizing thought, the next milestone&lt;br /&gt;
	The reference scenario for assessing the potential impact of artificial intelligence applied to virtual assistants, according to Di Tomaso, comprises three universes: the two billion smartphones now equipped with voice technologies (of which about one billion carry Google Assistant, i.e. the whole galaxy of Android devices, and 800 million Siri), connected cars (after the phone, the car is the environment where consumers most often use voice interfaces) and the roughly 200 million smart speakers installed in homes around the world. While the automotive industry was the first to embrace these technologies, it is telling, says the Celi manager, how the big tech companies have progressively accelerated to make them a strength of their development strategies.&lt;br /&gt;
	“Language is the new interface,” Microsoft chief Satya Nadella said in 2016, a declaration of intent echoed by the futurist Ray Kurzweil at the 2018 TED Conference, for whom natural language is the “Holy Grail” of artificial intelligence. It is no coincidence that the efforts of the companies in this field converge on building voice-recognition systems that make voice interaction recognizable and personalizable. The horizon of this technology, as Di Tomaso confirms, is to entrust the machine with the verbalization of thought, which is why the trend will be toward devices able to match, in (almost) every respect, the behavior of the human brain. How far off is this next leap in the revolution driven by AI? Not too far: by 2022, according to the latest predictions from research firm IDC, 30% of enterprise-class companies worldwide will use conversational voice technologies for customer engagement, a percentage set to grow exponentially.&lt;/p&gt;
</description>
     <pubDate>Wed, 04 Sep 2019 08:07:29 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1045 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>Forbes - AI Is Like Encryption: It Can’t Be Regulated Out Of Existence</title>
    <link>https://www.talp.upc.edu/content/forbes-ai-encryption-it-cant-be-regulated-out-existence</link>
    <description>&lt;div class=&quot;field-item even&quot;&gt;&lt;img typeof=&quot;foaf:Image&quot; src=&quot;https://www.talp.upc.edu/sites/default/files/960x0_1.jpg&quot; width=&quot;960&quot; height=&quot;435&quot; title=&quot;Getty Images&quot; /&gt;&lt;/div&gt;&lt;p&gt;Kalev Leetaru - Contributor &lt;/p&gt;
&lt;p&gt;As the public becomes increasingly aware of the dangers of AI algorithmic bias and concerned over surveillance and militaristic applications of deep learning, there have been a growing number of calls for AI regulation. Whether new laws governing AI fairness or policies constraining the use of autonomous weapons systems, the challenge confronting policymakers is that AI is very much like encryption: it is not a single controlled algorithm that can be regulated, it is a portfolio of techniques that no single country controls and which are being advanced every day by researchers all across the world.&lt;/p&gt;
&lt;p&gt;The almost unimaginably rapid progression of deep learning over the past half-decade into every corner of modern life has ushered in profoundly existential questions about how to ensure accurate, fair and beneficial use of this rapidly evolving technology.&lt;/p&gt;
&lt;p&gt;When it comes to biased algorithms, the fundamental fairness of current AI systems has been largely left to market forces. In turn, basic economics has ensured that free but heavily biased data wins over costly but minimally biased data. How precisely could legislators mandate “fair” AI systems in a way that it could be quantitatively tested for compliance? Mandating that systems cannot exhibit differing accuracy across demographics is one possibility, but leaves open the door to myriad other ways in which algorithms can discriminate.&lt;/p&gt;
&lt;p&gt;As AI systems encroach on regulated industries like the financial and housing sectors, it is likely that existing anti-discrimination laws will begin to influence AI design. A mortgage lending algorithm that systematically denies applications according to protected characteristics like race will run afoul of the law without any further legislative action. As companies adapt their algorithms to the landscape of these regulated fields, it is likely that the practices and development workflows they develop will find their way into the rest of the AI landscape.&lt;/p&gt;
&lt;p&gt;Regulating surveillance use of AI will encounter far more obstacles. As biometric authentication becomes more popular in consumer devices like smartphones and as global security conditions increase the use of biometrics as a counter-terrorism tool, societies will face increasing pressures against tighter regulation. Even those governments that have moved to regulate facial recognition in some way have all yielded laws that still explicitly or indirectly permit its use for a wide swath of applications.&lt;/p&gt;
&lt;p&gt;Any attempt to regulate military use of AI will run headfirst into the simple fact that governments across the world are already building autonomous capabilities into their weaponry. A nation today that passes legislation absolutely outlawing nuclear weapons has no impact on its adversaries that possess or are working to possess such weapons. Similarly, countries which restrict military use of AI will find themselves at an existential disadvantage should conflict arise.&lt;/p&gt;
&lt;p&gt;In reality, most harmful applications of AI are merely inverses of beneficial applications. The facial recognition that lets police surveil dissidents is the same technology that prevents those police from unlocking dissidents’ phones without their owners present. The autonomous flight and targeting controls of a lethal drone are the same that allow a mapping drone to catalog the devastation after a natural disaster to direct emergency aid.&lt;/p&gt;
&lt;p&gt;Most importantly, however, like encryption, deep learning does not refer to any single algorithm or process. The field of deep learning encompasses a broad swath of approaches and algorithms and myriad variants, developed by countless individuals across many different countries. Attempts to regulate the field of AI in one country will have no impact on other countries. Moreover, the economic forces propelling AI development mean that even if governmental funding were restricted by legislation, private funding would more than make up the difference.&lt;/p&gt;
&lt;p&gt;In the end, the simple fact is that AI cannot be regulated. Legislation can help curb specific applications in regulated industries like housing and the financial sector with regards to their impact in a single country, but when looking globally, AI will simply evolve wherever market and military forces take it.&lt;/p&gt;
</description>
     <pubDate>Wed, 04 Sep 2019 06:47:55 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1042 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>Maresme 360 - Mataró takes part in the Gavius artificial intelligence project</title>
    <link>https://www.talp.upc.edu/content/maresme-360-mataro-participa-en-el-projecte-gavius-dintelligencia-artificial</link>
    <description>&lt;p&gt;Josep Lluís Vidal, 10 August 2019&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mataró&lt;/strong&gt; City Council has announced its participation in the &lt;strong&gt;Gavius project&lt;/strong&gt; on &lt;strong&gt;artificial intelligence&lt;/strong&gt; led by the City Council of Gavà (Baix Llobregat). The project, part of the Urban Innovative Actions (UIA) programme, was one of the 20 selected from the 175 submitted from 23 European Union member states.&lt;/p&gt;
&lt;p&gt;Gavius is a virtual assistant meant to inform citizens of the social benefits they are entitled to and to automate the application and payment process. The institutions selected for the programme, Mataró City Council among them, will have the opportunity to trial solutions addressing challenges around the digital transition, the sustainable use of land- and nature-based solutions, urban poverty and urban security.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The Gavius project&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The tool is built on an artificial intelligence system that learns continuously as it is used. It should make it possible to automate processes, and users will be identified through facial or biometric recognition. An assistant will also be designed for social services and citizen-support staff so that they can advise on the benefits each person is eligible for and process payment automatically.&lt;/p&gt;
&lt;p&gt;The assistant is initially aimed at social services but could be extended to other areas to improve the relationship between citizens and public administrations. A further aim is for the data collected to support more effective planning of the budget allocated to each purpose.&lt;/p&gt;
&lt;p&gt;Gavius requires an investment of 5.3 million euros, 80% of which will be financed by the UIA programme. The project has been developed with representatives of business, government, citizens and research.&lt;/p&gt;
</description>
     <pubDate>Mon, 02 Sep 2019 09:56:28 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1039 at https://www.talp.upc.edu</guid>
  </item>
  <item>
    <title>Aldia.cat - The European Commission selects Gavà among the 20 cities that will receive ERDF funds for &quot;innovative urban actions&quot;</title>
    <link>https://www.talp.upc.edu/content/aldiacat-la-comissio-europea-selecciona-gava-entre-les-20-ciutats-que-rebran-fons-feder-fer</link>
    <description>&lt;p&gt;Edited by Europa Press&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The Gavius project aims to create a virtual assistant for learning about and applying for social benefits from a mobile phone&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;ACN&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Barcelona. The European Commission (EC) has selected Gavà among the twenty cities that will receive European Regional Development Funds (ERDF), out of the 175 that applied to the call. The EU executive has set aside 82 million euros to finance twenty innovative urban projects. Gavà is one of the 7 cities that will carry out solutions for the digital transition, specifically the Gavius project, which aims to create a virtual assistant to inform citizens of the social benefits they are entitled to and to automate the application and payment process. The initiative requires an investment of 5.3 million euros, of which the European fund of the Urban Innovative Actions (UIA) programme will finance 80%, a grant of 4.2 million.&lt;/p&gt;
&lt;p&gt;The Gavius project is a tool that will allow processes to be automated, as it is designed around an artificial intelligence system that learns constantly from the various ways it is used. It will allow digital identification, through facial or biometric recognition, so as to safeguard the user&#039;s privacy, and it will be easy to use. In parallel, an assistant is also being designed for Social Services and Citizen Support staff so that they can advise on the benefits each person may be eligible for and, at the same time, process payment automatically. Other cities, such as Getafe, will also receive ERDF funds, in its case for action on energy poverty. In addition, Piraeus (Greece), Tampere (Finland) and Turin (Italy) will receive grants for projects that reduce the vulnerability of public spaces. &quot;No one is better placed to design transformative solutions for urban areas than the cities themselves,&quot; Johannes Hahn, Commissioner for European Neighbourhood Policy and also responsible for Regional Policy, said in remarks quoted in a statement.&lt;/p&gt;
</description>
     <pubDate>Mon, 02 Sep 2019 09:48:25 +0000</pubDate>
 <dc:creator>Gemma Thomas</dc:creator>
 <guid isPermaLink="false">1036 at https://www.talp.upc.edu</guid>
  </item>
  </channel>
</rss>
