Watching in the Media 2018
Ready for the ViSmedia conference 2018? This year’s theme is “Watching in the Media”, and you are invited to a day filled with exciting news on media and new technologies.
At the conference, Watching in the Media will be explored from many perspectives. We will learn about digital surveillance in the attention economy, and examine the hidden implications of Snap-mapping and outdoor VR experiments. We will also get an investigative glimpse into image sharing and revenge porn.
Moreover, we will get insights into automated content synthesis and its implications for journalism: What happens when images, video, and audio are put together not by humans, but by robots? These issues will be accompanied by a session on automating empathy in human-robot communication.
In an interactive panel debate, watching in the media will be approached in surprising ways. This session will be moderated by Kjetil H. Dale, host at the TV 2 News Channel.
“By illuminating polarities of watching and being watched, we want to prompt some important new debates,” says Professor Astrid Gynnild, leader of the research project ViSmedia at the Department of Information Science and Media Studies, University of Bergen.
The conference is a free event organized jointly by ViSmedia and NCE Media; everyone interested in visual technologies, surveillance, and the news is welcome to attend — in particular journalists, researchers, tech developers, students, and media teachers. The conference will last from 09.00 to 16.00. More info to come.
Snap-Mapping – Network Buildups or Spies in the Skies?
Maren Myrseth, a journalism student at UiB, will give a talk about the Snapchat Snap Map and what it is like to see, and be seen by, everyone in the world.
Håvard Kristoffersen Hansen
What happens to a person when intimate images are shared in school and with friends and family? VG journalist Håvard Kristoffersen Hansen investigates the hidden consequences of illegal image sharing.
Date: Tuesday March 20th, 2018
Venue: Aula, University of Bergen
Conference Host: Øyvind Vågnes
09.00 — 10.00
Registration, coffee/tea, fruit and vitamin shots
10.00 — 10.10
Opening Session by Dag Rune Olsen, Rector, University of Bergen (EN)
10.10 — 10.30
Why Watching in the Media? By Astrid Gynnild, Professor, University of Bergen and Principal Investigator, ViSmedia (EN)
10.30 — 11.00
Digital Surveillance in the Attention Economy by Paul C. Adams, Professor, University of Texas, Austin, USA (EN)
11.00 — 11.15
Coffee break with fruit
11.15 — 11.30
Snap-Mapping — Network Buildups or Spies in the Skies? by Maren Myrseth, Journalism Student, University of Bergen (EN)
11.30 — 12.00
Image Sharing as Revenge Porn by Håvard Kristoffersen Hansen, Journalist, VG (NO)
12.00 — 13.00
Lunch
13.00 — 13.30
Neural Networks and Automated Content Synthesis (Image-Video-Sound) — Implications for Journalism by Nicholas Diakopoulos, Associate Professor, Northwestern University, USA (EN)
13.30 — 13.45
VR-Experiments in Journalism Education by Joakim Vindenes, PhD-student, University of Bergen (EN)
13.45 — 14.00
Coffee/tea break with nuts and dried fruits
14.00 — 14.30
Automating Empathy and some Human Implications by Deborah G. Johnson, Professor, University of Virginia, USA (EN)
14.30 — 15.00
Watch and be surprised
15.00 — 16.00
Hot topics: Interactive Panel with Nicholas Diakopoulos, Lars Nyre, Deborah G. Johnson and Maren Myrseth. Discussion Host: Kjetil H. Dale, TV 2 (EN)
Deborah G. Johnson
Professor Deborah G. Johnson will talk about the ethical and social implications of affective computing, which is a new field that seeks to detect, measure, and quantify emotions in people and to produce the appearance of emotional expression in machines.
Joakim Vindenes will present work from the bachelor course Virtual Reality (VR) Journalism, held at the University of Bergen. The course features training in VR programming, 360° video production, aesthetics, and ethics.
Paul C. Adams
Journalism is caught at the intersection of two increasingly important economies. At the conference, Professor Paul C. Adams will offer a critical response to the capture of data about news audiences and how they engage with online content.
Nicholas Diakopoulos
Neural network technology is advancing in its ability to synthesize new content that is at times almost indistinguishable from human-authored content. This talk will provide an overview of state-of-the-art technical capabilities.