Computer-based visualization (vis) systems provide visual representations of datasets designed to help people carry out tasks more effectively. Visualization is suitable when there is a need to augment human capabilities rather than replace people with computational decision-making methods. The design space of possible vis idioms is huge, and includes the considerations of both how to create and how to interact with visual representations. Vis design is full of trade-offs, and most possibilities in the design space are ineffective for a particular task, so validating the effectiveness of a design is both necessary and difficult. Vis designers must take into account three very different kinds of resource limitations: those of computers, of humans, and of displays. Vis usage can be analyzed in terms of why the user needs it, what data is shown, and how the idiom is designed. I will discuss this framework for analyzing the design of visualization systems.
Tamara Munzner is a professor in the Department of Computer Science at the University of British Columbia, and holds a PhD from Stanford. She has been active in visualization research since 1991 and has published over 65 papers and book chapters. Her book Visualization Analysis and Design appeared in 2014. She co-chaired InfoVis in 2003 and 2004, co-chaired EuroVis in 2009 and 2010, and is chair of the VIS Executive Committee. Her research interests include the development, evaluation, and characterization of information visualization systems and techniques. She has worked on problem-driven visualization in a broad range of application domains, including genomics, evolutionary biology, geometric topology, computational linguistics, large-scale system administration, web log analysis, and journalism. Her technique-driven interests include graph drawing and dimensionality reduction. Her evaluation interests include both controlled experiments in a laboratory setting and qualitative studies in the field. She received the IEEE VGTC Visualization Technical Achievement Award in 2015.
The enormous success of deep learning in domains such as video, audio, speech, text, and sequence processing has swept academia and industry alike, to the extent that many are touting deep learning training as an alternative form of programming future applications. Amid this excitement lies a more sober question: if training a deep learning model is likened to writing software, what is the integrated development environment (IDE) for deep learning training? Specifically, what are the debugging and analysis tools required for manually refining and evolving a deep learning model towards its final form? In this presentation, I will survey related work in this area and outline the visualization requirements of a deep learning IDE that we are currently working on.
Dr. Tzi-cker Chiueh is currently the General Director of the Information and Communications Laboratories at the Industrial Technology Research Institute (ITRI) and a Research Professor in the Computer Science Department of Stony Brook University. He received his BSEE from National Taiwan University, his MSCS from Stanford University, and his Ph.D. in CS from the University of California at Berkeley in 1984, 1988, and 1992, respectively. He has received an NSF CAREER award and several best paper awards, including at the 2008 IEEE International Conference on Data Engineering (ICDE), the 2013 ACM Systems and Storage Conference (SYSTOR), and the 2015 ACM Symposium on Virtual Execution Environments (VEE). Before joining ITRI, Dr. Chiueh served as the director of Core Research in Symantec Research Labs. He has published over 200 refereed conference and journal papers in the areas of data center networking, large-scale storage systems, and software security.