Transcript for:
Insights on Ubiquitous Data Analytics

Yeah, I'm very happy to be here. I'm going to talk about the work that my students and I have been doing for about ten years now. I was writing a review article and realized it could make a nice talk, so I call this "Anytime, Anywhere, All at Once," and if you haven't seen the movie I'm obviously inspired by, there are no spoilers here, although you should see it; it's a good one. My subtitle, "data analytics in the metaverse," is of course a little provocative. The word means almost nothing nowadays, so this is an effort to reclaim the term, and I'll try to explain how in the next few slides.

The basic premise of the work I do, at least the slice I'll talk about today, is the observation that today's society increasingly collects data everywhere about what's going on, not just online but also in the real world: our purchases, movements, reviews, decisions, the products we buy, and so on are all being tracked. If we take all the privacy and security problems with this, and there are many, and put them aside, we can think about the possibilities of using this data for decision making, for personal uses as well as professional ones, for companies and the like. Increasingly, this data is also being analyzed anywhere, because we're all accustomed to having smartphones in our pockets with all kinds of apps and services available. The question is: what would data analysis look like when it can be accessed anywhere? That idea of everywhere-and-anywhere data was the starting point for my NSF CAREER award in 2013, almost ten years ago now. We have data from everywhere, we can access it anywhere, and we can think of data analysis as a utility; what opportunities open up if we start putting this data to good use?

A little bit about myself. You heard the intro: I'm originally Swedish and came to the US in 2008. I was a professor at Purdue, came to Maryland in 2014, and became director of the HCIL in 2016. I was director for five years until it was a good time to step down last year. I was the seventh director; the first was Ben Shneiderman, who founded the lab in 1983.
Since 2019 I'm a full professor, and the work I do in general is data visualization. Maybe I have a short attention span, or, to use a better word, I'm a generalist, so I've worked across the entire spectrum of data visualization.

The kinds of applications I'm concerned with today are, first of all, mobile field settings: you're solo, on the go, and you have your mobile phone available. What kind of data analysis could you do at that moment? These kinds of applications are not very common; the data visualization field hasn't really looked at them much. When you move into a group setting in the field, you might have more than one device available and multiple people, but in this picture of two young women, the devices are not really helping them: they're trying to look at each other's screens. The challenge is how to make these situations more practical. Maybe you find a place to put down your computer, in a more static setting where you're no longer on the go: a coffee shop, your office, maybe a conference room. Here you probably have more than one device at your disposal, but current devices are designed for solo use, not for multiple devices at the same time, so the fact that you have a smartphone and a laptop and a tablet isn't going to help you. How do we handle situations where devices can stack and multiply? This all becomes even more challenging in group settings: people show up to a conference room with their laptops, and just sharing information, or making effective use of all that display space, is hard. Even with multiple devices, they don't really multiply, they don't really stack, at least not for data analysis sessions. Put these four settings together and you get the playing field for what I call ubiquitous analytics, which is essentially a blending of ubiquitous computing, and multiple devices from that realm, with data visualization and visual analytics.

Now, if these discussions sound familiar, it's because they are. This is Mark Weiser; he was the head of the computer science lab at Xerox PARC, just about three miles from here (I looked it up before I came, which is really cool). In 1991 he wrote a famous paper called "The Computer for the 21st Century," and the quote you see here is its very first line: "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it." You can make a case that today's environment, with a smartphone in every person's pocket, more than six or eight billion of these devices in existence, most of them able to access the internet, means that ubiquitous computing is here. It has come; we just don't recognize it, because it looks slightly different from the vision of Mark Weiser, who is hailed as one of the fathers of ubiquitous computing. He talked about disposable tabs, computers and tablets you could use and then toss aside, so they were not personal, and about making devices in your surroundings speak to you or provide information. We have a similar situation; it just looks a little different. This argument was made a couple of years ago in an article by Chris Harrison and others, saying that what we have today is more of a "quality computing" situation: one personal device, maybe a tablet too, instead
of lots of disposable devices. But we still have this situation, and the question is what it looks like when we apply data visualization to it. That's the starting point for the work I began around 2010, which then led to the NSF CAREER award in 2013. I call it ubiquitous analytics, and I think of it as the use of multiple networked digital devices to enable a metaverse for data analytics. Again, that word is a convenient shorthand for a data world, a parallel world to our own, that we can pierce using multiple devices as portholes, or even immersive devices, so that we can analyze data anytime and anywhere.

Now, why would you want to do this? The technology is there, but I also want to point out that there are good reasons from human cognition to believe that distributing our sensemaking of data into our environment is a good idea. There are so-called post-cognitivist frameworks that have been around for twenty or thirty years in cognitive psychology: distributed cognition, socially distributed cognition, extended cognition, and embodied cognition. They say that human cognition is a system: it's not just what's inside our skulls, it also involves other people, physical artifacts, history, etiquette, culture. All of these things contribute to helping us think. To bring this point home, just imagine Post-it notes or the calendar on your phone: these are extensions of your memory. We organize our physical space to aid computation: imagine cooking and arranging all the ingredients you haven't yet put into the dish on its left side, and the ones you have on its right side. We commonly use physical space to aid cognition. And a group of multiple people provides multiple viewpoints, multiple kinds of expertise, and different roles in the collaboration, so that's another way to think of cognition: even though the people in a group are not neurons or synapses in a brain, they are still components of a cognitive system formed by the people who contribute and collaborate. So the whole premise of my work here is to say: if we're going to distribute computing, we can better scaffold and support sensemaking by making the devices we already use active parts of the sensemaking process. And as I mentioned, as technology progresses we're starting to see augmented and mixed reality devices and displays that let us put computer graphics and content on the world itself, which further supports this kind of setting.

All right, I'm going to use this bagel, and if you've seen the movie you might remember the bagel, as the organizing principle for this talk. I'll talk about platforms; I'll talk about some of the media that become available in this kind of ubiquitous analytics environment; I'll mention collaboration and some examples there, because the moment we put sensemaking out in the world we open the door for multiple people to contribute; and then I'll go through a couple of applications at the end. That's my roadmap for today.

So let's start with platforms. If you look at some of the images in the background, you'll note I'm a recent Midjourney convert, so I went a little overboard with some of these. This picture here, though, is not from Midjourney: it's my lab from
2010 at Purdue, and it showcases the kind of display environments I'm talking about: on the wall we have a tiled display, in the center a horizontal touch tabletop, and in the far right corner you might see someone's hand holding a tablet. The key thing I was going after in building this lab was to recognize that a future computing environment will consist of multiple devices that are different computers. It's not a single beefy computer running them all, because that's almost like cheating: if you put a bunch of graphics cards in one computer and plug in a bunch of monitors, then much of the management of display space, shared resources, and computation becomes trivial. We wanted, in particular, to think about multiple devices that all collaborate.

This first work, called Polychrome, is I think the second or third generation of platforms we built to stitch all of these devices together and provide a common canvas for creating these visualizations. Polychrome was built using web technologies, recognizing that the web browser today runs on almost all devices: from smartwatches all the way to smart TVs, they all have some form of modern web browser. And web browsers today are very full-featured: they have hardware acceleration, font rendering, vector rendering, sound, full-motion video, and so on. So the browser is a great starting point for building what is essentially a distributed operating system where each node is just a browser, and on top of that we build software using JavaScript. Polychrome was an entirely peer-to-peer system: there was no server; instead it just broadcast within the lab I showed you. It lets you replicate a DOM, the document object model of a single web page, across multiple devices, and when you make changes to one, they're reflected in the others. This video shows some excerpts. There's nothing special about it, but it showcases a simple D3 visualization where we replicate the DOM, so when you make changes on one computer, another computer sees instantaneous updates: all the events on one get sent to the other. Again, nothing spectacular, but it enables us to start building distributed applications with different displays that we can stitch together.
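To make that event-forwarding idea concrete, here is a minimal sketch of how two browsers might mirror clicks on a replicated page. This is not the actual Polychrome code: the relay server, the message format, and the selector-path helper are all assumptions for illustration.

```javascript
// Minimal sketch of event replication between two browsers, in the spirit
// of Polychrome (not the actual Polychrome API). Assumes a trivial relay
// server at ws://localhost:8080 that forwards every message to all peers.
const channel = new WebSocket("ws://localhost:8080");

// Compute a simple CSS-selector path so peers can find the same element.
function pathFor(el) {
  const parts = [];
  while (el && el.nodeType === Node.ELEMENT_NODE && el !== document.body) {
    const index = Array.from(el.parentNode.children).indexOf(el);
    parts.unshift(`${el.tagName}:nth-child(${index + 1})`);
    el = el.parentNode;
  }
  return parts.join(" > ");
}

// Forward local clicks to all peers.
document.addEventListener("click", (e) => {
  if (e.isTrusted) { // only re-broadcast real user input, not replayed events
    channel.send(JSON.stringify({ type: "click", path: pathFor(e.target) }));
  }
});

// Replay remote clicks locally so a D3 view reacts as if it were clicked here.
channel.onmessage = (msg) => {
  const op = JSON.parse(msg.data);
  const target = op.path && document.body.querySelector(op.path);
  if (op.type === "click" && target) {
    target.dispatchEvent(new MouseEvent("click", { bubbles: true }));
  }
};
```

Because replayed events have `isTrusted === false`, they are not re-broadcast, which avoids echo loops between peers.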
The fourth and most current generation of this work is called Vistrates, a collaboration with colleagues at Aarhus University in Denmark. In this iteration we stepped away from the peer-to-peer architecture toward a more traditional client-server architecture, recognizing that peer-to-peer is super elegant but runs into a lot of consistency concerns, especially when you're broadcasting and collaborating across long distances. So we used a client-server approach: again a replicated DOM, but one that lives on a server. We use operational transforms, the same technology Google came up with for Google Wave and now uses in Google Docs, to make sure any changes made to the DOM are replicated in the correct order to avoid inconsistencies, and along the way you get a lot of nice features like versioning. It lets you build shareable, dynamic visualizations very simply. The Vistrates system also provides a component-based framework, built entirely in JavaScript, that lets you bring together the whole ecosystem of web libraries, Leaflet for maps, D3 or Vega for visualizations, Plotly or Highcharts, very effortlessly, and even build with no programming.

Let me show you this video. It goes a little fast, and I apologize that I don't have time to go into detail, but what you're seeing is a computational notebook, a literate-programming interface to Vistrates. We're bringing in a bunch of standard components: one for a map, one for a filter algorithm, one for a bar chart, I think. Then we switch to a canvas view, where we invoke the visual representations of all these components, and on the right side you'll see us connecting things up so that the loaded data gets sent into the input ports of the bar chart and the map. We're essentially wiring them together without any programming. In the end we get an interactive dashboard, you'll see how this works as it finishes, where you can click on the map and the bar chart updates, or click on the bar chart and the map updates. The beauty here is that any of these components on its own could be placed on a smartphone, a tablet, or a tiled display, and everything would still be synced up and work live.
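As a rough illustration of the component-and-ports idea, here is a toy model in plain JavaScript. It is a sketch of the concept, not the real Vistrates API; the class, port names, and pipeline are all made up for this example.

```javascript
// A toy component model with input/output ports, loosely inspired by the
// Vistrates component idea described above. Not the real Vistrates API.
class Component {
  constructor(name, compute) {
    this.name = name;
    this.compute = compute;   // maps input values to an output value
    this.inputs = {};
    this.listeners = [];      // downstream (component, portName) pairs
  }
  // Wire this component's output into another component's named input port.
  connect(target, port) {
    this.listeners.push({ target, port });
    return target;
  }
  // Set an input value and propagate the recomputed output downstream.
  set(port, value) {
    this.inputs[port] = value;
    const out = this.compute(this.inputs);
    for (const { target, port: p } of this.listeners) target.set(p, out);
  }
}

// Hypothetical pipeline: a data source feeding a filter feeding a bar chart.
const source = new Component("csv", ({ rows }) => rows);
const filter = new Component("filter", ({ data }) => data.filter((r) => r.value > 10));
const chart = new Component("bars", ({ data }) => {
  console.log("render bars for", data); // stand-in for a D3/Vega render call
  return data;
});

source.connect(filter, "data");
filter.connect(chart, "data");
source.set("rows", [{ value: 5 }, { value: 20 }, { value: 42 }]);
```

Because each node only talks to ports, any component in such a pipeline could live on a different device, with the propagation step going over the network instead of a local call.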
All right, the takeaway from this first part on platforms: in building these multi-device, cross-device infrastructures, we found we needed a clear role for each of the devices involved, and we want to make sure the transitions between them are seamless.

Yes? So, the complementary angle I buy a lot, but I'm not sure I saw it in the demos, and I want to push on that a little harder. Can you say a bit about the kinds of complementary modes you ended up producing that really worked? Was it that the small devices, the phones, were lenses focused on specific things, while one person was at the control room? What are the modes that really resonated?

Right, so I have a few examples I'll show soon, but basically it's often some kind of asymmetric situation. In the example I'll show here, you have a large touch display with potentially multiple people in front of it, but if you want to do something on an individual basis, say save a query, or explore some subset of the data, you bring that to your personal smartwatch or smartphone, do those complementary tasks there, and if you're happy with the result, you can publish it back. Is that sort of what you mean? Yeah. There's also related work that's not part of this particular presentation, called Branch-Explore-Merge, with one of my former students who's in the audience here, where we essentially used revision-control methodology to branch away from a tabletop, work on your own personal device, and then merge back into the tabletop if you found something. Yeah, absolutely.

All right, that was the first piece of the bagel. The second is the media: what unique forms of media are now available to us? The first is the display infrastructure, the display space itself. Imagine you have a multi-device environment; it will probably change dynamically as you go through your day. As you walk from the parking garage you're doing data analysis on your mobile phone; it's the only device you have. When you get to your office, all of a sudden you also have your desktop computer, and the moment you have this new device, if you have to spend time on housekeeping, moving the views around correctly, that's the moment you'll say, okay, I'm going to stick to one device. The same holds when you move to a conference room later in the day, with other people and a large projected display: if you have to do the housekeeping to manage more than one device, you probably won't do it. So in this system, called Vistribute, our goal was to see whether we could automatically handle the layout of the views in a data visualization dashboard depending on the dynamic availability of devices. Of course, you don't want to do this blindly, because some visualizations belong together. For example, if I have a bunch of stock market data shown as timeline charts, I probably want to keep those together so they share a common baseline and I can compare across the same time range for all stocks. So we came up with a set of heuristics, you can see an example in the lower-left part of the slide, for how views relate: maybe they're similar, maybe they share data, maybe they have explicit connections, like a node-link diagram where individual entities are connected. Those heuristics feed into constraint solving that runs automatically as the available devices change. I'll show you a demo of Vistribute, starting with a single dashboard on a laptop. The moment you bring in, in this case, a phone, you'll see the bar chart pop onto the phone, and then the line chart, the stock market data in this particular case, lands on the tablet. I'll play that again. Another observation is that some visualizations just want to be big, like a map: it wants as much space as possible. Others like to be wide and thin, or some other configuration. All of these things can go into the constraint solving when doing this layout.
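Here is a crude sketch of what such heuristic scoring over view-to-device assignments might look like. The weights, view properties, and exhaustive search are all invented for illustration; Vistribute's actual heuristics and solver are more involved.

```javascript
// Toy heuristic layout: assign views to devices, in the spirit of Vistribute.
const devices = [
  { name: "laptop", width: 1440, height: 900 },
  { name: "phone", width: 390, height: 844 },
];

const views = [
  { name: "map", prefersLarge: true, group: null },
  { name: "stocks", prefersLarge: false, group: "time" },
  { name: "volume", prefersLarge: false, group: "time" },
];

// Score one candidate assignment (an array of [view, device] pairs).
function score(assignment) {
  let s = 0;
  for (const [view, device] of assignment) {
    // Heuristic 1: big visualizations (e.g., maps) reward big displays.
    if (view.prefersLarge) s += (device.width * device.height) / 1e6;
    // Heuristic 2: views sharing a group (common baseline) should co-locate.
    for (const [other, otherDevice] of assignment) {
      if (other !== view && view.group && view.group === other.group) {
        s += device === otherDevice ? 1 : -1;
      }
    }
  }
  return s;
}

// Enumerate every possible assignment (fine for a handful of views).
function assignments(vs, ds) {
  if (vs.length === 0) return [[]];
  const [head, ...rest] = vs;
  const tails = assignments(rest, ds);
  return ds.flatMap((d) => tails.map((t) => [[head, d], ...t]));
}

function best(vs, ds) {
  let top = null;
  for (const a of assignments(vs, ds)) {
    const s = score(a);
    if (top === null || s > top.s) top = { a, s };
  }
  return top.a;
}

console.log(best(views, devices).map(([v, d]) => `${v.name} -> ${d.name}`));
// e.g., map -> laptop, with stocks and volume kept on the same device
```

A real system would re-run this whenever a device appears or disappears, which is exactly the "dynamic availability" case the demo shows.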
The other form of media I'll talk about, and there are several more, ranging over multimodal representations and interactions, is computation: the other side of the coin. If you have multiple devices, each of them has computational power. Suppose you're running a computation on one device and it's going slowly; you could take advantage of the other devices in your pocket to literally increase your computational power. This project, VisHive, is again entirely JavaScript-based. It doesn't use an explicit server: one of the nodes in the system becomes the server, and without any download it lets you share computation in these ad hoc local clouds. So if you're running, say, a clustering algorithm and it's going slowly, you can just bring up a tablet and the load will be shared across the devices. It's hard to show a really engaging video of this, but here's a brief sample with two laptops and a smartphone. Of course the smartphone isn't going to contribute much computational power; it's just to show the idea. It's a DBSCAN clustering algorithm, and at the end you can see the clustering finish. Oops.

All right, the takeaway here is, overall, automation: as I said, the moment you have to start doing a lot of housekeeping, these ideas go out the window, so you have to automate away and minimize those costs.

Yes? Was there any significant lag when the visualizations switch over, with people starting to get frustrated by it? So we didn't explicitly look at lag, but it is definitely a big factor. There's work from a couple of years ago that found that if you have more than about half a second of latency, people's engagement with the visualization suffers: you stop paying attention, it no longer feels like a conversation, and you basically phase out for a bit while you're waiting. So it's definitely a concern to keep in mind, but we haven't studied it explicitly in this work.
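The core trick in this kind of load sharing is just chunking the work and farming it out to whoever is connected. Here is a minimal sketch of the coordinator side; the `peers` list, message format, and chunking scheme are assumptions for illustration, not VisHive's actual protocol.

```javascript
// Master-side sketch of farming work units out to connected devices.
// Assumes `peers` is an array of open WebSocket connections to worker pages
// that run the actual computation (e.g., a DBSCAN pass over their chunk)
// and reply with { id, result }. Illustrative only.
function distribute(points, peers, chunkSize = 1000) {
  return new Promise((resolve) => {
    // Split the dataset into fixed-size chunks of work.
    const chunks = [];
    for (let i = 0; i < points.length; i += chunkSize) {
      chunks.push({ id: chunks.length, data: points.slice(i, i + chunkSize) });
    }
    let pending = chunks.length;
    const results = [];

    // Round-robin the chunks across whatever devices are currently connected;
    // a slow phone simply gets the same share and contributes what it can.
    chunks.forEach((chunk, i) => {
      const peer = peers[i % peers.length];
      peer.send(JSON.stringify({ type: "work", ...chunk }));
    });

    // Collect partial results; resolve once every chunk has come back.
    for (const peer of peers) {
      peer.addEventListener("message", (msg) => {
        const reply = JSON.parse(msg.data);
        results[reply.id] = reply.result;
        if (--pending === 0) resolve(results);
      });
    }
  });
}
```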
All right, the third piece is collaboration, because, as I said, the moment we put computation out in the world, and not just on the desktop computer in your office, multiple people can collaborate. In one of these projects, from a couple of years ago, we looked at the idea of proxemics, which has been suggested as a way to use how people relate to space to guide ubiquitous computing. Proxemics is a branch of psychology from the 1950s, I think, which holds that the way you as a person relate to physical artifacts and other people tells us something about what you're thinking and what you're trying to do. In this particular work, the idea was that when multiple people are in front of a shared large display, sometimes they want to work independently and sometimes they want to work with each other, and we can determine this implicitly just by observing what they're doing. It's a tricky slope, because you don't want to slide from trying to understand what the person wants all the way into Clippy territory. You want to find a balance, and this paper became an evaluation of finding that balance: which actions do you want to perform implicitly, and which need to be explicit? We found in general that there's a need for both, and that if an action has a strong impact on the visualization, you want to do it explicitly. In this video you'll see two people, each with a lens, and as they approach each other the lenses overlap; then they raise their hands and the lenses actually merge. That's an explicit action. If they then split the lenses and move away, we interpret that as no longer being interested in collaborating directly or closely. So again, we have these lenses, controlled by gaze; the two people approach, and the proxemics tell us they're collaborating, the way you approach someone to converse with them; and as they move apart, that's interpreted as wanting to loosen the coupling; they're no longer working closely together.
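A bare-bones sketch of the implicit half of this, deciding coupling from tracked positions, might look like the following. The thresholds, the tracking source, and the callback names are invented for illustration; the actual system's logic is richer.

```javascript
// Toy implicit-coupling detector driven by tracked user positions, in the
// spirit of the proxemic lens work. Thresholds and callbacks are invented.
const COUPLE_DISTANCE = 1.2;   // meters: closer than this suggests collaboration
const DECOUPLE_DISTANCE = 2.0; // larger exit threshold so the state doesn't flicker

let coupled = false;

function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Called on every frame of tracking data for two users in front of the display.
function update(userA, userB, lenses) {
  const d = distance(userA, userB);
  if (!coupled && d < COUPLE_DISTANCE) {
    coupled = true;
    lenses.overlapPreview();  // implicit: show that merging is now possible
  } else if (coupled && d > DECOUPLE_DISTANCE) {
    coupled = false;
    lenses.split();           // implicit: loosen the coupling again
  }
  // The merge itself stays an explicit action (raising a hand), because
  // merging has a strong impact on the visualization.
}
```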
The final one here I've already mentioned a bit, on complementary types of devices. We call it David and Goliath: a study of large displays and smartwatches, Goliath being the big display and the smartwatch the small one. We found that there are real needs, especially with multiple people, for putting some personal information on your personal device. A smartwatch is very personal; it's even hard for another person to see it, you have to really crank your arm to show someone, so it has a very personal connotation. We found that certain uses, like storage, or even using the watch as a remote control, were useful for this device. This is Tom interacting with the large display, and you'll see him do some actions where he swipes toward his body, meaning "whatever I was interacting with, store it on my device," and then he goes to another part of the screen and swipes away from his body, meaning "place whatever I stored on my device onto the large display," onto the dashboard you're seeing. Here we found that, as in any collaboration, consensus and coordination are key aspects, and the benefit of having multiple devices is that they can often make that kind of coordination and collaboration straightforward.

All right, the last piece is some of the applications we have explored, and continue to explore, for ubiquitous analytics. The first is a paper from CHI just this past spring called ReLive, with colleagues from the University of Konstanz in Germany. We had this problem of trying to understand mixed reality studies where people move around a physical space; it's essentially a research tool we were using for our own research in other projects. ReLive is all about viewing 3D tracked data in 3D space, and we found a need both for in-situ analysis, where I step into the 3D space and see the 3D traces of how people moved around and what they did during a user study, and for ex-situ analysis, where I sit at my desk with a high-precision mouse and a high-resolution monitor, looking at a dashboard, and do analysis of a more abstract nature. We needed both interfaces, so what we ended up building, you can see it in the picture here, lets you switch between one and the other. It's a form of hybrid user interface: sometimes you do things in VR, and sometimes outside it, on the desktop. A key thing we found is that to make this work there have to be explicit anchors between the two interfaces, so that when you're in VR you still see reminders of what you were doing on the desktop side, and vice versa. This video shows the VR view only: the yellow lines are 3D traces of how two different people moved their tablets in 3D space, from another project our Konstanz colleagues were running, for which they needed an analysis tool. There's a time slider we can manipulate to move the tablets through 3D space as time progresses during the study, and we can even bring up some of the user interface components that look very familiar from the desktop version of the interface, so you have that anchor, that connection, to the desktop interface. And on the desktop side you always had a 3D view of what you had been seeing in VR.

The last thing I'll talk about is a paper that was brutally destroyed by the CHI reviewers. I'll talk about it anyway; it's one of the coolest works we've done recently, so hopefully no one here killed it. The paper is called Wizualization: an augmented reality authoring system for creating visualizations using a HoloLens. Really cool work, in my very biased opinion; we spent so much time on it, so we're kind of devastated. As the name, bad pun included, suggests, we observed that when you author visualizations in 3D using augmented reality, you use a lot of gestures and speech, and it kind of feels like magic: you're casting spells. So we leaned into that and used it as the metaphor for the system. We provide a system where you create entries using a grammar of graphics to build 3D visualizations in midair. I know this is a very detailed image, but here are some examples of that grammar: it's a basic JSON declarative specification for graphical representations, which you see at the bottom, and the clauses are produced using discrete gestures and speech commands. The gestures are drawn from American Sign Language, because we wanted something more or less standardized. The video shows four different views, I know there's a lot going on, of three different people: on the lower left is my student Andrea in Maryland, and the others are my colleagues at Bangor University in the UK, in the same virtual, not physical, space. They are casting spells, well, they're authoring visualizations: one after another they add clauses to a declarative specification of a visualization that eventually shows up, a scatterplot, and of course they can all see the same thing, be in the same space, and interact with each other. One more thing to add: this is all web-based, just like the previous systems. I feel strongly about web-based technologies for a lot of the infrastructure we build, and that's true here as well. A lot of the development in data visualization for VR and AR has been done in Unity, and I'm a little troubled by that, because Unity is a closed platform, and proprietary as well. Okay.
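To give a flavor of what such a declarative specification might look like, here is a hypothetical snippet in the grammar-of-graphics style just described. The field names are modeled on common visualization grammars; Wizualization's actual grammar may differ.

```javascript
// Hypothetical declarative spec in the grammar-of-graphics style described
// above. Each clause is the kind of thing a user would add one at a time
// via an ASL-derived gesture plus a speech command; field names are assumed.
const spec = {
  data: { url: "cars.csv" },                            // clause 1: bind a dataset
  mark: "point",                                        // clause 2: choose a mark
  encoding: {
    x: { field: "horsepower", type: "quantitative" },   // clause 3
    y: { field: "mpg", type: "quantitative" },          // clause 4
    z: { field: "weight", type: "quantitative" },       // clause 5: midair depth
    color: { field: "origin", type: "nominal" },        // clause 6
  },
};
```

Because the spec is just shared data, all three collaborators can see it grow clause by clause and watch the same scatterplot materialize in the shared virtual space.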
All right, let me close with this takeaway, and then I'll say a little about the other work we're doing in my group. The takeaway from these applications: if we're going to see a future where people create data visualizations in VR and AR, we need some kind of standardized systems where, just as in a graphical operating system, multiple applications can run and coexist. We have to start thinking about interoperability, so we can have modular and standardized interfaces.

So this is an overview of the things I've talked about. In my research group we're interested in a couple of themes beyond this one, which is obviously a big component of my research. The last few years have seen people start to discuss accessibility aspects of data visualization, in particular blind users working with data visualization. In the fall of 2019 I had a blind student sign up for my data visualization class, which really made me reconsider my view of data visualization, so this is currently a big topic in my group. We're also working on several aspects of human-centered AI techniques, which is partly why I'm excited to visit Stanford, with your center here, on putting AI techniques into the hands of users. The picture in the upper right represents work we presented at a conference about three weeks ago: a virtual eye tracker. Using crowdsourced eye-tracking data on data visualizations, we trained a CNN so that, given a new visualization, you can feed it in and get a saliency map of what a person looking at it for the first time would focus on. That's useful information for a designer: if I'm sitting down designing a new visualization, getting quick, rough feedback without running a full-fledged eye-tracking study, which costs money and time, is very useful. I also have a student who's very invested in, and talented at, building things, so we're exploring physical computing aspects of data visualization. And another theme is visualization recommendation and data science recommendation in computational notebooks.
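As a sketch of how such a virtual eye tracker might be invoked in the browser, here are a few lines of TensorFlow.js. The model URL, input resolution, and output shape are all assumptions for illustration, not the actual trained network.

```javascript
// Hypothetical use of a saliency-prediction CNN in TensorFlow.js. The model
// URL, input size, and output shape are assumptions, not the real system.
import * as tf from "@tensorflow/tfjs";

async function saliencyMap(imgElement) {
  const model = await tf.loadGraphModel("https://example.org/saliency/model.json");
  const input = tf.tidy(() =>
    tf.browser.fromPixels(imgElement)      // visualization screenshot -> tensor
      .resizeBilinear([224, 224])          // match the model's assumed input size
      .div(255)                            // normalize to [0, 1]
      .expandDims(0)                       // add the batch dimension
  );
  const out = model.predict(input);        // e.g., [1, 224, 224, 1] saliency
  const map = await out.squeeze().array(); // 2D array of fixation likelihoods
  input.dispose();
  out.dispose();
  return map;
}
```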
Okay, that's it for me; I'm happy to take questions. Thank you.

Regarding the Wizualization paper, which seems super interesting, and I'm sorry that one didn't work out: I'm curious what the advantage was of having gesture recognition alongside speech recognition. My thought would be that speech recognition is probably a lot easier and a lot less intensive, so what was the advantage of adding the gestures? Did it enable more commands, or was it just a different way for people to interact with the visualizations? So it's definitely a lot about being able to precisely point and select once you have created a data visualization. During authoring, I agree that you can often be more specific with speech, and that's heavily used during authoring; you can do both, but most people prefer speech while building a visualization. But once you, say, make a mistake, you can use gestures to select which clause needs to be deleted, which is much easier than navigating by voice commands. So I think the right answer is a combination of both, and that's the beauty of having both available. One little wrinkle in our particular implementation: because we're web-based, I think it's Mozilla's WebXR Viewer, I forget which browser it is, we had issues getting voice recognition to work on the HoloLens 2. It works fine with their own APIs, but the browser did not, so we had to use the mobile phone for all the voice commands. That's just an implementation detail due to the current state of the APIs. Thank you.

Hey, it was great to see an overview of all the work. My question is: a lot of the things you showed, about using multiple displays, multiple devices, or even collaboration, are general things I might want for all sorts of computing, but you focused a lot on visualization applications, at least in what you showed. How do these general ideas relate to visualization? What did you specifically learn about visualization in these contexts that's separate from how you would use them for other applications? You're absolutely right that a lot of this is generally applicable. I'm a data visualization person, of course, so that's my default, but there are some things I think are specific to data. One is that a lot of the work we're doing uses situated data, where the data itself is in the world and has a particular location, like in the ReLive system; in that case it makes sense to actually focus on data. But I could easily see a lot of these ideas having general usage. I guess I have some blinders on because I'm a data person, but I certainly see that this is relevant to a lot of other areas too.

A lot of your demos focused on board-scale interaction. Of the tripartite pitch of tabs, pads, and boards in Weiser's vision, boards are the one part that hasn't quite manifested in the way envisioned. Typically when we see them today, they're used for single-person control or just for projecting, for sharing, versus the original visions of collaborative sensemaking: the NASA movies where everyone crowds around a gigantic wall-sized display, or Sherlock doing his mind-palace thing, navigating some complex data space. In practice there's more manual actuation required. What I'm trying to drive at is: where is the threshold where this actually flips and we use boards in these interactive, touch-sensitive ways, rather than just throwing things on the wall to look at? What would it take, in the visualization space, for me to use a board in preference to the laptop in front of me, where I can type and actuate much faster? It's a good point. A couple of years ago tabletops were big; everyone was supposed to gather around a tabletop, the benefit being that it's big enough for multiple people to gather around. But they're out of favor, and I think we're just reaffirming this quality computing idea: one device to rule them all. I don't know. We see it in my lab too: we built a big touch display wall and it's not being used; we don't get over that hump where it's actually better. It seems like those settings are indicative of how collaboration in general happens: most of the time, collaboration is not actually same-place, same-time; that's a very small percentage. So I guess I sound a little pessimistic, but I don't know if we'll get to that vision of boards and big collaborative displays, because it feels like you get most of the benefits on the tabs and pads.
But they look cool. Yeah, they do. I was just reflecting that this is the place of the iRoom and all that work, so it feels cool to be here.

Thank you for the great talk. I have a question about where you see this kind of work going in, say, the next five years. A lot of the examples you showed focus on analytics and direct manipulation of visualizations, but things are changing in this anytime-anywhere-all-at-once world, with question answering and augmented analytics, presenting insights and interacting with insights; there's a layer of abstraction above the kind of work you showed today. So I'm curious to hear your thoughts on where the visualization space is going in this ubiquitous computing world. Yeah, definitely. Where is it going? A lot of the examples I showed are relatively simple data visualizations, but we are seeing various forms of conversational interfaces where you can ask questions, or interfaces where some kind of human-centered AI in the background tries to highlight or bubble up inferences or insights. I think the future here is one that combines these efforts. It's also pretty closely tied to some of the accessibility work: a lot of the things we do for accessibility are applicable to human analysts too. When you're moving through the physical world you have limited attention and limited hands to interact with things, so having automatic algorithms to help you turns out to be essentially a curb-cut effect that we can bring over from accessibility as well. Thank you.

Thank you very much for the talk. I was really interested in a couple of the examples with spatial or movement-based interactions: people swiping from the screen onto their watch, or, in the collaboration example, people walking close to each other. I was wondering how you decide which movements map to which actions. Some seem very intuitive, like the swiping, which I think people would guess, but maybe raising your hands to merge wouldn't be. How do you decide on those, and does that map onto the explicit-implicit decision making you talked about in the collaboration project? It's a very good point. In general, gestures are always low on discoverability, so the whole rhetoric of "natural user interaction" is a little fraught, I think. In the case of the proxemic lens work, where you raise your hands, we were exploring a bunch of different actions, and that paper is a whole exploration of where the boundary between implicit and explicit lies, but the decisions about which gestures to use were relatively arbitrary. In some cases a gesture makes sense, but some of the actions you're performing are so abstract that there's no natural way a human would do them anyway. Discoverability for gestures in general is a super tricky area. The one way we tried to address it in the Wizualization work was to use American Sign Language, which is at least an attempt at grounding the actions in some real phenomenon
that people may have knowledge of; but if they don't, that doesn't help, and even if they do, it's still arbitrary that you have to sign a particular first letter in order to create a new view. So I don't have an answer for these things. It is definitely not natural; I think that's a bad moniker.

So I have a question, actually. In the Wizualization project you were helping people author these 3D scatterplots, and I wonder: for abstract, generic data analysis tasks, aren't those often worse than multiple 2D views, like you showed in the large-display example? Why is it good to help people author these if they're perhaps suboptimal? What's your opinion on that? Yeah, 3D visualization: my dissertation was about realizing that 3D visualization sucks, so you don't want to do it, and I'm an ardent adversary of 3D visualization. It's not a good idea, and what you saw here we tried to do in a disciplined way, where the 3D is motivated. The first thing you realize when you start doing immersive analytics and putting visualizations in augmented reality is that your visualizations will exist in 3D whether you like it or not, so you have to figure out how to do that efficiently, so you don't run into occlusion, navigation, perspective foreshortening, and all those kinds of problems. You want to be disciplined: make sure the visualizations face you, avoid problematic depth information, and so on. Sometimes 3D makes a lot of sense: if the data is 3D, like the movement traces, then seeing it in 3D definitely makes sense, and a 3D surface also makes sense, but if it's bar charts, there's really no reason to create them in 3D. I'm aware that our grammar enables you to do this, and it's on us, in future work, to guide people away from those mistakes. I'm interested, though: I see a lot of visualizations that are maps with bar charts or spikes laid on top of them, and now that it's getting easier for people to make these, people will just do it, even if visualization experts think or know it's a bad idea. How do you think we should approach that problem? Maybe the authoring tools need to provide guidance to avoid some of those pitfalls. It's easier to accept sometimes when it's done for effect; it can be useful for storytelling. But if you're trying to compare a bar chart far in the distance to one nearby, you're going to have a bad time. So yes, it's a problem, and one I've been battling since my PhD, really.

Thanks for the great talk; it was great seeing all of these additional interactions that having multiple types of platforms can enable in conjunction with each other. Going back to the conversation about insights, I was wondering if you have any, well, insights, sorry for being redundant, for lack of a better word, about the additional types of insights that could be gathered through having these multiple displays and being able to interact with them in a more embodied sense, beyond, say, having a large dashboard with a
lot of synchronized visualizations. This is a very good point. Every time I review papers on this topic I ask myself what the purpose is of doing it in VR or AR, and you need a good answer to get past me as a reviewer, so I try to turn that standard on myself too. There are reasons why you'd want to do this. One, of course, is that in an immersive space you have more physical space to place things in, and you can take advantage of embodiment, muscle memory, and spatial memory: just knowing that I put my bar chart by my right hip means I know exactly where to look for it, which is useful. The other reason, beyond placement, is that once you put things in an immersive space you get the use of both of your hands, or at least of controllers, so you can manually grab things and move them around. And then there are softer reasons, like presence, the sense of being there, and being able to use physical navigation to zoom in and out, look closer, compare things, change your viewpoint, and so on. Great, yeah, thank you.

Thanks. You're pitching visualizations that scale from, in theory, postage-stamp size out to a large wall, and it strikes me that the Cleveland and McGill results that inform what kinds of visualizations we make are going to break down at different scales of human perception: the rules we derive from perception at a small scale will differ from the ones that apply when the display is really huge. Have you observed where and when those principles bend and break, or do they hold up in all instances? I think your intuition is correct. The one thing we have noticed, and there may be others, is halo effects, where a group of elements influences how certain elements in that group are perceived; of course, if you get close enough, there may be enough white space that the effect diminishes. In general I think you're right, and it would be interesting to understand this better. I just haven't, and I don't think anyone has either: most graphical perception experiments and the like are done on standard screens that are relatively small. It would be interesting to figure that out.

Okay, any other questions? Then let's thank our speaker one more time. [Applause]