Curious Conversations, a Research Podcast

"Curious Conversations" is a series of free-flowing conversations with Virginia Tech researchers that take place at the intersection of world-class research and everyday life.
Produced and hosted by Travis Williams, assistant director of marketing and communications for the Office of Research and Innovation, episodes feature university researchers sharing their expertise, motivations, and the practical applications of their work in a format that more closely resembles chats at a cookout than classroom lectures. New episodes are shared each Tuesday.
If you know of an expert (or are that expert) who’d make for a great conversation, email Travis today.
Latest Episode
Brendan David-John joined Virginia Tech’s “Curious Conversations” to talk about gaze data, exploring its applications in virtual and augmented realities and the associated privacy concerns. He highlighted the potential for gaze data to reveal personal information and related security implications, especially in a military context, and shared the projects he’s currently working on to better mitigate this threat.
(music)
Travis
It's been said that a person's eyes are the windows to their soul. While I'm not sure exactly how true that is, or what their soul would even look like if I saw it in their eyes, I do know there's a lot you can learn about an individual from their eyes. And we're also moving more and more into a world where virtual reality, augmented reality, and other types of devices are using our eyes and our eye movement as part of that interactive experience. So I'm curious what these new devices, this new technology, can actually learn about us from our eyes, and also how we can guard against somebody misusing that same information. And thankfully, Virginia Tech's Brendan David-John is an expert in this very subject.
Brendan is an assistant professor in the Department of Computer Science at Virginia Tech and also a member of the Private Eye Lab at Virginia Tech. His research interests include eye tracking, virtual reality, augmented reality, privacy, and computer graphics. Brendan and I talked a little bit about gaze data, what that actually entails, and what we can learn about individuals simply by looking at their eyes. We also talked a little bit about how nefarious actors could possibly misuse that same information, and he shared some of the projects he's working on to shore up security in the space, one of which involves helping create a more secure interface for our military members when it comes to using things like augmented reality. Brendan also shared some insights that you and I can use to help better protect ourselves as we venture into these spaces, and he shared how having all of this knowledge has impacted how he interacts with technology. As always, don't forget to follow, rate, and/or subscribe to the podcast. I'm Travis Williams, and this is Virginia Tech's Curious Conversations.
(music)
Travis
I am curious about this concept of gaze data. And so I guess that's a great place to start: what is gaze data?
Brendan
Yeah, yeah, I mean, I think the primer here is about human vision. I've been working with gaze data since I was an undergrad researcher, I guess the summer of 2013, so I'm pretty in tune with what the eyes do. And what you always hear is the eyes are the window to the soul, to some extent. And I think it's very true from a neuroscience perspective, right? The eyes are our main form of visual input and sensory input, assuming you have proper vision; folks with low vision or different conditions will have different priorities for their sources of sensory information. But vision is essentially dominating the bandwidth of what goes to our brain, what our brain processes, and what we act on and do next.

So I think the best example, or I guess the metaphor I want to bring here, is we can think about our computer screen as having a very high resolution. It could be 4K, thousands and thousands of pixels, but they're all uniformly distributed across the display. It's all the same resolution, in some sense. Our eye works very differently. Where we look actually has a spike in resolution, and we call this area the fovea. A lot of folks learn about this with rods and cones in high school and middle school. And that foveal region is really, really small. The example vision scientists use is: you hold out your thumb at arm's length, and the width of your thumb is pretty much all you can see in high detail at one instant of time. But your brain is what actually stitches all of this together. The world doesn't look blurry. It doesn't look like it loses detail. Our brain has just learned a process where, I don't want to say frames, because we don't have a frame rate, but that frame-by-frame input of these small high-detail regions gets stitched together into a vibrant, high-detail world around us. And it very quickly falls off, right?

The perfect illusion that I like to bring up here, it's a really fun example, is where they'll have you look at one side of a screen and they'll bring a face in slowly from the right-hand side. And everything just looks normal, right? As long as I'm keeping my gaze where I'm supposed to look on the left side of the screen, I can tell a face is coming in from the periphery. But once it overlaps with the region where I'm looking, I realize the nose and the mouth on the face are completely flipped upside down. And I couldn't tell that until it actually overlapped with my fovea and I had the detail to pull that information out. But my brain was still smart enough to know: this sparse, blurry thing in the corner of my eye, it's definitely a human face. I'm very good at seeing human faces, right? Babies learn this during development. And until it overlaps, the shoe doesn't drop that there's actually something weird going on with the nose and mouth, because my brain has just taken that part of the world and constructed its understanding from it.

So that's, at a high level, why gaze data is so important. It really tells us what the brain is processing, what it's thinking. And our brain is really optimized for how we survive, act, and move in this type of environment.
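For the curious, that thumb rule is easy to sanity-check with a little trigonometry. A quick back-of-the-envelope sketch in Python, assuming a roughly 2 cm thumb held roughly 60 cm from the eye (round numbers, not measurements from the episode):

```python
import math

# Round-number assumptions: a thumb is ~2 cm wide and is held
# at arm's length, ~60 cm from the eye.
thumb_width_cm = 2.0
arm_length_cm = 60.0

# Visual angle subtended by the thumb, in degrees.
angle_deg = math.degrees(2 * math.atan((thumb_width_cm / 2) / arm_length_cm))
print(f"Thumb at arm's length covers ~{angle_deg:.1f} degrees of visual angle")
# ~1.9 degrees -- right around the 1-2 degree span usually quoted
# for the fovea, which is why the thumb trick works.
```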
Travis
Yeah, and so I guess when we are now using a lot of these virtual reality and augmented reality goggles that are, I guess, super close to our faces, how is that interacting with gaze data and our eyes in ways that are, I don't know, capturing or using that data?
Brendan
Yeah, there's definitely a few ways. And I think the first goes back to that monitor example, right? We have this 4K monitor. I have a pretty big curved screen in front of me, or you go to a projector system in an IMAX. And those things are just a bunch of pixels. And essentially, if I put the screen right up in front of your eyes, I have some optics and some lenses that kind of amplify things and make it look big in a field of view or look immersive. But essentially, there are so many pixels that I have to pack in, especially with how close it is to the eyes, to make the image look crystal clear.
What's interesting about virtual reality and how we see things in 3D is we actually have a left-eye and a right-eye view, right? That's how we see the world, and that's what the brain uses to figure out depth and information about the world. So we have to replicate that if we're doing what we call rendering, or making a virtual world appear in front of us. We render this virtual imagery, and our brain kind of gets tricked to fuse these things together and think it's seeing a 3D world. And the whole point of this is that there are a bunch of pixels, and your computer graphics, your video games, your virtual simulations have to fill all these pixels with colors. And that's a lot of computation, right? Because that distribution is uniform, but our eye actually has a spike right around the point where it's focused, the fovea. And since that's exactly what the gaze data tells us, we can actually optimize that rendering and save thousands and thousands of pixels of computation, because my eye just can't sense that high resolution anywhere else. So by giving these systems gaze data, I actually get really nice optimizations in terms of power usage and the amount of time it takes to render a frame and make something show up, if you're thinking about a video game example. But this is really true for simulations, right? If I want to train somebody in a virtual environment and have them go perform well, whether it's a firefighter or a surgeon, I need to actually be able to recreate photorealistic, high-fidelity environments for them. So that's one way that it gets leveraged.

I think another interesting way it gets leveraged is with interaction. Folks are familiar with the Apple Vision Pro. One of their big selling points is that they have this really natural interaction called gaze and pinch. So instead of having to hold this physical controller, and in fact the Vision Pro doesn't even ship with a controller, right, it only tracks your hands and lets you point at things and type on virtual keyboards. It's a lot easier to just look naturally at an object and pinch my fingers. And if I can do that, it's kind of like magic, is the way they pitched it. And you need that gaze data, right? If I don't have the direction somebody's looking, it could be anything in front of them they're trying to select.

But if I have an idea of where the eyes are pointing, I can detect this little pinch gesture with some other camera on the headset and figure out, yeah, they're trying to select that button or interact with that character in a certain way. So it's this really naturalistic form of interaction. And we've had this for a little bit in accessible interfaces, think Stephen Hawking's setup, right? Where there was some sort of blink detection to do interaction with text-to-speech types of systems. We've had this around for a while, but we didn't always have the technology to just drop it on somebody's head and have it work. There's a lot of calibration, there's a lot of time to build these systems, but we're really reaching the point where these VR systems can just track what you're looking at and naturally let you interact with them. Which is really, really good, because I don't want to ship a controller. I don't want to force you to say, I forgot my Apple Vision controller, I can't even use it anymore, right? We want that form of natural interaction.

And I think the last thing I'd mention here about why this is really important is that same thing I mentioned earlier, right? Our brain is constantly sensing the world around us, constructing some version of reality, and we're using that to do actions. If I'm a surgeon, I need to do very specific things. I need to pay attention to specific areas. So some of my research is on what's called gaze intent modeling. We take this data from the eye movements and what you're looking at and try to predict what you're going to do in the future. That way we could build some assistance, right? Maybe I need help selecting something quickly, even in that case where I look at something with my eyes and then I pinch. Or if I need to determine that maybe I missed something and might make a mistake, right? It's a lot easier to do that if I know exactly what you're paying attention to, instead of just getting, you know, a camera strapped to your head like a GoPro.
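To make the rendering savings concrete, here is a minimal sketch of how a foveated renderer might pick a shading rate from gaze. The three-ring scheme, the pixels-per-degree figure, and the ring thresholds are invented for illustration, not taken from any particular headset; real engines typically do this through GPU variable-rate-shading features rather than hand-rolled code like this:

```python
import math

def shading_rate(pixel_xy, gaze_xy, px_per_degree=20.0):
    """Pick a coarser shading rate the farther a pixel sits from the
    gaze point. The 20 px/degree display density and the 1- and
    5-degree ring thresholds are invented round numbers."""
    dx = pixel_xy[0] - gaze_xy[0]
    dy = pixel_xy[1] - gaze_xy[1]
    eccentricity_deg = math.hypot(dx, dy) / px_per_degree

    if eccentricity_deg < 1.0:   # foveal ring: full detail
        return "1x1"             # shade every pixel
    if eccentricity_deg < 5.0:   # near periphery
        return "2x2"             # one shading sample per 2x2 block
    return "4x4"                 # far periphery: ~1/16th the work

gaze = (960, 540)  # eye tracker reports gaze at frame center (1920x1080)
print(shading_rate((970, 545), gaze))   # -> 1x1, right on the fovea
print(shading_rate((1500, 900), gaze))  # -> 4x4, deep in the periphery
```

Everything outside the foveal ring gets shaded at a fraction of the cost, which is where the power and frame-time savings described above come from.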
Travis
Yeah, it sounds like you're almost trying to create something like the predictive text my cell phone has, but for that system, with my eyes.
Brendan
Exactly, exactly. Action and perception, yes. Based on that, yes.
Travis
When it comes to gathering and collecting this gaze data, what types of things could a person, perhaps a nefarious person, learn about an individual simply from the gaze data?
Brendan
Yeah, and this gets into a lot of my research, which is: this is really cool, but it's also a little bit scary what could be learned about me from a privacy perspective. And there are a lot of privacy concerns, right? I think some of them are as simple as figuring out your age. There's an idea of anonymous browsing in a web browser, for example; I don't want targeted ads based on my age group or my gender or my ethnicity. But you can kind of think of the eyes as both an input and an output, right? They're searching for more information, and that's revealing what I'm looking for. But they're also telling me what my brain's receiving and how I respond to it. It's this loop between inputs and outputs. And actually, if you pay attention to that over time, you can really figure out, hey, what ethnicity is a person, based on the way they look at other folks' faces? There are a lot of experimental research studies that do this. We're still seeing how it extends out to the natural world and what we see every day.

But I can figure out your age just based on the eye movements themselves. I don't even care what you're looking at. The muscles in your eye have pretty systematic tendencies to change over time. So if I just get this raw data unfiltered, just the eyeball rotations, I can start to figure out, hey, roughly what age group does this person fall within? And then if I pair that with the content they're looking at, I can start to figure out, maybe they look at things in a certain way from a gender perspective, or they look at different ethnicities and have some sort of implicit biases in the ways that they think and process information about the world, and totally start to model this thing. Whether it's accurate or not, you might feed this to some AI model or some predictive algorithm to feed you ads, or even just personalized content in general, right? Think about dynamic storylines and things of this nature, things that can push people toward different perspectives and thoughts and ways that they approach the world. All these things could in theory feed in from that perspective. So at a high level, those are a lot of the things you can learn about somebody on the personal side.

A lot of my research has also looked at identification. It's not the same as a fingerprint or an iris pattern, it's not as strong; we call it a behavioral biometric. But if I have a sufficiently large set of people, let's say a few hundred or maybe even up to a thousand, and I have enough data, I could kind of pull you out of the hat if I have some of that eye movement data, because the way your muscles are designed is very unique. So there are both general trends, like the age piece I mentioned before, and things that are very unique about you in the way your eyes move and the way you respond to content. So identity, and these other kinds of personal preferences, I would say, are things I can capture about you.
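As a rough illustration of what a behavioral biometric pipeline can look like, here is a toy sketch, not David-John's actual method: it reduces an eye-velocity trace to a few invented summary statistics and does a nearest-neighbor match against enrolled users. All data and thresholds are hypothetical:

```python
import numpy as np

def gaze_features(velocities_deg_s):
    """Reduce an eye-velocity trace to three crude statistics: mean
    saccade velocity, peak velocity, and fraction of samples spent
    fixating (below 30 deg/s). An invented feature set; real systems
    use far richer descriptors."""
    v = np.asarray(velocities_deg_s, dtype=float)
    saccades = v[v >= 30.0]
    return np.array([
        saccades.mean() if saccades.size else 0.0,  # mean saccade speed
        v.max(),                                    # peak speed
        (v < 30.0).mean(),                          # fixation fraction
    ])

def identify(trace, gallery):
    """Nearest-neighbor match of a fresh trace against enrolled users."""
    probe = gaze_features(trace)
    return min(gallery, key=lambda uid: np.linalg.norm(probe - gallery[uid]))

# Hypothetical enrolled users: feature vectors from earlier sessions.
gallery = {
    "user_a": np.array([185.0, 310.0, 0.60]),
    "user_b": np.array([90.0, 150.0, 0.80]),
}

# A fresh 10-sample trace of angular velocities in deg/s.
new_trace = [12, 8, 150, 200, 15, 10, 90, 9, 300, 11]
print(identify(new_trace, gallery))  # -> user_a
```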
Travis
And what might a nefarious actor do with that type of information?
Brendan
Yeah, I mean, at a high level, the ad space for VR, the immersive space, is really still unknown. But it's definitely the case that if I'm making money off the number of clicks I get from somebody, and let's say in the VR space a "click" is how long you look at an object, I can try to find the things that attract your attention most and make sure that I'm showing them to you. That could be more ad revenue for me, because then I could potentially follow up; think about the way web ads work. They get money if you follow a sponsored click through to an article or something.

If I know I have more things that you're going to respond to, I can start to put those into the environment around you. And maybe I don't notice the effect, I'm just giving up my data, but someone who's really nefarious could find what my interests are. Maybe I have some very, very specific view on very specific politics or very specific items, and they want to push me to be more radical in that direction. They can start to control what I'm seeing, see how I'm responding, and maybe feed off of that with some sort of adaptive algorithm in a very manipulative or deceptive way. That's something that I'm thinking about a lot. There's this idea of dark and deceptive patterns, and I think vision and attention is a huge part of that, right? A web browser just knows what's on the screen at that point in time. The eye tracker gives them information about where you're allocating your attention.
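A tiny sketch of how "looks become clicks" might be metered. The data format is invented (one object label per tracker sample at an assumed 60 Hz); it just shows how raw gaze samples turn into the click-like engagement metric described above:

```python
from collections import defaultdict

def dwell_times(gaze_samples, sample_dt_s=1 / 60):
    """Turn per-sample gaze hits into per-object dwell times.

    gaze_samples is a sequence of object labels, one per tracker
    sample (None when the gaze hits nothing tagged). The format is
    invented for illustration."""
    counts = defaultdict(int)
    for label in gaze_samples:
        if label is not None:
            counts[label] += 1
    return {label: n * sample_dt_s for label, n in counts.items()}

# 60 Hz tracker: 1.5 s on an ad, a gap, then 3 s on a storefront.
samples = ["ad_billboard"] * 90 + [None] * 30 + ["storefront"] * 180
print(dwell_times(samples))  # {'ad_billboard': 1.5, 'storefront': 3.0}
```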
Travis
You're doing a lot of research related to how we can be more secure in this space. And one of the projects that I know that you're working on is this project that's supported by DARPA. And so what are you working on right now related to how we can help keep our military members more secure in this space as they venture into using more VR and AR technology?
Brendan
Yeah, definitely. And at a high level, I think there are a lot of critical spaces for VR and AR. I mentioned surgery, for example, right? If I'm trying to get you past these physical monitors in the environment, I want to give you information; that's the added benefit. And it's usually more mixed reality, the idea that I can see the real world and augment it with additional information. I mean, virtual reality is really good for simulations, but I'm not walking down the street wearing a VR headset and blocking out the real world. And especially in these critical applications, you still want the real world preserved.

But what we're looking at is, how can I display information in a way that enables you to perform more efficiently? And one place we think about that is a critical case, a soldier's case, right? Instead of having information on a tablet, some digital information about a map or the rest of the world or the content, I might want that on a heads-up display. But I don't want that heads-up display to actually distract me from the most critical task that I'm doing at that moment.

So part of the use of eye movements there is to understand the distribution of that content and make sure we're not distracting and actually having a net negative, right? We want to give them more information without taking their eyes off the task. But if I'm blocking relevant information, or if I'm just making them look around too much and keeping them off the task, that's actually a net negative. So that's one reason we apply eye tracking there.

My specific work within this DARPA project looks a little bit more at what happens if somebody actually gets access to that information. So, right, I'm using that same information to make sure the display is optimized, I'm giving you information, and maybe even letting you select things or give commands or indications really, really quickly if you can't speak at a certain point in time. But if somebody were able to get access to that communication channel, or even just put a really good thermal camera or some camera in the environment looking at you and figure out your eye movements, does that reveal anything about your cognitive status and maybe when you're vulnerable? And that very much links to what I said before, right? Our brain and that system are just taking information in, and your eyes relay that. If somebody can pay attention to that, they might be able to take advantage of a vulnerable moment or vulnerable status of the user, or even do some long-term tracking of, like, this person has been at their post for 12 hours, they're very tired. Right? If somebody can get any of that information, even at a very coarse level, they're not sneaking a camera into your headset or something, but at a very coarse level they could get this idea of what you're thinking and doing, I think it could be used in very nefarious, malicious ways. And we want to understand, in these critical settings, what the trade-off is to adding this technology, and think carefully about this data stream and the applications we use it for.
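As a hypothetical illustration of the kind of coarse cognitive-status signal being described, here is a sketch of a fatigue score built from gaze statistics. The baselines, thresholds, and weights are invented for illustration; they are not from the DARPA project:

```python
def fatigue_score(fixation_durations_ms, blink_count, window_s):
    """Crude fatigue proxy from gaze statistics.

    Longer fixations and higher blink rates are loosely associated
    with fatigue in the eye tracking literature, but the baselines
    and the 50/50 weighting here are invented for illustration."""
    mean_fix_ms = sum(fixation_durations_ms) / len(fixation_durations_ms)
    blinks_per_min = blink_count * 60.0 / window_s
    # Normalize against made-up "alert" baselines: 250 ms fixations,
    # 15 blinks per minute. A score near 1.0 means "looks alert."
    return 0.5 * (mean_fix_ms / 250.0) + 0.5 * (blinks_per_min / 15.0)

# One minute of hypothetical data from someone 12 hours into a shift.
print(fatigue_score([420, 510, 380, 460], blink_count=22, window_s=60))
# ~1.6, well above the 1.0 baseline, so flag this user as fatigued
```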
Travis
I know you're also working on an NSF project, a National Science Foundation project. So what all does that entail?
Brendan
Yeah, so it's a really fun project funded by the SaTC program, Secure and Trustworthy Cyberspace, within NSF. And this actually links back to that example I gave earlier about resolution and rendering, right? The idea that your display is uniform in pixels, lots of dense pixels, but I can only see a really sharp spike of those around my gaze direction at one point in time. So if folks are going to actually use this optimization, we need to give eye tracking data to the system. Some of my past work from my dissertation basically said, hey, I can maybe trust some Nvidia graphics card to get my gaze data, but I don't trust some random app developer or game developer. So the best practice is to just let the platform have access to it, but never hand it out to developers' tools or code. And the Apple Vision Pro, for example, has followed this type of approach; they're actually very privacy-preserving in what they're doing. But what this NSF project is exploring is: well, if I'm a very crafty game developer, I might be able to instrument my virtual reality game in such a way that different things you look at result in different things happening on the graphics card, or in the performance of the system. This is called a side channel.

So this actually isn't new, right? We can think about different side channels in systems, like the GPS, or sorry, the cell tower signal strength, for example. It can actually tell me how close I am to a certain cell tower, and that could give an idea of where you are within Blacksburg, or maybe where your office is, or what medical doctor you're visiting for certain periods of time. So that's a classic side channel: it's a way to get information that I never gave an app access to, but some other signal on the system that it does have access to reveals that information.

We've followed that analogy through to this virtual reality rendering case and said, hey, this foveated rendering thing is really cool. It saves power, right? It's really good and helps games run faster. But what can I take advantage of in the fact that the graphics card responds differently to the environment I render around the person? The idea is that different things react to your gaze differently, and I can actually find these signals in the system and still try to solve for and reconstruct that gaze data. I'm not supposed to have access to it; maybe I can track what you're doing in the environment, what general direction you're looking in, but not the specific object you're looking at. But with this attack, we call it a security side-channel attack, I can instrument what are called trapdoors, and those will capture some of the system performance. And that's something I do have access to as a developer, usually to optimize a game or make sure things are running smoothly. And I can pull and reconstruct that gaze data at a coarse level out of that. So that's what that project's exploring. As I mentioned earlier, we're still seeing these things emerge, we don't know all the applications, but if they're using these eye-tracking-based optimizations or interaction systems, we can start to pull out some of that data maliciously and just step right around the best practices for protecting your own privacy.
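Here is a toy simulation of that trapdoor idea, with invented frame timings standing in for real GPU measurements. The point is that the attacker code never touches a gaze API; it only times frames, which developers can legitimately do for optimization:

```python
import time

REGIONS = ["top_left", "top_right", "bottom_left", "bottom_right"]

def render_frame(trapdoor_region, gaze_region):
    """Stand-in for a real renderer. Under foveated rendering, a
    detail-heavy 'trapdoor' object is far more expensive to draw when
    the gaze lands on it (full shading rate) than when it sits in the
    periphery. The millisecond costs here are invented."""
    base_ms = 8.0
    extra_ms = 6.0 if trapdoor_region == gaze_region else 0.5
    time.sleep((base_ms + extra_ms) / 1000.0)

def infer_gaze_region(true_gaze_region):
    """Attacker's view: no access to any gaze API, only frame timing,
    which developers can legitimately measure."""
    timings = {}
    for region in REGIONS:  # cycle the trapdoor through the regions
        start = time.perf_counter()
        render_frame(region, true_gaze_region)
        timings[region] = time.perf_counter() - start
    # The slowest frame is the one where the trapdoor met the fovea.
    return max(timings, key=timings.get)

print(infer_gaze_region("bottom_left"))  # -> "bottom_left"
```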
Travis
That is fascinating. It sounds like you really are in this emerging space, kind of leading the way, kind of exploring a new frontier of technology, which is really cool. Well, what should the average person be aware of in this space, and how can the average person really protect themselves against nefarious actors in this space?
Brendan
Yeah, I think the average user is actually at a disadvantage, because even if you have one of these fancy headsets, I would say eye tracking is showing up in them, but it's not mainstream for consumers yet. Take the Meta Quest 3, which is maybe the most accessible; usage of that device spiked at Christmastime in 2024, and it's used a lot for games. But we're starting to see these applications come out. What I think folks need to pay attention to is: what are the privacy policies around this data?

This kind of goes back to that idea of gender, age, and all the profiling that's going on on that side of things. We don't know what happens with the data in a lot of cases. And even if we were to try to read one of these really long policies and all the legalese in it, it's really hard to understand. So one of the things I'm doing on that side, and this is still in development, right, we don't know the full ecosystem, is asking: what are better ways to give you visual indicators of what your information relays about you? We just spent, let's say, 10 to 15 minutes talking about eye tracking data, and that might raise your eyebrows a little bit. But if somebody hasn't heard this before, they aren't quite thinking about what their eye movements reveal. And if I just tell you, hey, your game is going to run faster if you turn on this eye tracking sensor and let this app have it, you're probably going to say yes, right? You're just not informed about these issues. And even if you were, you would have to dive through the privacy policy to see how they're going to use that data.

So we're thinking about more immersive visualizations, actually in your environment. If I can just draw a little cyclops type of gaze ray out of your eye, for example, and let you play with that for a few minutes, you might start to think, hey, there are some unconscious things my eyes do that I'm not actually comfortable with this app knowing, and maybe I'm OK just not sharing that data. We had an example where we put people into a virtual reality art gallery, with randomly sampled art that we put up on the walls, and some of it had nude imagery, kind of older historical art. And when people turned this visualization on, to inform them about gaze data, because they had never really used eye trackers or weren't very knowledgeable about them before, they noticed: oh, these are the things I'm paying attention to. And they changed their behavior. They stopped looking at the nude regions in some of these paintings, or, like, explicitly told themselves, try to avoid looking here. And that change in behavior really suggests that people aren't aware of what their eye movements reveal about them, whether it's just attention or these other risks that we talked about with age, gender, and profiling.

So I think that's my advice. You can't read all the privacy policies, but make informed decisions when people ask for sensor data, whether it's eye movements, heart rate, all these biosensors. You really should kind of be a forensic investigator. Think about: do I need it for this use case? And who is this data going to? Is it just maybe Meta or Google, who make the device, or is it some third-party application that maybe wants to make money or do something else with this data? I think that's the best advice I have for consumers. And we're starting to see more education tools. You know, obviously I can train medical doctors, but I can also immerse you in a language learning environment and make it very easy to practice learning French, for example, or some natural language. It's a lot easier than, you know, talking back and forth with some sort of computer device; I can actually see a 3D person and practice the language with them. If I was also giving them gaze data, maybe they can make a more adaptive interface, but what are they doing with that data?

So I went on a little bit there, but I think that's the advice I would have for consumers: really think about what data you're giving up, and when, when you're prompted for it. It's just so natural to say yes, allow, yes, allow. I had to do it for this podcast recording: yes, allow my mic and camera, and it's perfectly fine. And I picked "while using this app only," right? Or "this site only," for example. That's my advice from that perspective.
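For the "cyclops gaze ray" visualization mentioned above, the core math is just a ray-plane intersection. A minimal sketch, with made-up coordinates:

```python
import numpy as np

def gaze_hit_point(eye_pos, gaze_dir, plane_point, plane_normal):
    """Intersect a single 'cyclops' gaze ray with a flat surface
    (say, a gallery wall) so the looked-at spot can be highlighted."""
    eye = np.asarray(eye_pos, dtype=float)
    d = np.asarray(gaze_dir, dtype=float)
    d /= np.linalg.norm(d)                  # unit gaze direction
    n = np.asarray(plane_normal, dtype=float)
    denom = d @ n
    if abs(denom) < 1e-9:
        return None                         # gazing parallel to the wall
    t = ((np.asarray(plane_point, dtype=float) - eye) @ n) / denom
    return None if t < 0 else eye + t * d   # t < 0: wall is behind the user

# Made-up coordinates: eye at 1.6 m height, looking slightly right
# and down at a wall 3 m ahead (the z = 3 plane).
hit = gaze_hit_point([0.0, 1.6, 0.0], [0.2, -0.1, 1.0],
                     plane_point=[0.0, 0.0, 3.0],
                     plane_normal=[0.0, 0.0, 1.0])
print(hit)  # ~[0.6, 1.3, 3.0]; draw the gaze ray ending at this marker
```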
Travis
Yeah, I think that's great advice. It sounds like it really comes down to awareness. Be aware of what you're clicking yes to, which I know I'm definitely guilty of, probably agreeing to things that I didn't fully read. Well, I'm curious, doing all the research that you do, having all this knowledge, how does it change how you personally use technology?
Brendan
Yeah, that's a really good question. It goes back to what my PhD advisor said. She kind of got me onto privacy and security a few years into my PhD, and she was kind of like, yeah, maybe once I retire, I'm just gonna live on an island and off the grid. But I still actively use the technology. And as I said, we're still not quite at the point where, like, everyone's using this in school, for example. I think there are a lot of challenges to fix first, like, at what age should I be using this? Does it affect the way my eyes develop? Right, because putting a screen very close to the eyes does affect the development of the eyes.

So I'm still obviously the early adopter type of person. I'm doing research, I'm a computer scientist in this space, and I tend to try a lot of these things. Let me say it this way: I tend to lean in and just understand these things. I don't have those Meta Ray-Ban glasses that are, you know, always available to record a quick clip of something; not always recording, but always available to record. I don't tend to put cameras on at home, let me say it that way. But in a research setting or a very controlled setting, I think it's good to explore and play around with these things, just to see the trends that are coming in the future.

I definitely play console games, and a little bit of VR games when I can. Beat Saber is really popular. It's like Dance Dance Revolution, but for VR, right? And that has its own risks. I think a funny story there: there's this leaderboard for Beat Saber where I think 50,000 people or so basically said, hey, I want to be put on a leaderboard, and gave up their motion data. This isn't even eye tracking data; this is them moving their head and hands to dodge things and swipe things with virtual swords. Some researchers at Berkeley actually made an agreement with the company to get that data set and were able to re-identify people out of those 50,000 with like 93 or 94 percent accuracy. So just as a gamer, even if I want to be anonymous in a different setting, if I gave up some of this VR data, I could be tracked.

But that went down a little bit of a rabbit hole. Yes, I do game. I think there's a value proposition, but you can always say no, right? I think maybe single-player campaigns and long campaigns are dying a little bit, but that's what I would like to see more of. I mean, social interaction is nice, especially through a platform like video games, but it's not the only thing. And I don't want to see the trend where you always have to be online, connected with some data connection, to be able to play this type of entertainment.
(music)
Travis
And thanks to Brendan for helping us better understand gaze data and how our eyes might actually be windows into our souls. If you or someone you know would make for a great curious conversation, email me at traviskw at vt.edu. I'm Travis Williams, and this has been Virginia Tech's Curious Conversations.
(music)
About David-John
David-John is an assistant professor in the Department of Computer Science and the Virginia Tech Private Eye Lab, as well as a researcher with the Commonwealth Cyber Initiative. His research interests include eye tracking, virtual reality, augmented reality, privacy, and computer graphics.
Past Episodes
Community Dynamics During and After Disasters with Liesel Ritchie
Liesel Ritchie discusses how sociology helps explain community resilience in disasters, the role of social capital, and the importance of local relationships.
Date: Mar 24, 2025

Drone Regulation, Detection, and Mitigation with Tombo Jones
Tombo Jones discusses drone regulations, safety, and counter UAS strategies, highlighting Virginia Tech’s role in advancing uncrewed aircraft systems.
Date: Mar 17, 2025

Public Perception of Affordable Housing with Dustin Read
Dustin Read discusses public perceptions of affordable housing, the role of profit status, and how development size impacts community support.
Date: Mar 10, 2025

Unpacking the Complexities of Packaging with Laszlo Horvath
Laszlo Horvath discusses packaging design complexities, including affordability, sustainability, and the impact of tariffs and supply chain disruptions.
Date: Mar 03, 2025

Engineering Safer Airspace with Ella Atkins
Ella Atkins discusses air travel safety, VFR vs. IFR challenges, recent collisions, and how technology and automation can enhance aviation safety.
Date: Feb 24, 2025

Cancer-Fighting Bubbles with Eli Vlaisavljevich
Eli Vlaisavljevich discusses histotripsy, an ultrasound therapy for cancer, its mechanics, clinical applications, and future directions in treatment.
Date: Feb 17, 2025

Examining the ‘5 Love Languages’ with Louis Hickman
Louis Hickman discusses ‘The 5 Love Languages,’ their impact on relationships, research findings, and the role of personality, self-care, and adaptability.
Date: Feb 10, 2025

The Behavior and Prevention of Wildfires with Adam Coates
Adam Coates explores the factors behind California wildfires, fire behavior science, urban challenges, and the role of prescribed burning in prevention.
Date: Feb 03, 2025

Computer Security in the New Year with Matthew Hicks
Matthew Hicks discusses evolving computer security threats, AI-driven risks, and practical tips to stay secure in 2025.
Date: Jan 27, 2025

Internet of Things Safety and Gift Giving Tips with Christine Julien
Christine Julien joined Virginia Tech’s “Curious Conversations” to talk about the Internet of Things (IoT), exploring its definition, potential vulnerabilities, and the implications of using smart devices, especially for children. Julien stressed the importance of security and privacy when using IoT devices, particularly during the gift-giving season, and shared insights on navigating these complexities with an aim of balancing enjoyment and security.
Date: Dec 09, 2024

Neurodiversity and the holidays with Lavinia Uscatescu and Hunter Tufarelli
Lavinia Uscatescu and Hunter Tufarelli joined Virginia Tech’s “Curious Conversations” to talk about the importance of understanding and accommodating neurodivergent individuals in various environments, particularly social gatherings during the holiday season. The pair shared the impact environmental factors can have on neurodivergent individuals, as well as the significance of predictability and communication in social settings. As a person with autism, Tufarelli also shared her first-hand experiences and the importance of embracing self-care.
Date: Dec 02, 2024

AI and Better Classroom Discussions with Yan Chen
Yan Chen joined Virginia Tech’s “Curious Conversations” to talk about the use of artificial intelligence to enhance teaching and peer instruction in classrooms. Chen believes one potential use for AI, specifically large language models, is to monitor and analyze peer interactions in real-time. He shared the platform he and colleagues have created to do this, called VizPI, which aims to provide instructors with insights and recommendations to create a more engaging and personalized learning environment for students.
Date: Nov 25, 2024

Forest Health and Natural Disasters with Carrie Fearer
Carrie Fearer joined Virginia Tech’s “Curious Conversations” to talk about forest health in the wake of natural disasters. She explained how storms and disturbances affect forest ecosystems, the importance of human interaction in promoting healthy forests, and the opportunities for restoration following catastrophic events. She also emphasized the significance of native species and the role of decomposition in maintaining forest health.
Date: Nov 18, 2024

Subduction Zones, Earthquakes, and Tsunamis with Tina Dura
Tina Dura joined Virginia Tech’s “Curious Conversations” to talk about subduction zones, particularly the Cascadia Subduction Zone, earthquakes and tsunamis. She explained the mechanics of earthquakes, and how the geological record and fossilized algae are helping researchers better understand past occurrences and predict future ones. Dura emphasized the importance of translating scientific research into actionable information for the public, especially regarding tsunami preparedness and community resilience.
Date: Nov 11, 2024

Turning old Plastic into Soap with Guoliang “Greg” Liu
Guoliang “Greg” Liu joined Virginia Tech’s “Curious Conversations” to talk about his journey in sustainability, focusing on the innovative process of converting plastic waste into soap. He shared insights on the challenges of controlling the chemical processes involved, the types of plastics used, and the potential for creating both liquid and solid soap products. He emphasized the importance of sustainability in the detergent industry and expressed hope for future commercialization of his work.
Date: Nov 04, 2024

Emerging Technologies and Entrepreneurship with James Harder
James Harder joined Virginia Tech’s “Curious Conversations” to talk about entrepreneurship and emerging technologies, specifically highlighting the Department of Computer Science’s initiative, CS/root. Harder shared his belief that the entrepreneurship process can be learned and applied to various novel ideas and the ways the program hopes to teach and support it, as well as the role it will play in putting emerging technology in more people’s hands.
Date: Oct 28, 2024

AI and Emergency Management with Shalini Misra
Shalini Misra joined Virginia Tech’s “Curious Conversations” to talk about how artificial intelligence (AI) might be used in the field of emergency management. She shared some of the different ways AI is currently being used and the concerns she’s heard from emergency managers. Misra also talked about the steps she believes will be necessary for the technology to reach its full potential in this field.
Date: Oct 21, 2024

Female Leaders of Nations and the US Presidency with Farida Jalalzai
Farida Jalalzai joined Virginia Tech’s “Curious Conversations” to talk about the state of female leadership globally, with a focus on the United States. She shared how she believes the U.S. compares to other nations in terms of female political representation, the unique challenges women face in the U.S. political landscape, and the impact of gender roles on women's leadership opportunities. She also shared the insights she gained through her research of female leadership during the COVID-19 pandemic.
Date: Oct 14, 2024

AI and Securing Water Systems with Feras Batarseh
Feras Batarseh joined Virginia Tech’s “Curious Conversations” to discuss the intersection of water systems and technology, specifically focusing on aspects of artificial intelligence (AI). He shared the importance of using AI to predict and prevent water quality issues, such as high turbidity, and highlighted the need for water systems to become more intelligent and cyber-secure.
Date: Oct 07, 2024

Alcohol Use and Intimate Partner Violence with Meagan Brem
Meagan Brem joined Virginia Tech’s “Curious Conversations” to discuss the intersection of alcohol use and intimate partner violence, highlighting the importance of understanding the causal relationship between the two. She debunked common myths, identified current knowledge gaps, and shared insights from ongoing studies. She also described the unique challenge of understanding these topics as they relate to LGBTQ+ populations and shared possible interventions on both societal and individual levels.
Date: Sep 30, 2024

Brain Chemistry and Neuroeconomics with Read Montague
Read Montague joined Virginia Tech’s “Curious Conversations” to talk about the role of dopamine and serotonin in learning, motivation, memory, mood, and decision-making. He discussed his research on measuring dopamine and serotonin dynamics in the brain in real time using electrodes in epilepsy patients and explained the role neuroeconomics is playing in that research.
Date: Sep 23, 2024

The Future of Wireless Networks with Lingjia Liu
Lingjia Liu joined Virginia Tech’s “Curious Conversations” to talk about the future of wireless networks and wireless communications. He explained the evolution of cellular networks from 1G to 5G and the potential for 6G, as well as how open radio access networks (O-RAN) can help advance innovation in this space.
Date: Sep 16, 2024

The Mung Bean and Reducing Hunger in Senegal with Ozzie Abaye
Ozzie Abaye joined Virginia Tech’s “Curious Conversations” to talk about her work using the mung bean to diversify the cropping system, empower farmers, and reduce hunger in Senegal, Africa. She explained why the mung bean is a good fit for that region, the process by which she began to share it with farmers, and the collaborations she’s utilized to expand it across the country. She also shared what some of the challenges were in developing recipes across cultural lines.
Date: Sep 10, 2024

Curbing the Threat of Invasive Species with Jacob Barney
Jacob Barney joined Virginia Tech’s “Curious Conversations” to talk about invasive species, their impact on native species, and the challenges of managing them. He explained the history and terminology of invasive species, their economic and ecological consequences, and the interdisciplinary approach to addressing the problem. Barney also highlighted practical steps individuals can take to prevent their spread.
Date: Sep 02, 2024

Making Motorcycle Riding Safer Around the Globe with Richard Hanowski
Richard Hanowski joined Virginia Tech’s “Curious Conversations” to talk about harnessing research to help make motorcycle riding safer in low- and middle-income countries. He shared the difference in riding culture in those areas as opposed to the United States and explained how his team is utilizing some of the Virginia Tech Transportation Institute’s pioneering technology to help increase rider safety.
Date: Aug 27, 2024

The Evolution of Political Polling with Karen Hult
Karen Hult joined Virginia Tech’s “Curious Conversations” to chat about the history and evolution of polling, methods used in modern polling, and how politicians and the average person can interpret poll results. The conversation highlights the importance of probability sampling and inferential statistics in generating accurate poll results, as well as the need for critical thinking when consuming poll results.
Date: Aug 20, 2024

Navigating Back-to-School Emotions with Rosanna Breaux
Rosanna Breaux joined Virginia Tech’s “Curious Conversations” to chat about the challenges and emotions children may experience during the transition back to school. The discussion includes red flags to look for, as well as coping skills and support parents and caregivers can provide to help their children navigate the school year. The conversation touches on the impact of recent bans on students having individual smart devices in schools.
Date: Aug 05, 2024

Geologic Carbon Sequestration with Ryan Pollyea
Ryan Pollyea joined Virginia Tech’s “Curious Conversations” to talk about geologic carbon sequestration, which is the process of permanently storing carbon dioxide (CO2) thousands of feet below the Earth’s surface. Pollyea explained what types of rock this is currently known to work with, the efforts he and his colleagues are taking to expand this to other geologic regions, and the potential impact that could have for the environment and economics.
Date: Jun 04, 2024

Veterans and Mass Incarceration with Jason Higgins
Jason Higgins joined Virginia Tech’s “Curious Conversations” to talk about the intersection of United States military veterans and mass incarceration and his book, “Prisoners After War: Veterans in the Age of Mass Incarceration.” He shared what led him to work at this intersection, some of the reasons he thinks it’s often overlooked, and factors he believes lead many veterans to being in prison. Having interviewed more than 60 veterans whose service ranged from the Vietnam War to the wars in Iraq and Afghanistan, Higgins also compares and contrasts their reported experiences and shares some of the efforts veterans are undertaking to support each other.
Date: May 28, 2024

Microplastics, the Ocean, and the Atmosphere with Hosein Foroutan
Hosein Foroutan joined Virginia Tech’s “Curious Conversations” to talk about microplastics, the ocean, and the atmosphere. He explained what microplastics are and shared recent findings that indicate such waste is somehow making its way into the air around the world. He also described some of the research he’s doing to figure out how this is happening and shared his current theories.
Date: May 21, 2024

Real Estate Values and Elections with Sherwood Clements
Sherwood Clements joined Virginia Tech’s “Curious Conversations” to talk about the impact real estate values have on the presidential election. He discussed some recent research he was a part of that explored the impact of the “homevoter,” what findings surprised him, and what he thinks the data tells us about the upcoming election.
Date: May 14, 2024

AI and the Hiring Process with Louis Hickman
Louis Hickman joined Virginia Tech’s “Curious Conversations” to talk about the use of artificial intelligence (AI) during the hiring process. He shared the ways in which AI has long been a part of the process, the findings from his research on AI evaluating automated video interviews, and some tips on how job seekers can leverage the technology to improve their job hunt.
Date: May 06, 2024

Exploring the Human-Dog Relationship with Courtney Sexton
Courtney Sexton joined Virginia Tech’s “Curious Conversations” to talk about the unique relationship between humans and dogs. She shared the origins of the dog-human relationship, how the animals have adapted and become more attuned to human needs, and their role in helping researchers learn more about human health.
Date: Apr 30, 2024

The Chemistry of Earth History with Ben Gill
Ben Gill joined Virginia Tech’s “Curious Conversations” to chat about piecing together Earth history through a combination of geology and chemistry. Gill explained how studying the cycles of different elements can tell a story and help us better understand the planet’s most pivotal moments, such as mass extinctions. He also shared how studying both the worst and best times of our planet can provide us valuable insights for the future.
Date: Apr 23, 2024

Circular Economies with Jennifer Russell
Jennifer Russell joined Virginia Tech’s “Curious Conversations” to talk about the concept of a circular economy. She explained that a circular economy is a shift away from the linear economy, which follows a take-make-dispose model, and instead focuses on reducing waste and reusing materials. Russell shared examples of tangible products and industries that can be, or already are, part of a circular economy.
Date: Apr 16, 2024

The History of Virginia Tech's Helmet Lab with Stefan Duma
Stefan Duma joined Virginia Tech’s “Curious Conversations” to talk about the history of the Virginia Tech Helmet Lab and the impact it has had on sports-related head injuries. He shared how a military research conference led him to study helmets, as well as the critical role the lab’s relationships with the Virginia Tech football and sports medicine programs have played in advancing this pioneering research. Duma discussed the role of the helmet lab in helping to create a greater awareness about head injuries throughout all sports, and described the helmet shell add-on fans can witness during the football team’s spring game on April 13.
Date: Apr 09, 2024

The History of Food Waste with Anna Zeide
Anna Zeide joined Virginia Tech’s “Curious Conversations” to talk about the history of food waste in America and its impact on society and the environment. She shared insights related to several historical turning points and stressed that addressing food waste requires rethinking and integrating food security and waste management systems.
Date: Apr 02, 2024

The Dog Aging Project with Audrey Ruple
Audrey Ruple joined Virginia Tech’s “Curious Conversations” to talk about the Dog Aging Project, the largest known study of dog health, which aims to understand the keys to healthy aging in dogs and the risks to their health. She explained what information they are collecting, what it means for dogs, and how it might also be used to better understand human health.
Date: Mar 26, 2024

All About Air Pollution with Gabriel Isaacman-VanWertz
Gabriel Isaacman-VanWertz joined Virginia Tech’s “Curious Conversations” to talk about air pollution and its misconceptions. He shared his insights related to how plant and human emissions interact and what that means for our shared environment, as well as how he got into this field of study and his hope for the future.
Date: Mar 19, 2024

Righting a Wrong Understanding of Newton's Law with Daniel Hoek
Daniel Hoek joined Virginia Tech’s “Curious Conversations” to talk about the recent discovery he made related to Newton's first law of motion. The law is typically translated as “a body at rest remains at rest, and a body in motion remains in motion, at constant speed and in a straight line, unless acted on by an external force." Hoek explains how he became intrigued by the law, the puzzles surrounding it, as well as the misconception that objects with no forces acting on them exist and how Newton's own account contradicts this.
Date: Mar 11, 2024

Measuring the Risks of Sinking Land with Manoochehr Shirzaei
Manoochehr Shirzaei joined Virginia Tech’s “Curious Conversations” to talk about the importance of understanding and measuring sinking land, commonly called land subsidence. He shared insights about the use of satellite data in creating high resolution maps, how land subsidence fits into the overall picture of climate change, and how he hopes the information is used by localities.
Date: Mar 05, 2024

Emerging Technology and Tourism with Zheng "Phil" Xiang
Zheng "Phil" Xiang joined Virginia Tech’s “Curious Conversations” to talk about the intersection of technology and tourism. He shares the significant technological shifts in the tourism industry over the past decade, including the influence of social media and artificial intelligence on trip research and the experience itself.
Date: Feb 27, 2024

AI and Education with Andrew Katz
Andrew Katz joined Virginia Tech’s “Curious Conversations” to chat about the potential of artificial intelligence (AI) in education. Katz shares his insight related to the applications of AI models, such as ChatGPT, in analyzing student responses and providing feedback, as well as the challenges of AI in education and the hope that it can provide a more individualized education experience.
Date: Feb 20, 2024

Warm, Fuzzy Feelings and Relationships with Rose Wesche
Rose Wesche joined Virginia Tech’s “Curious Conversations” to chat about the science behind the warm, fuzzy feelings that often accompany a new romance, the transition from infatuation to attachment, and how to maintain intimacy and passion in relationships. She also shared her research exploring the emotional outcomes of casual sexual relationships and provided advice for those in relationships.
Date: Feb 13, 2024

The Future of Wireless Networks with Luiz DaSilva
Luiz DaSilva joined Virginia Tech’s “Curious Conversations” to chat about the evolution of wireless networks, the importance of advancing the next generation of wireless, and the critical role the Commonwealth Cyber Initiative (CCI) is playing in that advancement.
Date: Feb 06, 2024

The Positive Impacts of Bird Feeding with Ashley Dayer
Ashley Dayer joined Virginia Tech’s “Curious Conversations” to chat about her work at the intersection of birds and humans, including a new project that explores the positive impact bird feeding has on human well-being and general tips for the hobby.
Date: Jan 30, 2024

Sticking to healthy changes with Samantha Harden
Samantha Harden joined Virginia Tech’s “Curious Conversations” to chat about the science behind developing and keeping healthy habits.
Date: Jan 16, 2024

Screen Time and Young Children with Koeun Choi
Koeun Choi joined Virginia Tech’s “Curious Conversations” to chat about the impact of media on young children. She shared insights from her research on screen time and young children and introduced a project she’s working on that explores the use of artificial intelligence to help children learn to read.
Date: Dec 11, 2023

The History of Holiday Foods with Anna Zeide
Anna Zeide joined Virginia Tech’s “Curious Conversations” to chat about the history of foods traditionally connected to holidays occurring during the winter months, as well as the nature of developing personal traditions.
Date: Dec 04, 2023

The Chemistry of Better Batteries with Feng Lin
Feng Lin joined Virginia Tech’s “Curious Conversations” to chat about the chemistry behind creating better batteries for electric vehicles. He broke down some of the current challenges to mass producing an effective and affordable battery, and shared his thoughts on the potential for coal in helping overcome these hurdles.
Date: Nov 27, 2023

AI as a Personal Assistant with Ismini Lourentzou
Ismini Lourentzou joined Virginia Tech’s “Curious Conversations” to chat about artificial intelligence and machine learning related to personal assistants, as well as her student team’s recent experience with the Alexa Prize TaskBot Challenge 2.
Date: Nov 20, 2023

The Power of International Collaborations with Roop Mahajan
Roop Mahajan joined Virginia Tech’s “Curious Conversations” to chat about the value of international collaborations to research and innovation, as well as how they’ve contributed to his work advancing the “wonder material” graphene.
Date: Nov 13, 2023

Driving around Heavy Trucks with Matt Camden and Scott Tidwell
Matt Camden and Scott Tidwell of the Virginia Tech Transportation Institute (VTTI) joined “Curious Conversations” to talk about the institute’s Sharing the Road program, which has shared tips for driving around heavy trucks with more than 20,000 high school students since 2018. They discussed the research behind the program and shared practical safety tips for drivers of all ages.
Date: Nov 06, 2023

Autonomous Technology and Mining with Erik Westman
Erik Westman joined Virginia Tech’s “Curious Conversations” to share his insights on how machine learning and autonomous technologies are impacting the mining industry, as well as what Virginia Tech is doing to prepare students for the future of the industry.
Date: Oct 30, 2023

Agriculture Technology and Farmers with Maaz Gardezi
Maaz Gardezi joined Virginia Tech’s “Curious Conversations” to talk about the importance of developing agriculture technology alongside and with the input of farmers. He shared details about a current interdisciplinary project he’s working on at the intersection of technology and agriculture, as well as his thoughts on the potential for advanced technology in this space.
Date: Oct 23, 2023

AI and Healthcare Workspaces with Sarah Henrickson Parker
Sarah Henrickson Parker joined Virginia Tech’s “Curious Conversations” to chat about how artificial intelligence and machine learning are currently being used in some healthcare spaces, and what the potential is for the future.
Date: Oct 16, 2023

AI and Online Threats with Bimal Viswanath
Bimal Viswanath joined Virginia Tech’s “Curious Conversations” to chat about how the rise in artificial intelligence and large language models has changed the online threat landscape. He explained how this technology works and shared about a current project he’s involved with that aims to mitigate toxic language in chatbots.
Date: Oct 09, 2023

AI and the Workforce with Cayce Myers
Cayce Myers fields questions on artificial intelligence’s impact on the workforce, regulations, copyright law, and more.
Date: Oct 02, 2023

Special Edition: The GAP Report with Tom Thompson and Jessica Agnew
Each year, Virginia Tech produces the Global Agricultural Productivity (GAP) Report, which provides a snapshot of the current state of agriculture and a projection of its future. Tom and Jessica, executive editor and managing editor, respectively, of the report, joined the podcast just prior to the 2023 release to explain what it is and how they hope it's used.
Date: Oct 01, 2023

The Metaverse, Digital Twins, and Green AI with Walid Saad
Walid Saad joined Virginia Tech’s "Curious Conversations" to field questions about the metaverse, digital twins, and artificial intelligence’s potential impact on the environment.
Date: Sep 24, 2023

Semiconductors, Packaging, and more with Christina Dimarino
Christina Dimarino joined the podcast to chat about semiconductors, the importance of packaging in onshoring their production, and what Virginia Tech is doing to accelerate workforce development in this field.
Date: Sep 15, 2023

Pilot: Electric Vehicles with Hesham Rakha
In this pilot episode, Hesham Rakha shares insights on what sustainable mobility means, the gas price at which electric vehicles become the more cost-effective option, and some of his personal experiences with an electric car.
Date: Aug 14, 2023