
Day 2 - Workshop on Brain Behavior Quantification and Synchronization: Sensor Technologies to Capture the Complexity of Behavior

Transcript

Welcome Day 2: May 3, 2023

YVONNE BENNETT: Thank you and good morning to everyone. Welcome to day two of our workshop. We have a jam-packed second day filled with topics such as sensors, systems, informatics, data science, computational neuroscience, and behavioral neuroscience and related experiments. So to get us started again, I would like to welcome Dr. Holly Lisanby, who is the Director of the Division of Translational Research at the NIMH.

Opening Comments

HOLLY LISANBY: So, thank you, Yvonne. It is so exciting to be launching day two. Day one was really fantastic. I would like to thank all of the speakers and attendees for your great Q&A, and we are looking forward to another exciting day.

I want to start by thanking the members of the workshop planning committee for BBQS sensors. They really span multiple NIH institutes, offices, and NSF, starting with of course Dr. Yvonne Bennett, Dana Greene-Schloesser, Lizzy Ankudowich, Svetlana Tatic-Lucic, Yuan Luo, Brooks Gross, Kari Ashmont, Tristan McClure-Beckley, Sandeep Kishore, Erin King, Holly Moore, Elizabeth Powell, and Eric Burke-Quinlan. This really could not have happened without you, so thank you so much.

Now it is my pleasure to introduce the moderator for this morning’s opening session, Dr. Carrie Ashmont. Dr. Ashmont is the Translational Team Lead at the National Institute of Biomedical Imaging and Bioengineering, or NIBIB, as it is affectionately called, where she manages the NIBIB small business programs, including the Concept to Clinic: Commercializing Innovation, or C3i, program.

She also supports the Institute’s RADx Tech program and co-leads the dissemination programs for the NIH BRAIN Initiative, highly relevant for our topic today. I got to know Carrie when she was at NINDS, where she oversaw the translational device cooperative agreements and ran the NIH Blueprint for Neuroscience Research education program on translational devices, or the R25 program.

Dr. Ashmont has degrees in mechanical engineering, biology, and biomedical engineering, so her expertise is perfect for today’s session. Welcome, Dr. Ashmont.

Session III: Sensor Networks, Signal Processing and Considerations for Artificial Intelligence (Continued)

CARRIE ASHMONT: Thank you very much. Good morning everyone. It is my pleasure to introduce our speakers for this portion of session three today. We have Changzhi Li from Texas Tech University, who will discuss portable radar systems for noncontact continuous measurement of naturalistic behaviors and physiological signals. And we also have Roozbeh Jafari from Texas A&M University, who will discuss digital medicine for cardiovascular health. With that I’ll pass it over.

CHANGZHI LI: Thank you so much for the nice introduction. My name is Changzhi. You can call me Chen Zi or you can call me Chaz, to make it easier. To begin my presentation I will play a video taken of my daughter Chloe when she was less than one year old. This year she is turning 10, so you can calculate approximately when this research was done.

So one night I brought a device designed by my students back home. And after Chloe and also my wife went to sleep, I secretly placed the device on the crib, with the laptop over there. So as you can see, the device was hanging on the crib looking at Chloe, maybe 30-40 centimeters away.

And on the other side, on the laptop, you can see a real-time signal: on top, the time domain, and on the bottom, the FFT result. No other signal processing. Basically, using a radio frequency signal, it is possible to pick up the respiration and heartbeat of a person without anything attached to the body.

And what is surprising is that this is actually not new. Back in 1975, Dr. James Lin invented a respiration detection system using a very bulky microwave setup. And the step that really made it amazing came in 2002.

A group of engineers at Bell Labs, actually including Dr. Jenshan Lin, my advisor, integrated the system onto a chip. I started my PhD career in 2005 with Jenshan in Florida, and today Jenshan has already retired and joined the NSF as a permanent program manager.

After this, during the past ten years we have seen tremendous advancement in this type of technology as I will show you later today. So the mechanism here is actually quite straightforward.

Basically, we have a radio frequency sensor, a radar. The radar sends out a radio frequency signal towards a human body, and once the signal is bounced back into the receiver of the radar, we compare a local copy of the transmitted signal with the received signal. Based on engineering processing, we can figure out the phase change of the signal due to the propagation of the wave. Because of that, the small motion of respiration and heartbeat will show up in the detected signal.

I will skip the details, but from an engineering point of view we have lambda, representing wavelength, in the detected signal. And if you look at the equation, obviously if you reduce lambda, the wavelength, you make the phase modulation bigger, and because of that you can increase the detection sensitivity. So increasing the frequency, pushing this into the millimeter wave range, is a way to make this really sensitive. And this is an effort done by a lot of RF engineers in the past ten years.
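As a rough, back-of-the-envelope illustration of that point (a minimal sketch, not from the talk itself): in a continuous-wave Doppler radar the detected phase modulation is approximately delta_phi = 4*pi*x(t)/lambda, so the same sub-millimeter chest motion produces a much larger phase swing at a higher carrier frequency. The displacement value below is an assumed number purely for illustration.

```python
import numpy as np

c = 3e8                  # speed of light, m/s
displacement = 0.5e-3    # assumed ~0.5 mm chest displacement from a heartbeat

# Phase modulation delta_phi = 4*pi*x/lambda at three carrier frequencies.
for f_carrier in (5.8e9, 60e9, 125e9):
    wavelength = c / f_carrier
    delta_phi = 4 * np.pi * displacement / wavelength
    print(f"{f_carrier/1e9:6.1f} GHz -> phase swing {np.degrees(delta_phi):8.1f} degrees")
```

Running this shows the phase swing growing roughly in proportion to the carrier frequency, which is the sensitivity argument for pushing into the millimeter wave range.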

Another thing I want to mention is that the transmit signal power here is more than 1000 times lower than the peak power of a cell phone. Moreover, we are detecting really slow movements, one hertz or 0.1 hertz, so we can heavily duty cycle the transceiver to make sure the average radiated power is really low, which makes this very safe for humans and animals. And we call that vital Doppler: you use the Doppler effect to detect life signs.

Another concept is called micro-Doppler, and now we are talking about bigger movements. For example, if I swipe my hand, there is movement, there is speed, and that is what is going to induce Doppler information.

So shown here are two examples. In one example we had the radar in front of this person. This person sat down, and then raised a hand with a handheld device. On the right-hand side we are plotting the Doppler frequency component as a function of time.

As you can imagine, if this person sits down, then the distance from the radar is going to increase, and that is going to induce a negative Doppler. In the meantime, interestingly, when we sit down our head leans forward a little bit, so there is a weak positive Doppler. And then over there you can see the signature corresponding to the hand movement.

On the bottom side is a very similar demonstration. We had a person who walked away from the radar, generating a negative Doppler, then this person pushed a handheld device, and you can see that information over there, and after that this person walked back towards the radar.
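For reference, the micro-Doppler relation behind these signatures is the standard Doppler shift f_d = 2*v/lambda for radial velocity v. A minimal sketch with invented, illustrative velocities (not the actual experiment values):

```python
# Doppler shift f_d = 2*v/lambda: positive toward the radar, negative away.
c = 3e8
f_carrier = 5.8e9            # same band as the corridor experiment mentioned later
wavelength = c / f_carrier

motions = [
    ("walking away from radar", -1.2),     # m/s, assumed
    ("head leaning forward", 0.1),
    ("hand swipe toward radar", 0.8),
]
for label, v in motions:
    f_doppler = 2 * v / wavelength
    print(f"{label:>26s}: {f_doppler:8.1f} Hz")
```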

And leveraging these types of micro-Doppler effects, industry is also doing a lot. For example, moving to the next page, you will see that Google has a team called Google Soli. They leverage this to monitor hand gestures and use that to interface with smart devices. The goal is that you don’t have to touch the screen, you don’t have to mess with small buttons; you can control the device with motion.

And four years ago Google put their Pixel 4 phone on the market. With this Pixel 4 phone you actually have this radar system, shown in green over there. This radar sensor can pick up hand gestures: you can remotely activate the phone, play music, volume up, volume down, things like that. And the engineer at Google who integrated the radar into the phone was actually a PhD student who graduated from Texas Tech.

Unfortunately, nowadays Google is pushing for lower cost, and they removed the radar sensor from the phone, but they are working on integrating this radar sensor into other products, including Nest for smart home applications. And all of this is really dependent on the aggressive progress in the past years in the semiconductor industry.

So for example, back in 2017 during the International Microwave Symposium, TI demonstrated their 77 gigahertz radar system, and nowadays this is quite popular for researchers doing radar IoT smart application research. And Infineon is the company that provides this radar to Google; they demonstrated their 60 gigahertz radar chip.

So their chip was over there, connected to a tablet, and at the conference their engineer could train attendees in one or two minutes, and you could master a lot of hand gesture commands to interface with the tablet. Analog Devices also had their 24/77 gigahertz systems.

In 2021, Infineon actually put the system into a module, very smart, about one third the size of a business card. With that you can plug it into a laptop, and a lot of researchers not familiar with hardware can do research using radar sensors.

And I mentioned that if we increase the carrier frequency we get higher sensitivity. So here, this student first demonstrated that using a 125 gigahertz radar system he could achieve a sensitivity as high as 95 nanometers. And using that, he pointed the radar towards his throat and said one sentence: "I am a boy."

The right-hand side shows the spectrogram detected by the radar using two different methods. You can clearly see "I am a boy." And on the bottom side is the detection result coming from the microphone of a smartphone. We can still see "I am a boy," but you will see the background noise over there, which is due to the ventilation fan in the room. So basically, you can achieve highly directional detection of small motions related to human bodies.

Another big application of radar is you can easily detect the distance. So in this demonstration we had a radar in the center of the room. We scanned the room full 360 degrees. And after that you can see a bare layout of the room. This is not awesome because the resolution is not high enough.

But what is really wonderful is, as I mentioned we have vital doppler and using vital doppler we can easily differentiate between human subjects and other stationary objects. So with very little computation we can achieve human aware localization.

And the next page shows a demonstration. This is supposed to be a video, and I hope that it is going to play. In this video we basically have two people walking back and forth in this very narrow corridor, with radar, micro-Doppler, and range-Doppler. What we can see here is, if you focus on the lower left, the two people walking back and forth.

And all of the clutter is naturally compressed into the horizontal line, because in this video we plot the Doppler as the vertical axis and the range as the X axis. So all the non-moving clutter is on this horizontal line with zero Doppler, and we can easily extract the human information. This experiment was done with a smart radar device at 5.8 gigahertz. As you can imagine, if we push the frequency higher we can get higher resolution, showing where the people are without using cameras.

And to leverage that, we can think about healthcare applications. So one demonstration here is consider the scenario of a person who is falling. If a person is falling towards radar, the distance or the so-called range is going to reduce. The doppler will increase, because the speed increases. And the signal will become weaker as the person lies down.

So basically, we leverage RCS (radar cross section), range, and Doppler for fall detection. If you plot the result on a Doppler-range video like that, you will see the dot moving this way: reducing range, increasing speed, and weaker signal. If the person falls away from the radar, you will see the dot moving in the opposite direction. And the bottom side shows another case, where this person made a small move, not a fall, so the difference is very significant.
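To make the logic concrete, here is a toy, rule-based sketch of the kind of decision described above. It is purely illustrative: the thresholds and the helper name are invented, and the actual work uses machine learning rather than hand-set rules.

```python
# Toy fall-toward-radar check over one event window of radar features.
# A fall toward the radar looks like: range drops, Doppler (speed) spikes,
# and the echo (RCS) weakens once the person is on the floor.
def looks_like_fall(range_m, doppler_hz, rcs_db):
    """Each argument is a short list of values over the event window."""
    range_drop = (range_m[0] - range_m[-1]) > 0.5            # moved >0.5 m closer (assumed threshold)
    speed_spike = max(abs(d) for d in doppler_hz) > 40.0      # fast motion (assumed threshold)
    echo_fades = rcs_db[-1] < rcs_db[0] - 6.0                 # return weakened by >6 dB (assumed)
    return range_drop and speed_spike and echo_fades

# Example: a plausible fall-toward-radar window versus a small movement.
print(looks_like_fall([2.4, 2.0, 1.6], [5, 55, 20], [-10, -12, -18]))   # True
print(looks_like_fall([2.4, 2.4, 2.3], [5, 15, 10], [-10, -10, -11]))   # False
```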

The next page shows an experimental demonstration. The top row shows all frames of this type of radio frequency video. You can see the signature start in the upper left, move towards that direction, and then disappear. That’s the first row. In the middle row the signature moves in the opposite direction. The bottom row shows experimental results of a small jump.

And in order to make this really applicable, you have to think about the continuous monitoring situation. In that case we can extract the location of that energetic point in the range-Doppler domain.

And we plot it in the range-Doppler-time frame, which corresponds to the lower left, so we have Doppler information, we have range information, and we have time. Here we see the dots moving around, and by applying machine learning it is possible to characterize and classify different activities in a continuous environment.
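A minimal sketch of the "energetic point" idea (not the actual processing chain): for each range-Doppler frame, take the strongest cell and log its range, Doppler, and time, so a classifier can work on the resulting trajectory. Array shapes and the function name are assumptions for illustration.

```python
import numpy as np

def energetic_track(frames, range_axis, doppler_axis, frame_rate):
    """Return a (time, range, doppler) trajectory of the strongest cell per frame.

    frames: array of shape (n_frames, n_doppler, n_range) of magnitudes.
    """
    track = []
    for i, frame in enumerate(frames):
        d_idx, r_idx = np.unravel_index(np.argmax(frame), frame.shape)
        track.append((i / frame_rate, range_axis[r_idx], doppler_axis[d_idx]))
    return track
```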

And now we come to my favorite page, because this research was done by a high school student, Rachel, from Irvine, California. She stayed in my lab in the summer for seven weeks. What she did was use the radar to look at a person in a car while the person is driving. When we drive, sometimes we pick up phones, we look around, we dance with the music. So these are different activities.

But there is one behavior that is really dangerous, in the upper left: if the person feels sleepy, the head will drop. So Rachel found out that if you look at the Doppler as a function of time, different behaviors are going to give you different signatures, and using that you can differentiate those things. She published the paper at IEEE Radio Wireless Week, and after that a PhD student took over the work, applying the data to a machine learning program and demonstrating that we can automate this type of detection.

So Rachel actually did this in 2018. She joined MIT in 2019, and this year she finished her degrees in ECE and biomedical engineering, two degrees, and she was selected as a Marshall Scholar to continue her training at Oxford in biomedical applications and medicine. She also interned with NIH in one of the summers.

So the next page is another work done by a high school student, Hannah from Philadelphia. So last summer she stayed in my lab for seven weeks. And she likes to play violin, so I said why don’t you bring this radar into your home.

So the setup is shown over there. She had a small radar looking at her while she was playing violin. With the radar she was able to record different data, and she also applied this to machine learning. Two days ago her paper was officially published in the IEEE Sensors Journal, in the May issue. She was also admitted to MIT last November; she got early admission.

So what I really wanted to point out is that over the past ten years there has been aggressive progress in system engineering, integration, and semiconductors. Because of that, this type of research is definitely doable for people not familiar with semiconductor engineering or radar systems.

And the last page on research I actually stole from J-C. So microwaves cannot only be used for remote detection; you can also use them for wearable detection. Here I just showcase a simple example J-C’s group is doing: they are using a microwave sensor to detect dehydration. And you’ve got to talk to him about how interesting it was for the researcher to be dehydrated in order to carry out this research.

As for the future outlook, the advantages of a microwave system are that it is noncontact and continuous, there is no image, so it protects privacy, and it is non-line-of-sight. And we are also looking at other applications. This is a recent result we are going to present at the International Microwave Symposium: we had two radar sensors on a glasses frame to monitor eye movement. Hopefully this can be connected with applications for people with neurodegenerative disorders. And finally, thank you so much for your attention, I really appreciate it. With that I will introduce our other speaker, from the other side of Texas. The next speaker is Dr. Roozbeh Jafari from Texas A&M University.

ROOZBEH JAFARI: Thank you. I just want to share my gratitude with Dr. Bennett, Dr. Schloesser, and Ms. Robinson, for the invitation and organizing this wonderful event. It is wonderful to be here. My talk is broader today. I will talk a little about digital health and wearables.

And I am going to start with the motivation for wearables. Watches are one of the first wearables that we started carrying with ourselves. The reason that we have a watch is not because we want to carry a piece of jewelry or electronics; it’s because we want to know the time.

When we look at the history of watches, we used to have these pocket watches, and they were small enough that for over a century you actually could have strapped them on your wrist. But at the time, society was unfortunately mostly male dominated, and they felt it would be a piece of jewelry to put a watch on the wrist.

And it was not until World War I, when the soldiers realized that hey, I’m on a horse, I need to synchronize myself with the rest of my team, it’s much easier to look at the time here as opposed to getting it out of my pocket.

And then right after that we went into the industrial revolution, and there was the notion of using time to synchronize ourselves. Time is actionable. I look at the time, I get up, go pick up my daughter, or get to the next meeting. And the interesting part is that it did become a piece of jewelry. A watch that is $15,000 is not about the time.

So, the work that we do in this field collectively, and I have had the opportunity to interact with the digital health community as the inaugural chair of the Digital Health Study Section, is always driven by actionable information. And the information needs to be accurate and reliable to enable precision health.

I just want to bring up the fact that I’m using the words precision health and not precision medicine. It creates access, and the scale is really important. We have a little bit of a chicken and egg problem here too: a lot of technology development is based on AI, the data for AI comes from all of us, and we have to convince individuals to capture that data.

But at the end of the day, at the onset you don’t have the value proposition to wear these sensors. So these are some of the important directions where we rely on the investment and the vision of NIH, ARPA-H, DARPA, and many other institutes that would help us to take it over that barrier.

One thing I would like to share with you all is that the purpose is never to replace the medical diagnostic process. Physicians and nurses, highly intelligent, highly trained individuals, are not going to be replaced, nor is the sophisticated equipment that we have in the hospital. The purpose is to use this Oura ring that I’m wearing here to detect the pre-symptoms of COVID-19, the onset of a disorder, so you can take quicker action, or, when you go through an intervention, to assess how well it is working.

So again, this is a collective view of the discipline, but at the core of digital health we have the human state. We are trying to read from the human to determine the physiological, behavioral, or contextual state. The work that we do is always driven by actionable information. In my career I have built a bunch of hammers looking for nails that didn’t work out that well.

So, building sensors. I’ll talk about some of those today. A good portion of these studies are actually sponsored by NIH: working with electronic tattoos, building sensors for blood pressure, using commercially available devices, understanding what it is that we sense. Believe it or not, there is still a lot of information that needs to be captured. Signal processing is typically about extracting the signal and removing the noise, and analytics is typically about personalization: how do you compare apples to apples over time, and how do you build algorithms that are customized for each of us.

So let’s talk a little bit about blood pressure. When we started this project back in 2013, I didn’t know why blood pressure was so important, so I am going to go over this very briefly. At the core of our cardiovascular system we have the heart. When the heart is beating, we have a heart rate, and then we have a stroke volume. The combination of those two gives us the cardiac output, that is, how much blood is leaving the heart per minute.

Now, our cardiovascular system is extremely amazing. When the blood goes into the arteries, the arteries have what is called peripheral resistance, or peripheral compliance. They are not like a solid pipe; they expand a little bit. The leading cause of elevated blood pressure, hypertension, is typically not actually the heart; it is the arteries. They become stiffer. And over time, because the body is a feedback system, the heart also becomes a stronger muscle, and we end up with sustained hypertension.
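For reference, the textbook relations behind this description (not from the talk slides) are that cardiac output is heart rate times stroke volume, and mean arterial pressure is approximately cardiac output times total peripheral resistance:

```latex
\mathrm{CO} = \mathrm{HR} \times \mathrm{SV},
\qquad
\mathrm{MAP} \approx \mathrm{CO} \times \mathrm{TPR}
```

This is why stiffer arteries (higher resistance, lower compliance) and a stronger heart together sustain elevated pressure.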

Blood pressure is a time series signal. We get a signal that will look like this if you look at the left ventricle. The blood pressure changes with every cardiac cycle. If you’re lucky we capture this once or twice, and we only capture the systolic and diastolic. There is a significant need to look at the entire wave.

It is important because BP, blood pressure, is the most important biomarker for heart disease, and heart disease is the leading cause of death, with a significant financial toll. The work that we are doing is also going in the direction of measuring advanced hemodynamic parameters, including arterial stiffness. We talked about it last night. You might have heard, stress kills. We don’t know how, but it has to do with arterial stiffness.

This is another picture, a favorite of mine, that I received from Dr. Arshed Quyyumi at Emory School of Medicine. This is all of us. We go through this process. When we are in our 20s, that’s where we are on the left, and then as we get older we go through this. You want to be able to measure that and characterize it.

So the blood pressure cuff was invented in Austria in 1881. The difference between the original blood pressure device and what we have right now is only that the new one doesn’t have mercury, because it’s toxic; the technology is the same. You place a cuff on your wrist. You inflate it until you do not hear the tone. You start deflating it, and when you hit the point where you hear the first tone, that’s your systolic blood pressure. At some point you do not hear the tone anymore; that’s your diastolic. And when you hear the maximum tone, that’s your mean arterial pressure.

This technology doesn’t work as well because you get very infrequent measurements. We have what is called masked hypertension: you take the blood pressure and it is not high. Then I go see a doctor and my blood pressure goes up because I see a white coat, white coat syndrome. And then there are studies, this is a Dublin study that looked at ABPM, ambulatory blood pressure monitoring, at night time, and they noticed that the night time blood pressure measurement is actually the best predictor of cardiac disease and outcomes.

These are ABPMs, the device that you put on. It gets activated every 30 minutes. It is actually fairly inconvenient. I have used it myself to understand the limitations. Sometimes it wakes you up, and your blood pressure goes up because you’re upset.

So our take on this has been to use bioimpedance and measure the blood volume changes. We use four-point sensing here; I don’t need to go into the details of it. But we extract a number of parameters, including the pulse transit time in the radial artery. And then you get a signal, and the bioimpedance would look like this: it’s a time series signal with a diastolic phase, systolic phase, inflection point, and so on. And then we want to take this signal and convert it back to the blood pressure.

Many of us have these PPG based devices; they measure the pulse arrival time. The limitation of PPG and optics is that light cannot go deep into the tissue; you get about four or five millimeters of penetration if you’re lucky.

If you look at the fluctuations of the blood pressure, this is the blood pressure in the left ventricle and in the arteries, but when you get to the capillaries at the surface of the skin you no longer have the blood pressure, you only have the pulse rate. And it is not okay to try to capture blood pressure from the capillaries.

On the other hand, bioimpedance can go as deep as we want it to. And there are two other important characteristics of bioimpedance. One is that it is not sensitive to skin tone. With optics, the skin pigment will absorb the light and the signal to noise ratio drops; that is sort of an equity issue created at the onset of sensor development. Also, if you look at somebody who has a higher BMI, the thicker layer of fat is going to block the light. Those are some of the challenges that bioimpedance does not have.

So our take on this is that we want to do a sort of imaging. Think about this as a camera. We have an array of these sensors, we do imaging, we try to figure out where we need to focus, and then we capture readings from different cells.

The program looks at building sensors. It also looks at the data quality and augmentation, garbage in garbage out, in any AI algorithms, that’s a really critical piece. I’ll talk a little bit about the digital twin today, and the digital twin applies both to the development of electronics, how do I build the sensors, where do I place them, what should be the sites, as well as the underlying physiology.

And then the machine learning algorithms, the localization techniques, how do I do source reconstruction, and then the blood pressure estimation. We build electronics quite a bit. I prefer not to do it, but we have to do it because often times we have to really push the performance of our electronics.

Our experiments are fairly sophisticated. In the background we have a Finapres device that measures the blood pressure continuously, a $50,000 machine. We introduce a number of maneuvers, pressor tests, exercise, and so on, and we capture measurements from our technology.

I am not trying to discount this field, but I have this example from the vision/robotics community. You are trying to build an algorithm that will detect a stop sign. So we have to have 10,000 instances of a stop sign captured at different angles, different light illumination, different type of cameras.

It’s a hard problem, but it’s not rocket science, it’s more of a heavy lifting. What is the equivalent of this when we think about cardiovascular health? How do I create 10,000 instances of heart failure? I would be lucky if I have three. And I can’t even have three, I have to start with animal models.

So the data that we have in this discipline is very hard to acquire, and highly imbalanced. We have a lot of healthy datapoints, whereas the disease state is often much more challenging to capture. Everybody is different, and you have a lot of confounders. And in many cases you don’t even have the disease state. I don’t have atrial fibrillation, so how do you build an algorithm to detect atrial fibrillation?

So, digital twin. Who knows about digital twins? Everyone? Okay, great. This term was coined by NASA in 2010, but they have been practicing it since the ‘60s. NASA runs a lot of experiments that are highly mission critical, and you get only one shot. You are sending a man or woman to Mars, the rocket cannot fail, and it takes two years to go through that process. The rocket also goes through aging, so you need to make sure that the rocket is being tested under different conditions. The idea of a digital twin is to create a computational model that remains persistently coupled with the physical system, so that you can put it under different tests.

Let’s look around. What is the most important motor that you see in this room? Our hearts, our cardiovascular system. So we need to have digital twin for that as well.

So I am going to skip through this. Really the excitement is all these novel sensors and the AI coupled with the model. The models have been around for a long time, and these models are fairly comprehensive. But the only way that this becomes actionable is when they are personalized. So we need to be able to do this. And that goes into the direction of digital twin.

So in the interest of time I am going to skip through this. I have this example. Think about this as a spring, and you would like to know where the mass is. If you happen to have this equation, mass times acceleration, the damping term, as well as the spring constant, you can write it out and figure out where the mass is.

But let’s say I don’t have this, and I’m going to use a neural net and try to train it. I have data points up until today, where I’m standing here in front of you. I train the neural net based on the classic loss function, minimizing the error, but the neural net has no idea this is a spring. Once I take this model and apply it in my penalty function, now the neural net understands what you’re thinking about. And that’s the kind of mentality that we have when we think about these physics-informed neural networks.
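A minimal sketch of that idea (not Dr. Jafari's actual code): take a damped spring governed by m*x'' + c*x' + k*x = 0, and build a loss that combines the usual data-fitting term with a physics residual evaluated on the model's prediction. The `predict` function, constants, and weighting are all assumptions for illustration; any differentiable model (for example a neural net) could stand in.

```python
import numpy as np

m, c, k = 1.0, 0.3, 4.0          # assumed mass, damping, spring constant
t = np.linspace(0, 10, 500)
dt = t[1] - t[0]

def physics_residual(x_pred):
    """Finite-difference estimate of m*x'' + c*x' + k*x along the prediction."""
    x_dot = np.gradient(x_pred, dt)
    x_ddot = np.gradient(x_dot, dt)
    return m * x_ddot + c * x_dot + k * x_pred

def total_loss(x_pred, x_obs, obs_idx, lam=1.0):
    # Data term: match the few observed points we have (up until "today").
    data_term = np.mean((x_pred[obs_idx] - x_obs) ** 2)
    # Physics term: penalize violations of the spring equation everywhere,
    # including the future, where there is no data at all.
    physics_term = np.mean(physics_residual(x_pred) ** 2)
    return data_term + lam * physics_term
```

The point of the penalty term is exactly what is described above: even where the data ends, the model is pushed toward trajectories that behave like a spring.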

So we have been working in this area quite a bit. These are some examples of our technologies. This is the benchtop version; this one is sponsored by NIDID, this one by NHLBI. This is the newest one, which is in a ring form factor and will be sponsored by NIBIB. We’re actually running this study at the Morehouse School of Medicine in African American communities because of the limitations that I mentioned to you earlier.

So one idea of capturing the physics part of the blood pressure is to understand the relations. We know that the blood pressure has correlations with blood volume, pulse transit time and heart rate, and we can capture that from the bioimpedance.

Some of these features correspond to the blood volume. This corresponds to the pulse transit time, which is the reflected pulse: when the pulse is traveling through the artery, it goes through the artery, it hits the end of it, and it comes back. As the pulse transit time goes down, the pulse comes back faster. So we can capture it here as well, and then the heart rate also has correlations with the blood pressure. So this is the idea that Kaan had, one of my PhD students who just graduated. And the idea was to use a neural net.

A neural net is really good, compared with a conventional function, at estimating derivatives. We captured a bunch of derivatives based on these features, and then we used the Taylor series. And when you use the Taylor series, you can actually train the model to work really well. It turned out to be very effective with a minimal amount of data, reducing the data by a factor of 15, and it works across all of these different versions: the electronic tattoos, the ring, and the benchtop version.
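A loose, hypothetical sketch of that Taylor-series idea: express blood pressure around a calibration point as a first-order expansion in features obtainable from bioimpedance (a blood volume proxy, pulse transit time, heart rate). In the real work the partial derivatives would come from the trained neural network; here they are just placeholder constants, and the function and feature names are invented for illustration.

```python
def bp_taylor_estimate(features, calib_features, calib_bp, partials):
    """First-order Taylor approximation of BP around a calibration point.

    features, calib_features: dicts with keys 'blood_volume', 'ptt', 'hr'.
    partials: assumed partial derivatives dBP/dfeature at the calibration point.
    """
    delta = {key: features[key] - calib_features[key] for key in features}
    return calib_bp + sum(partials[key] * delta[key] for key in delta)

# Example use with made-up numbers (illustrative only):
calib = {"blood_volume": 1.00, "ptt": 0.20, "hr": 60.0}
now = {"blood_volume": 1.05, "ptt": 0.18, "hr": 72.0}
partials = {"blood_volume": 30.0, "ptt": -400.0, "hr": 0.3}   # assumed values
print(bp_taylor_estimate(now, calib, calib_bp=120.0, partials=partials))
```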

The neural net ends up giving us fairly accurate results, and it doesn’t overfit, whereas the traditional model would overfit. The black here is the gold standard, the green is what we are capturing from our physics-informed neural net, and the yellow is the conventional model.

So this also motivates the connection to the sensor. One of the important aspects of this kind of work is really understanding the underlying physics. When I place the sensor on the skin, I want the sensor to stay at the same location, that motivates the notion of electronic tattoos and graphene, this is in conjunction with my existing work with Deji Akinwande in UT Austin, where now the model starts really understanding this, and the graphene is going to stay always at the same location. So once the model learns the physics it is going to stick to it.

Here is another example from Dr. Yu at Penn State. They have this wonderful technology called drawing on the skin, where you can literally draw your sensors on the skin. How cool is that? So we talked a little bit about actionable information, an end-to-end holistic view, and personalization. One of the things that I would like to bring up is that the abstraction that we build is the key.

When we look at the first flights, the Wright Brothers, it was a 12 second flight. A 12 second flight is not a significant flight. But that flight helped us understand that we can go from DC to Beijing, we can travel at Mach 5, but that plane cannot go to the moon. So that’s really what we are trying to do when we think about these paradigms. And then we need tools, and we try to simulate.

I would like to acknowledge my sponsors, my wonderful collaborators. This is my team. My daughter, she is in the middle, research graduate students, we had a party a couple of weeks ago, and then I’m going to end with this quote from Dr. Feynman. Thank you.

Discussants

CARRIE ASHMONT: Thank you both for the great talks. At this point I would like to invite our speakers as well as our discussants, Drs. Edwin Kan and Raji Baskaran up to the stage for some questions. And while they are getting settled, just a quick reminder, if you’re on Zoom please go ahead and enter your questions in the Q&A box. Thank you. Dr. Kan, would you like to get us started with a question?

EDWIN KAN: Let me ask a question for Changzhi first. Radar was developed to see what you cannot see. So imaging is forever a powerful competitor to radar. How do you see what you develop as either complementary to or better than imaging?

CHANGZHI LI: Thank you for the great question. I definitely don’t want to use the word competing; competition is always tough. I want to say it’s more complementary: radio frequency and camera imaging both have advantages and also limitations. Yesterday I actually had some discussion with NIH researchers here, and I learned that a camera is sometimes unwanted, especially in a home environment. If you want to monitor patients, privacy is a big issue. With that being said, this is something radio frequency guys could sort of advertise or brag a little bit about, because with microwave radio frequency you don’t have that visual image. And if you look at the presentation I made today, a lot of the things presented are sort of rough, and that is a limitation of RF and microwave.

The resolution especially, the angular resolution, is not comparable to imaging. It’s just like the car industry: if you buy a car, you’re going to see cameras, radar sensors, all types of sensors integrated. People may argue that Tesla removed the radar a few years ago, but the reality is Tesla is putting radar back, because they realized that without radar there are going to be a lot of limitations. So I would say collaboration, sensor fusion, would be a better approach from a system point of view.

RAJI BASKARAN: Thanks to both of you for really great presentations. I think there are some common themes that emerged, so I would like either of you to answer this. You each presented some version of vital sign measurement that is nontraditional, or what is upcoming in sensors.

But you also included some kind of data science, AI in the loop, to interpret this data, because it has a new modality, or maybe it is more continuous measurement, or you want to have some processing for interpretability.

Can you comment on whether, as a community, we can develop some figure of merit for when we collect data at this large scale especially, both continuous in time and across large populations; whether we should have some figure of merit for how we judge these that is not only about the physics, which I really appreciate from the physics-informed neural net approach, but also, if you do any data aggregation, a population-based figure of merit as well.

And can we kind of lead that, so that for the commercial solutions, like the optics issues that you both pointed out, skin tone, BMI, many things that are not really guaranteed to work as well, maybe we can move the whole ecosystem towards better solutions by putting out figures of merit.

ROOZBEH JAFARI: That’s a great question. I love this, and I don’t know if I have answers to that question. I would share this with you. My view on this technology development is not that they would actually be incorporated into wearables like this. It’s really hard to convince these large companies to adopt these technologies for a small population.

The view is that you would have some sort of patch, for example, that the patient, or individuals who can benefit from this, would wear two nights every two weeks, and when they go through an intervention the patch would effectively help us characterize some of those important hemodynamic parameters.

It's going to be highly dependent on the direction, whether you’re trying to treat hypertension, stress, or other disorders, baroreflex dysfunction or those kinds of things. But my general sense is that the advantage we have is that, as opposed to starting with a group and then going into phenotyping, we’re trying to start from individuals and then build it up from the bottom.

CHANGZHI LI: So I strongly agree with what Bruce Bower said. My personal experience was when I was a PhD student engineer, I thought an engineering problem is the hardest problem, and there is always a figure of merit to evaluate, quantify. But after I got married, after I got three kids, I learned that human beings are a lot more complex than engineering systems. That’s a totally different level.

And that is exactly why I really appreciate the organizers bringing, as I said, engineers together with NIH researchers, to learn about different concerns and discuss things from the system point of view. So I really appreciate that we could get some figure of merit, or quantifiable thing, to evaluate from a system point of view.

But I think a lot of research effort is needed to really get to the stage where we can clearly quantify. Or maybe there is no way to absolutely quantify, but then we need to brainstorm together to figure out a way to really evaluate the system and find the best systematic way to move the solution towards a better future.

CARRIE ASHMONT: Great. We are running a little short on time, so I am going to throw it back to Aaron for any online questions that we may have.

STAFF: Yes, there is one question from our virtual audience for Dr. Li. And the question is, is there a future for synthetic aperture in portable radar.

CHANGZHI LI: Great question. Synthetic aperture radar is amazing. Basically what I presented here is very low level, low resolution system. You can see the images are not really nice.

The advantage of synthetic apertures is if you move the radar and grab data from different locations, you can potentially integrate all the data together. And effectively you are looking at the target from a much bigger aperture. And in microwave antenna engineering with a bigger aperture, you get a much higher resolution. So this has already taken place.

For example, Sandia National Laboratories has done a lot of demonstrations using synthetic aperture radar. You can clearly see the city, see a lot of things, from airplanes. But in the future I believe we can integrate this also in an indoor environment to benefit human beings; for example, we can scan our room from different locations, maybe using a drone or some other platform, and get high resolution results. So synthetic aperture radar is something the radar community has been actively looking into.

JOHN NGAI: John Ngai here from NIH.  Thank you both, speakers, for exciting talks on the panel. I have a question for the panel. Originally, I was thinking for Dr. Li, but perhaps everyone on the panel could address. Could you talk a little bit about the relative advantages, disadvantages, and different applications of say radar and infrared imaging, and specifically LIDAR?

CHANGZHI LI: I believe there are also researchers here doing cameras and LIDAR. As we discussed a little bit, different systems definitely have their own pros and cons, so I think integration is something we can look into, along with complementary application scenarios. My feeling is radar is really lagging behind if we compare it with the development of other optical technologies. There are a lot of things we can address.

ROOZBEH JAFARI: I just want to add one comment here when we look at this diverse group of technologies. Radar can go deep, and that is really important. This standoff monitoring paradigm is extremely convenient and comfortable; that’s the advantage.

But the disadvantage, including with some of the vision-based technologies, has been that, for example, when you’re looking at a face and trying to focus on specific pixels to pick up the heart rate, you have to have anchor points. They look at the nose, they look at the mouth, to figure out where that pixel is and keep it at the same location. If you move, things are going to change.

So there is also this push to move some of these technologies as close as possible. I mean, we have one of the leaders here, Dr. Rogers, who has done a lot of work in this area. Using even ultrasound under the skin in the form of an electronic tattoo has also been one of the exciting things, because now the reference with respect to the body is no longer moving. And that would really push and help us to improve the fidelity of the signal and extract more important hemodynamic markers.

RAJI BASKARAN: The more personalized you want it, I think, the more you want sound and light closer to the body. Something like radio frequency is good for context, so things like what’s happening around the room or around the environment, because it is 3D, but it doesn’t have that directionality. So think of it like radar for context, lidar for directional context, and then ultrasound and audio as you come closer to the body from a human perspective. That may be one way to think about the overall picture: they all serve different functions.

CHANGZHI LI: I just want to quickly add that for microwave and radio frequency, near field detection is also possible. Actually, Professor Kan and Professor Chau both have nice results with near field detection, which is not absolutely in contact but still very close to the subject.

CARRIE ASHMONT: Thank you. I believe we are at or perhaps past time. So thank you all very much, and I believe we are going to break now.

STAFF: Thank you. The time is 10:51, we will be returning at 11:00.

(Break)

YVONNE BENNETT: Welcome back everyone to our BBQS Workshop as we call it. And welcome to session four. At this time, I would like to introduce my colleague Dr. Ming Zhan as the moderator for session four.

Dr. Zhan is a program officer at the NIMH and is team lead for the NIH BRAIN Initiative Informatics Program, which supports the creation of data archives, the establishment of data standards, the development of software tools, the conduct of secondary data analyses, and the Data Coordination and Artificial Intelligence Center of the Brain Behavior Quantification and Synchronization program.

Prior to joining NIH, Ming worked as the Chief of Bioinformatics at the Methodist Hospital Research Institute. He was also an adjunct professor at the University of Texas at Houston and was Chief of the Bioinformatics Unit at the National Institute on Aging. His research included developing and applying computational and systems biology approaches in the cancer field. Please welcome Dr. Zhan.

Session IV: Considerations for Sensor Data Standardization and Archiving, Security and Privacy

MING ZHAN: Thank you so much. So in this session we’re going to focus on sensor-related issues in data standardization, data archiving, and the security and protection of the data. We have four speakers making three presentations, followed by discussion. The first presentation is by Maria Palombini and Bruce Hecht from IEEE. They’re going to talk about wearable sensor standards.

The second presentation is by Dr. Sasha Ghosh from MIT, who is going to talk about transformative potential and challenges in open data and computation in neuroscience.

The third presentation is by Dr. Oliver Rubel from Lawrence Berkeley National Laboratory. He is going to talk about behavioral data standardization.

Joining us is also Dr. Gregory Farber, Director of the NIMH Office of Technology Development and Coordination, who will be a discussant in the following discussion. Each presentation is going to have 12 minutes, and then each speaker will introduce the next speaker. With that, let’s start with the first presentation, from Maria and Bruce.

MARIA PALOMBINI: Good morning everyone. I am really excited to be here at the NIMH. I just want to say that myself and my fellow panelists are what stand between – really important here. I represent the healthcare and life sciences practice at the IEEE Standards Association, and I was delighted to see in these earlier presentations that a lot of the publications are coming through our various journals.

Many of you are probably already familiar with the IEEE, but it is really important that we focus on one core mission, and that is advancing technology for humanity. The majority of our work, whether it be standards, technical activities, or papers, and how we move technologies forward, is always with that in mind.

With that, again we are the world’s largest technology association. We are global. We have staff offices in three different continents, and obviously we have members in 160 plus countries. Many of you probably attended at least one of our technical conferences over time, and have read many of our papers.

So, a little bit about the Standards Association. Obviously, we have many different paths to standards and many types of programs. A lot of the work that we do is in the healthcare and life sciences practice, and one of our chairs, Bruce Hecht, is here with us and will talk a little bit about some of the work that we are doing. But we are really looking at how we can incubate ideas for potential standards.

Some people come to us and say, hey, we have a good idea for a standard, and some people say, we need a little more groundwork, we need a little more support, we need a sandbox. And so this is what we do, in what we call industry connections; this is really why we call it an incubator. Then we have our traditional standards, conformity assessment programs, registries; there is just a whole plethora of opportunity. The big one, the newest in our toolbelt, is open source. We do open source standards, and we have a very robust open source platform.

So the healthcare and life sciences practice focuses on three major branches, which are pharma and biotech, clinical health, and global wellness. And essentially the volunteers, pretty much across the globe in these three areas, really look at how we are going to drive adoption of and trust in these technologies and these applications across the healthcare value chain. So we are looking everywhere from bench to bedside, which is healthcare delivery.

So today, since we only have 12 minutes, we’re just going to talk about a case study that we’ve been working on. We could take a whole day for it, but we’re going to try to just give you the highlights of one of our recent open source standards, 1752.1. We think this is an opportunity that fits into a lot of the discussions we’re having today, and we’ll have Bruce talk a little bit more about the criteria we’re considering, the potential applications, and the end user impact.

So, 1752.1 is actually part of a family of standards for mobile health data. It is an open source standard, and the one that was published is on the representation of metadata for sleep and physical activity measures. This is what we call the base standard. From there we went into the next therapeutic area for wearables, the standard for cardiovascular, respiratory, and metabolic measures, and I’m going to show you what the metadata for that kind of thing looks like.

So there is a baseline schema here from this particular standard that can now be totally adopted, or manipulated for lack of a better term, to say, okay, we have a cardiovascular application, we have a brain sensor application, we can have mental health, digital mental health. So this schema is a very strong baseline for wearables.

So this is courtesy of Open mHealth. Open mHealth actually chairs the 1752 family of standards. And it really lays out the metadata landscape. We look at wearables, regardless of whether they’re in-body, on-body, or around the body, and how the data stream pretty much goes to the cloud and then can go into an EHR or eventually into a clinical trial management system.

The idea of 1752.1, so I’m going to show you – that’s me. This was sort of a case pilot that we have been working on. The idea is to look at before the standard was applied and after the standard is applied. So we take the use case of sleep. Pretty much the question is, how long do you sleep? And there are four different ways that we can quantify the time or duration of sleep, so we see terms like sleep duration, total sleep time, time asleep, and sleep time. Which one is right?

Well, in this case all of these different variations on how these things are calculated just create a problem when we’re trying to pull this data in for research. It’s just a simple semantic problem. So what the team did, pretty much through the development of the standard, was basically create a mapping of these terms, to come to a single definition, say, the full amount of time spent asleep.

So the metadata coming from these wearables is now streamlined. The semantics are clear and the metadata is complete. And basically, this allows the data to now be used for clinical research. I’m making this very simple, because the data comes from the wearable, goes through an aggregator, and then it gets into the clinical research realm.

But the important key here is the idea that we basically streamlined the semantics of just the duration of sleep, and that was the point of the standard. But now this can be translated. This is the schema, which is available on GitHub; it’s open source. Basically this is the schema that we can follow now for your cardiovascular wearables, or other therapeutic application wearables, regardless of on-body location.
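As a simplified, hypothetical illustration of what such a harmonized data point can look like (this is not the actual published IEEE 1752.1 schema, which is available on GitHub; field names and values here are assumptions for illustration), the idea is that every vendor's "sleep duration" variant maps to one measure with explicit units and an explicit time frame:

```python
# Hypothetical, simplified sketch of a harmonized sleep-duration data point.
sleep_data_point = {
    "header": {
        "schema_id": "sleep-duration",       # illustrative identifier
        "source": "wearable-vendor-x",       # hypothetical device/vendor
    },
    "body": {
        "sleep_duration": {"value": 437, "unit": "min"},
        "effective_time_frame": {
            "time_interval": {
                "start_date_time": "2023-05-02T22:45:00Z",
                "end_date_time": "2023-05-03T06:30:00Z",
            }
        },
    },
}
```

Once every device emits this shape, data from different manufacturers can be pooled and compared directly.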

So this is the way around the wearable situation when we talk about interoperability of the data: how we are going to port the data out, how we can get the data out of a wearable from just monitoring into clinical research mode.

Basically, some of the benefits: the big thing is the reduction of data friction, which is a really big one. It is the idea that we can now also take commercial health wearables and try to give them a platform to migrate into the clinical research space.

This is also a standard that we have talked to the FDA about. It has not gone into policy, but it is recognized. So if a wearable, or the data being used for research, says it is following 1752.1, which is the published standard, it will be recognized and accepted in submission review.

But I think the most important thing here is that the data collected from various devices, regardless of who the manufacturer is, regardless of the proprietary algorithms embedded within them, will have the same measure, and it can be pulled for analysis. This is really critical towards moving wearables out of their current category.

So with that I would like to bring up Bruce to talk about some of the key considerations for the standardization.

BRUCE HECHT: Thank you very much. And I would like to thank Ming Zhan for hosting this session, and of course to Dana and Holly and Yvonne for having us here in this wonderful location. As you have heard there are many questions that we are seeking to answer, and we are looking for ways to do collaboration.

So I want to build on what Maria has shown you about how standards can be developed, and we are inviting everyone that is here today in the room, and those who are online, to help put forward standards that would be helpful in realizing the understanding of the brain, as well as applications in clinical practice.

So one example could be if there is a combination, and this builds on one of the three key themes over the last two days, which is multimodal. So what if there are different types of sensor modalities that are being collected about a particular question of interest.

And this one here that is being shown could involve temperature, pH levels, and pressure monitoring. This could have a variety of applications, for example if there is a pressure buildup inside the brain, either due to disease condition or possibly due to concussion or other types of traumatic brain injury. And so in this particular example it would be a desire to combine those three sensor modalities.

And then of course what we want is not only the data, but how it can be applied in its use case, and we might be considering the compatibility of the data and how interoperability will occur, both in the collection and in how it is shared through electronic medical records or other systems involved with patients.

Issues of quality. Sometimes in the systems engineering world we call these the ilities, like reliability and quality, and those would also include safety and security, privacy of the information, and how this connects with ethics, which has been another recurring theme of the past two days. Those questions are both about respecting the individual and about how we can gain collective information that helps make our society stronger and gives us the capacity to improve health for our communities.

And then what happens if we’re introducing other measures? For example, EEG is one of the most common ways that we have access without being invasive into the brain. If we want to collect that information, which might have a lot more time specificity to it, how can we use the design of the standard to help make that work?

So as you’ve seen, this 1752 series has extensions in it. So what would they be? In the prior example that Maria showed, the question involved sleep, and so there was an abstraction there about what might be important about sleep. In this particular example that is being illustrated, depending on the use case or the application, there could be different ways that the abstractions might be defined.

So in addition to that, we wanted to share this point about brain-machine interfaces. There is an initiative related to neurotechnologies for brain-machine interface standards, and this is taking the form of a roadmap. Maybe the second of the themes that I want to share, building on what we were just talking about with multimodal, is this question of roadmaps.

So as everyone in this room is part of the development of technology, we know that what is exciting about technology is not necessarily what it can do today, but how does it evolve over time. And to get that evolution also requires an investment, both the resources, which we thank those that provide financial resources, but also time and learning.

And as we are each seeking to advance the state of knowledge as well as to inculcate, to bring forward the capacities of our researchers, our engineers, our scientists, our practitioners, our caregivers, and also ourselves, which has been a theme, we are seeking to find out well how does that change over time.

And overall there can be acceleration in the sense that what we learn in one generation of technology could require a substantial amount of investment. But as we get that learning that then sets the stage of the next cycle. And if we can continue to see widespread adoption and use of these technologies, then we can produce a virtuous cycle, which is really what we’re seeking.

Because if we are progressing in linear time we will have a difficult challenge to fill the needs that we see that are so important, those that we love and that are in our world. And so that idea of roadmaps, which is more generally being done across IEEE with the roadmaps committee and with the future directions committee. So here is one example we invite you to get involved with this as well, thank you.

So I just wanted to sum up. You’ve got my contact information here, myself as well as Maria. We’ve got global programs. As I mentioned, I’ll just add one more plug for one of the programs that is currently underway, the industry connections program for telehealth. We’re looking at what the technologies are, as well as the application challenges, involved in delivering telehealth from the hospital to the home, and other ways that people can get the benefit of health wherever they are.

And so with that I would like to thank everyone, and I look forward to the conversation. It is my pleasure to introduce next Dr. Satrajit Ghosh, who is at MIT and the McGovern Institute and also with Harvard Medical School.

SATRAJIT GHOSH: It is a pleasure to be here. I have learned so much over the last day and a half. All the things that have been presented, I am really excited. And for those of you I have spoken to, I want to put all the sensors on myself. I am looking forward to getting the sensors. I am happy to be your participant in progress. I consent to my data being shared openly as you do this.

I am going to try to tell you a few stories today. It is a little bit about my journey, but also where we are today at the intersection of neuroscience, neurotechnology, and neuroinformatics. As we see this evolution of the world, we have heard a lot of sensor talks over the last day and a half. They generate data. What do we do with it? How do we work with it? How do we put things together?

This is a slide I've been using for a little bit of time, kind of inspired by Randall Munroe, creator of XKCD, with this hypothetical question: What if all data in the world were instantly accessible? What would happen to society if that were the case? And I would say, at the current moment, it would probably be mostly useless. However, this gives me an opportunity to bring up something like this. We are in the age of AI and ML, we are doing a lot of things, so we have a lot of data. You keep working on it, and then keep working on it further. But hopefully we can change this world to something a little better.

But today I am going to talk about two different dimensions of this. I am going to first look at infrastructure, standards, computing, the ecosystem of tools and data that we live in, and the training that is needed around it, but through the lens of a few other dimensions.

The first thing is availability. This cuts across all of these things. It’s not just data. We often talk about data as being the thing that we need to make available. But it’s about standards.

We just heard an excellent presentation from Maria and Bruce talking about the evolution of standards. Computing is changing on a daily basis; are we going to be able to tap into what's available as these dimensions intersect? The same thing with ecosystems and training: how do we become part of communities that can evolve these things? There was an open call that Bruce laid out for helping with some of the standards development work.

Another thing is discoverability. This is something I have realized as our group has spent some time on it. Often we live in echo chambers, and things are hard to find simply because of where they sit. We have all the search engines of the world, but it's hard to find things unless you're in the right group at the right time.

Finally, the third dimension of this space would be usability. We're putting things out there, and we want to. I'm very much an open-source geek, I want things out there. But that doesn't mean that all of it is usable in the same way at the same time. We have to think about what usability means, and further, what someone might find usable and not usable, how we get that kind of information out there, and how we shorten the feedback loops in our systems.

So let me start with infrastructure. One of the hats I wear is that we maintain one of these BRAIN Initiative data archives called DANDI, which stands for Distributed Archives for Neurophysiology Data Integration. But we call it not just an archive but a collaboration space, as part of our goal is to help change some of the culture in how we think about data and compute together.

So this archive has grown, in just about three years, to storing about half a petabyte of data. And that's just the tip of the iceberg; if all of the data were instantly accessible, this would be much larger. And that is compressed, standardized, streamable data, for those of you considering what standards buy us. It's a fairly modern archive: you can get to the data programmatically, you can push to and interact with it programmatically, and there are compute resources that sit next to the archive that you can use to work on it.
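
To make the "programmatic access" point concrete, here is a minimal sketch using the DANDI Python client (pip install dandi); the dandiset identifier is a placeholder, and exact method names may vary across client versions.

```python
# Minimal sketch: list the assets of a dandiset without downloading anything.
# Assumption: "000000" is a placeholder dandiset ID, not a real dataset.
from dandi.dandiapi import DandiAPIClient

client = DandiAPIClient()
dandiset = client.get_dandiset("000000")      # hypothetical dandiset ID
for asset in dandiset.get_assets():
    # Print each file path and its size in bytes.
    print(asset.path, asset.size)
```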

However, we sample only a small part of what we might call the neuroscience space. And the good news is that the BRAIN Initiative and other resources are coming out that provide many different avenues for getting at different types of data in this space. So in the red box are all the BRAIN Initiative supported data archives.

But in addition to that there are a bunch of other archives. Actually, I didn't put OpenNeuro in that box, but OpenNeuro is also a BRAIN Initiative supported archive. And there are the biobanks being created by other consortia and cohorts around the world, which are giving access to a whole lot of data that is coming online.

So we have an opportunity, an incredible amount of resources and participant effort is coming in towards producing these data. Let’s make use of it. Let’s make them as good as possible so that we can do the kinds of science we want to do. Let’s bring in these new sensor technology datasets into this space, let’s see how they would be used by other people.

This brings me to the second component, standards. Now, we just heard about a process for getting at standards. I am going to give a little bit of a story here. So one of the types of data we've been asked to store in the archive is light sheet imaging of human brain tissue that has been stained for different proteomic markers.

And this dataset alone is about 70 percent of the archive right now. It is a lot of data. It is a whole human hemisphere, actually 60 percent of it right now, but I expect just this alone will scale up to about half a petabyte when all is said and done. And that is again just the very tip of the next generation of data that will be produced by some of these consortia and initiatives.

Why do I talk about standards in relation to this? Well, partly because when you think about standards you think about a bunch of different elements: data formats, data dictionaries, data identifiers, unique identifiers and definitions around the data formats. You have to describe data, and that's challenging, because when a standard does not exist we use English, our human communication language, to describe these things, and as we all know we are great at communicating; I think we all understand each other perfectly all the time. So hopefully we'll do a little better with a digital representation of information.

And often we find that there are no consensus data standards. This brings me back to Randall Munroe and how standards proliferate: there were 14 competing standards; how ridiculous, we need to develop one universal standard; well, now there are 15 competing standards.

So we try to do our best not to comply with Randall's observation, and we brought two communities together, the microscopy community and the brain imaging community. Both have been describing some of these things in their own terms and their own spaces, and we were able to bring these communities together, so instead of creating the 15th standard we tried to merge, say, the 14th and 7th standards in that space. And that allowed us to bring together a bunch of people who don't normally talk to each other.

There was also the notion of getting a community of people exchanging information across fields, which I think is important as we get into the complexity of neuroscience and look at the different kinds of information being generated. We need more people conversing and talking to each other about what information they're exchanging.

Let's move on to the third component, computing. We've lived in a world where we think of computing as: the data exists, we pull it down, we compute with it. Let's start pulling more data. This is the evolution of the Neuropixels probe; those of you doing neurophysiology will have seen some of these. It used to generate a terabyte of data in 13 hours of recording.

The next generation probe increased the channel count significantly, an order of magnitude increase. That changes the dynamics of data generation. The one that is in development, I have been told, will generate about half a petabyte of data per grad student, per postdoc, or per experiment. That changes the dynamics of computing with these things. It changes thinking about where that data might live and how that data needs to grow.

This is an example of how long it takes to move data from one place to another at different connection speeds. And different institutions and places where data are being generated will, at least in the near future, have some of these limitations come into play.
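
As a back-of-the-envelope illustration of that point, here is a small calculation of idealized transfer times; the sizes and link speeds are just examples, and real transfers are slower once protocol overhead and contention are factored in.

```python
# Idealized time to move a dataset over a network link (no overhead assumed).
def transfer_hours(size_bytes: float, link_mbps: float) -> float:
    return size_bytes * 8 / (link_mbps * 1e6) / 3600

for size_tb, link in [(1, 100), (1, 1000), (1, 10000), (500, 1000)]:
    hours = transfer_hours(size_tb * 1e12, link)
    print(f"{size_tb} TB over {link} Mbps: {hours:,.1f} hours")
# 1 TB over 100 Mbps is roughly a day; 500 TB (half a petabyte) over 1 Gbps
# is on the order of 46 days, which is why where the data lives matters.
```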

The other side of this is computing itself. We are living in a world where AI and ML are changing at a very rapid rate. The training of GPT-3 alone was a significant burden on computing, and the numbers below only multiply from there. So we have to think about computation strategies and what we might do as a community to reduce computation as these datasets and other things grow.

But one way to think about this is through the lens of what we call data levels. This came out of the BICCN work: thinking about data levels as raw data, validated data, linked data, featured data, and integrated data. Each of those requires computational processes that the community needs to come together to agree on, or assemble into a shared set of processes, and often those featured datasets are significantly smaller. But that requires agreement in the community in thinking about what we bring together.

For example, in the neurophysiology community, do we agree on spike sorting techniques? Can we agree on spike sorting techniques? I don't know the answer here, but I encourage the community to come to terms with what we do to consolidate computation, so that we can reduce the burden of running the same models over and over again.

So this brings me to ecosystems. The BICCN ecosystem was a wonderful example of getting archives, computational folks, and scientists together: an ecosystem for delivering standards, with data archives that can house the data and connect the data.

A similar ecosystem, which you'll hear a little more about from Oliver in the next talk, is the Neurodata Without Borders ecosystem for neurophysiology. I've also been fortunate enough to be part of the NIBIB community and part of the NiPreps ecosystem, which is trying to consolidate neuroimaging data alongside behavioral and other data in a common way.

We also have to think of the participants who contribute data in this case. We have probably all heard the term FAIR, but we often don't hear about the term CARE. This is about understanding human consent, participant consent: participants are contributing data, so let's make the most of it as it comes together in these archives.

One of the things that we did in the human brain consenting process is to consolidate how we even approach consent across different studies. So the Open Brain Consent is one of these efforts; it allows us to create documents that different institutions can use, even to get data in.

Because as we heard yesterday, the way a sensor developed in the 19th century was used isn't how we're using that sensor today. Each of the data elements or data bits that we store is going to be similar: we don't know how it will be used in the future. Let's think about what that means for our participants and contributors today.

This brings me to the last few points of this talk, training. The world is expanding. We have already talked about standards. We talk about grad students, postdocs, research staff, and professors trying to put data into these things. It requires a lot of different elements, and most of our programs are not quite geared to go across all of these elements.

And this is just in the standardization and metadata space. With the revolution of LLMs, change is happening almost on a monthly basis in some of these spaces. How do we bring these communities together for the training and intersection that is required?

There are many challenges. Call them opportunities. Every challenge is an opportunity. There are challenges around curation, standards, and education. Our domains are evolving, we have to evolve with them. There are data and computational problems. We have to make meaning of the data when this appears 10 years from now or 20 years from now. We need sustainability in governance. Right now we’re storing everything, but the time will soon come when we need to decide what to keep, when we often don’t know.

Transparency and ethics are another of those elements: the data comes from complex systems and from participants, and we need to understand how that is handled transparently in a world that is complex.

This is a figure I used to use almost 10 years back. This was NetApp, a storage company thinking about sources of data. They were thinking far ahead at that point in time in terms of how much data the human body was going to generate.

This couples with our research program and portfolio in our group. We want to integrate different sources of information, couple it through various kinds of technologies, and work together with clinicians, caregivers, and patients in a closed-loop system. And we want to shorten those loops as much as possible, and be transparent.

So with that, I want to thank you for your attention.

OLIVER RUBEL: My name is Oliver Rubel, I'm at Lawrence Berkeley National Laboratory. My background is broadly in data science and data understanding, so anything along the data pipeline, from generating visualization tools, analyses, and algorithms, to data standards.

In the context of the BRAIN Initiative, one of our main projects is Neurodata Without Borders. NWB is a data standard for neurophysiology data that we're creating. So what I want to do today in my talk is really more broadly speak about challenges and opportunities in data understanding.

What I'm going to do is talk through some of the roles and the challenges that we have in data standardization, and I will also talk about the opportunities and the requirements that arise from – (audio problem)

OLIVER RUBEL: With that, let's dive in. Like I said, I am going to talk about the roles, challenges, requirements, and opportunities. And so throughout my talk you will see me use these orange and blue markers to demarcate when I'm referring to a role or a challenge, in orange, and then, in blue, the requirements and opportunities that arise from that.

So, first of all, why do we need data standards? And so one main role that data standards have is that they allow us to efficiently use and share data, and to enable scientists to collaborate through data. And so data standards are really a critical conduit that facilitate the flow of data throughout the data lifecycle, from acquisition to processing, analysis, to sharing and publication and ultimately data reuse. And then also to facilitate the integration of data and software across these different phases of the data lifecycle.

And so scientific data standard technologies are really at the heart of the data lifecycle. And what this means is that as scientific data standards go, we need to support and integrate with the needs of technologies across the phases of this data lifecycle.

And so we need to work with, not compete with existing and emerging technologies, but it also means from a data user perspective or as an experimenter we really need to think of data standards as technologies that we use from the beginning and throughout our experiments, not just as technologies that we need to then finally publish the data. It’s something we need to consider throughout the entire experimental process.

A key challenge in the development of data standards for neuroscience is really the diversity of neuroscience experiments, which involve a wide variety of species, tasks, data modalities, devices, and brain areas. In addition, there is the high rate of technological and methodological innovation that we're seeing in the community. And so data standards for behavior need to be both comprehensive and extensible.

As an example, in the data standard for neurophysiology we cover a broad range of data modalities, from recordings of neural activity through electrical and optical physiology, to metadata about the acquisition and the subjects, to information about the design of the experiment itself, whether that is the stimuli presented to the subject, the trial structure of the experiment, or a description of the tasks that were performed during an experiment. And then ultimately also recordings of information about the behavior itself, whether through motion tracking or audio, that are directly associated with the neural recording data.
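
As a concrete illustration of that coverage, here is a minimal sketch using the PyNWB reference API (pip install pynwb) that puts a raw acquisition trace and a behavioral signal in the same file; the names and numbers are made up, and a real conversion would use the dedicated ecephys, ophys, and behavior types rather than plain TimeSeries objects.

```python
from datetime import datetime, timezone
import numpy as np
from pynwb import NWBFile, NWBHDF5IO, TimeSeries

# One NWB file holds the neural recording and the behavior on a shared timebase.
nwbfile = NWBFile(
    session_description="illustrative session",   # made-up metadata
    identifier="example-0001",
    session_start_time=datetime.now(timezone.utc),
)

# Raw neural signal (placeholder random data), sampled at 30 kHz.
nwbfile.add_acquisition(
    TimeSeries(name="raw_ephys", data=np.random.randn(30000), unit="volts", rate=30000.0)
)

# Behavioral signal (e.g., 3D hand position), sampled at 200 Hz.
nwbfile.add_acquisition(
    TimeSeries(name="hand_position", data=np.random.randn(600, 3), unit="meters", rate=200.0)
)

with NWBHDF5IO("example_session.nwb", "w") as io:
    io.write(nwbfile)
```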

And so as an analogy, think of data standards as a language. And much like natural language evolves with the concepts of societies that use them, we need more than just rigid data formats, but we require flexible data languages that enable us to communicate via data. And so scientific data standards are really the language we use to communicate through data, and as such these standards need to be comprehensive, they need to provide us with broad coverage of an experiment’s raw and processed data.

Our standards need to be extensible, to support the evolution of these data languages with the changing needs of the communities. They also need to be stable, meaning they provide standardized concepts that we can establish and use throughout the lifecycle of our data. They need to be accessible to the community and easy to use. And finally they need to be reliable: stable enough that we can use them in our regular day-to-day production as well as to preserve the data in the long term.

And so in the context of brain behavior, a key challenge here is also that in order to begin to understand how the brain creates behavior, we need to be able to relate and interpret data from multiple modalities. We have heard this throughout this workshop many times.

And so for example, we need to relate the neural activity that we record to the stimuli that are presented during experiments, as well as to the behavior that our subject showed during the experiment. And so we do not just need standards for individual modalities; we really need integrated data standards that allow us to store and relate raw and processed data from a wide range of data modalities.

As an example, here in the figure you see a pipeline where we use NWB to store data from electrical and optical physiology recorded simultaneously, and we also store the processed results from spike sorting and ROI detection in the same data file, which in turn allows us to relate the data in analysis easily, so we can look at both the optical and the electrical signals in a coherent structure.

One of the key challenges in the development of data standards, particularly for neuroscience, is the diversity of the stakeholders and the requirements at all the different phases of the data lifecycle. A key aspect that has allowed us to really manage this diversity of requirements, and really the diversity of discussions that arise from this, is that already early on when we developed NWB we identified and outlined the main components that we need to define the data standard.

And so for example when we just want to create data standards it’s not just about how do we want to organize the data, we also need to discuss how we want to store the data. We also need to talk about how we actually want to use the data as end users.

And so we need to be able to have these discussions, and let these different aspects of the data standardization process evolve. As we move data from our local laptops to the cloud, how we store data needs to change. But the data standard itself still needs to remain stable in these kinds of situations.

And so a key strategy then to manage this diversity of stakeholders and needs is to use modular design where we identify and define and insulate all of these different areas of concern, and then define standardized interfaces between them to integrate across these components.

And this is really critical because it allows us to frame the discussions with stakeholders and allows this data standard ecosystem to evolve with the needs of the communities. So for example, as we move data to different systems, we can adjust how we store the data without actually affecting how we organize the data and how we use it.

But data standards are only one piece of the puzzle. To really take advantage of the promise of data standards and to allow us to share and reuse data and communicate and collaborate with each other, we need more than just standards, but we need an integrated software ecosystem that spans the entire data lifecycle and builds around these data standards.

For example, in the context of NWB, our core software stack alone consists of more than 1000 different software tools, such as APIs to read and write the data, tools to convert and inspect data, tools to support extension of the standard so users can integrate new data types with the data standard, and the underlying data modeling technologies.

And then on top of the core software there is a growing collection of community tools for visualization, analysis, data management, and data sharing that sits around it, created and maintained by the community. And it is really this access to new capabilities built on data standards that provides a broad incentive for users to adopt standards and adds value to our scientific endeavor.

In the context of studying brain behavior, a key challenge that we have with regard to the software ecosystem is that we still have large gaps in the ecosystem for moving from designing the experiment to actually sharing the data. And this begins at the very beginning, as we design tasks for subjects to probe behavior.

And so while there are many tools to design behavioral experiments, they really focus on how we program the controls for an experiment. However, we lack standards to really formally describe the tasks that are being performed.

So the BEADL behavioral task description language is a first step in that direction, by allowing us to abstract the description of tasks from the hardware design. But it is really just a first step. We need much more rigorous ways to define tasks so that they are both programmatically interpretable and human-readable.

And then, importantly, once we have defined tasks we need ways to evaluate these tasks that we define for subjects before we actually execute them in an experiment. We currently do not have tools to simulate task designs and to subsequently evaluate them before we put them in practice.

Another key aspect is that once we have executed an experiment, we currently lack standards to publish tasks in a structured way. Most of the information about how tasks are designed is contained in free-form text in methods sections, which is insufficient to really let others recreate and reuse data and reproduce results.

Using language as an analogy again, data standards alone, as Satrajit also pointed out, are not sufficient to fully define a data language. Simply speaking, you can think of data standards such as NWB or BIDS as the syntax and the morphology of our language. They define the structures and the organization for exchanging neuroscience data. But then we also need to define the diction, meaning the terms that we use in our data, and that is what we need controlled sets of terms for.

And then to define the meaning of the terms, meaning the semantics of our language, we need large collections of external resources, such as integration with brain atlases, ontologies broadly for species, gene function, taxonomies and so on, as well as integration with persistent digital identifiers such as ORCIDs, DOIs, RRIDs and so on. And so data standards, controlled term sets, and external resources are really essential and synergistic pieces of this data standardization ecosystem that we need to address to really move to a world where we can share and reuse data freely.

A key part of that is then that in addition to storing data we need to be able to link data across different sources and with external resources. So for example, very simply, if I enter that I used a mouse as my subject, we need to be able to link this term mouse to its description in the taxonomies that define our species.

Or if I list myself as the experimenter who generated the data, I need to be able to link that name to its unique identifier via ORCID to make sure we can identify who the actual experimenter is, because there may be many experimenters with the same name in this world.

Or when we define the locations in the brain where we have a probe, we need to be able to link the terms we use to describe those locations to the appropriate brain atlases, to really allow us to integrate and reuse them.
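
A minimal sketch of what such linked metadata might look like in practice is below; the specific identifiers shown (an OBO-style taxonomy URI, a placeholder ORCID, an illustrative atlas abbreviation) are examples of the kinds of persistent identifiers being described, not a prescribed schema.

```python
# Free-text terms paired with persistent identifiers so machines can resolve them.
subject_metadata = {
    "species": {
        "label": "Mus musculus",
        "uri": "http://purl.obolibrary.org/obo/NCBITaxon_10090",  # NCBI taxonomy entry
    },
    "experimenter": {
        "name": "Jane Doe",                                # hypothetical person
        "orcid": "https://orcid.org/0000-0000-0000-0000",  # placeholder ORCID
    },
    "probe_location": {
        "label": "primary motor cortex",
        "atlas_term": "MOp",   # illustrative atlas-style abbreviation, not a standard key
    },
}
```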

And then, finally, beyond all of these various technical and methodological challenges that I touched on, a key challenge in data standardization that we need to address is that we need to invest in overcoming the sociological challenges, and to help scientists overcome the energy barrier for adopting data standards.

And so this requires a broad range of strategies, from incentives through funding and data governance guidelines to help us climb this hill of data standardization, to training, guidance, outreach, and support to help lower the barriers of adoption and make it easier for scientists to adopt standards.

And then also carrots in the form of access to unique capabilities such as visualization tools that will help us do better science. And with that I believe I’m at time and will do a discussion with Greg and others.

MING ZHAN: Thank you very much.

Discussant

GREGORY FARBER: While all the speakers are getting settled, I do want to thank them all for a series of quite interesting talks, at least that was my takeaway from this. So we do not have an enormous amount of time, which is a shame, because I think I could probably ask a series of questions that would last for another hour or so.

What I do want to really focus on, and I think I want to ask this question of all of the speakers, has to do with the proliferation of standards. I know of two pathways to go from lots of standards to a standard.

One occurs fairly frequently, and it is that the funders or the journal or the FDA or somebody defines a standard and says if you want funding from us, if you want to publish papers with us, if you want to submit something to us, here is how it is going to come in. And that becomes the standard for the field. Nothing wrong with that necessarily.

But I know of one other case which involved the creation of the DICOM standard, which feels very much more like a community consensus that actually worked and got to a standard that everyone lives with. Now, I wasn’t part of those discussions, so perhaps it is not as clean as I am making it seem.

But my question for all of you is really twofold. First, where do you think we are today in terms of standards for sensors and perhaps personal tracking devices, which I think are going to be a critical component of the work that is going to be going on? Are we still in the development stage, where it is too early to really think seriously? Or are we too late, in that there are a lot of different companies out there that have developed their own idiosyncratic standards internally, and none of them are sharing with anyone?

And part two of the question, which depends upon your answer to part one: what is the path forward from where we are to where we need to be, to allow the research community writ large to use and reuse the data coming out of these devices? So I don't know who wants to start, maybe no one wants to start. But does anyone want to take a crack at those questions?

MARIA PALOMBINI: These are very good questions. The second point that you brought up earlier, with regard to consensus-driven standards, these are exactly the types of standards that we develop at the IEEE Standards Association, and likewise at SDOs around the world. The idea is to bring the industry participants together to build consensus and bring the standard forward, and not be beholden to a particular platform, framework, or process. The idea is they build it together, it's neutral and agnostic, and it should be representative of the voices that are going to be impacted by the standard. By the voices I mean the clinical side, the technology side, the regulator side. And patients, of course, are something we have to keep in our purview.

But the thing about sensors and technology, the reality is we have all seen how rapidly this market is growing: innovation, the integration of AI and machine learning, AI at the edge. This is unbelievably awesome, in a good way, because it presents so many benefits for us. But at the same time the challenges that come with them haven't gone away, and that's the reality: interoperability, compatibility, the sensor itself, the hardware we're talking about.

Let's not forget all the little intricacies of the chips, and then the software around them, and the data transmission, and the data security, and the privacy, and the ethical governance. All of these challenges are still there, from the first sensor that was put on your arm a couple of years ago.

So I think we are at an early stage. The real core question for everybody to think about is what is going to change and where we really need standards development, and not to confuse it with policy. From a standards development perspective, we are here to say the market has come together to address a problem so that we can actually open the door to innovation.

And from a policy perspective they are definitely part of the process to understand what that standard means. So I think it is early, but I think it is necessary, and hopefully we will see obviously more important standards coming up.

BRUCE HECHT: If I could build on what is really exciting about that, what you saw in the talks yesterday and today, I wanted to highlight my own field, with Dr. Andrew Schaal(ph.), about accelerometers. There was a measurement that people were interested in, but then the technology changed by orders of magnitude.

So a million times more performance for cost. So I think standards are themselves kind of a technology; it's a technology of the way that people work together. So we might want to consider what the figures of merit are, and what we could do to get a million times improvement.

Similarly, in the genomics field, of course the human genome project, which was an enormous amount of work, when I first started people reported on it and said look, you’ve only done 0.000 of it in the first year. So this project is obviously never going to be completed. But it is an exponential technology that changes over time, and that is really what we rely on.

I also wanted to point out, going back to Satrajit's talk about open source, and Oliver as well, the example of Arduino: if you want to use a piece of sensor technology, it comes with compatible information that you can use to get it to work right away. But really where this is going, I think, and maybe this is where you could jump in, Satrajit, is that it is not just about collecting the data, it is about making the data computable. What are you going to do with it? And you can't do it all by hand as a single individual, so maybe there is an opportunity you wanted to comment on.

SATRAJIT GHOSH: I would like to pick up on the term challenges, I think, Maria. Standards are not developed in a void; they're intended for a purpose. And right now I think there is a gap between the standards development part and the intended use part, and that's where I think the gap could be filled, by the right people and the right communities coming together and saying it's not just a standard in a void, there are benefits to the stakeholders when developing the standard.

OLIVER RUBEL: To speak to your question of timeliness, data standards always start from chaos, because initially, when we create new technologies, everybody creates their own way of storing the data, and then as we come together data standards are ways of consolidating that structure. One key aspect I think we have heard here is also this idea of multiple modalities. And we still have a tendency to think of standards as: I need to develop a standard for a modality, rather than: how can we reach across multiple standards and modalities.

I think we also have a great opportunity here in the NIH BRAIN Initiative to start talking about how we go across neurophysiology to behavioral sensors, and how we connect these. And we're all scientists, all we want to do is talk about what we do, and so it is not that we want to create many different standards; that is just the model that has existed for a long time.

But for example, we're also working with BIDS on extension modalities to integrate NWB with BIDS, and I think there needs to be a much more strategic effort to really say how we can bring all of these different areas together and start defining how we can work effectively across these different standards. Because yes, we need different standards, but we also need guidance for users on how they can utilize these standards effectively to work across all these different technologies.

MING ZHAN: This is a wonderful conversation. We are going to take questions from the audience. We will start with the virtual audience.

STAFF: There are no questions from the virtual audience at this time.

ROOZBEH JAFARI: So my question pertains to data standards. In many instances when we run studies the investigator knows all the details. But when you make the data available to AI algorithms and to those who are building AI algorithms, they know very little about the context around the experiments. For blood pressure, motion artifact is important, posture is important.

As a part of my obligation to NIH I always make my data available as part of our publications after we have done the vetting, and the kinds of questions that I receive from researchers are fairly interesting. NIH has been doing quite a bit of work and leading efforts on Bridge2AI, which is about what it takes to get to AI.

So my question is the metadata, the context is highly irregular. It is very much dependent on the study. I always struggle with how do you define the standards, how do you define labels, the probabilistic nature, what should be the temporal resolution? Do you think there is a solution to that?  

OLIVER RUBEL: The thing I would say there is that developing data standards is a means of finding consensus. As we talk about data standards there is always going to be a gradual development where, let's say, in the simplest case, we agree on how to store a time series. Whether you have an electrical signal or a radar signal, they are both time series. How can we define how we store those series? Then at least I can start interpreting the data.

And so you gradually move more and more to a more complete picture. But there is always going to be a thin layer on top that is not standardized yet. And that's why I mentioned in my talk that extensions are fundamental, because there are common components that we have agreed upon, and then there are all of these new pieces that we have not agreed upon yet.

And we need to provide you the means to communicate not just the parts that we have agreed upon, but also to include all of the additional information for your specific experiment. But if we can move step by step toward consensus, at least we're getting closer and closer to being able to talk with each other.
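
To make that concrete, here is a schematic, not any particular standard's layout, of the "agreed core plus extension" idea: the consensus fields live in a block every reader can interpret, and experiment-specific context travels alongside until it, too, is standardized.

```python
# Schematic only: consensus fields in one block, experiment-specific context in another.
time_series_record = {
    "standard": {                      # the part everyone has agreed how to interpret
        "data": [0.12, 0.31, 0.22],    # samples (placeholder values)
        "unit": "volts",
        "sampling_rate_hz": 1000.0,
        "starting_time_s": 0.0,
    },
    "extension": {                     # not-yet-standardized experimental context
        "radar_carrier_frequency_ghz": 5.8,   # hypothetical field for a radar sensor
        "subject_posture": "seated",
    },
}
```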

MARIA PALOMBINI: So this is, I guess, more of a comment than a question. My comment is that IEEE, in addition to having its technical standards, also has sociotechnical standards. So as we are thinking about these multimodal sensors, I think it is important for us as a committee to think about the societal implications of all these sensors. So bringing in that community, in addition to using the technical standards, also means bringing in people from the sociotechnical standards, like the 7000 series; they will be a relevant component of these discussions.

And my question would be: what would be the specific challenges of introducing the CARE framework that Dr. Ghosh mentioned when trying to create standards for sensor technology?

SATRAJIT GHOSH: So I think part of the CARE framework is about engaging stakeholders in the conversation. We often build technical standards thinking through engineering lenses, not necessarily through stakeholder lenses.

When we think about IRBs and other kinds of committees, there is always a participant involved in that kind of committee. And I think that is the kind of thing that might bring some of those CARE elements into the development of standards. So I think it is about more diverse representation in the development of standards that we have to work towards.

MING ZHAN: We will take one last question, because of the time.

PARTICIPANT: At the very beginning of the process, and I'm asking about the IEEE process as well as Neurodata Without Borders and DANDI, when you're just working out the semantics and coming together on definitions: how many perspectives were in the room? Fewer than three? More than ten? And how much time did that first step take? How much time did it take for people to agree on what a time series was? I am just trying to get an idea of the timescale of building the foundation of a shared database when it comes to semantics.

OLIVER RUBEL: NWB started even before I joined; I have been involved with the standard now for nearly 10 years. In terms of the number of perspectives, it starts off with identifying who the stakeholders really are and bringing them all into the room. And so that means you're not going to have just three or four perspectives; typically you need to have at least, let's say, 20 people in a room to start off with.

And then it comes down to identifying the different pieces so that we can have a focused discussion. It's not fundamentally that hard to define how to store a time series, but we have to clearly define that question and what we're talking about, so that we don't end up in a discussion of data storage technology and whatnot; let's just define what a time series is. And when we can focus our discussions like that, then we can gradually build it out.

MARIA PALOMBINI: It's an interesting question, because we get asked this all the time: how long does it take to develop a standard? There are different tracks to it. There is the accelerator track, which is when you come in with a group of like-minded individuals who are diversified, meaning they represent different areas of the community; you come in already with the groundwork for the idea, and then you bring in other individuals who understand what the next step is.

In the particular working group that did the sleep standard, there were about 40 participants from start to finish on average, because they might have started out at 60, dropped off, and come back over that lifecycle. I believe it took them about 18 months to actually vote and have the standard published.

The second standard, on the cardiovascular application, is moving at a faster speed because we have the baseline schema. So now what they're really focusing on is the therapeutic use case and making the schema work for that.

And so subsequent standards in those therapeutic families will probably accelerate just as fast. The cardiovascular group is hovering around 25 participants on average. They have the baseline from the first one, and now they've brought in the clinical researchers and the clinicians on the cardiovascular side to fit the therapeutic application to those terminologies and that kind of thing.

MING ZHAN: So we conclude this part of the workshop, and thank the speakers and presenters for the excellent presentations and the discussion.

STAFF: And that does take us into the lunch break. We will reconvene at 12:50 PM. Thank you.

AFTERNOON SESSION

DANA GREENE-SCHLOESSER: So, I want to welcome everybody back from lunch. The next session is session five, Featured Experiments and Development of Computational Models. I would like to introduce our moderator for this particular session, Dr. Holly Moore.

Just briefly, Holly Moore oversees a portfolio focused on preclinical and basic studies in nonhuman models of the neural mechanisms underlying cognitive, affective, and behavioral processes that mediate the risk and maintenance of compulsive drug taking and dependence.

And her background is in behavioral neuroscience and translational research using primarily rodent model systems to probe neural circuit function relevant to psychiatric disease. So I would like to invite her up here, as she is a perfect moderator for the next session.

Session V: Featured Experiments and Development of Computational Models

HOLLY MOORE: Thanks very much. Our next session, as you can see, is called Featured Experiments and Development of Computational Models. The idea here, as with that last session, was to put some end users and computational modelers up here on stage to talk about their use of everything you've heard about so far in the meeting. We have as our first speaker Nico Hatsopoulos from the University of Chicago; he'll be followed by Avniel Ghuman, University of Pittsburgh, then Vikram Singh from UCSD. And the last of our four speakers will be Bashima Islam from Worcester Polytechnic Institute.

And our discussants, one virtual and one on-site, will be Maryam Shanechi at USC, the University of Southern California, and, from the NIMH intramural program, NIH's own Silvia Lopez-Guzman. So I'll invite Nico up to the stage to get us started.

NICHOLAS HATSOPOULOS: My name is Nicholas Hatsopoulos, I’m a behavioral neuroscientist. For most of my career I have been focused on highly constrained, restrained, artificial behaviors that we had nonhuman primates perform. And what I want to talk to you today about is a new area, relatively new for me, looking at natural neuroscience. What I mean by that, it is a term that has been coined, basically looking at neurophysiological processes under more natural contexts. And I am particularly interested in how the sensory and motor control areas control movement of the limbs, the arm and the hand.

For most of my career I have been working with macaque monkeys, which are relatively large monkeys. What I’m talking about today will be looking at these much smaller animals, these marmosets. They are really tiny, about 300 grams each. Very cute, they kind of look like Einstein with their tufts of hair sticking out on the side.

So what is my motivation for looking at naturalistic behavior in the context of neuroscience? Well, obviously the brain has evolved over millions of years to allow for behavior that leads to survival. And so these are real behaviors that you see in the wild, behaviors that help the animal survive.

But aside from that, there is another reason. As I said, many people, including myself, looked at very simple models of how for example the motor cortex encodes information about behavior. And these simple models of the motor cortex can work very well in these highly constrained artificial behaviors, simple tasks like the center-out task, which is shown here on the bottom left.

And this is a task that was pioneered by Georgopoulos in the '80s. Basically, he had macaque monkeys trained to make reaching movements with a manipulandum to one of eight peripheral targets. So these are straight-line movements, and usually it is with a video screen, so it is highly artificial. These animals are typically in a chair, restrained, their heads are usually fixed, and they have limited mobility.

But in this context you can look at what these cells in motor cortex encode. And here is an example of a tuning curve of the kind that has been seen now for many decades. In this simple context, you see the firing rate of one neuron as a function of movement direction, and you can fit that with a cosine function quite well. So this works great; it is a very nice, simple model of directional tuning of movement.

But now if you look to something more complex, like this study by Aflalo and Graziano, they monitored the movements of the macaque monkey in their cages. And they were doing all sorts of stuff like scratching themselves, doing whatever, grabbing for different pieces of fruit.

And in this context a very simple model like directional tuning could only account for less than 10 percent of the variance in the overall response. So we're missing something, and we are being misled, I think, by limiting ourselves to highly constrained and artificial behaviors.

So what we decided to do was work with marmosets and look at prey capture as a model of complex motor skill. And I want to show you this video of this animal right here, if I can get it to play, reaching out to capture an insect sitting on the branch. There he goes, he gets it and then eats it.

So how do you bring this to the lab? We can't do this in nature, obviously. So what we did is we have these animals in cages that are relatively large compared to their size, at least compared to macaques, which are the workhorse in neurophysiology.

And we have these cages with branches and different kinds of naturalistic objects that they can engage with, and they engage in spontaneous behaviors like jumping, climbing, foraging, and eating. But on top of these cages we also have an apparatus that they can voluntarily choose to go up to whenever they want.

There is a little hole in the top of their cages, and they like to be high up, so they naturally will want to explore up there. And then this apparatus, which is enlarged here, you can see we can deliver prey. These prey can be moths or crickets.

And then with that we can record their behavior, first of all, and we record the behavior using DeepLabCut, which is a deep neural network approach that allows us to track in a markerless fashion, without physically putting markers on, which I used to do for a long time, physically gluing markers onto the limbs. Here it is all done with computer vision; you can track the positions of the relevant parts of the limb and then track the motion of the limb in space.
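
For reference, a typical markerless-tracking workflow with DeepLabCut (pip install deeplabcut) looks roughly like the sketch below; the project name, video path, and body-part choices are hypothetical, and exact function arguments vary by version.

```python
import deeplabcut

videos = ["marmoset_prey_capture.mp4"]   # hypothetical example video

# Create a project and its config file (body parts such as wrist/elbow/digits
# are then listed in the config by hand).
config = deeplabcut.create_new_project("prey-capture", "lab", videos, copy_videos=True)

deeplabcut.extract_frames(config)          # select frames for manual labeling
deeplabcut.label_frames(config)            # GUI for hand-labeling keypoints
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)           # train the deep network on labeled frames
deeplabcut.analyze_videos(config, videos)  # per-frame keypoint coordinates for new videos
```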

As you can see here, we have validated this approach against the gold-standard ground truth, which is XROMM, 3D X-ray video fluoroscopy, where we actually have markers embedded in the bone. We compared them and it did quite well, so it was quite accurate. So we settled on that for recording the behavior. Oh, and this is an example of one of our animals doing prey capture in the lab.

There is a moth that was dropped in through the apparatus. He tries multiple attempts, sometimes with both limbs. He finally gets it. The video shuts off. And then he does ultimately get it and eat it. And from that we can track the kinematics of the arm and its very complex behaviors.

And at the same time, you’ll notice on top of his head this helmet, which is recording wirelessly the signals from individual neurons in motor cortex and somatosensory cortex. Here is the helmet, the wireless transmission, recording data from up to 100 electrodes at close to 30 kilohertz.

And if you look at the schematic here of the marmoset brain you might notice if you’re used to looking at brains, at least in nonhuman primates and humans, it is a very smooth brain. It has very few sulci or folds. And that actually is an advantage working with marmosets, because we can access parts of the brain that normally would be buried in the sulcus.

So for the relevant parts of the motor cortex and the somatosensory cortex, here M1 is motor, S1 is somatosensory, PMC is premotor. We can access all parts of those areas, which would normally be buried in the sulci.

And these electrodes are Utah arrays, so they penetrate a millimeter into the cortex, and presumably record from probably layer five of motor cortex, as well as somatosensory cortex. And from that we can record, in our best case, up to 160 neurons simultaneously. These are the average waveforms of the units that we spike sorted.

And I want to also acknowledge my very talented postdoc, Jeff Walker, who really spearheaded this whole project. This is a project I have been doing together with my colleague Dr. Jason MacLean at the University of Chicago.

So now we have both neural activity, spike trains from 160 neurons, as well as behavior. So my graduate student Dalton Moore has started to look at some of this data, to try to develop a more sophisticated encoding model, something going beyond the simple directional tuning model I described at the beginning.

He began with a trajectory-based encoding model, which is something I had worked on for many years in macaques, and he implemented this in the marmoset. What this model does is, instead of saying the neuron cares about a certain movement direction or velocity, it says the neuron cares about a whole temporally extended trajectory.

And from this model, which is basically a generalized linear model or logistic regression, from that we can characterize the preferred trajectories. And these are examples of preferred trajectories from individual neurons in the 2D space. So basically, when the monkey makes a movement that follows that curve, this little curve, that cell will fire with the highest probability.

He then incorporated a functional network analysis, adding functional connectivity as an additional covariate to try to predict the response of a neuron. I'll explain that in a moment. Anyway, let's start with the trajectory encoding model. The idea is you take little snippets of movement, of behavior, and you identify a time point zero, and you ask: at that time point, did the neuron fire or not?

And then you look at behavior before it fired, marked in blue, which you might view as sensory, in the sense that the movement is occurring before the neuron fires, and the red is behavior occurring after the neuron fires, which might be viewed as more motor-like. We take the x, y, and z velocities of the hand, take these little snippets of motion, and feed them into a generalized linear model, shown here, trying to predict the probability of a spike given the trajectory.

And we’re going to compare that to a very simple sort of proxy for directional tuning, where we only look at velocity at one time lag, just this little brief period of time, 100 milliseconds after the spike occurred, which is a proxy for directional tuning.

And then we use ROC analysis, and we look at the area underneath the ROC curve as a measure of goodness of fit. An AUC value, an area under the curve value of one will be a perfect model, 0.5 is chance. And what we found, or what Dalton found, was that among all the neurons we recorded from, each point here is a neuron, the trajectory encoding model performed better in almost all cases than the simple velocity encoding model, the directional tuning model.
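
A toy sketch of that comparison is below, using logistic regression and ROC AUC from scikit-learn; the velocities and spikes are random placeholders, so both models will sit near chance (about 0.5), but the feature construction mirrors the idea of a temporally extended trajectory versus a single 100 ms lag.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_t = 20000
vel = rng.standard_normal((n_t, 3))       # x, y, z hand velocity (placeholder data)
spikes = rng.integers(0, 2, size=n_t)     # binary spike indicator (placeholder data)

def lagged(vel, lags):
    """Stack velocity at several lags around each time bin (edge wrap-around ignored)."""
    return np.hstack([np.roll(vel, -lag, axis=0) for lag in lags])

X_traj = lagged(vel, range(-25, 26, 5))   # ~500 ms trajectory at 100 Hz (11 lags x 3 dims)
X_single = lagged(vel, [10])              # single +100 ms lag, the directional-tuning proxy

def fit_auc(X, y):
    half = len(y) // 2                    # simple split: fit first half, score second half
    model = LogisticRegression(max_iter=1000).fit(X[:half], y[:half])
    return roc_auc_score(y[half:], model.predict_proba(X[half:])[:, 1])

print("trajectory model AUC:", round(fit_auc(X_traj, spikes), 3))
print("single-lag model AUC:", round(fit_auc(X_single, spikes), 3))
```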

That was true for both single units and multi-units. And here is an example of one of these preferred trajectories from one neuron with an AUC of 0.7, which is pretty good; basically this cell will fire when the monkey makes this kind of curved trajectory from blue to red. And here is another example of kind of a crappy neuron, where basically the encoding model did not fit very well.

I’m going to skip this part because of time, but we found that the optimal duration of these trajectories was 500 milliseconds. So now to incorporate network effects.

What we did was use a form of mutual information called confluent mutual information, which tries to determine how much one neuron, A, predicts about the response of neuron D in the future. If A has strong predictive power about D in the future, we say there is a strong functional connection; B might be weaker, and C weaker still.

So from that we can build a matrix of functional connections. These are directed connections between input neurons, that is, sending neurons on the y-axis, and target neurons, receiving neurons, on the x-axis.

We broke it up between even and odd regions. And you might see, if you look carefully, they look very similar. So there is a consistency in these functional connections. That is one reason we did this, we broke it up. But there was another reason, which will become clear in one second.

So now what we do with this network model is not only incorporate the trajectories but also include the functional connections that we've measured independently of this encoding model, based purely on confluent mutual information.

So we incorporated these terms here. These terms have the weights alpha, which are the functional connections; they are fixed and are not actually fit in this GLM. The only things that are fit are the gains, the beta-naught and the beta-one, and the K, which is for the trajectory.
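
A schematic of that construction, with placeholder data, is sketched below: the functional-connection weights (alpha) are computed beforehand and held fixed, and the GLM fits only the gains on the trajectory features and on the alpha-weighted network drive.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_t, n_neurons = 20000, 20
spikes = rng.integers(0, 2, size=(n_t, n_neurons))   # population spike trains (placeholder)
X_traj = rng.standard_normal((n_t, 33))              # trajectory features (placeholder)

target = 0
# Fixed functional-connection weights into the target neuron, estimated separately
# (e.g., from confluent mutual information); they are NOT fit by the GLM below.
alpha = rng.random(n_neurons)
alpha[target] = 0.0

# Network drive: alpha-weighted sum of the population's spikes one bin earlier.
drive = np.roll(spikes, 1, axis=0) @ alpha

# Only the gains (regression coefficients) on the trajectory features and on the
# single network-drive covariate are fit here.
X_full = np.hstack([X_traj, drive[:, None]])
model = LogisticRegression(max_iter=1000).fit(X_full, spikes[:, target])
print("fitted gain on network drive:", round(model.coef_[0, -1], 4))
```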

So we now have this expanded encoding model. And what we found was that if we incorporate both trajectory and network effects over the population, we do significantly better than with trajectory alone. So in summary, the full model, including trajectory and network effects over the whole population, does significantly better than network effects or trajectory alone. With that I would like to finish up and thank my lab, particularly Jeff Walker, Dalton Moore, and Paul Operatio. Thank you.

AVNIEL GHUMAN: Hi, I am Avniel Ghuman. I am from the University of Pittsburgh Department of Neurosurgery. I am not an MD, so I do not open any people’s brains. But being in the Neurosurgery Department gives me the kind of unique and lucky experience to have access to individuals with electrodes in their brains, and I’ll be talking about some of the unique things that we are doing with that, in that setup.

So, just as an overview I am going to tell you a little about the setup. Then I’m going to tell you about two things that we have been doing with real world behavior. A week in the life of a human brain, and a moment in the life of a human brain.

So what we're doing is, as I mentioned, we have individuals with electrodes in their brains. These are patients undergoing the surgical treatment of epilepsy, when medications either don't work or have stopped working. The one curative treatment for epilepsy is to have the part of the brain that's causing the epilepsy resected. And before doing so, they implant electrodes into these individuals' brains, and then the patients sit in a hospital bed for a week.

And so it is wonderful that we can come in and do kind of traditional neuroscience experiments with these individuals, bring a computer in and do your typical computer experiment. What I have been thinking a lot about over the last few years is what can we do uniquely in this population that really is a type of neuroscience that is either very difficult or more or less impossible to do otherwise.

And the fact of the matter is they are eating, they’re sleeping, they’re watching TV, they’re talking to their friends and family. And we have 128, sometimes up to 250 contacts that we can record from their brain during this true, real-world natural behavior.

So the first thing that I'm going to tell you about is something we've been doing at the very long timescale, at the scale of taking the data from the entire week and seeing what we can do with it. Real life and physiology, hormone levels and things like that, change over the course of minutes to hours.

Yet almost everything that we do in neuroscience are these experiments where people are doing tasks that occur on the order of one second, two seconds, and we're recording neural data on the order of milliseconds. So we know very little about what the brain does at these very long timescales, particularly in a real-world environment. And this gives us a very unique opportunity to look into this. I should point out my very talented MD PhD student who defended his thesis last week.

So what we are doing is we have taken 20 people with these electrodes in their brains, we basically are building all-to-all brain networks, so functional connectivity between each one of those electrodes and all the other electrodes, every five seconds, for a week. So we have on the order of 100,000 of these slices of what’s happening in their brain, in their brain networks. We group them into regions and then eventually into networks. Right now I’m going to be talking about natural behavior, so we are very careful to remove both the parts of their brains that are causing their seizures, that are clinically determined to be causing their seizures, and any time in or around when they have a seizure in the hospital.
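
As an illustration of the bookkeeping involved, here is a small Python sketch that computes an all-to-all network "slice" in non-overlapping five-second windows; correlation is used as a stand-in connectivity measure, and the channel count and sampling rate are assumptions rather than the actual analysis.

    import numpy as np

    def sliding_window_networks(signals, fs, win_s=5.0):
        """Correlation-based all-to-all connectivity in non-overlapping windows.

        signals: array (n_channels, n_samples); fs: sampling rate in Hz.
        Returns an array of shape (n_windows, n_channels, n_channels).
        """
        win = int(win_s * fs)
        n_win = signals.shape[1] // win
        nets = np.empty((n_win, signals.shape[0], signals.shape[0]))
        for w in range(n_win):
            seg = signals[:, w * win:(w + 1) * win]
            nets[w] = np.corrcoef(seg)        # one network "slice" per window
        return nets

    # Illustrative use: 128 contacts, one minute at 1 kHz -> 12 five-second slices.
    demo = np.random.default_rng(1).standard_normal((128, 60_000))
    print(sliding_window_networks(demo, fs=1000).shape)   # (12, 128, 128)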

And what we’re able to do, and the idea here is we are essentially taking these networks and we are projecting them into kind of a neural state-space, and we’re just watching these networks go over this state space. Don’t worry about the colors right now, I’ll tell you what they are in a minute. But I just wanted to give you an idea of what’s happening.

So this is like a day’s worth of data for each one of these two subjects. And so this is a day-long trajectory of what their brains are doing. The first thing that we did, and kind of natural to what we’re doing here, is they are also under 24/7 video monitoring.

So we annotated those videos for what they're doing in every single frame. Are they on a device, are they talking to a friend or family member, are they manipulating an object, are they sleeping? And we asked, can we classify what they're doing? So we trained a classifier based on one day's worth of data, and asked, on a second day, can we determine what they were doing every five seconds based on their brain network activity.

And the answer is yes we can, and some of these things, this is from one subject, but in all nine of these subjects where we have these annotations we’re able to do so with above chance accuracy. And you can see things like when they’re using a device what gets activated is their somatomotor network and their visual network for example.
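
The train-on-one-day, test-on-another logic could look something like the following sketch; the flattened network edges, label set, and logistic-regression classifier are placeholders for whatever features and decoder were actually used.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(2)

    # Placeholder features: one row per 5 s network (e.g., flattened upper triangle).
    n_day1, n_day2, n_edges = 2000, 2000, 300
    X_day1 = rng.standard_normal((n_day1, n_edges))
    X_day2 = rng.standard_normal((n_day2, n_edges))
    labels = ["device", "talking", "object", "sleep"]
    y_day1 = rng.choice(labels, n_day1)
    y_day2 = rng.choice(labels, n_day2)

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_day1, y_day1)                              # train on day 1 only
    acc = accuracy_score(y_day2, clf.predict(X_day2))    # evaluate on held-out day 2
    print(f"day-2 accuracy: {acc:.3f} (chance ~ {1 / len(labels):.3f})")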

And the other thing that we can do is look at physiological components. So one thing that comes for free is heart rate, because they're in the hospital, so their heart rates are monitored continuously.

So we used heart rate variability as at least somewhat of a proxy for arousal, and we showed that, based on their brain network activity, again training the model on in this case about three days' worth of data, we can ask whether in the next three days we have an above-chance correlation between the predicted and the true heart rate. And we do, so we can predict, for example, their heart rate based on their brain network activity.

We also look at things like circadian rhythms. So we create an oscillation that happens over a 24-hour cycle. And again, we train the algorithm on the first three days' worth of data, and then look at days four, five, six, and seven to show that what we've trained does continue to follow the circadian rhythms. So these networks capture behavior and physiology consistently over this real-world behavior.
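
The same train-early, test-late idea applies to continuous physiological targets such as heart rate variability or a circadian proxy; a hedged sketch with ridge regression (all numbers and the model choice are illustrative, not the actual analysis) might look like this:

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(3)

    # Illustrative: one feature vector per 5 s network slice, plus a physiological
    # target (e.g., a heart-rate-variability proxy); earlier days train, later days test.
    n, d = 20_000, 200
    X = rng.standard_normal((n, d))
    target = rng.standard_normal(n)

    split = n // 2                                  # roughly "first half of the week"
    model = Ridge(alpha=10.0).fit(X[:split], target[:split])
    r, _ = pearsonr(model.predict(X[split:]), target[split:])
    print(f"predicted-vs-true correlation on held-out days: r = {r:.3f}")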

And just kind of opening up to connect back to some of the other things, this is a place where we would love to have some of these other sensors that we’ve been talking about the last few days. We would love to get that actigraphy that we can coincide with what’s going on with these neural networks. Measures of cortisol and things like that, so that we can understand these kinds of physiological brain interactions.

And then going back to this figure for a minute, what this figure is actually colored by is what the patients were doing. So green is when they were manipulating an object, pink is when they are asleep, and black is where we could see no detectable behavior, that is to say they're just kind of staring into space. So you can see, which makes sense given the classification results I told you about, that this kind of state-based representation can tell you what they're doing, and it falls into these specific clusters.

And then we can ask about transitions in the state space. And it turns out that neural transitions correspond very well in time with behavioral transitions. Which we would hope they do, and we were able to verify that just by checking when they happen in time relative to chance.

We have also gone on to do things like characterize the dynamics of these neural networks. So this is what we have been doing at the weeklong timescale. But we also have the opportunity to ask what is going on at the millisecond timescale during natural behavior. So in this case what we are actually doing, led by a very good PhD student in my lab, is continuous eye tracking while they're having conversations with friends and family and just going about their day.

So eye tracking glasses, for those of you who don’t know, are essentially a mobile eye tracker. There are two cameras facing inwards towards the eyes, and one outward facing camera. The outward facing camera gives you their field of view, and then the ones that are facing their eyes are the ones that allow it to tell you where they are looking within that field of view. And so you get data like this.

Just to give you a sense of what the data look like, so on the left what you’re seeing is a single frame from the eye tracking glasses that have been annotated by computer vision algorithms for the objects and faces in the room. And then the orange cross is where they’re actually looking. And on the righthand side is the neural activity that is being recorded simultaneously with that frame being captured.

And I think it is always helpful to kind of see what it all looks like together. So just to give you a sense, it looks like what you would think. The person is looking at individual number one, which happened to be the patient’s best friend, and the man is her husband. So she looks back and forth while they’re talking. The people are gesturing. They’re talking, and then goes and looks at their cell phone, and you can see the neural activity flying by at the same time. So true, real world, natural recordings.

And so what we've concentrated on is all fixations to faces, in part because I have an intrinsic interest in social and affective perception, and in part because that narrows down the kind of visual information that we have to capture and model.

I'm only very briefly going to talk about what the actual computational model looks like, and really all I want you to know is that we're creating what we call a latent space that sits between the neural activity and what is being perceived, and that latent space is defined by maximizing the correlation between the two.
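
One standard way to build a latent space by maximizing correlation between two views is canonical correlation analysis; the sketch below uses scikit-learn's CCA as a stand-in, and the dimensionalities, the 225-D face parameterization, and the decoding step are assumptions rather than the actual model.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(4)

    # Illustrative: one row per fixation on a face.
    n_fix = 2000
    neural = rng.standard_normal((n_fix, 300))   # e.g., per-fixation neural features
    faces = rng.standard_normal((n_fix, 225))    # ~225-D computer-vision face parameters

    cca = CCA(n_components=20, max_iter=1000)
    cca.fit(neural, faces)                       # latent space maximizing correlation

    # "Reconstruction" direction: map neural activity through the latent space
    # back into face-parameter space.
    faces_hat = cca.predict(neural)
    print(faces_hat.shape)                       # (2000, 225)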

And then what we're able to do is train the model with an hour's worth of these data, and then look at the next half hour or 15 minutes and ask whether we can do things like reconstruct what faces people are looking at, and for that matter whether we can understand the neural activity that corresponds to those faces.

What you're seeing right here on the left is the face that someone was seeing on a single fixation. That actually happens to be my ex-lab manager. The middle column is a computer vision model that essentially parameterizes that face into about 225 dimensions, and then you can visualize that parameterization.

So that middle one is just a computer vision model of the first one. You’ll see it loses the glasses because the computer vision model does not have glasses. Then the last one is the crazy one. That is a reconstruction of what the model believes the person is seeing based on the neural activity alone. Now, to be clear, we are reconstructing the computer vision model, not the original image. So we’re reconstructing a 225-odd dimensional space.

And very much hot off the presses, now of course at every fixation the face is moving. And literally I got this slide, the first version of this slide two days ago, but the final version of the slide this morning. So we’re not just reconstructing the face that they’re seeing, but we are actually reconstructing the motion of that face that the person is perceiving based on the neural activity alone.

And then just so you get a sense that this is not just one individual, we get multiple different identities, expressions, head poses. We’re modeling all of those things in this 225-dimensional space for every face that they’re seeing.

Now, of course, to do neuroscience we also want to go the other way around. So far this is taking the neural activity and making a reconstruction of what they're seeing. What we can also do is take that middle column, the computer vision model of what they're seeing, project it through the model, make a prediction of the pattern of brain activity that the person was having in response, and compare it to the actual brain activity.

So what you’re seeing here now is not a comparison between the true and predicted, but is actually the correlation, the R value between the true and predicted across all the fixations. So what you can see is there are places we get up into the 0.4-0.5 correlation range between the predicted neural activity and the true neural activity.

The other thing just to point out from a neuroscientific perspective, this is the part of the brain that has been recently kind of characterized by Leslie Ungerleider and David Pitcher as almost a third visual stream for understanding social interactions and social perception and things like face motion and things like that.

And interestingly, across the four or five subjects that we’ve done, this area consistently lights up in every single case as being very important essentially to our reconstructions of the face.

The other neat thing about this model is because of this latent space what we can do is we can move around this space. And so what we can ask for example is if we move in one dimension of this space, what is going to happen is both the brain activity is going to change, and the face is going to change, which gives you a sense of what features of the face correspond with what aspect of the brain activity.

So for example in this dimension it’s one that really kind of showed identity, because you can see this face changing from one identity to another, and then you can see the brain activity, how it corresponds to that change in the face. And similarly, here is another dimension that is really much more about head pose. You can see as the head pose changes this is how the neural activity changes.

So this allows you to have fine-scaled, data-driven hypothesis generation, and then testing as well, because we can manipulate the face in a particular way and ask what changes, and in what way. And that's what I've got.

The one group of people I absolutely cannot forget to thank are the patients who allow us to do this work with them. They're incredible, and it's an incredible opportunity, so we are really lucky to get to work with them. And Vikram I believe is coming up next.

VIKRAM SINGH: Hi, my name is Vikram, I’m a new scientist at UCSD, and Nico has already done the heavy lifting for me for marmosets. So the perspective I would like to give here is why we need to make sensors. Why do we even need to make new sensors and new technology, and are they just helping you get better data or more data, or are they really changing the perspective on what you see.

And in science this matters a great deal, because when you keep generating data over and over again in a certain fashion, the data lead to certain assumptions in the field, and then those assumptions dictate every experiment that is designed afterwards. So if you are limited by what you're collecting and what you're looking at, then you will always be pigeonholed into it, and probably not represent reality.

Now I'm going to talk about vision, because you've got to start somewhere. So we started looking at vision. We know that vision evolved a long, long time ago, but we also know that vision evolved for a very specific purpose. For every animal, vision is tailored to how they live, how they see the world, what they need to do, and what they require from vision.

So vision is actually more of an active process than a passive one, where something falls on the retina and the brain does something with it; really it is the other way around. It is the animal that wants something, so vision has to be a certain way.

And this is another video of an animal that is trying to hunt. If you pay attention to how the animal is moving, you can see that if it makes a wrong move, it will fall, and instead of getting food it will become food. So the stakes are really high. But at the same time, it has to use vision.

So when we talk in neuroscience about brain computation and trying to learn how the brain really computes, we always study it in such isolation that we are really not appreciating what the brain is actually capable of doing, which is everything at once rather than in compartments. And it has to get everything right.

So how do you study vision? Studying vision is not easy, because you need precise eye tracking, and for a freely moving animal precise eye tracking is actually really hard. That is one of the reasons why it has not been done. So that was the objective when I joined Cory Miller's lab.

So the animal model we went with is the common marmoset. It's a small monkey, roughly the size and weight of a rat, and we're going to use advances in sensor technologies along with 3D printing and wireless recording.

So in terms of making the device, these were the main considerations that I had. One is that it has to be lightweight, because these are small monkeys, they cannot carry a lot of weight. But you roughly know what kind of weight they can carry, because they carry their babies on their backs, and at times they are able to carry two babies. So they can roughly carry about 70 grams or so, for a short period of time. I don’t think they can carry it for a very long time. And then it has to be power efficient, and you have to have camera synchronization, and pupil tracking.

So the way we go about that is to split the whole system into two modular parts. One is the head assembly, which has two cameras: one looks forward like a GoPro, and the other looks at the eye. The second is what they wear as a backpack, which is the brains of the whole system, and it has the microcontroller I was showing before.

So in panel D you see our monkey Moe, who is wearing all of this at once and just chilling in the arena. Now, why is this even important? Are we doing it just because we can? The next example is the motivation.

This is a little bit of audience participation. I want you to look at the eye; this is a video, so it's going to move. Now, since this is a head-mounted eye tracker, it is hard to tell whether the animal is moving or sitting in a chair. Raise your hand if you think the animal is freely moving. How many people think the animal is moving?

How many people think the animal is moving now? What this is, is about 10 seconds of video: for about five seconds the animal was sitting in a chair, and for the other five seconds the animal was freely moving. The eye tracking is pretty stable; if you keep looking at it, it's hard to tell whether the animal is moving or not.

But if you look at the behavior of the eye, it changes drastically. And yet if you look at vision neuroscience, most of the data that has been collected is like the first five seconds: the way it has been collected is completely dissociated from all the other natural behavior the animal will do in a naturalistic setting.

So now this is the overall setting. What we have is an arena, and the animal freely moves around. There are some dots in there; they have a different purpose, and I'll probably touch on that towards the end. And we have two cameras: a world camera that shows what is in front of the animal, and an eye camera that shows us the position of the eye.

And so the first question we wanted to address is whether it is even usable. So as a first pass we quantified how much the animal moves wearing the system versus not wearing it. And we were able to see that it doesn't affect anything significantly: the animal moves at more or less the same speed, for more or less the same amount of time, across the session.

And all of these sessions are about 30 minutes long, and some of them are like one hour long. We don’t keep the animal out of their cage for more than an hour. And this is the occupancy map. It kind of changes. This is just to show that if they wear it they go all the way around the arena instead of sitting at one place.

The other big problem is that you have designed a system which can record the eyes, but can you do eye tracking? In order to do eye tracking you should be able to actually find where the pupil is, and really find the center of it, and from there you figure out where the animal is trying to look.

This is actually hard in the system that we are doing, because our illumination is not actually fully controlled. When the animal is sitting in a chair you can control the illumination, the angle of it, and then you can just apply a simple threshold, you can find the darkest spot, which will probably be the pupil, and then you can find the center of it. But in this system, when the image is something like what you see in A, it is really hard to do that.

But one thing that you can do is you can actually train a neural network to actually find the pupil for you and do the heavy lifting for you. So what we are using is a segmentation neural network. So most of the neural networks that we talk about are mostly classifiers, they will tell you this is a cat, this is a dog, and they will also tell you with what confidence they think it is a cat or a dog.

But this is a segmentation neural network, where instead of classifying a whole image, it classifies each and every pixel, and it tells you with some confidence whether it thinks that pixel is part of whatever it has been trained on. So is this pixel part of a dog, or is this pixel part of a cat? In that way it actually generates an image back for you: wherever it's white, the feature exists, and wherever it is black, the feature does not exist. And that is our approach to eye tracking.

Well, it is easy to train. It takes only ten sessions, and I can do it on my laptop, so that was good to know. And this is an example: if I use the old-school thresholding approach I fail, with a lot of artifacts, while if I take the U-Net approach I get a much better result, even with a very poor quality image.
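
As a toy stand-in for that segmentation approach, here is a tiny PyTorch encoder-decoder (not the actual U-Net used) that outputs a per-pixel pupil probability, plus a centroid step that turns the mask into a pupil center; the shapes and layer sizes are arbitrary assumptions.

    import torch
    import torch.nn as nn

    class TinyPupilSegmenter(nn.Module):
        """Toy encoder-decoder standing in for the U-Net-style segmentation net.

        Input: grayscale eye image (B, 1, H, W); output: per-pixel pupil probability.
        """
        def __init__(self):
            super().__init__()
            self.encode = nn.Sequential(
                nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decode = nn.Sequential(
                nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(8, 1, 2, stride=2),
            )

        def forward(self, x):
            return torch.sigmoid(self.decode(self.encode(x)))

    def pupil_center(prob_map, threshold=0.5):
        """Centroid (row, col) of the thresholded pupil mask."""
        mask = (prob_map > threshold).float()
        ys, xs = torch.meshgrid(
            torch.arange(mask.shape[-2]), torch.arange(mask.shape[-1]), indexing="ij"
        )
        total = mask.sum().clamp(min=1.0)
        return (ys * mask).sum() / total, (xs * mask).sum() / total

    frame = torch.rand(1, 1, 64, 64)          # illustrative eye-camera frame
    prob = TinyPupilSegmenter()(frame)[0, 0]  # per-pixel pupil probability
    print(pupil_center(prob))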

And this is a comparison between state-of-the-art eye tracking, the Edington system, and our system, which we call Cerebro. They are comparable, at least in the chair; there is no way I can compare them when the animal is freely moving, because the other system is very heavy. But in the chair they seem very comparable to each other.

This is our strategy to find where the animal is looking. You are recording from the eye, but you don't really know where the animal is looking, so you have to calibrate what a given position of the eye means in degrees from the center. The way we do it is we show them marmoset faces on a screen, and they make saccades to them.

And recently we have been doing something called a smooth pursuit, where you show a moving target and very naturally they are inclined towards looking at a moving target, and you can get all the calibration in like two seconds.
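
Once the target positions are known, whether from faces or a smooth-pursuit target, the calibration itself can be as simple as a linear map from pupil position in pixels to gaze angle in degrees; a hedged sketch with made-up numbers follows.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)

    # Illustrative calibration: known target angles (degrees from straight ahead)
    # and the pupil centers recorded while the animal tracks those targets.
    target_deg = np.column_stack([np.linspace(-15, 15, 60), np.linspace(-10, 10, 60)])
    pupil_px = target_deg @ np.array([[3.1, 0.2], [0.1, 2.8]]) + rng.normal(0, 0.5, (60, 2))

    calib = LinearRegression().fit(pupil_px, target_deg)   # pixels -> degrees
    gaze_deg = calib.predict(pupil_px)                     # apply to new eye frames
    print(np.abs(gaze_deg - target_deg).mean())            # mean calibration error (deg)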

So now in the next figure this is all put together, what we talked about. Let me just go over all of it, and then I will play the video for you. This is our monkey, Suki. She is in an arena. This is the eye camera. This is the world camera. This is the result from the neural network which we are getting. This is where we are using Optidrive to track her head and see how far she is from the target she is looking at, so basically a cross section of that. This is the heading, which is which way she is oriented in the arena. And this is the position of the animal over time.

And this is the neural activity from the primary visual area, V1. And one thing I want people to note here is that there is a behavior which is not rat-like at all. If a rat or a mouse is left in the arena, the way they sample it is to quickly run around to make sense of where they are in the world.

But a primate can do it with no problem. What they do is sit in one spot, as you might have noticed: Suki doesn't really move. She finds a sweet spot, and within ten seconds she has scanned the whole arena and has a complete map of the area.

So that is like something very primate specific, and that is something if you are studying vision, primate vision will act very differently than what you will expect from an animal that is not designed for that, or has not evolved for that.

So what we were able to find is that if you look at the video, the world camera looks crazy. The speed at which the animal moves, it seems like the animal should not be able to see anything; everything is changing so fast. And that is true even if you put a GoPro on your head and go biking: if you play back that video it will look insane to you, very shaky, very difficult to see. But in your actual experience as you move through the space, everything seems stable.

The way biology has solved this problem is that you can actually subtract the head and the eye movements. What happens is that when your head moves in a certain direction, your eyes compensate for that, so you are able to maintain your gaze at a position. That is what we see: the head moves and the eye moves in these bursts, in the negative direction of where the head is heading. And if you sum them up you get the gaze, and that gaze is actually just flat. So the animal's view is very stable, even though the animal's head is moving a lot and the eye is moving a lot.
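
A minimal numerical sketch of that summation, with synthetic head and eye traces standing in for the real recordings:

    import numpy as np

    # Synthetic traces: during a head turn the eye counter-rotates, so
    # gaze = head + eye stays roughly flat even though both components move a lot.
    t = np.linspace(0, 2, 400)
    head = 20 * np.sin(2 * np.pi * 1.0 * t)                       # head orientation (deg)
    eye = -20 * np.sin(2 * np.pi * 1.0 * t) \
          + np.random.default_rng(6).normal(0, 0.3, t.size)       # compensatory eye position (deg)
    gaze = head + eye                                             # summing recovers a stable gaze

    print(f"head range {np.ptp(head):.1f} deg, eye range {np.ptp(eye):.1f} deg, "
          f"gaze range {np.ptp(gaze):.1f} deg")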

And the last thing I would like to talk about is that after developing this technology, what we are able to do is fundamental neuroscience with it. So right now what we have is the neural activity of the animal when the animal is head restrained and head free in the same session. And these are the kinds of tuning curves we talked about earlier: we can listen to certain neurons which have a preference for a certain orientation of the grating or a certain spatial distribution of the grating.

And you can find that, which is very fundamental vision neuroscience, using our system. You can also get what is called a receptive field, where there is a neuron which particularly likes it if something is at five degrees. And then there is a stack of neurons: one likes five degrees, another likes 5.5 degrees, and so on and so forth. So you can actually get (inaudible) from this. That's about it, actually. I will thank the Miller Lab, especially Jingwen Li who helped me run the experiments, and the BRAIN Initiative funding which made all of this possible. So the next speaker will be Dr. Bashima Islam.

BASHIMA ISLAM: Hello everyone. My name is Bashima from Worcester Polytechnic Institute, and today I am going to talk about my work on understanding behavioral development using multimodal ubiquitous systems.

So let’s start with what do we mean by behavioral development. So behavioral development is a behavior analytic approach that allows us to look at behavior changes of both basic skills and complex skills across the lifespan of a human.

So when we talk about basic skills, that includes the motor, language, and cognitive development of babies as they grow up. The complex behavior involves concentration, sensory clarity, and equanimity; it is often referred to in one word as mindfulness.

Now, the major question we want to ask is how we can monitor cognitive development, or mindfulness skill development, in a human. One solution is to look at EEG signals and review brain activity. However, EEG is not very practical: it is not easy to access in daily life, because we can only do timed recordings, it is often expensive, and it constrains the user.

So we want to ask the question: can we use the wearables that we use in our daily life, or wearables that are easier to use, to do passive longitudinal monitoring in the wild that does not affect the way the person is behaving?

And we found that the answer is yes, but only the wearable itself is not sufficient, because to understand the complex behavior or the basic skill development we need to have multiple of these wearables on the person, which is often constraining, and they also need to be time synchronized with each other, and they also can have faulty sensors.

So we try to understand if artificial intelligence or deep learning and these techniques can help us in getting this mindfulness or this cognitive development or other development information from the wearable devices.

In collaboration, my team is looking into two specific topics in behavioral development. The first is child developmental delay monitoring, and the second is mindfulness monitoring.

For child developmental delay monitoring, my collaborators at UIUC and I have developed a platform called Little Beats. It is a one-ounce, lightweight platform that has a motion sensor, an ECG sensor, and a microphone, or audio sensor, to record. This small device sits in a pocket on the chest of the baby's shirt, and so far we have collected data from 210 children between two and 24 months old.

And we have collected two types of sessions. The first type includes a 30-minute virtual visit where we have video ground truth data of what is happening, what the baby is doing.

Then we have a two-day-long home visit, where we do not have video, to preserve the privacy of the baby and everyone in the home. But that data is significant because it also allows the parents, the caregivers, and the siblings not to behave differently while the data are being collected.

Our goal is to use this multimodal data to understand the actions the babies take, their interactions with the parents, siblings, and caregivers, and the environment around them. Due to the shortage of time, let me quickly talk about the actions. When we say actions we mean multiple things. We mean position and body movement, which gives us the motor development of the baby. We include vocalization, which gives us the language development. And we look at sleep or consciousness state, which gives us the cognitive development.

So when we were working on position estimation, the first challenge we found was that we have a single sensor placement: one device, with multiple sensors, on the chest of the baby. That means we lose a lot of spatial information which we would have if we had sensors attached to different limbs, for example the baby's hands and feet. We lose that spatial information, and we can see that this is one reason our kappa declines.

The second problem we found, and it was also discussed yesterday, is that movement by children and adults is not the same. If you look at a baby, their limbs are much shorter, so there is much more noise propagating through them.

Related to that, actions taken by babies are not as smooth as adults'; they are more jittery and have more noise in them. You can see the effect: when we use models that were developed for adult position estimation, like LIMU-BERT, they do not perform well when we feed in babies' data and look at the performance. However, we observe that we can get to a kappa of 0.69 for babies if we can learn from the two-day-long unlabeled data, where we do not know what the baby is doing at that time.

So I am showing an overview of a proposed model where we take the accelerometer and gyroscope data, so it's six-degree-of-freedom data, and pass it to an unsupervised machine learning model whose idea is to understand the correlations and the complex dynamics that are happening.

And from that, with only those small 30-minute virtual visits where we have the ground truth data, we can extract useful information like the position of the baby. We can take similar approaches with electrocardiogram, or ECG, data to understand the restfulness and exertion state of the baby, which together can tell us important information, like whether the baby is in tummy time.
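
A compressed sketch of that two-stage idea, self-supervised pretraining on the unlabeled two-day IMU data followed by a small supervised head fit on the labeled virtual-visit windows, might look like the following; the GRU encoder, the masking scheme, and all shapes are illustrative stand-ins, not the LIMU-BERT-style model actually used.

    import torch
    import torch.nn as nn

    WIN, CH, N_UNLAB, N_LAB, N_CLASSES = 120, 6, 4096, 256, 4

    encoder = nn.GRU(input_size=CH, hidden_size=32, batch_first=True)
    reconstruct = nn.Linear(32, CH)
    head = nn.Linear(32, N_CLASSES)

    unlabeled = torch.randn(N_UNLAB, WIN, CH)        # unlabeled home-visit IMU windows
    labeled = torch.randn(N_LAB, WIN, CH)            # labeled virtual-visit IMU windows
    labels = torch.randint(0, N_CLASSES, (N_LAB,))   # e.g., infant position classes

    # Stage 1: mask a fraction of timesteps and train encoder + reconstruct to fill them in.
    opt = torch.optim.Adam(list(encoder.parameters()) + list(reconstruct.parameters()), lr=1e-3)
    for _ in range(5):                               # a few demo epochs
        x = unlabeled[torch.randint(0, N_UNLAB, (64,))]
        mask = (torch.rand(64, WIN, 1) < 0.15).float()
        hidden, _ = encoder(x * (1 - mask))          # encode the masked window
        loss = ((reconstruct(hidden) - x) ** 2 * mask).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: freeze the encoder, fit the small classification head on labeled data.
    with torch.no_grad():
        feats, _ = encoder(labeled)
        feats = feats[:, -1]                         # last hidden state as the window embedding
    opt2 = torch.optim.Adam(head.parameters(), lr=1e-2)
    for _ in range(50):
        loss = nn.functional.cross_entropy(head(feats), labels)
        opt2.zero_grad(); loss.backward(); opt2.step()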

Then when we want to look at the sleep of the infant and understand whether the baby is sleeping or not, we see the same trend. Because it is only on the chest, the IMU does not give us all the sleep states: if the baby is moving their hands or feet but not their whole body, it can be presumed that the baby is probably sleeping, but they might not be.

Similarly, with the audio data, if it is calm and quiet, that does not mean the baby is sleeping; there can be no movement and yet the baby can be awake. But we found that by using the three modalities, the audio, the position, and the IMU, we can increase the accuracy of identifying whether the baby is sleeping or not by seven to 10 percent.

So along with this work we are also looking into understanding an environment like the household chaos which affects the development of the babies.

Now let me quickly switch gears and talk a little bit about what we do with mindfulness monitoring. For mindfulness monitoring we are focused on mindfulness meditation, which cultivates an open and receptive orientation to the present moment. And research shows that mindfulness training couples resting-state activity with the dorsolateral prefrontal cortex.

But it is often not possible to look at people's cortices to understand whether they are in a mindful state or not. And that is an important question to ask, because we have found that many meditators, especially new ones, do not know whether what they are doing is right or wrong, and so they do not continue the practice.

So our hope comes from the literature, which shows that even without continuous regulation of breathing, mindfulness meditation changes the respiratory signal in a unique way. So we can ask this question: can we monitor our breathing and have respiratory signals estimate our mindfulness growth?

And to do that we want to use the everyday objects the user already has, for example the smartphone, or the AirPods that we often forget are in our ears. The reason is that we don't want to constrain the user by asking them to wear a chest band or a vest, which are pretty expensive; a Hexoskin costs $500. So in general the question we're asking is, can we hear our breathing information?

And to answer that we did a study with 131 subjects, both healthy individuals and respiratory patients. The first thing we observed is that it is not possible to get this complex information with signal processing alone. So we moved towards deep learning, which can extract more complex features. But most deep learning algorithms need annotation.

So our goal here is to look at the inhalation and exhalation phases. And while annotating we found that the borders of these phases, where inhalation ends and exhalation starts, or exhalation ends and inhalation starts, are inaudible to human ears. So it is very hard for annotators to annotate those phases. They can say how many breaths there were, but not where a phase ends and where a phase starts.

This can be done with some of the IMU data, but you have to hold the IMU on your chest or somewhere, and also it is a lot of data to annotate.

So even though machine learning and signal processing cannot work alone very well in this scenario, they can work very well together. What we want to do is use signal processing with the IMU data only during training, transfer that knowledge to a deep learning model for the audio data, and rely only on the audio data for testing in real life.

A teacher-student model is traditionally trained with two deep learning models. So our main contribution in designing this was to have a teacher model that is a signal-processing-based model that does not require annotation, and to use it to lead and guide the deep learning model for audio to understand the inhalation and exhalation phases. And we found that with this we can identify inhalation and exhalation phases with more than 77 percent accuracy.
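
A hedged sketch of that teacher-student setup: a signal-processing "teacher" turns chest-IMU motion into inhalation/exhalation pseudo-labels with no human annotation, and a small audio "student" network is trained against those labels so that only audio is needed at test time. The filter settings, synthetic data, and network below are assumptions for illustration, not the actual system.

    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.signal import butter, filtfilt

    fs = 50                                            # assumed IMU sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)
    imu_z = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.default_rng(7).standard_normal(t.size)

    # Teacher: band-pass around typical breathing rates, then label phases by slope sign.
    b, a = butter(2, [0.1, 0.6], btype="band", fs=fs)
    resp = filtfilt(b, a, imu_z)
    pseudo = (np.diff(resp, prepend=resp[0]) > 0).astype(np.int64)   # 1 = inhalation

    # Student: tiny 1-D conv net over synchronized audio frames (illustrative audio).
    audio = torch.randn(t.size, 1, 64)                 # one short audio frame per IMU sample
    student = nn.Sequential(nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(),
                            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 2))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    targets = torch.from_numpy(pseudo)
    for _ in range(20):                                # train the student on pseudo-labels
        loss = nn.functional.cross_entropy(student(audio), targets)
        opt.zero_grad(); loss.backward(); opt.step()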

This finally lets us ask our question: how much mindfulness skill information can we get out of respiration signals? And what we observe is that we can identify whether mindfulness is happening or not with a 67 to 82 percent F1 score using the respiration signals. In the future we want to understand how much the skill has grown, but we are not there yet.

So before I end my talk I very briefly want to talk about the other thread of work I do, which focuses on unsupervised deep learning on batteryless platforms. So the idea is we want to understand from the semantic diversity of the data and the context how much processing do we need, so that we can do much more complex computation in tiny devices.

And this will allow us to move toward my overall goal, which is to develop AI models for behavior monitoring on these tiny batteryless devices, which would in turn allow us to provide a more cognitive and more holistic solution.

And before I end I want to thank my collaborators for supporting me, and these young researchers who are working very hard to make this happen. And thank you so much for the time.

Discussants

HOLLY MOORE: We will wait for our speakers to come back up, and we have Maryam Shanechi joining us online, as well as Laura. To get us started, I'll ask our three discussants, Maryam Shanechi, Silvia Lopez-Guzman, and Laura, to give their reactions and to ask our speakers a question that came to mind while they were giving their talks. So why don't we start with Maryam?

MARYAM SHANECHI: Hi everyone. I have a question first for Nico. So as I was listening to your talk, I agree with you, it is really important for us to go towards more naturalistic behaviors and monitoring. But as you know, when we go toward these naturalistic tasks, accounting for internal states and their changes over time could be quite important, context changes, and how those may change the neural representations of say movements or reach and grasp. How do you think that we should account for internal states in these kinds of scenarios? How do we monitor them? And then how do we take them into account in our modeling? What are your thoughts on that?

NICHOLAS HATSOPOULOS: That is a good question. More sensors, I suppose, is the short answer. But one of the things that we stress is that we are looking at various behavioral contexts, natural contexts, and we feel that by freeing up the animal, at least we don't have to worry about adding stress to the animal, which I think affects the responses we're getting.

So traditional approaches with head-fixed, restrained macaques, very often even fluid restricted, are not natural conditions, and they induce a lot of stress on the animal. So we're learning something about a stressed animal, whereas in our case the animals are completely free to do whatever they want to do.

By the way, we were talking last night about informed consent. Now, of course these animals don't give informed consent in terms of what we put in their brains. But they choose whenever they want to go up to the apparatus and do whatever they want to do. It is totally up to them. They're completely free; we don't constrain them in any way. So at least we don't have to worry about the issue of stress affecting the responses.

HOLLY MOORE: Thank you.

AVNIEL GHUMAN: For me the thing that I think about is I gave you our two different perspectives, a very long timescale and a very short timescale. And ultimately I want to bridge between the two, and I think that will start to get at some of these answers, is how do these things that we’re learning at these very long timescales that are changing over minutes to hours influence what’s going on on a millisecond to millisecond basis.

We're not there yet; it's early days. But that is where I would like to see some of the work I described going: as we start doing richer and richer things at these different timescales, also thinking about how they come together. And I think that ultimately gives you a kind of neurocognitive context for what's happening at the millisecond level.

MARYAM SHANECHI: Maybe unsupervised machine learning methods that try to extract latent states that correspond to internal states, as well as latent states that relate to behavior at the same time, could help address some of these problems, even in the absence of explicit measurements. Because, as we know, internal states are very difficult to measure anyway, so maybe that could also be helpful.

HOLLY MOORE: We will go to Sylvia to ask a question or to get a reaction.

SYLVIA LOPEZ-GUZMAN: My next question is to Dr. Islam, but I think it could also apply to the other speakers. So, to the degree that the models that we develop are constrained by the data that we collect, one question that I have is external validity. So, in the data that you presented, Dr. Islam, you showed that you have pretty good accuracy at utilizing your sensor-based data to predict whether the baby is asleep or lying down, or there is chaos in the house.

But the question then is to what degree these become predictors of the things that we care about clinically, things like developmental delays. The same applies to some degree to the questions that were just answered about natural versus naturalistic environments. I think we are moving in the right direction, both in animal research and in human research, trying to capture more real-world behaviors, but we're also very constrained by the artificialness of collecting these data. So, can you comment on that?

BASHIMA ISLAM: Thank you so much for the question. That is a great question. I think machine learning models only learn from the data they have been fed; it is very hard for these models to learn something they have never seen. There is work in this domain on few-shot learning and zero-shot learning, where you get some data, or almost no data, but use other contextual information to learn about what you have not seen before.

And what I have found in this type of work is that we need to look at cases where we need naturalistic data; it is more important to get data that are not from inside-the-lab scenarios but from outside, at home. For example, even though we have a lot of unlabeled data, the main reason for getting such unlabeled data is that it is collected at home: we send the device, we courier it to the parents, they hop on a Zoom call with us to set it up, and then we do not see what happens next.

So that allows us to collect a lot of data which are unlabeled, because if we say, oh, we are going to video everything you are doing for two days, or say 16 hours, there are two things that can happen. One, they won't agree to do that. Second, they might agree, but behave in a very staged way for those 16 hours. So by losing the ground truth we are making our lives harder, but it does allow us access to data that does not have that much of an effect of being monitored, or an artificial effect.

And I think from the machine learning perspective, we should look more into personalization and into few-shot and zero-shot learning approaches, where we can train or lead the machine learning models to understand things they have not seen before by giving different contexts. For example, there has been work on audio-based zero-shot learning which looks into the textual domain to understand sounds it has not heard before. And I think that is a direction we could move in for this behavioral development domain.

HOLLY MOORE: Laura, do you want to jump in?

LAURA CABRERA: My question is particularly for Dr. Ghuman, in terms of human behavior, i.e., for epilepsy. For most of the conference we've been hearing about sensors that are more external. When you start to get more, I'll say, invasive, when you start to get physically invasive, you get a different type of data, and we want that data as well. But I'm wondering what kinds of ethical challenges you see in bringing these two worlds together: data from more invasive types of technologies with data that we are able to record from more day-to-day types of sensors.

AVNIEL GHUMAN: I am going to take a little bit of a twist on the question, if you don't mind. One thing that I think about a lot when we're talking about this natural data is that we also were talking about standards and data sharing, and we live in a world of data sharing. And so from our perspective, the invasiveness obviously gives us a really nice resolution into the brain, but in terms of generalizability, one of the things I have also been wondering is to what extent we can do similar things with scalp EEG, which would eventually start to connect this with some of these other things.

But the thing I also think a lot about is how do we share these data, given that we have videos, we have all this very personally identifiable and personal information. And I know that is ubiquitous across a lot of kind of the human work that we’re talking about, particularly in the homes and things like that.

And so I have thought a little about kind of, I am involved in a consortium on what is known as federated learning, where you essentially train your algorithms on data that someone else holds, and you never get the data, but you get the trained model back.

And so from that perspective there may be ways to think about data sharing that are relevant not just for the kinds of things that I do, but for all the different kinds of natural behavioral data that we've been talking about over the last couple of days, and that may allow us to protect confidentiality against those kinds of ethical issues. And so I think we need to start thinking about new infrastructures for how we're going to share these data.
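
For the federated learning idea mentioned above, a minimal FedAvg-style sketch looks like this: each site trains on its own private data and only model weights travel to a central aggregator. The toy model, data, and number of rounds are purely illustrative.

    import copy
    import torch
    import torch.nn as nn

    def local_update(model, data, labels, steps=5, lr=0.1):
        """One site's training pass on data that never leaves the site."""
        local = copy.deepcopy(model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(steps):
            loss = nn.functional.cross_entropy(local(data), labels)
            opt.zero_grad(); loss.backward(); opt.step()
        return local.state_dict()                 # only weights are shared

    global_model = nn.Linear(16, 2)               # placeholder model
    sites = [(torch.randn(200, 16), torch.randint(0, 2, (200,))) for _ in range(3)]

    for round_ in range(10):                      # communication rounds
        updates = [local_update(global_model, x, y) for x, y in sites]
        averaged = {k: torch.stack([u[k] for u in updates]).mean(0) for k in updates[0]}
        global_model.load_state_dict(averaged)    # central server only aggregates weights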

And then I also work with a bioethicist where we also worry about ongoing consent, so we have this concept of ongoing consent, where we consent people not only at the beginning but at the end of every session, so that they know what was just recorded, they get a chance to vote up or down whether we get to use those data. So I’m very concerned about the bioethical concerns. I think I got to some aspect of how it connects to the other things.

NICHOLAS HATSOPOULOS: I guess you were concerned with the invasiveness of these technologies, was that some of your concern? So with regard to that I have very strong opinions about this. Unfortunately, as it stands right now the technology is not there to do the kinds of work we really want to do to understand the brain at the right level to do it noninvasively.

And particularly in the context of the work that we do with brain-computer interfaces, brain-machine interfaces, there are just no noninvasive systems out there that give you the kind of resolution that you need, not so much to provide treatment, but to allow people to really interact with the world in a useful way. It just isn't there yet. Once it's there I'll be happy to use it, but it's just not there.

HOLLY MOORE: Thanks. We may come back to that in the closing discussion. Before we move on, I want to take a moment, because I did forget to introduce Laura at the very beginning. This is Laura Cabrera, she is an ethicist at Penn State, and I wanted to make sure that you were introduced and you contributed so significantly. Why don’t we start with questions that are coming online, at least one, and then we’ll go back and forth.

STAFF: So there is a question from the audience that echoes a prior question that was raised. This is directed to all the panelists. How do we possibly expand this current technology or technologies to estimate internal states such as emotions in a realistic setting, for example as in monitoring depressive symptoms in patients with depression?

HOLLY MOORE: I am going to also relate that to a previous question, how do we take all this data that we’re collecting and link it mechanistically to an internal state or an internal process that we’re interested in. And maybe Vikram, why don’t you start?

VIKRAM SINGH: So my straight answer to that would be it is complicated, because you have multidimensional data. So it is multidimensional. And that is why we keep going back to neural networks, because they are so good at abstracting. And once you start abstracting and you start finding these clusters, and then you know that this cluster usually coaligns with this emotional state versus the other emotional state. So there is some way you can get very close to the state that you want to study.

Let's say you want to figure out whether the animal is depressed or the human is depressed, or something like that, and some activity is recorded and it clusters there. But the problem is that this is something you have already annotated as depression, or something like that.

But if you find a cluster that is in between, say between depressed and not depressed, there is no efficient way for you to say whether this person is half depressed, or half something else. It can actually mean a completely different thing; it can be a very different dimension.

So what I would say is if you have been measuring behavior and you can validate it with behavior, yes, you have a way for sure. Otherwise, if you are using just the output of a neural network to interpret what is going on, then I would say that’s a slippery slope.

SYLVIA LOPEZ-GUZMAN: I think this touches on a very interesting topic, which applies to all of the presentations today, which is the issue of interpretability versus prediction. Training our models to predict whatever we're interested in, whether someone is looking at this particular thing or interacting with the phone, is very different from understanding a mechanism.

And in interpreting then how those computational models are behaving, and whether that actually tells us anything about the brain. So can you comment a little bit on that tradeoff between predictability and understanding?

HOLLY MOORE: Why don’t we take another question? Jonathan, why don’t you go?

JONATHAN: Thanks to all the speakers. I just had a general question about what happens during learning. This sort of gets to Avniel's question about integrating the short-term and long-term timeframes. Itzhak Fried has some nice studies showing that recognition of faces, face cells in the hippocampus, develops for the doctors and the nurses in the neurosurgery suite over time as the patient is being recorded.

So it would be interesting to know, from a behavioral standpoint, and also from a neural standpoint, what the representation is. And the same could be true for Nico, looking at what happens when you train an animal on a task, and then see freely moving how that changes the way not only the behavior unfolds but also the neural representation in terms of the trajectory.

And also it is true for breathing. Like whether you develop better breathing techniques, how does that evolve over time, and how does that manifest in terms of this connection between the neural and the behavioral. So I wonder if you guys could comment on the opportunities here for studying over a time course these interactions in a learning setting.

NICHOLAS HATSOPOULOS: I can speak to that. We are actually beginning to do work on learning, on motor skill learning. These are experiments that are slightly less natural. So instead of prey capture we are actually training these marmosets to interact with an iPad, to grab little beetles that are moving around in space. So they are learning a skill. It is comparable to a serial reaction time task.

And we are looking at that over extended periods of time, over a week maybe or two weeks. And also looking at the role of sleep in consolidating these memories. Because we can record wirelessly, we can record it in sleep as well as during the day.

I guess one of the challenges we have is if we’re going to stick to the single unit level, is it important for us to track the same neurons over those one or two weeks. And that’s a bit of a challenge. We know we can’t track every single neuron, the exact same neuron over a two week period, not all of them, some of them we can, but that’s a challenge.

BASHIMA ISLAM: I also want to add that when we're developing skills like mindfulness meditation, over sessions the skill set of the person evolves. So the behavior of a person who has done their first meditation session will have evolved by the time they're doing the tenth session, because now they know more about what they want to do. Do I do this or do that, should I breathe in or breathe out, what should I do?

So over time the training develops, and the signature changes. That is one of the other things we are trying to look into: how does the correlation, or the change, differ if we cluster people into two different groups, where one group is expert meditators who have done it before for at least five to six months, and the other is people who have not meditated more than twice in their life. What is the relation between them, and how much does it change the (inaudible) nature?

HOLLY MOORE: We have to stop there. I am going to ask us all to thank our speakers and discussants for session five. So we have a five minute break, and then we’ll move to our closing discussion.

Closing Panel Discussion: Synthesis and Future Directions

HOLLY LISANBY: So, as you all are happily discussing, why don’t we wander back to our seats except those who should be coming up here for the closing discussion.  So we have Jeff, we have Kirk Brown on stage, we have Jeffrey Cohn, we have Ayse and Katherine virtually, and we would like to have Laura come back up. I think that covers everyone. That was just a schoolteacher’s call to get everyone on stage, that was not their formal introduction.

I am here though to formally introduce Lizzy Ankudowich, my colleague in the BRAIN Initiative whose home institute is NIMH. Lizzy basically wears many hats. She is the Program Chief and the Director of the Multimodal Neurotherapeutics Program at the National Institute of Mental Health.

And the overarching goal of the program she runs is to advance neurotherapeutics for major mental disorders and clinical dimensions of psychopathology through a non-siloed, personalized medicine approach, with an emphasis on the synergistic effects of combined and multimodal approaches, approaches that actually monitor behavior and track neural activity.

Consistent with that, she is an integral part of the Brain Behavior Quantification and Synchronization team, and part of Brain Research through Advancing Innovative Neurotechnologies, that's us, that's the BRAIN Initiative, sorry. She is part of BRAIN, and she is part of BBQS. And again, you will learn more about her as she runs this last closing session, where we hope to give you a synthesis of what went on and some overarching, high-level reactions.

LIZZY ANKUDOWICH: Thanks Holly. And thanks to everyone here for joining us for our final moderated session. The goal of this session, as Holly kind of alluded to, is to provide an opportunity for discussants to integrate and synthesize the wonderful information that we have learned about from biophysiological sensors to environmental sensors in the past couple of days, and to think about how to apply these sensors across novel contexts, and also to consider future directions for ways in which we can advance research using sensors to help us to understand how the brain gives rise to behavior.

So just so I am clear with the discussants, we are going to start off with a roundtable-type discussion, with both online and in-person panelists. We have already rallied the group here, but I just want to introduce Dr. Kirk Brown from Carnegie Mellon and Dr. Jeffrey Cohn, in person, from the University of Pittsburgh; online we have Dr. Ayse Gunduz from the University of Florida and Dr. Katherine Scangos from the University of California San Francisco. We will also ask the co-chairs, John and JC, to join us on stage for the discussion.

And I think if we can I would like to start off with some observations from our online discussants. Katherine, would you like to start us off with a couple of questions or comments about kind of what you’ve been thinking about basically for the past couple days as we’ve seen these wonderful talks.

KATHERINE SCANGOS: First of all, it is great to be here. This has been a fantastic two days. I think the presentations have been just tremendous, and very insightful into where we are looking currently and where we can go together as a community.

So over the past two days I have seen what I would say are five main advances, but also challenges where additional research is needed.

And the first of those is in biosensor development. And we talked a lot about this yesterday, but also some today. And so this is making devices smaller so they can fit into cells, having adequate power, so low power devices, devices that harvest energy from actual human bodies, and alternatives to Bluetooth, so that we can be battery free.

And then together with that are advances in flexible electronics, so we can have this intimate integration with the human body. We talked yesterday about the silicone sensors, the bendable electronics, the skin-interfaced wearable sensors, and the ability to have some of these types of devices already in the ICU, monitoring babies, or helping to develop skin for robots through body-networked, wireless, flexible body sensors. And then having sensors that can access deeper into the body noninvasively; this came up in the discussion today, but also yesterday, in some of the discussions of the electronic skin and being able to measure cortisol, serotonin, and drug levels through sweat, and actually even generate the sweat, as well as other newer technologies such as bioimpedance or radar.

So that is one of the areas that I saw as a major need for development, but just such exciting advances. The second is really how to manage these dynamic environments. One of the ways that we have discussed in the last two days was sensors capable of learning.

So computational capacity even in the sensors themselves, so we can have in-skin learning we talked about yesterday, and forgetting, and how important the use of AI is within the sensor itself. And then how do we find meaningful signals when there is no ground truth to measure against.

And one of the challenges there is the subjective scales that we administer to people. Some behaviors can be tracked visually with movement, and we have a number of examples, both yesterday and today, of how we can understand changes in movement and compare signals across those different states, and how we can identify motor delays in children early.

And then other signals that can be identified through perturbation. For example, John Rogers talked about the pain of the blood draw in the kids. Or we can provoke anxiety and measure cortisol. But others are more challenging because there is no capability for modulation, or the modulation is very slow. And this is something that we talked about yesterday as well as today with the questions that came up.

So we have subject variability, noisy data, the data is complex and captured on different timescales, it is challenging to integrate these different signals, and so there is really a need for further approaches, multimodal approaches to help achieve those goals.

Along with that is the need for real-world measurements, but how challenging that can be, and this came up in the discussion just before this, and how important context is, and modeling context, and the challenges and importance of really getting these real world measurements. And we had a number of really terrific examples, the context aware and personalized AI, the microwave signals, portable radar sensors, and then the longitudinal intracranial EEG recordings that can track signals for days.

And then from that I think there are two main questions, this is the third and the fourth main points, how do we get these technologies into the clinic? And then the fourth, how do we understand the brain better from all of these signals and data that we are collecting. And we had tremendous examples of how we are already using these data in the clinic to measure serotonin concentrations through ingestible wireless devices that are in the GI tract, monitors in the NICU, detecting developmental delay. And then putting these measures into the clinical trials so that we can advance drug development. And we heard a talk yesterday from Biosensics about how we already are able to use these FDA registered medical devices, but that further work is needed to really advance that.

Then today we heard terrific talks on how we can learn about brain and body function. And so Nico talked about the neural basis of motor control and learning, and really the ability to look at network based encoding rather than single neuron encoding, and really how tremendous those types of models could be.

We heard more about visual perception, both in humans and in animals, by allowing real world, freely moving animal measurements. And we certainly need more work in those areas because, as was shown today, what we are learning is so different from fixed and controlled settings.

And then the last real point that has been discussed multiple times today is ethics, consent, data sharing, the need for the ability to (inaudible) the data. The question of how much data do we need, both at the population and the individual level, and data archiving and standardization, and the efforts by Neurodata Without Borders and by standards associations to achieve that.

LIZZY ANKUDOWICH: Katherine, I agree with you. There are so many challenges and also opportunities for translation in this space. I just wondered, Ayse, if you have any additional thoughts or contributions.

AYSE GUNDUZ: Thanks for the great summary. Looking at the Q&A, the program is called Brain Behavior Quantification and Synchronization, but I think with the sensors we have really focused on the brain quantification. And most of the questions that came about were how do we quantify internal states, and the term ground truth came up quite a bit.

And then we do know that the clinical scores are done on a very different time scale than the brain function. And we know that they are also very subjective. So I think one of the challenges we have is how frequently we can sample these, and whether we could have a bit more objective measures to capture the patient’s states.

And I think actually the question in the chat is very on point. So can we actually quantify behavior from physiological function? With all the sensor suites we've heard about yesterday and today we basically can quantify physiological functions, but how each of these functions relates to, again, the internal states, especially in the context of mental health, is going to be really important.

And obviously the clinical trials are very important for clinical safety and translation into humans, but I think maybe this translation could be faster. In order to understand how these physiological functions relate to behavior, and to explore the system, maybe while we are working on making these more scalable and miniaturized, we might also just start with quantification within the labs, and then once we have that, it would actually tell us what sensors would be more valuable, and therefore all the efforts and resources could go into those.

Again, we know that timescales matter, we know that behavior is basically our interaction with the world, and therefore the environmental sensors are important. So when it comes to the predictability and forecasting of internal states, I can just give you an example of a Tourette's patient whom we had on four different DBS groups, and one week she just had a really, really tough week at work, and basically I could forecast whether her symptoms were going to get worse from what type of day she was going to have, rather than from looking at the brain states.

Again, this is the interaction of the brain with the environment and then the effect of the environment on the brain. And I think the other take home or the most discussed topic has been the interpretability of AI.

Again, if we are going to be able to use this within the clinical domain, first of all we need to understand what part of the data allowed us to come up with our conclusions, whether we were able to correlate them, and the causality between them and the labels of the internal state. And one quote that I wrote down that I like very much is that we can't use AI as a crutch.

Even if we are able to translate these physiological functions into internal states, we may not be able to interpret all the information that we gain from this data and the complex models that we apply to the data.

LIZZY ANKUDOWICH: I really appreciate the idea you're presenting about how best to make sense of the various types of signals, and which ones are most informative in order to infer an individual's internal states. I'm wondering, Jeff, if you have any thoughts on the types of multimodal signals that could be used in the mental health space? Thank you.

JEFFREY COHN: It’s a really interesting problem, because we have multiple modalities, multiple sensors, and multiple timescales that they’re operating from. And then there is the context, which we talked a little bit about, but I think it needs a lot more. We are trying to understand the person in a context.

We are a social species. So that is other people, and we are responding to them, and we are responding in time. So I think we talked about standardizing kind of data streams, but I would like to see us think about how to integrate kind of the sensor streams in a way that we can move among them.

One of the presentations yesterday talked about drift in accelerometers. Once you lose the synchronization, the data are worth much less. And so one of the things I wonder is how we can synchronize the multiple modalities, multiple data streams over time. So that was kind of one thought.

We also, in terms of sensor systems, two that I am very partial to, one is video and the other is microphone. I was really pleased to see in the last session kind of very nice work, especially using video. One of the questions there is how do we collect video in a way that is maximally informative?

I think the first person glasses that were shown were an exciting way to do that, to accomplish both eye tracking in a free ranging animal as well as recording their conspecifics. Because I think a challenge for us is to understand what people are trying to do and what others are doing, and how they're synchronized.

HOLLY MOORE: Kirk, do you want to share some observations that you’ve had over the past few days?

KIRK BROWN: A few of the talks noted that sensors are not new, they’ve been around for decades. Clearly, there has been an exponential increase in sophistication of sensors. And I would like to believe that we are riding a very upwards slope so to speak in our ability to use sensors. I think though there are a few things that I think are important to keep in mind as we move forward, and one or two possibilities I think that are waiting for us down the road.

And to introduce this I was really inspired by one of the areas of focus that Dana Greene-Schloesser noted in her slides, describing this workshop. And there are a few points that I think were really important. One, the development and validation of sensors and tools, methods, analytic approaches, precise quantification being a second point. And integration of sensors with simultaneous recording of brain activity in humans and other organisms.

Addressing the first point, and a few talks mentioned this, the importance of reliability and validity. I think that is a really key issue that we need to keep in mind. I come to this as a behavioral scientist with a strong interest in psychometrics, and I know that reliably and validly using our instruments is really important, whether that's psychometric scales or ecological momentary assessment, et cetera. So I think that's a really important issue to be keeping in mind, and that means cross-validating with ground-truth measures, as lacking in sophistication as they may be, especially measures of internal states.

A second thing that I noted in some of the talks (inaudible) valuable from the point of view of the future is integration of sensors with not just other sensors but with other modes of obtaining data. And Satra noted this in his talk, the importance of both objective measurement, subjective measurement, with the idea of entraining those two toward intervention.

I think especially when it does come to internal states it is really important to be integrating sensor based data with what’s going on in terms of internal states. I think the potential for this work is enormous, as many of you who do this work believe.

I am particularly interested in smart phone sensing, and I believe the possibilities for intervention are going to really explode in their utility as we go forward. Just-in-time interventions for example, where algorithms that are crunching sensor data can be used to provide feedback to people and to intervene when their behavior is putting that person at risk in some way, for their physical health or their mental health.
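To make the just-in-time idea concrete, here is a minimal sketch of a threshold-triggered intervention of the kind described above. It is only an illustration: the feature names, weights, thresholds, and the notification stub are hypothetical, not any specific group's validated algorithm.

```python
# Minimal just-in-time intervention trigger sketch. All features, weights,
# and thresholds below are hypothetical illustrations.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorWindow:
    """Summary features over, e.g., one day of phone/wearable data."""
    hours_at_home: float      # from GPS clustering (hypothetical feature)
    outgoing_messages: int    # from call/text logs (hypothetical feature)
    sleep_hours: float        # from actigraphy (hypothetical feature)

def risk_score(w: SensorWindow) -> float:
    """Toy weighted score in [0, 1]; a real system would use a validated model."""
    score = 0.0
    score += 0.4 * min(w.hours_at_home / 24.0, 1.0)                 # more isolation -> higher risk
    score += 0.3 * (1.0 - min(w.outgoing_messages / 20.0, 1.0))     # less communication -> higher risk
    score += 0.3 * (1.0 - min(w.sleep_hours / 8.0, 1.0))            # less sleep -> higher risk
    return score

def maybe_intervene(windows: list[SensorWindow], threshold: float = 0.7) -> bool:
    """Send a prompt only when the recent average risk crosses the threshold."""
    recent = [risk_score(w) for w in windows[-3:]]   # last 3 days, hypothetical horizon
    if recent and mean(recent) > threshold:
        print("Send supportive prompt / suggest contacting a clinician")  # stand-in for notify()
        return True
    return False

if __name__ == "__main__":
    days = [SensorWindow(22.0, 1, 4.5), SensorWindow(23.0, 0, 5.0), SensorWindow(21.5, 2, 4.0)]
    maybe_intervene(days)
```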

The last point that I wanted to make that has come up repeatedly over the couple of days here is privacy concerns. And part of my work is in behavioral genetics, and we are coming to a place where just having information on a person’s genome can help to identify that person, regardless of whether we have the classic identifiable data from that individual.

I do wonder whether we might be moving toward a situation where sensor data, potentially combined with other kinds of data, could help to identify people in ways that they would not choose to be identified. So that’s a privacy concern that I think is important to keep in mind. And I’ll stop there.

LIZZY ANKUDOWICH: I was going to mention privacy as one of the most important concerns that has been discussed already in the meeting. And I think it goes beyond just privacy of the individual. As we're mentioning, you have sensors that are environmental sensors, and for anybody who enters that environment, we will also have to discuss privacy for those individuals.

So I think the more we move to this multi-sensor type of thinking, the more these questions that we used to explore in a very individual specific realm need to be expanded. Now these sensors are interacting not just with the person, but maybe with another caregiver, or the mother. So I think all these considerations are relevant.

I also want to add something that was mentioned yesterday in terms of informed consent, which I think has similar considerations, in that it goes beyond just the individual, and it goes beyond "I need to give informed consent if a sensor touches my skin." If a sensor is monitoring my environment and collecting information from my body, I think informed consent is also relevant in those scenarios.

And the final consideration that I have been thinking about throughout these two days is the concept of invasiveness. We usually think in terms of physical invasiveness, something that is in the body. But when you think about multi-sensors, invasiveness can also mean having different sensors across your body, which can be perceived as invasive.

So trying to get out of the box and thinking about whether participants will accept that level of invasiveness to their physiology and body, I think it is important to consider now, before we get too advanced and then people don't want to use it. Just a few thoughts for the session.

PARTICIPANT: If we take off from there, John and J-C could comment on that. I think that there are a couple of things that we have heard about a desirable property of a sensor would be to be undetectable to the user if possible. And that does speak to invasiveness a little bit, if it’s physically not detectable. If it’s not sending you signals.

Some of us wear health devices, for example to signal whether our blood sugar is low and to signal whether our heart rate has gone up, and not everybody appreciates being signaled at all the time, and it does change your behavior.

But another kind of meta invasiveness question is just the knowledge that so much data about you is being collected. Where is it going? Who is using it? Who is actually drawing conclusions about you based on that data? That is also something that I think we need to address from an ethical perspective. And I wonder as people who develop these technologies what you think about that.

HOLLY MOORE: And also just to add on to that, the misinterpretations that could be made about you.

JOHN ROGERS: Maybe I could say a couple of words. Obviously, this is a space where my group is quite active from a research standpoint. So if you think about your patient burden, I think it depends partly, or maybe largely, in terms of the value of the data and the urgency of the need and the monitoring.

And so we decided to start with a focus on premature babies. A lot of people asked us why would you start there, that is probably the most challenging patient population in terms of sensor burden. These are very tiny, fragile humans with underdeveloped skin and fragility and so on. But we decided to start there because the need is greatest for wireless monitoring. And so I think there is a little bit of a balance there. It is not an absolute determination of burden, it is really a function of the quality and the value of the data streams that are being generated.

So I think maybe for envisioned programs in this space, maybe you prioritize programs in terms of greatest patient need, and that can be a step in the right direction. I think there are all kinds of engineering features that will also dictate the absolute sort of quantitative aspects of burden in terms of skin invasiveness and irritation at the skin interface for non-invasive monitoring, or maybe even these completely contact free techniques for monitoring. But I think that that is an important consideration.

And beyond just the engineering features of the devices, I think patient burden is also dictated by the nature of the user interface. So you mentioned sort of warnings and how much user manipulation is required to actually use and mount the devices.

And so systems that can remain on the body for very long periods of time without requiring any kind of user engagement are also an important factor around burden. We deployed a few hundred devices on COVID patients during the lockdowns in Chicago, and one of the things that we found out very quickly, because of the high levels of fatigue associated with these patients, is that they had no ability to engage with an app, even a very simple graphical user interface on the phone.

It had to be completely autonomous. You put the device on, you don’t have to do anything, you take it off, you put it on a charging pad and it automatically uploads data to a cloud server where all the data can be analyzed and then displayed in a very simple fashion. So I think there are user factors as well as base hardware engineering considerations around what dictates patient burden. But again it’s the need that’s going to drive adoption and compliance with these kinds of technologies.

J-C CHIAO: On top of that, I would actually add one more factor. So first let me talk about the privacy issue. My group developed a sound sensor that can be attached on the back to record lung movements, and then we gave it to the doctor in the operating room to use.

After one use we decided not to do it, because it recorded every conversation of the doctor and nurse in the operating room. So that is obviously not just the privacy of the patient; the doctor, nurses, and other healthcare givers may have their privacy in danger too.

So this is a bigger conversation, and I don't think we can have one single answer here. I think it has to be case by case. Because in one of the cases we are working on, the patient basically tells me, I will sign anything you give me for consent, because I have this disease, I want to deal with this disease, and privacy issues are secondary for me. So how do we prioritize this kind of need? This really needs a bigger conversation, not just among us but also with the clinical community.

The second thing I want to raise, which we haven't discussed, is not just privacy but also encryption of the data. As Dr. Li was presenting, using noninvasive sensing, not just by radar; we also have near field. I don't have to put something on you, I can just be close to you and I can sense your vital signs. I can use optics or acoustic signals. I can detect something without touching you.

So now how do we ensure that the data are encrypted so that nobody can hijack your data? For example, we hear about people hacking devices all the time, like people hacking into your pacemaker or your neurostimulator. So that could be something very dangerous. We haven't discussed this part, and I think that as our colleagues in both the engineering community and the mental health and neuroscience communities make this progress, we need to make sure the lawmakers are aware of this.

HOLLY MOORE: Ayse can comment on any of these thoughts, or we could also open it up to more general discussion if people have questions.

AYSE GUNDUZ: I have one more point, if I may. We also know, especially for mental disorders, timescales matter not only for the behavior but also for the therapy. Whether it’s pharmaceuticals or electrical neuromodulation, we know that the patient has to be consistent with taking their pills. And then if they don’t work they might come back and the doctor might have to prescribe another one.

And the same with neurostimulation techniques, they need to be on a defined therapy for a long time, longer than it would take, say, for a movement disorder. So again, I think that this quantification is also quite important to guide the therapy and whether we are going in the right direction with the therapy. So I think the aspect of therapy is also a topic that hasn't been consistently discussed.

JEFFREY COHN: Going back to the point about privacy and access to the data, that seems so critical. One of the things that we struggle with is whether there is a way to deidentify and anonymize data in a way that would protect that.

So certainly, with respect to video, instead of recording the full data stream one could process the video to extract features. Now, some features can be recombined to go backwards, but many features can't. That would protect anonymity, but at the cost of lost fidelity, because you can't go back when you have a better feature extractor and reprocess.

And just one other approach, the federated learning model, in which all data go to a protected cloud source and can only be accessed with, say, approved algorithms: can such a model provide security?
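For readers unfamiliar with the term, here is a minimal sketch of the federated idea raised here. One hedge: in canonical federated learning the raw data never leave each site and only model parameters are shared and averaged, which is a somewhat different arrangement than a single protected cloud. The toy linear-regression data below are purely illustrative.

```python
# Minimal federated-averaging sketch (numpy only). Raw data stay at each site;
# only model parameters cross the wire. Toy linear-regression example.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=20):
    """Run a few gradient steps on one site's private data; return new weights."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(w_global, site_data, rounds=10):
    """Each round: sites train locally, the server averages the returned weights."""
    for _ in range(rounds):
        local_weights = [local_update(w_global, X, y) for X, y in site_data]
        w_global = np.mean(local_weights, axis=0)   # only parameters are shared
    return w_global

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    sites = []
    for _ in range(3):                              # three hypothetical data-collection sites
        X = rng.normal(size=(50, 2))
        y = X @ true_w + 0.1 * rng.normal(size=50)
        sites.append((X, y))
    w = federated_average(np.zeros(2), sites)
    print("recovered weights:", w)
```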

J-C CHIAO: Actually, this is an interesting question, because since I have been working on neural recorders, I have been asked this question a lot over the past 15 years. There is no protected cloud. Any cloud can be hacked. There is no protected data storage. And even if you think you go through a VPN, your identity can still be found. So therefore, our engineering community needs to come up with something clever that can protect our privacy and data safety.

And I bring this up at many conferences when I talk, asking people to think about how we do this for implants and wearables, beyond just wanting to protect their bank information or something like that. This part really needs to be done and has not started yet.

JOHN ROGERS: I agree with that. I would just add a couple things. I think anything is hackable. There are best practices in encryption. We use HIPAA compliant Amazon cloud services for storing vital signs data and so on. I think the better solution is to address it at the level of the hardware, as opposed to reduced resolution or downsampling of the data or encryption and so on.

And I can give you specific examples. So we work with rehabilitation specialists who deal with stroke survivors out of what used to be the Rehabilitation Institute of Chicago, now the Shirley Ryan AbilityLab. And many of those stroke survivors suffer from aphasia. And so their vocalizations and talk time and cadence and so on need to be monitored so that rehabilitation protocols can be tailored to the individual. And ideally that needs to be done continuously, 24/7.

And we were talking to the rehabilitation specialist, and they said well could you develop a wearable microphone with Bluetooth. We said yes, we could definitely do that, or you could use the phone. There are zero stroke survivor patients who are willing to have everything recorded all day long, every conversation.

So the solution was not to do the audio recordings, downsample them, and then just extract sort of periods of vocalization and do things in that way. Instead we developed a skin interfaced high bandwidth accelerometer that mounts on the suprasternal notch.

And so that device is not measuring soundwaves, it's actually measuring vibratory signatures of speech at the surface of the skin. So they are noninterpretable recordings, but they have all of the fidelity you would need to determine tonality and frequency and cadence of speech.

And that turned out to work really well. Everybody was much more comfortable with that kind of solution. And that same device is actually deployed on opera singers now to monitor vocal dose to avoid vocal fatigue. It's the same situation, nobody wants their conversations recorded all day. And so that was kind of an example of something we approached from the hardware side that seemed to be a good solution to this privacy issue.
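As an illustration of the kind of processing such a neck-mounted vibration sensor enables, here is a hedged sketch that extracts cadence-style features (vocalization duty cycle and bout rate) from a band-limited energy envelope without reconstructing intelligible speech. The sampling rate, filter band, thresholds, and synthetic test signal are assumptions for illustration, not the actual device pipeline.

```python
# Sketch: cadence-style features from a high-bandwidth vibration signal,
# without reconstructing intelligible speech. All parameters are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

def vocalization_features(x, fs, frame_s=0.05, thresh_ratio=2.0):
    """Band-limit, compute a short-time energy envelope, and threshold it to
    estimate how often and how long the wearer is vocalizing."""
    b, a = butter(4, [80 / (fs / 2), 400 / (fs / 2)], btype="band")  # rough voicing band
    xf = filtfilt(b, a, x)
    frame = int(frame_s * fs)
    n_frames = len(xf) // frame
    energy = np.array([np.mean(xf[i*frame:(i+1)*frame] ** 2) for i in range(n_frames)])
    active = energy > thresh_ratio * np.median(energy)      # crude voice-activity mask
    onsets = np.sum(np.diff(active.astype(int)) == 1)       # number of vocalization bouts
    return {
        "duty_cycle": float(np.mean(active)),               # fraction of time vocalizing
        "bouts_per_min": onsets / (n_frames * frame_s / 60.0),
    }

if __name__ == "__main__":
    fs = 1600                                               # hypothetical sampling rate
    t = np.arange(0, 10, 1 / fs)
    talking = (np.sin(2 * np.pi * 0.5 * t) > 0.6).astype(float)   # intermittent "speech"
    x = talking * np.sin(2 * np.pi * 150 * t) + 0.05 * np.random.randn(len(t))
    print(vocalization_features(x, fs))
```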

JEFFREY COHN: That is a really good solution. We did work on mother-infant interaction, in which, for both the mothers and the infants we modeled, we used a sensor over the glottis, which I think is what you are describing. And it produces what sounds like whale talk. There's no linguistic information there, but it is very good in terms of speech rate and timing, and we've found it to have very high correlation with microphone data as well.

PARTICIPANT: Just to note that it is sometimes difficult to see you all, I just noticed. So why don't we take a question from the floor? And if a long time goes by, wave or something; it's very difficult to see.

PARTICIPANT: I am (name) from City College New York, and I study brain-body interaction. And I use a lot of different sensor modalities. I’m trying out new sensors, and one of my biggest problems is to make the sensors talk to each other and time align them to one another.

And one of the challenges of taking vital signs out of the lab and still studying these brain-body interactions is to have a way to have these sensors talk to each other. And we've heard about all these wonderful sensors being developed, and I would love to record with all of them at the same time. But how do they talk to each other? Can we just readily go out and take one of your devices and then synchronize it with all of the other devices that are out there?

Another thing would be sort of time precision. Another thing would be, can we have multiple devices synchronized and in time with each other so we can do simultaneous recordings, like recording all the heart rates of the audience right now to see how it reacts to all the wonderful speakers? I'm curious if you would like to comment on that.

PARTICIPANT: Anyone can take that. Also, speakers with expertise on these questions, like expertise on internet of things, systems design, please come to the microphone and join the conversation. But who wants to take that one up here?

JOHN ROGERS: I can say a couple words. Somebody else may want to weigh in as well. I think you ask kind of two questions. One relates to interoperability of devices, maybe from different vendors or from different labs, that’s one set of issues, and then the other one was just time synchronization across multiple devices on an individual.

So for that latter challenge I think it is more or less solved; it depends on what level of time synchronization you're interested in. So we have body area networks of devices. For monitoring neuromotor behavior in these infants that are at risk for neuromotor delays we have 11 devices mounted to different body positions, so we can reconstruct, from the accelerometry and gyroscope information from all 11 of those devices, full body motions in avatar form, but then we have the raw data and we can do machine learning.

In that case we have one device, and I think the terminology might not be quite right these days, but it’s master device and slave devices around it, and so there is a clock exchange that happens from the master device to all of the slave devices, and that happens ten times a second. And so there is no drift, there is no long-term drift.

There is some latency associated with the way that Bluetooth system-on-a-chip components handle packet exchange and information, so you can get time synchronization down to about a millisecond. That is good enough for kinematics and body motions, and it's also good enough for pulse wave dynamics, so pulsatile blood flow, hemodynamics; it's pretty good for that as long as you have a large enough separation between devices so you can measure those delays.

I think doing better than about a millisecond probably requires a hardware modification, it is not something you could probably address with firmware or software. But if a millisecond is good enough, that is a solved problem. You can do that, and we do that routinely, I think probably other groups as well, for hemodynamics and full body kinematic measurements. So the drift issues kind of go away, timing drift at least. In terms of interoperability I think it maybe comes back to some of the standards conversations that we were having earlier, a little bit more focused on data, but probably standards in terms of how devices from different vendors can communicate with one another.
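A toy illustration of the clock-exchange scheme described here: a hub broadcasts its clock roughly ten times a second, each peripheral logs (local time, hub time) pairs, and the peripheral's sample timestamps are later remapped onto the hub timeline by interpolation. The drift and offset numbers are made up, and real systems also have to handle radio latency, which this sketch ignores.

```python
# Toy clock-exchange illustration: remap a drifting peripheral clock onto
# the hub clock using periodic sync pairs. All numbers are hypothetical.
import numpy as np

def peripheral_clock(hub_t, drift_ppm=50.0, offset_s=0.3):
    """A peripheral clock that starts offset from the hub and drifts slowly."""
    return offset_s + hub_t * (1.0 + drift_ppm * 1e-6)

def remap_to_hub(sample_local_t, sync_local_t, sync_hub_t):
    """Map peripheral sample times onto the hub timeline using logged sync pairs."""
    return np.interp(sample_local_t, sync_local_t, sync_hub_t)

if __name__ == "__main__":
    hub_sync_t = np.arange(0.0, 60.0, 0.1)                 # hub broadcasts ~10x per second
    periph_sync_t = peripheral_clock(hub_sync_t)           # what the peripheral logged

    true_hub_sample_t = np.arange(0.0, 59.0, 0.01)         # 100 Hz accelerometer samples
    periph_sample_t = peripheral_clock(true_hub_sample_t)  # timestamps on the drifting clock

    recovered = remap_to_hub(periph_sample_t, periph_sync_t, hub_sync_t)
    err_ms = np.max(np.abs(recovered - true_hub_sample_t)) * 1e3
    # Error is tiny here because the toy clock drifts linearly; real systems
    # are limited by radio latency and jitter, roughly the millisecond scale noted above.
    print(f"max alignment error after remapping: {err_ms:.4f} ms")
```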

I think that's a much bigger challenge because it requires coordination across multiple companies within the industry. What we see is there is a tremendous lack of interoperability even in the ICU monitoring devices that exist in the most sophisticated NICU/PICU facilities or ICUs. In the US you have the Nonin pulse oximeter, and you have the GE Dash that's doing the ECG, and those do not talk to one another. It is very difficult to get even raw data streams out of them. And so interoperability is much more challenging.

I mean, we have like a whole suite of devices, and they all talk to one another because they’re being produced out of my group, I don’t know that that’s necessarily a solution that we’re looking for, but we appreciated the problem, and I think having devices that all operate on a common hardware platform is very important.

I think maybe one final thought along those lines, I think there’s a real need for interoperable platforms that can be used by the research community. There are all kinds of incentives for companies to have their own proprietary protocols and so on. But if you could imagine like a modular system where you have sort of a base station that has your power management integrated circuitry, the battery, the Bluetooth SOC, let’s say for example, and then you can just swap in different sensors into that base unit, the base unit determines interoperability, and then everybody can throw their sensors into that kind of base unit, I think that would be a very powerful resource for the community.

PARTICIPANT: You can imagine a wearable Arduino. All sensor developers could talk to that device.

J-C CHIAO: I agree with John, we are using the same approach, using Bluetooth for data synchronization, and it can go down to one millisecond accuracy. And we did find out one interesting thing: the data storage, writing into the hard drive, is actually delayed by more than one millisecond. So that kind of destroyed that thought. That's why we need to get a bigger and bigger hard drive every time we do longer measurements.

Also there are several vendors now. They realized this problem. Not just Arduino, that is a very low hanging fruit; Texas Instruments is also now developing a module that can plug in six to nine different sensors in one single module, and they do the synchronization internally.

So that way you can get different sensors from different vendors, and the hardware takes care of your synchronization issue. So I think that as this field keeps growing, more and more vendors will come up and provide that kind of platform solution.

PARTICIPANT: Before we take your question, I did want to return to Ayse's comment about different timescales, because I don't know that it was something that was fully addressed. And that is that the therapeutic timescale is not the same thing as the minute by minute behavior timescale, and one big question is how they relate. If we only look at the therapeutic timescale, and then check in on a weekly or monthly basis, we already do that. We don't need all this fancy stuff to conduct therapy when checking in on such a slow timescale.

So the question becomes how do all these fast events that might be embedded into these very slow changing events that are therapy or response to therapy, how do we actually monitor whether therapy is being successful based on the relationship between those fast events and this very slow course of improvement.

AYSE GUNDUZ: It is very important for us to basically reduce the time to therapy.

KATHERINE SCANGOS: Another challenge that goes along with that is that the changes in neural signals or moods or internal states might not be temporally locked to an event that leads to them. So you might have a change in heart rate immediately after an event, and then a delay of a day before that manifests in a change in mood. So understanding all of those timescales is also really challenging, even independent of a drug or stimulation.

PARTICIPANT: Exactly. Basically there are multiple different kinds of correlations that can occur across those timescales. Something fast that happens today might be actually causally or predictively related to something slower or faster that happens tomorrow, so that the multiple correlational structures that are available across these timescales are daunting. How do you all think about that?

KIRK BROWN: Just speaking from a mental health perspective, it is true, the clinical course of the condition can be quite slow. But often I think what we're relying upon when we look at that timescale is a person's own self-referral, for example to a clinic, or another kind of indicator that may be too slow to really effectively intervene in that mental health condition. When it comes to sensors, we can potentially intervene before the person even knows that they're at risk for a condition.

So I think that is where sensors can be potentially very valuable to indicate that a person may be at risk for a particular depressive episode or what have you.

So for example, we are doing work, and others have done work, showing that just having information from smartphone sensors like GPS and call and text logs, et cetera, gives quite good predictors, not perfect but around 80 percent accuracy in predicting who is going to have a depressive episode.
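The general recipe behind studies like the one described, summarized as a hedged sketch: aggregate passively sensed smartphone features per person and fit a supervised classifier. The data below are synthetic and the feature names hypothetical; the roughly 80 percent figure is the speaker's, and nothing here reproduces any particular study's pipeline.

```python
# Sketch of the general approach: passively sensed summaries -> classifier.
# Synthetic data and hypothetical features only; not any specific study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Hypothetical weekly summaries: GPS mobility, outgoing texts, screen-on hours.
mobility_km = rng.gamma(shape=2.0, scale=3.0, size=n)
texts_per_day = rng.poisson(lam=8, size=n).astype(float)
screen_hours = rng.normal(loc=4.0, scale=1.5, size=n)

# Synthetic labels: lower mobility and communication loosely raise risk here.
logit = -0.4 * mobility_km - 0.2 * texts_per_day + 0.3 * screen_hours + 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([mobility_km, texts_per_day, screen_hours])
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy on synthetic data: {acc.mean():.2f}")
```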

So we can potentially alter the time scale so that the clinical timescale becomes more in line with that moment to moment quantification of information, of behavior, and potentially not just treat conditions better, but prevent them or ameliorate them.

KATHERINE SCANGOS: And perturbation I think is also really important, and a useful tool, especially on a personalized level, trying to understand behaviorally or just through conversation, what triggers a patient to feel or induce certain motor states or behavioral states, internal states.

And then repeating that so you can have more trials and try to constrain the context, in addition to using brain stimulation, stress tests, pharmacological perturbations, and other types of perturbations that can help constrain the context and reveal repeated events, so that you can then match the timescales of events and integrate the signals.

JEFFREY COHN: This notion of the therapeutic timescale is an interesting one, because it comes out of a model of assessment of doing a Hamilton interview at intervals of so many weeks. And when we talk in terms of depressive episodes, we are implying that there is a continuous state until that transition.

And yet there is a lot of variability, we don't know very much about it, and it seems one of the really exciting things about the move towards sensors is to understand something about the temporal dynamics of a depressive episode, and whether people are coming out and then just going back down, and is that a motivated thing. So I don't think we should force the sensors to conform to preconceived notions.

PARTICIPANT: I think that is such a great point, capturing different timescales of events, and capturing things like internal states and inter-individual variability is quite challenging. I had a question that maybe moves us away from this topic a little bit. Earlier in the morning we discussed issues of standardization that have to do with harmonizing the format of our data, to facilitate data sharing for example. But I was wondering what thoughts you might have in terms of harmonization of analysis pipelines as well, because I think there are a lot of degrees of freedom in terms of how we even just pre-process and analyze these signals.

And a lot of us are using essentially the same signals. We're interested in heart rate variability, skin conductance, and accelerometry, and maybe voice analysis. To some degree there is no point in reinventing the wheel. So what kind of thoughts do you have, and should we as a field try to move toward harmonizing those practices?
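As one concrete example of what pipeline harmonization might standardize, consider RMSSD, a common heart rate variability metric computed from inter-beat (RR) intervals. The unit convention and artifact threshold in this sketch are illustrative choices, and they are exactly the kind of degrees of freedom a shared pipeline would need to pin down.

```python
# Example primitive a shared pipeline could standardize: RMSSD from RR intervals.
# Units and the artifact threshold are illustrative choices, not a fixed standard.
import numpy as np

def rmssd(rr_ms, max_jump_ms=200.0):
    """Root mean square of successive RR-interval differences, in milliseconds.
    Successive differences larger than max_jump_ms are treated as artifacts."""
    rr_ms = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr_ms)
    diffs = diffs[np.abs(diffs) < max_jump_ms]   # crude ectopic/artifact rejection
    if diffs.size == 0:
        return float("nan")
    return float(np.sqrt(np.mean(diffs ** 2)))

if __name__ == "__main__":
    # Synthetic RR series around 800 ms with mild beat-to-beat variability.
    rng = np.random.default_rng(1)
    rr = 800 + rng.normal(0, 25, size=300)
    print(f"RMSSD on synthetic data: {rmssd(rr):.1f} ms")
```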

JOHN ROGERS: I can take a stab at that. My group is more focused on issues of material science and circuit design and hardware and that kind of thing, and we do data analytics when we’re forced to do that, which is frequently. But that’s probably not our main strength. We develop neural networks, various kinds of machine learning, state machines, digital filtering, pretty standard approaches but kind of tailored to the use case.

So for example with opera singers we need to measure vocalizations but separate them according to whether it is a singing event or a speaking event. That requires a machine learning model that you have to apply to the time series data. So it is a little bit tailored to the use case. And I think maybe that's an example where it's pretty specialized, and maybe there are other people who would want access to that.

We post everything to GitHub, we are an academic group, everything is out in the open. It is not in some cases maybe packaged with a user-friendly interface. So that may be an area of need that could cut across broader efforts beyond our own, where the algorithms are there but the user interface just doesn’t allow access by the broader community, but we try to open source everything we do. I think most academic groups have that orientation.

PARTICIPANT: I appreciate that. And the majority of groups have their own packages and pipelines. A few years ago, in the neuroimaging field, we encountered this crisis in the field coming from the fact that everybody has their own preferred pipeline. And that results actually in issues of reproducibility.

And so yes, we tailor things to our own personal questions, and each research question is different. But to some degree, if we are using the same physiological signals, it could be interesting to posit whether there should be some harmonization in what we do with them.

JOHN ROGERS: Definitely, I agree. I am not arguing against that notion, but you try to think about the best mechanism for achieving that outcome. For vital signs and maybe more traditional measurements, maybe heartrate variability, things like that, there are gold standards to compare against. And so you can develop different algorithms but ultimately they have to produce the right answer. So that’s an important constraint I guess as well.

HOLLY MOORE: We will transition to final thoughts. You have been standing here forever I feel like.

J-C CHIAO: IEEE has a lot of working groups trying to standardize the process. So today Bruce just mentioned one very small part. Actually there are many groups working on ECG, EEG, all different signal processing.

PARTICIPANT: That was my initial reaction to Sylvia’s question, that sounds like a good job for a workgroup.

PARTICIPANT: Can we reframe the conversation around privacy in terms of sensitivity and risk and benefit, and kind of think about it through the lens of what design principles should govern our generation of data, sensors, and the computation around them, because all of this is going to get integrated. We are currently seeing these as separate streams, but they are going to come together.

So could we kind of think about reframing the conversation a little bit around sensitivity, risk, and benefit, and how the sensor community and the data analysis community can come together under some of those principles, and would that be a set of actions that this community could take in the short-term to say what the principles should be under which such devices and computation should be designed and developed.

LIZZY ANKUDOWICH: I will add that you will need to bring the potential users into the conversation as well. Developers may have assumptions about what good design principles are, but maybe they don't necessarily overlap with those that the users would agree to and be comfortable with. So bringing users in early on I think would lead to a better set of principles for design. Again, that is a great way to reframe that question.

PARTICIPANT: I think you are out of time, but I am just wondering if I can say something briefly. I think I am one of the few MDs in the audience, and I think it would be important to partner with physicians as well. So I’ve been hearing this not only as somebody who might wear these, but who might treat somebody who wears this.

And there are all kinds of issues that have not even been addressed here. For instance, if I treat somebody and they have this huge amount of data, what is my responsibility as a physician to know those data, to interpret those data? Standards have to be set for that as well. And if something is not accurate, I'm not telling my patients about it. And privacy is an issue. And Sasha, you said you would wear all of the sensors? I would wear none of them. I just wanted to put that in there as well.

HOLLY MOORE: Next we are going to have your final remarks, closing remarks. J-C and John, you can go ahead and stay here, or go to the podium, whatever you are comfortable with.

Workshop Summary and Closing Remarks by Co-Chairs

J-C CHIAO: First of all, thank you all very much for staying in this workshop for two days. I couldn't express more appreciation to all of you. Many of you I have known for years; with many of you I have just made new friends, and I hope this workshop brings everybody together.

So let me do a very quick summary of what we have talked about. We've talked about hardware developments. We talked about wearable and implantable device components, systems, and software, we talked about signal processing and multi-sensor platforms, and we also had several demonstrations of clinical applications. And then we also heard about unmet needs for energy consumption, power harvesting, and also interconnection between networks and data transmission.

And then, because the data are massive and in big quantity, and they are very noisy, we heard about using machine learning and AI for processing, and also using computational models to analyze, predict, and understand behavior. And it was very nice to see all of these animal models being studied.

So if one of you finds a non-invasive way to detect why my kitty cat doesn't like to use the litter box, please, I would love to volunteer for experiments. And then we heard about standardization, all the different efforts by different institutes and universities to try to synchronize everybody's data streams. And then we talked about future development.

And then of course in this workshop when we first started it was coffee table conversation, we tried to bring people together. And so we can only invite a limited group of people. So there will be quite a lot of people we have not heard from. And also for the people we have not invited, could you put up the slides for the overall paper?

Also, for the people we invited to speak and discuss, we really didn't have enough time for you to disclose your research in a comprehensive way. So therefore, as I mentioned in the beginning, I encourage all the presenters, discussants, and coordinators to submit a paper. This paper can be an overview or review of what you have talked about, showing that your sensor development is suitable for studying complex behavior.

This paper can be submitted to IEEE Sensors Letters. This journal is targeted for a quick turnaround, so that everybody who is in this field can get a very quick understanding of what each other is doing. The paper deadline is July 1st, and we're trying to get this out, to disclose to the world what we are doing, by October 1st. And so if you have any questions you can contact one of us as guest editors.

You're also welcome to go to the website. If you have not submitted a paper to IEEE before, IEEE uses the ScholarOne portal, which is quite intuitive. At the portal we also will have a list of topics, and you can choose your topic, and we can have a quick conversation to see if your paper is suitable or not. I do encourage everybody to do this, because this way we have a formal record of the workshop, and we can also let more people know what we are doing and what we are looking for.

So in this workshop, because of the limitation of time, we did not actually include some of the topics that I was interested in. For example, assessment of subjective feeling. My research is working on closed loop inhibition of chronic pain. So we record the pain signal and we trigger the neurostimulator to inhibit pain.

When I was doing that I was also thinking that in the Sensor Society we are actually doing that already. For example, you wear a smart watch and you look at it: oh, today I only walked 4,000 steps, maybe I shouldn't take that bus or Uber, maybe I should walk home. Now the sensor is correcting or modifying your behavior. So this kind of feedback mechanism we have not talked about, and it is not something done by the hardware, but you consciously modifying your own behavior.

So this is a topic we didn’t talk about, and I was hoping that if we have a researcher that is working in these fields that can tell us what they have been doing, and maybe this will start another conversation.

We also didn't talk about subjective feelings such as anger, depression, and frustration. Those feelings can be amplified if we don't have sensors to monitor them. Sometimes we are so angry we don't even know we are angry, and we might have outbursts and do something we regret.

And therefore, I think what we're doing here with sensors actually helps somebody who moderates their own mental state. So this is something I was hoping that, if there is a researcher working on this in this field, they could share some insights with the Sensor Society and also the neuroscientists studying this field.

So before I close, I was sitting here, and I couldn't feel anything but very proud of ourselves. We are all working in different fields, but we have come together to solve several very serious problems, not just disease, but to make our society better. So we are not working for ourselves, we are working for our children, our grandchildren, our friends, our family, all of society. And we are not just working for Americans, we are working for everybody in this world. So I think we should give ourselves a pat on the back, knowing that when we go home today we are doing something really great for this world. And I will let John summarize his thoughts.

JOHN ROGERS: Thanks J-C. I know we are running behind time, so I am going to go pretty quickly here. But I like your vision of some kind of device to modulate your stress level. Because I can imagine if I’m wearing a few sensors right now, my blood pressure and stress levels are pretty high, it would be nice to be able to drop that down somehow.

But let me share a couple thoughts, one kind of programmatic and one focused on technology and engineering. The focus of the last two days has been sensor technologies to capture the complexity of behavior. And what I saw was not necessarily a complete bifurcation, but two classes of efforts. One is kind of at the level of academic, frontier level research on sensors.

And then on the other end of the spectrum you have sensor development happening in commercial settings, in many cases very sophisticated, but much different styles of research. So the commercial systems tend to be locked, they’re not interoperable. We were just talking about that, a lot of times you don’t have the access to the raw data streams. The academics are much more open in that sense, but they are also much less capable in sort of scaling and providing availability to the broader community to their new and novel device technologies.

It really was kind of apparent to me in this last session, where we heard about these really advanced and exciting studies on behavior and brain activity, where most of the work was using Utah arrays, very well-developed technologies, in many cases commercially available.

Or on the other hand, you have neuroscience, behavior oriented research groups trying to hack together engineering solutions that in many cases look rather sophisticated, but that could get much more powerful if you can develop the right funding mechanisms.

And I think that maybe this is a role that NIH could play that would bring these two communities together in a more intimate fashion, so that sensor developers and engineers could be working directly with behavioral scientists and neuroscientists on hypothesis driven questions around sleep or pain, stress, neuromotor disorders, dementia, neurodevelopmental progression, and the brain-gut axis, for example.

Maybe rank order according to greatest medical need, because I think that would help drive the academic community toward sensor modalities that are really moving the needle on having a realistic impact on these kinds of behavioral studies. It would also push them away from a mode of developing (and this is not characterizing the entire collection of efforts, obviously, but at many levels) sort of hero-level, one-off sensors that kind of terminate with a publication, and more into a mindset of trying to develop systems that can be produced at quantities that really allow a broader impact in terms of understanding the relationships between the complexity of behavior and brain function.

And I think over the years you would ask, okay, what does the success story look like? And I think if you think about the combination of (names), that was a great example of neuroscience coming together with engineering, or Brian Led(ph.) and John Vavimby(ph.) and other examples, neurosurgery and bioengineering for example.

And in many cases those technologies eventually gain traction and have some kind of stable commercial foundation for their continued propagation, but there is kind of a gap between that initial discovery and scale that allows for that commercial translation.

But you've seen it happen over the years. Blackrock is a great example; Neuropixels, with Florian and Tim Harris in that case; we have been involved in that kind of thing through NeuroLux and optogenetics technologies; and Inscopix was a technology that emerged out of Mark Schnitzer's lab.

So it happens, but how could we setup collaborative funding vehicles to make it happen more frequently? I think that would really have a big impact on the broader community, bringing engineers who work closely and are intimately connected with behavioral scientists and neuroscientists and so on.

So that was kind of one programmatic opportunity that I came away with, and I am sure that others have different opinions and thoughts around our discussions over the last couple of days, but that was just one thing I wanted to share with the broader community.

The other one has to do with engineering topics more explicitly, and less about programmatic opportunities. And that would be, where are the grand challenges? I think we heard from Nico and we heard from Avniel, and so on: what is a solution, what is a noninvasive method for collecting electrocorticography, and how could you go about doing that?

And could you develop skin interface devices that offer a million channels of EEG data, and could you use those million channels to sort of triangulate and capture high frequency, spatially localized information on brain activity, that might be interesting.

And you could incorporate arrays of voxels and combine electrophysiology with optical measurements of brain activity. That might be a grand challenge: how do you do noninvasive ECoG or noninvasive Utah array type measurements? That would be a pretty exciting development, if you could identify an engineering solution to that.

And then the other one has to do with biochemical markers of brain activity. Could you develop something that looks like a CGM but is monitoring dopamine or serotonin or something like that in a continuous way? I think that could be exciting as well. Sweat may be a non-invasive biofluid that would serve as an alternative to interstitial fluid; CGM technology is gaining a lot of traction, and could you leverage that for measuring brain activity?

But just to conclude, I want to extend again my thanks to the co-chair here, J-C; I think you did a lot of the heavy lifting with Yvonne and Dana to put this together. I thank all the speakers and the panelists and the discussants and the moderators and so on. It's been a tremendous event, and I have certainly enjoyed it quite a bit, and learned a lot as well. So thanks to everyone.

J-C CHIAO: I want to thank our NIH colleagues. You sacrificed a lot to put this together. I cannot express how much I appreciate this effort.

DANA GREENE-SCHLOESSER: For our concluding remarks, I just want to introduce Dr. John Ngai. He is the Director of the Brain Initiative, and he oversees the long-term strategy and day to day operations of the Brain Initiative and strives to revolutionize our understanding of the brain in both health and disease. I'm not going to go through all of his accolades, but there are many, and if you want, you can look at the biosketches and abstracts. And with that I give the floor to John.

Final Remarks

JOHN NGAI: Thanks Dana. I want to thank our co-chairs and our entire planning committee for putting on a great two days of science and engineering, it has really been wonderful. It has also been wonderful to be in person with at least some of you.

Whether you arrived here in person or via Zoom, we have to thank our partners at Bizzell for making it all happen. So thanks very much to you folks, and the staff here at Bizzell for making it work quite well. Though this is our first hybrid meeting since the before times, I think it worked out quite well and I look forward to more of these coming up in the future.

If I may have the slides, I just wanted to, given all the summarization we’ve had over the last two days and all the detail, what normally I would consider a 50,000 foot overview is actually a 100,000 foot overview, so bear with me. I think what we saw here was the introduction or reminder of a lot of new technologies that have been developed by new engineers, as well as novel use cases for technology applications by the neuroscientists. So I think this is really a great substrate for follow-up discussion.

So in a way I feel this meeting was really successful in that these ideas got put out, there is a lot of discussion. But the real measure of success is for you folks to start talking to each other and put together projects that are going to result in real impacts out there, both in the research domain as well as in the clinical domains.

We heard a lot about the needs, we heard a lot of really cool technologies. I think let’s marry the cool tools, the possibilities with the applications, I think we can actually generate the mother of invention here for additional technology development. So I really do encourage you folks to continue talking, and NIH Brain will continue doing what we do to help facilitate those discussions and those projects as they come along.

One thing that I think will assist a lot in bringing folks together from these diverse backgrounds or disciplines will be the formation of multi-disciplinary teams and consortia. And I think this will not only lead to better ideas but also a more rapid dissemination of these ideas. And to the extent that you folks can work in the so-called pre-competitive space, as the saying goes, better to have a piece of a big pie than no pie at all. So I really do encourage you folks to think along these lines.

We heard some great discussions, and it could take up an entire meeting of themselves, about data use, reuse, and sharing, and all the things that go along with it. And I think the really important thing to bear in mind is to not think about this after you’ve done your experiment, after you’ve made your device. I think a lot of these problems won’t get solved unless we think about these issues of data reuse and ethical use of these data at the outset. So I really encourage you folks to be thinking along those lines.

And the next, there is a lot of discussion about data standards. As an old colleague of mine once told me, I’ll paraphrase, I think people would rather share each other’s toothbrushes than each other’s data standards. But we need to overcome that. And I think this is really an opportunity for social engineering. We need to think differently about how we do science as a community to solve these big issues.

I think everybody came here articulating great aspirations for what they can do with their technology and their science, but it is not going to happen in isolation; we cannot solve this alone, and it will really take a village mindset to address what I think is one of our biggest challenges. And then finally, we really do need to incorporate thinking about the impacts of our science: ethical, legal, and societal impacts. It is very important for what we do.

And again, this is another example of something we need to be considering at the outset, before we start an experiment, at the initial level of conception, in order to ensure that we do things in a way that will be beneficial not just for our science but for the folks that we purport to help.

So I encourage you to think about that. Brain takes this very seriously. We have an active neuroethics working group that gives us great input about how to think about these issues both prospectively and sometimes admittedly reactively, and you can learn a lot more about that as you follow what Brain does.

So just a few slides of our own. Just a reminder, we have our Annual Brain Initiative Meeting coming up in June. Anybody is welcome to attend, either virtually or in person; it will be in North Bethesda, June 12th and 13th. It is going to be packed with three plenary lectures, six symposia, three concurrent sessions, trainee highlight award talks, and a lot of opportunities to network and talk science with each other. This is the first time we are back in person since 2019, I think, and people are really excited to be coming together and having discussions of the sort you had today.

I just want to mention that we have three terrific plenary speakers lined up who actually address many of the issues that we discussed during the last two days. On the left is Dr. Vanessa Ruta from Rockefeller University, and she is going to tell us about her really exciting work on the modulation of neural circuits and behavior, using the fruit fly as a model system.

In the middle is Professor Nita Farahany from Duke University Law School. She is a very prominent ethicist, and she specializes in data privacy and data use in the neuroscience space. She recently published a book, and she talks not only about data privacy but also about the concept of cognitive liberty.

So I encourage you folks to at least tune in for that talk if you are not there in person, for what I think will be some really insightful and provocative thinking about the issues we just discussed here today in terms of how we deal with data from humans.

And then finally, Dr. Sameer Anil Sheth from Baylor College of Medicine, who is at the forefront of using various invasive technologies to treat neuropsychiatric disorders and other conditions. So I think it is a really exciting set of speakers for the plenary talks, and there will be really great opportunities for folks to get together and network. So I encourage you to attend, either virtually or in person.

So there are plenty of ways for you folks to learn about what the Brain Initiative is doing and the kinds of science we’re supporting. If you don’t mind a little bit of extra email in your inbox you can subscribe to these various features, and you can learn more about us on our website, braininitiative.nih.gov.

So again, thank you all for being here. We have really had a great two days of folks both in person as well as virtually, and we look forward to gathering with you again.

HOLLY LISANBY: So in closing, let's give all of the speakers and discussants a round of applause for outstanding discussions. And again, going back to some of the logistics of this whole meeting: John, thank you to the Brain Initiative for sponsoring this workshop, and thank you to the NIMH for sponsoring it as well. And here are some of the folks behind the scenes in science policy who put together the travel for you: our contractor Bizzell, NIH Events Management, OD Communications, and our caterer. So thank you to them.

And on the last slide, as Holly mentioned this morning, are all of the program officers who worked together to put this meeting together. I could not have done it without them, and I hope you got to meet them all. So that's it. We will be sticking around until 5:00 if you want to have some more discussions; there will be more discussions to follow. So thanks again, everyone.