
How are the latest advancements in Artificial Intelligence impacting the future of healthcare? Can humans develop an emotional relationship with a machine? Jon Shieber, Senior Editor at TechCrunch, leads a panel on the evolving relationship between humans and machines.

Jon Shieber
Senior Editor
TechCrunch

Jonathan Gratch, Ph.D.
Director for Virtual Humans Research at the Institute for Creative Technologies and Research Professor
University of Southern California

Katie Aquino

Fred Muench, Ph.D.
Director of Digital Health Interventions in Psychiatry
Northwell Health

Jon Shieber:
Hi. I’m Jon Shieber. When I’m not making artisanal pickles in Brooklyn, I report on technology and venture capital investment in Manhattan and points elsewhere around the globe. We have a wonderful panel to talk about whether you can develop a relationship with a machine. I think all of us would agree that not only can you, but we have. The question to interrogate, I think, is what kind of relationship you have with the machines and how desirable that relationship is. To help us get into that we have an incredible panel. We’ve got Fred Muench, who is with Northwell Health and mobile health interventions. Thank you very much. Sorry about that.

We’ve got Jon Gratch, who is a professor at USC who does research into all of this stuff. We’ve got Katie Aquino in the middle, who works with the robotics company BodAI and talks about transhumanism and the future of technology and our relationship with robots. There’s a lot of stuff to get into, but … That was a pretty cursory introduction to what these fellows and fine lady do. I’d like them each to give a brief introduction explaining a little bit more about how they work with technology and mental health and the relationship between humans and machines. Fred, if you wouldn’t mind giving a brief intro.

Fred Muench:
Sure. It’s a pleasure to be here. I’m a clinical psychologist and I consider myself a technologist as well. My main interest is how we facilitate progress in the therapeutic relationship and behavior change over the long term by understanding the interaction between technology and the human. How do we maximize behavior change using technology and the human relationship? What kind of digital dose do you need, at what time, and when is it most effective? When do you need human empathetic contact? A lot of our research tries to understand that phenomenon.

Katie Aquino:
I founded a company called BodAI, and what we’re doing is creating personal-use robots. We’re using a myriad of technologies, including VR and artificial intelligence, and we’re also creating advanced humanoid robots called Bods. At BodAI, we’re looking to provide these personal-use robots to people of all budgets and all passions. That simply does not exist today. I think that emerging technologies such as AR and VR will open those experiences up to the public, and we’re very excited about it.

Jon Shieber:
Jon? I believe you have a multi-media thing for us too.

Jonathan Gratch:
Yeah. We’ll see if it works. I tried to bring some technology to introduce myself.

Jon Shieber:
What? Technology on a tech panel? That’s insane. It’s crazy. Never going to work. This is the way these things go. I warned you.

Jonathan Gratch:
This is what I do. I try to build machines that can in some sense have empathetic contact with people. This is Ellie and she’s designed to do an interactive screening with people. Let me just play a little bit of that.
You can’t hear the audio, but she’s basically explaining that she’s not a therapist, but she’ll talk with you … Here you go.

Ellie:
Are you okay with this?

Ellie:
So, how are you doing today?

Off-camera male:
I’m doing well.

Ellie:
That’s good. Where are you from originally?

Off-camera male:
I’m from Los Angeles.

Ellie:
Oh, I’m from LA myself.

Jonathan Gratch:
One of the things we’re doing here is facial expression recognition: we can automatically infer the patterns of facial expression people make. We’re analyzing the audio signal in the voice. We also look at posture. The character generates facial expressions and postures of her own, and gives the moment-to-moment feedback that conveys a connection in natural conversation. Ultimately the goal is to elicit indicators of depression, and part of our research shows that people actually disclose more to this kind of technology than they might when they’re talking to strangers.
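
The multimodal fusion Gratch describes, combining facial, vocal, and postural signals into a single screening indicator, could be sketched roughly as follows. The function name, weights, and score ranges here are illustrative assumptions, not the lab’s actual model.

```python
# Hypothetical sketch of multimodal fusion: each channel (face, voice,
# posture) is assumed to be pre-processed into a score in [0, 1], and the
# channels are combined by a simple weighted average. Weights are made up.

def depression_indicator(face: float, voice: float, posture: float) -> float:
    """Combine normalized per-channel scores into one indicator in [0, 1]."""
    weights = {"face": 0.4, "voice": 0.4, "posture": 0.2}
    score = (weights["face"] * face
             + weights["voice"] * voice
             + weights["posture"] * posture)
    return round(score, 3)

print(depression_indicator(face=0.6, voice=0.5, posture=0.2))  # -> 0.48
```

In a real system each input would itself come from a trained per-channel model rather than being hand-scored.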

Jon Shieber:
Jon! Spoilers, come on! We’ll get into that in the conversation. Come on, geez.

Jonathan Gratch:
I’m done. I’m done.

Jon Shieber:
One bit of housekeeping to set things off. I like to make these discussions as interactive as possible. I’m also really lazy, so if you want to tweet questions at me while we’re chatting up here … If you have any burning question you want to pose to any of our panelists over the course of our discussion, feel free to tweet at me at my Twitter handle, which is @JSchieber. If there’s good enough reception in this hall, hopefully I will get them, it will work, and I will incorporate your questions into our discussion if they are good. If they’re bad, I’ll just make fun of you, because that’s the kind of asshole I am.

I guess the first question that I’d like to pose to all of you is not “Can people have a relationship with their technology?”, because clearly people do, but what kinds of relationships do people have with your technologies? Fred, we’ll start with you, then Jon, then Katie will wrap up, and then we’ll move on. Fred, what’s your take on that? What sorts of relationships do people have with the offering that you provide?

Fred Muench:
Based on the work and the research that we do, we see two or three primary kinds of relationship. One is that the technology lets people engage with a system very non-judgmentally. Not only are you expanding reach, you’re allowing someone the opportunity to disclose and engage with a system at their own pace. Technology allows people to do that, and we’ve seen engagement rates go up dramatically. We built an automated SMS program for problem drinkers, and what we found is that people were signing up in droves, because they didn’t have to go into care and it wasn’t connected to their health record. They were engaging. The other relationship we see is that you can provide an ongoing, salient touch on therapeutic goals in a way that the traditional therapeutic relationship can’t. If you introduce automation that is completely tailored and customized to someone’s goals, you allow that relationship to continue when the relationship with a human can’t. Whether that’s at 3 in the morning, or someone’s going out for a drink and they type in “drink” and get a drinking plan for that evening … Whatever it might be, you’re allowing that and you’re supplementing the human contact.

Then what we’ve also found is that there are crisis points, points when people want human contact, and you can integrate that into automated systems. When someone is at risk for relapse, one of the things we found was that people type in “regret” very often to these machines. After they’ve had a heavy night of drinking, they’re typing in “regret.” They want to talk to someone about that. What I’m very interested in, and what we’re really seeing, is that people want to interact with machines in certain settings and with people in others. How do we find that balance?
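
The keyword-triggered routing Fred describes, automated replies for some messages and escalation to a human for crisis signals like “regret,” can be sketched roughly like this. The keywords, replies, and function name are illustrative assumptions, not Northwell’s actual system.

```python
# Illustrative sketch only: route an incoming SMS either to an automated
# reply or to human follow-up, based on keywords that signal a crisis point.

AUTOMATED_REPLIES = {
    "drink": "Here is your drinking plan for this evening: ...",
}
ESCALATION_KEYWORDS = {"regret", "relapse"}  # assumed crisis signals

def route_message(text: str) -> str:
    """Return 'human' when the message signals risk, else an automated reply."""
    words = set(text.lower().split())
    if words & ESCALATION_KEYWORDS:
        return "human"  # hand off to empathetic human contact
    for keyword, reply in AUTOMATED_REPLIES.items():
        if keyword in words:
            return reply  # just-in-time automated support
    return "ack"  # default acknowledgement

print(route_message("so much regret this morning"))  # -> human
```

A production system would use far richer classification than exact keyword matching, but the human-in-the-loop escalation pattern is the point.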

Jon Shieber:
Jon, with your research, is there going to be a point where there won’t need to be a human? Ultimately what you’ve got is this responsive AI system that’s learning how to be empathetic, while also being a completely non-judgmental program that has no conception, or need to conceive, of what exactly is going on with that person. It’s just this reflective conduit. Not conduit, but this reflective party that can maybe guide a discussion, if it’s programmed the right way, or assuage it. Is that the next step from where Fred’s work is going? Does that become the natural progression, to ease the human part of the equation out of the equation more?

Jonathan Gratch:
Our ultimate goal in our lab is to make machines able to understand people, to empathize, to be emotionally intelligent. I would say the current state of this system gives the illusion of emotional intelligence. It doesn’t deeply understand what you’re talking about. It’s useful as an icebreaker to open up the conversation. You wouldn’t want to supplant … I don’t see us able, with near-term technology, to fundamentally understand a person’s goals, motives, and needs. That’s necessary for treatment. I see this as a door-opener. The original reason for the project was that we were trying to figure out how we can use technology to reach people who are reluctant to talk to people. Maybe they’ll be more interested in talking to virtual people, and we have some data that they are, but that’s the first step. We need to find out how the technology complements the strengths and weaknesses of human clinicians.

Jon Shieber:
When I think of where y’all are, it seems like everyone’s on a spectrum. Fred, you’re at a sort of text-based version of solving the problem. Jon, you have this visual element to it as well. Katie, you’re full on: these are robots that are going to be interacting with people. What does that do to the relationship? What does the physical element add? How does it change things? What sorts of things are you looking for your Bods to do? Are they hot Bods? Are they attractive Bods? Are they …?

Katie Aquino:
Well, it depends. What is hot? What is a relationship? I think that relationships in general are just changing. We see that all around. I think that our perspectives on humanity … I mean, we are changing. We are evolving. We are becoming, in a sense, cyborgs. We live through our phones. They are an extension of us. Can we have a relationship with a machine? Absolutely. At BodAI, what we are doing is working closely with people of all backgrounds to find out what they want, what makes them tick. Think about our favorite sci-fi movies, even the film “Interstellar.” All of us fell in love with the character TARS. TARS was an AI, and he didn’t win us over just because he was a smart artificial intelligence robot. No, that’s not it. TARS captured our hearts because he had a little moxie. He was an interesting, cool robot. He could challenge you. As humans, we want challenges, so we’re going to bring those challenges into the AIs that we are continuing to develop. We want them to be fine-tuned. We want them to challenge us. We want them to excite us.

As far as aesthetics go: will we have hot robots? Sure, we’ll have hot robots. They are anatomically correct, but at the same time, we have to keep in mind that not everybody in the future is going to want a perfect-looking woman of today. It’s not going to be like that. It’s going to change. Do we even want to look like a human? Maybe we’re evolving. I think there are a lot of things to take into consideration.

Jon Shieber:
I didn’t mean to go down that rabbit hole quite yet. We’ll get there.

Katie Aquino:
Let’s look at the adult toy situation today.

Jon Shieber:
Now we’re definitely going to put a pin in that and come back to it. Geez, am I blushing? Is it hot in here? Woo! Do you want your AIs to have moxie? Jon, do you want your AIs to have moxie? Is moxie something that’s important?

Jonathan Gratch:
It can be important. In general, part of my research is trying to understand how people use emotions strategically. They don’t always just show what they feel; they use emotions to achieve certain social goals. In humor, self-deprecating humor, but also moxie can be effective to convey a certain attitude, which could promote certain goals in certain situations. In therapeutic situations I’m not quite sure, but absolutely … In part of our research, we look at business contexts, negotiations, where people clearly use emotions strategically to achieve ends.

Jon Shieber:
At some point, would an application for this be to have the AI do the negotiating?

Jonathan Gratch:
Absolutely. There’s work on that right now.

Jon Shieber:
Is that being done in your lab?

Jonathan Gratch:
Yeah, it’s being done in our lab. It’s being done all over.

Jon Shieber:
That’s sweet. Again, I’m lazy, so this sounds phenomenal. Less work for me. Where’s my basic minimum income? Kidding. When you think about what it is that your machines are feeling, or what they’re reflecting or providing to the humans they’re in a relationship with, what does that look like and how does that happen? Fred and Jon, y’all are doing the research on that stuff. What are you seeing, and what exactly is being collected to give these programs a soul? No, I’m not going to go there, but to give them that intelligence, that emotional awareness. Is that even the right term?

Jonathan Gratch:
At some level, machines need goals to achieve their ends. Emotion is an important part of goal achievement and motivation. In some sense, a very rudimentary sense, these machines have needs and goals. I wouldn’t necessarily equate them with human needs and goals, but I think eventually, as they become fully socially intelligent, they will bring important needs to therapeutic situations. I think there’s also an interesting ethical dilemma, because it’s entirely possible to design a machine that creates the complete illusion of certain needs and ends but actually has different ones. That makes a lot of sense in negotiation. I think there are some interesting ethical dilemmas coming down the pike.

Jon Shieber:
Fred, are your programs learning? What are they learning? How are they learning?

Fred Muench:
They are learning. We start, though, by getting a comprehensive picture of the individual. Like all machines, they have to be trained, as Jon was saying. As we build the algorithms to trigger an ongoing intervention, what someone puts in is what guides it. We have to allow the end user to give us information. We know that people are much more honest in self-disclosure to these machines, so people are going to give us information, and then we take that information and modify the intervention. One of the things we’ve found is that people dislike getting the wrong type of feedback. We’re very careful, very conservative, in what we put out there in terms of making assumptions about what they want. We’re conservative, and then we look at certain things and allow individuals to make judgments about their own state, for example.

Do you feel like over the last week you’ve changed to achieve X goal? Based on that, we have enough information to understand goal revision, understand where someone is, provide just-in-time empathetic feedback, and then move on. What I would say is that we’re at such early stages in building these adaptive algorithms. We are very careful not to throw certain things out there, because of the potential downside of someone getting the wrong information.
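
The conservative, ask-first pattern Fred describes, where the system avoids acting on low-confidence inferences and instead checks in with the user, could be sketched like this. The threshold and messages are illustrative assumptions.

```python
# Hypothetical sketch: only send tailored feedback when the system's
# confidence in its inference is high; otherwise ask the user to
# self-assess (a goal-revision check-in) rather than risk wrong feedback.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff

def next_message(inferred_progress: float, confidence: float) -> str:
    if confidence < CONFIDENCE_THRESHOLD:
        # Low confidence: don't assume; ask the user directly.
        return "Do you feel like you've changed toward your goal this week?"
    if inferred_progress >= 0.5:
        return "Nice progress this week. Keep it up!"
    return "Rough week? A small step today still counts."

print(next_message(inferred_progress=0.7, confidence=0.6))  # asks, doesn't assume
```

The design choice is asymmetric risk: a wrongly tailored message can do harm, while a check-in question costs little and supplies fresh training signal.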

Jon Shieber:
I think that’s incredibly important. Katie, how are your robots … Can you talk a little bit about the technologies y’all are looking at, in terms of how you’re going to integrate these kinds of intelligence into the robots y’all are building?

Katie Aquino:
It’s a mixture. To backtrack: a lot of things we see today, especially those viral videos of things that look like AI, are a lot of the time more like chat-bots. What we’re developing is not a chat-bot. We’re going to bring in a little bit of deep learning. We are going to have some ability to chat back and forth. They will be able to remember, but what’s important is having that base personality. The experience of what we’re feeding each Bod, each personality, is going to be gamified. Through gamification, we’re creating a seamless user experience that’s controlled by your smartphone app. Through your smartphone app, you’re going to be interacting with your phone, collecting data. This data, about who you are and what your interests are, is going to be fed to the algorithms that are part of this Bod, this personality. When we’re online dating, we put a profile up with what we like, what our interests are … We’re looking for people who have similar interests. It’s kind of the same thing. We see it as creating your own online dating profile, but for your Bod, for your synthetic partner.

Jon Shieber:
How has that vision of the future manifested itself in the reception you’ve gotten for this idea, putting the BodAI product out there as something that people will eventually have? There’s another company that does exoskeletons, which are also enhancement … All of this enhancement is making the relationship with technology much more physical, bringing it out of the virtual realm. How has that been perceived broadly? What’s your sense of that?

Katie Aquino:
People are actually excited about it. We get different kinds of feedback from different kinds of people, and that’s just how it’s going to be. There are people who say, “Oh, I can’t wait for the day.” Then there are people who say, “That’s interesting. I would definitely try that. That sounds awesome.” I haven’t heard anybody actually say to me, “That’s scary, I don’t want to do that.” Of course, there are plenty of people who would give that feedback as well. The most important thing is that we are working with an entire subculture community out there, known as iDollators. Hardly anybody has heard of them, but they’ve been on television. One is named Davecat. He’s been on TLC, on “My Strange Addiction” and “Taboo.” He’s married to a synthetic person. He’s a robosexual, but unfortunately the technology simply doesn’t exist, and what does exist on the market currently is slightly horrifying, so people are awaiting the day when they can either experiment with or actually live with a synthetic partner.

Jon Shieber:
Really, no one has questions about that? Either that or y’all really are bad at Twitter. We’re going to get an actual person walking around with mics at the very end, in about 10 minutes. That is something that’s going to happen. I’m waiting for the comments to start pouring in. The reception is also really bad. I’m going to follow up on that a little bit, and I want to telescope out a little bit. Jon and Fred, do you look at something this material and physical, that people can have an actual tactile relationship with, as the endgame for the types of therapies or systems y’all are looking at? Where you have these robotic proxies, or these avatars that exist in physical space?

Jonathan Gratch:
In part of my research, I try to understand how people respond differently to machines than to people. One of the reasons I was fascinated with this therapy project is that when you interact with people, you engage in impression management. You try to build yourself up. You don’t want to disclose, because you fear being judged. The idea of this project was to add a human-like element to a machine, and I was curious about that. We know that people disclose more information to computer forms than to people, in some cases, because they feel more anonymous, less judged. The other pathway is that you can build rapport or alliance by being a person and giving that kind of empathetic feedback. If you combine those two things, I wasn’t sure whether it would be the best of both worlds or the worst. You might actually undermine the advantage of a computer by adding these human-like elements.

So far, we haven’t shown that to be the case. My guess would be that … In part, that’s because the character here emphasizes her computerness in many ways throughout the interaction. I think if you add full … We’re actually going to experiment with a creepy, life-like android with this system.

Jon Shieber:
Creepy and horrific are words that have been used to describe these things.

Jonathan Gratch:
My guess would be the more you make it really like a human, the more you’ll undermine the benefit that you get from listening-

Jon Shieber:
There’s a theory about this. It’s the uncanny valley, right? There’s that point at which the anthropomorphized thing becomes near enough to human but not quite. That’s why all those baby dolls are really creepy, right? They look … They’re sort of real, but they’re not real. Fred, I jumped in on you.

Fred Muench:
One of the things I’d like to do is add to that. We work with disclosure as well. Women report more sexual partners to a computer; men report fewer. People report domestic violence, because there’s no impression management … It’s not necessarily that it’s anonymous. You know someone’s at the other end, but as Jonathan brought up, it’s non-judgmental. To go back to that: there’s a tremendous opportunity to gather information, and with diagnostics. I think that’s where we’re going to see this massive change in what therapy, psychiatry, and mental health are. Diagnostics is going to be completely taken over digitally. It just does a better job. I don’t think the goal of the therapeutic relationship will be a machine. I think machines can augment contact, but … I teach a class called “Crafting Mindful Experiences” at NYU. What I find more than anything is that the majority of projects … They are all over the place, but a lot of them ask, “How do we use technology to create a deeper connection with other humans?” Technology allows us to connect in ways we haven’t before, but we also have to disconnect.

For people with extreme trauma, I do think … We know that dolphin therapy, pet therapy, any type of connection is good, whether it’s with a machine or not. At the same time, it’s about helping people engage in their world. We know people with social anxiety would rather talk to a machine. The goal is to get them to talk to humans, so how do we find that balance?

Jon Shieber:
I think that just lends credence to a theory circulating more broadly in the tech community: there will be specific applications for specific types of AIs. Some will be more affectless and some will be more affective, with a more emotional response, based on the kinds of things you want to get out of that AI. AI is the wrong word; it’s more like a learning program. Someone brought up Tay. I feel like we should talk about Tay just a little bit. I have some things to say about Tay. Basically, the question from Arianna Tobin … Yay, Arianna. Can we talk about Tay? Therapy is inherently vulnerable and technology is inherently experimental, and there’s a tension between the two. I think Tay is not a good corollary to the work y’all are doing. Tay was just goddamn dumb. Why would you unleash an innocent program on Twitter? You have to expect the worst of people. That was Microsoft being completely myopic about the ways the AI would actually learn. The assumption that Twitter represents real speech or some sort of actual forum … I mean, in some sense it is a forum for ideas and it motivates change, but there are just too many trolls in the world. Trying to learn from the unfiltered internet is a terrible idea.

You wouldn’t throw a child onto Twitter and be like “Okay, this is how you’re going to learn about humanity.” Nor would you do that with a BOD. That’s my take on things. Do y’all have anything …

Fred Muench:
The only thing I’ll add is that just as we disclose more to computers, the reverse is also true: we see the downside of humanity. Yes, when we have a problem, we’re going to open up, but we’re also just going to open up about anything. You see it with bullying on certain social media apps. We see it all over. The other thing is that people are impulsive. What someone says in the moment on Twitter is all about pure drive, id …

Jon Shieber:
Some of what people … I mean,

Fred Muench:
Some of what people say.

Jon Shieber:
You also have the foundation of the Egyptian revolution on Twitter.

Fred Muench:
I get it, but I’m talking about using that to build a system to learn from. I completely agree with you: you’re getting the best and the worst of both worlds.

Jon Shieber:
Anybody have anything?

Jonathan Gratch:
I guess I’ll just say that I think Tay was an illustrative example of the limits of current speech-processing technology. Most of what you see with speech processing is fairly shallow. It doesn’t have a deep model of what’s going on in the interaction, and I think that, on the one hand, highlights the need for deep models, but it also counteracts some of the great hype around AI these days, that it’s about to take over the world. It illustrates that these techniques have severe limits when it comes to deep intelligence.

Jon Shieber:
And it’s frightening that people want to cede so much control to them. Does that aspect of things give you pause, in terms of the limits of what the technology can do? Or are we just saying the same thing?

Jonathan Gratch:
People in AI recognize the limits much more than the media and Elon Musk seem to at the moment. There will be important, slow advances, and people will incorporate these things when they’re effective. Despite a lot of the hype, we should realize it will take a while before these things are able to deeply understand and communicate with people.

Jon Shieber:
I’m going to give in to my more prurient and salacious nature. Let’s talk about sex, baby. With artificial intelligences the way they are now, is a sexual relationship with a robot or an AI something that is … I just wonder about the power dynamics, and whether that’s actually something that’s desirable, something people should be looking for, or whether it’s just a replacement for people who have some associative disorder and can’t relate to humans. I don’t know if I’m being too judgmental. I’m just throwing that out there. Anybody want to take a crack at it?

Katie Aquino:
I think we need to take a step back and remember that the far-future implications of this technology are that it will be advanced. It will be like people. It’s not so cut and dried. Even going back to Tay’s tweets: as I said before, there’s a clear distinction between chat-bot AIs and deep learning. There are a lot of differences between them. Of course, when you put something like Tay on the web, it’s going to have that effect, because it’s too tempting. On the other hand, as far as the sexual implications of these robots go, it’s going to be an exploration. We as people are exploring ourselves. Technology is becoming a part of us. If anything, sex is going to become better in the future, by far. We are biological machines, and there are a lot of problems that unfortunately happen to our biological bodies. There are real companies, organizations, real people, scientists, and researchers working right now to create a next human, an evolved human, in the far future. Not right now, but I do believe that machines will be the first step toward us exploring our own sexual identities. Then in the future we will also perhaps evolve, theoretically, into maybe something else. Maybe a merger of man and machine.

Jon Shieber:
Fred, Jon, do y’all want to wade into these waters?

Jonathan Gratch:
I guess I’ll say that I think we should distinguish sex from relationships. People are having sex with machines right now, right? If you’ve seen the movie “Her,” I would say that’s an image of what a relationship could be like. I teach a course on affective computing, on emotions and machines. One of the assignments I give my students is to think about “Should we build ‘Her’?” I give them some of the research on how Facebook changes us, things of that nature. I think it’s a good thought experiment that all technologists should address: what are the implications of building machines that could form real relationships with people, and is that a good or bad thing for society?

Fred Muench:
Just to add: if you look at the sex industry, they’re always advanced in terms of technology. They’re always the first at doing augmented reality, virtual reality, adding in machine learning, toys. You do see where things are going to go when you look at that industry. The depersonalization, and what Jonathan mentioned in terms of a relationship versus sex … You’re seeing people who grew up with internet porn having trouble with sexual relationships. You’re seeing an increase in people trying to understand what it’s like to be in a true loving relationship. As someone who has a 13-year-old boy, who … He knows what the deep web is. He knows how to get there when we have every restriction up. How are we going to maintain loving relationships in the face of all these tools? I also think, at the same time, we can engage people and build interventions, and avenues toward intervention, if we have a window into getting that information out. He gets it. He gets the potential harm of diving into this in a way that’s uncontrolled. That’s my fear. At the same time, I think there’s a lot of hope.

Jon Shieber:
I’ve eaten into about 4 minutes of the questions from the floor. Are there questions from the floor? I can keep asking these people all sorts of stuff. Anybody? Really? Nothing? Sir, right over there.

Audience Member 1:
Earlier you mentioned some of the ethical dilemmas that are going to be coming up in the future. This is for the gentleman on the right. I was hoping you could elaborate on what some of those might be.

Jonathan Gratch:
I think you touched on it. One of the issues … There’s actually a lot of question around Facebook. Is Facebook something that helps us grow as individuals, or is it something narcissistic that has us reflect only upon ourselves? There’s a fair amount of research around that. It seems to play some role in self-affirmation. Jeff Hancock, who was at Cornell and is now at Stanford, has looked at how it changes your decision-making, how it changes your mood when you read your own wall and see how other people’s connections to you influence you. Do you actually feel better? Do you actually change your notion of self? There’s also work by Cacioppo at the University of Chicago suggesting that people who use Facebook a lot are some of the loneliest people.

This technology enables many different ways of using it, and those uses seem to have long-term consequences for behavior that are not very well understood. They are certainly not being considered by the designers.

Jon Shieber:
I think at some point you get to the question from the opposite side: the ethics of the AIs themselves. What rights do they have? How much can you program them? When do they start to become legal entities? You’re seeing that across a range of things, not just psychological ethics but real moral dilemmas, as we get into issues of autonomous vehicles and self-driving cars. Whom do you kill: the old man on the road, the car full of kids, or the puppy? At some point, the AI is going to have to make that decision. What that looks like, no one knows yet. I think people are still grappling with it in a profound way. That’s just one example; there are hundreds. If you have AIs doing your contracts, who is legally liable when those contracts go bad, or when a negotiation does? Did anybody else want to weigh in?

Katie Aquino:
We already have AIs controlling stock markets and other things. We already have it in our lives. The only difference is that it’s going to be more abundant, more ubiquitous, everywhere, from our cars to everything else. Our transportation, everything, will be controlled by AIs. I think that’s just where we are headed, and I don’t think we should be afraid of it. Going back to the theme of what this discussion is really about: we’re talking about relationships between man and machine, and this is about filling a void. A lot of us are overwhelmed by technology, yes. On the other hand, yes, we do have more exposure to other people. That’s perhaps how Facebook comes to be considered a negative, even though people are able to connect with one another. Then you have Tinder. You have these apps that are about instant connections, instant gratification. It is harming human-to-human relationships. Also, on the other hand, we have to take into account that there are just a lot of really lonely people. What’s the hurt if you have a synthetic companion who is programmed to be there for you, to help you, and you can trust this synthetic being?

I think that’s going to have a positive effect on the world. I think there are a lot of single mothers … It’s going to become more advanced, more like a person. If that could fill that void in people, I think that would be a positive.

Jon Shieber:
Fred, the work that you are doing is incredibly positive when it comes to the relationship that folks are having with these SMS systems, where there are responses that are positive, that are keeping people from doing things … either keeping them from regressing in their behavior or, if they have regressed, getting them in front of people who can help them out. Keeping them on a path that is positive for them. Are there other questions? We’ve got about a minute and a half. Yes, ma’am in the front?

Audience Member 2:
Thank y’all so much for being here. This is so interesting. I love the evolution of the field. I’m mostly very curious because, when I started my online practice, people were so concerned about classified information and mandated reporting and things like that. Now that it’s gotten to this stage, with VR and AR, clearly y’all are looking at the data … When I think about the phases, what’s the ultimate goal for this data? Now that y’all know that people are more likely to report, are we going to design different elements to improve the human experience, for example?

Jon Shieber:
Yes. Right?

Fred Muench:
As you can see, I would say I’m the Luddite of the panel when you look at the amazing work that these guys are doing. I do think it’s going to constantly evolve, and what we will find is what’s working and what’s not. In the mental health field, the medical field, we’re primarily risk-averse. The new digital medium has allowed us to really test and iterate in a much more agile way. It’s so exciting to see what’s going to happen, whether it’s disclosure, safety planning, whatever it might be. We know that technology is the greatest thing in the world for helping people who are suicidal. There’s nothing better, yet practitioners are often fearful: what if someone is suicidal? Technology is good for this. We want technology for this. I guess the short of it is, I do think it’s going to constantly be evolving, and I don’t think there’s any right answer.

Jon Shieber:
With that, we are officially over time. Thank y’all so much for listening. Thanks to the panel for participating.