Delray Beach’s Joe Mullings, a leading headhunter in the medical technology, or medtech, industry, discusses his own health scare as well as his globetrotting Web series TrueFuture in the May-June issue of Boca. Our wide-ranging conversation included many subjects we couldn’t fit in this article. Here are a few of his additional insights.
On the challenges of headhunting
When you have a dialogue with people, you’ve got to be able to talk about why the market I’m talking to you about is good. Why is this company that’s in this market good? Why is this leadership team that’s in this company that’s in this market good? And then finally, why are you going to show up every day in the parking lot? What’s the mission of the company? So I always talk about: market is the race, product is the horse, leadership is the jockey.
And so, you’ve got to be able to articulate that with a level of expertise to the “A” player who is an expert in their market. It takes a lot of skill, a lot of study, a lot of intelligence, a lot of grit. Because you get about 99.999 percent “no”s. “No” to you has to be inconsequential—we’re delaying your decision to move to another day. And that takes a special mindset.
On Palm Beach County as a potential or definite medtech hub
Most people don’t know this, even in the medtech community: By the number of registered medtech manufacturing companies, Florida is No. 2 in the country. There’s a big corridor up in Orlando and on the West Coast. Not as much in South Florida. What’s happening here is that while we’re really well-known for manufacturing, we’re still working our way through early-stage R&D. … We’re light on venture capital when it comes to medtech down here, but that’s growing. There are universities down here, and we especially have a great hospital network. A lot of research takes place in those universities that are the birthplace of these emerging technologies. I would say that life science has a little bit of a stronger hold, with Scripps, Max Planck.
That’s research, which is still a long way off from commercialization. In South Florida you’ve got Johnson & Johnson still, you’ve got Medtronic, you’ve got Zimmer—billion-dollar players that do have a stronghold. Now you’re starting to see northeast money coming down here. There was talk of Goldman Sachs looking at space in West Palm for their $8 billion asset management business. When you start to have that money come in, things are starting to shift.
On how COVID is changing medtech
Medtech, historically, has been overseen by the FDA. There was a very quiet force underneath, pre-COVID, of digital medicine and passive patient monitoring. And right before COVID, consumer vital-sign monitoring was a $10 billion-a-year industry. Since COVID hit, we’ve become obsessed, and maybe awakened, with blood pressure, body temperature, blood oxygen levels—all over-the-counter devices. And now telehealth, which doesn’t just mean looking at your physician on the TV screen. It’s actually telemetry of biometrics coming off the body, shared with your health care provider. I think that bodes extraordinarily well for South Florida, because we have great software engineers, we have the gateway to South and Central America, we have a very strong gateway to Europe.
When you take away the oversight of the FDA, you still have an adjacent technology base, which I call health tech, where we’re monitoring everything that’s going on, from your toilet on your gut biome to your toothbrush on gum disease, to triangulation of simple biometrics on a daily basis. I look for that to be a large market down here in South Florida.
On the moral implications of artificial intelligence
We are designing an algorithm to make decisions in software. And how you program that decision-maker, that algorithm—what ethics, morals, judgments and pseudo-consciousness you give it—is as important as the STEM expert who wrote that code. I think we have enough STEM people; we can always use more. You can never have too much intelligence.
On differences in AI
AI is a general category. Unfortunately, a majority of the world associates AI with Arnold [“The Terminator”] and “Ex Machina.” Those depict what we would call artificial general intelligence, AGI, where it can start to think for itself, and start to correct itself at a speed we can’t even fathom. But then you have narrow AI, which covers things like self-driving cars and facial recognition. It is not designed to do anything other than think in a very rapid fashion within a certain finite set of machine instructions. That constitutes 99.999 percent of our AI today. Yet romantically, we think about Hollywood, and the monsters are coming to get us. And it’s fair, because we should be concerned about that.
Over the next five years, AI is going to … put a meaningful part of the workforce out of a job. Because we’re becoming very good at doing things that human beings do repetitively, predictably, and that sector of the population has not been trained to upskill. For some reason, most societies have decided to cease their education after high school or, at best, college. The biggest risk of AI right now is not that robots are coming to kill us, not yet at least, but that we are going to put a meaningful part of the workforce out of work. And we have not put together a program to retrain them, so they can become useful contributors to the emerging workforces.
If I sat down at a computer screen and played chess against the best AI-driven chess program, I would get crushed in six moves, because that AI was built exclusively to learn within chess. And it will learn from itself. However, if I took a robot and then said, take the chess pieces out of the box on the shelf and set up that chessboard for me in an analog fashion, it couldn’t do it. That would require general AI. That’s where you’re taking something, not giving it a finite track to run on, and asking it, just by itself, to think laterally outside of its machine language, the way the wet mass of meat we have in our heads does. So that’s dangerous 15 years from now. What I’m most worried about is the societal impact of narrow AI on the workforce, which will lead to civil unrest, which will pit the haves and have-nots against each other.