EPISODE 1785 [INTRODUCTION] [0:00:01] ANNOUNCER: JFrog is a DevOps platform that specializes in managing software packages and automating software delivery. One of its best-known services is JFrog Artifactory, which is a universal artifact repository. JFrog is also focused on rapidly emerging needs in the MLOps space. Bill Manning is a Senior Solution Architect at JFrog. He joins the podcast to talk about his background in startups and venture capital, and his current work in ML at JFrog. This episode is hosted by Sean Falconer. Check the show notes for more information on Sean's work and where to find him. [INTERVIEW] [0:00:45] SF: Bill, welcome to the show. [0:00:47] BM: Well, thank you so much for having me, Sean. I'm really looking forward to this today. [0:00:50] SF: Yeah, absolutely. Thanks for being here. I'm excited to talk more about JFrog and DevSecOps and some of your areas of expertise. But first of all, I wanted to dive in a little bit to your background. Who are you? What do you do? Explain your role at JFrog. [0:01:05] BM: Yeah, absolutely. I've been with JFrog coming up on almost eight years now. Prior to JFrog, I've been doing this for a long time. Started my career around '97. Yes, I'm that old. But I've had a very successful run. I've had three acquisitions. One of the things I've done since I began my career was always pick different technologies every time. First company I had was one of the first web-based CRM platforms. A lot of the people that were there went on to form things like Marketo, and SugarCRM, and Salesforce. Next company I did was email encryption and security, and we sold that to Cisco in 2006. Third company was IoT before IoT was even a thing. It was called 4Home. It was a connected device company. We sold that to Motorola and Google in 2010. I was a venture capitalist for a bit. I was with Vodafone Ventures. I was a senior media partner.
I did another company called XTV, around media consumption for the public in Australia. I did some other work after that with some startups. I'm also a mentor. And then I joined JFrog a little over eight years ago, when they were midway into the company. And I had some friends here and they said, "We'd love for you to come in and give your input." And since then, I've been a solution engineer, solution architect. Managed teams here. I do everything from a little bit of marketing, to sales, to you name it. Currently right now, I'm transitioning over actually into machine learning. And so our JFrog ML platform, which we just launched this year with the acquisition of Qwak, I have now joined that team on a go-to-market strategy. And on top of that, also education and things around that. That's me. [0:02:31] SF: That's awesome. Yeah. I mean, it sounds like you have a whole breadth of experience so you can kind of be the ultimate gap filler regardless of what the problem area is. You can jump in there and add something to it. [0:02:41] BM: Jack of all trades in a way. I learned in my startup years how to do everything from legal to marketing. You name it. Because I had to. [0:02:48] SF: No choice. [0:02:50] BM: I had to do it. [0:02:50] SF: Yeah. I mean, I had a similar - I founded a company and I was the CTO of that company and had mostly experience in engineering before that. And really the way I got my business degree and my marketing diploma was because there was no one else to do those things and you're just forced to do it. And I think it's taken me on a career path that I probably never would have gone down if I hadn't been a founder as part of that experience. [0:03:14] BM: You got the practical MBA, as I call it, right? You got the hands-on MBA. The things that they teach in MBA school, you learned the hard way. [0:03:22] SF: Yeah, exactly. I mean, a lot of stupid mistakes. [0:03:24] BM: Oh, yeah. I'll be the first to admit it, too.
And I'll be like, "Yup, that was dumb." [0:03:29] SF: Yeah. You've had quite the run in your career in terms of picking the right kind of companies to be part of, or companies that have had successful exits. What is your strategy there? How have you been able to pick these companies? [0:03:41] BM: I'm a very weird being in a lot of ways. A lot of friends say that about me. People that know me in the industry. One guy from a certain venture capital firm, I won't say who he is, but he calls me his lucky rabbit's foot. I seem to be able to sniff out technologies in advance. And sometimes, with some of the technologies I've worked with, or companies we've started, or big things I've done, we might have been slightly ahead of the curve. I have a tendency to be able to sniff things out, and I don't know what it is. There's no real strategy. It's a feeling more than anything. And it also stems from just my inherent - the reason why I got into this industry in the first place is the evolution, the change. It's not stagnant. It's always flowing. And that was a huge kick for me. It was like, "Do I want to be in like -" here I am at 52 now. But like when I was in my early 20s, I was like, "Do I want to be the person that's doing the same job and gets a pension at the end of 50 years? Or do I want to be on the bleeding edge of stuff all the time and always be learning?" The thing is, is I like the fact that I'm in an industry where I have to constantly keep my skills honed. I constantly have to be learning what's out there. If you don't do that, you're not going to survive, especially in this industry. I mean, like I said, at my age, we're in an industry of ageism. And I hate to say it that way, but it's true. But the thing is, the way I have to stay relevant is by staying relevant. Staying up with the trends. Learning the details. Not relying on skills that I learned a long time ago as bedrock and solid, but always evolving and changing.
[0:05:05] SF: Yeah, I've certainly worked with people, especially on the engineering side in the past, that have had - they have sort of their toolkit and they don't necessarily want to expand that toolkit. They want to go and apply that toolkit. And they can do it very effectively, but that's kind of what works for them. I think I'm more aligned with you, where I've always been driven by sort of educating and pushing myself to figure things out. I want to work on the edge of technology, essentially, and that's why I spent a long time - too long - in school trying to educate myself and really work on the edge of technology. [0:05:38] BM: Come on, let's think about it, right? I mean, you know this. The thing is, is when you read about something, you start learning something, and then you start applying the practical portions of it and you get that first light. We used to always say in my first - in every startup I've done, "What's the first light? Where do you get the product to be? Where is that first moment where you could sit around and look at each other and say, 'Damn, we did it'?" You know what I mean? This is the start. We know this is something. And I still love that. Like I said, right now, I'm working on a couple of things outside of JFrog to keep my skills up. And every time I have one of those little epiphanies - and actually, one of the companies that I've had was Epiphany - but one of the epiphanies is when you get to that moment, you go, "Okay. Yeah. You know what? This is practical. I get it. What's next?" And then in my head, whenever I've done this in the past, the floodgates open on what's possible. And also, too, what are the restrictions? What are the constraints I'm within? And how do I break down those constraints and barriers? [0:06:27] SF: Yeah. What's kept you at JFrog for eight years now? [0:06:32] BM: Number one, the people. I'm going to be honest. I love the technology. I love the platform.
I get to work with some of the biggest tech companies in the world. That's one part of it. But the thing is, is I love the people I work with. Like every other company, you have to be able to work with the people you're around, right? You need to be able to get along. And we have very interesting hiring policies here. Are you a frog or not a frog? I thought in the beginning, I was like, "Oh, that's cute." You know what I mean? But now I've learned over time that that's actually a real thing. We have quality control on who we bring into the company. And the thing is, is that, I'll tell you, in some of the hardest situations we've had here, I've had some of the most delightful and fun discussions, because in some cases, when things get tough, humor is the best way to alleviate some of that strain. And being able to have people that are in the same similar mindset where you can have that is exceptional. For me, having people of the same mindset around me has been one of the major things that has kept me here. Secondly, like I said, I get to work with some of the largest companies in the world and always be on the edge of creating something that will do something. And the thing is, too, is I always make the joke whenever I talk to, say, customers or prospective customers, especially if I'm a consumer of their products, I want to be able to say, "Hey, you know what? I'm here to help you make yourself better, compliant, safer, because I'm your customer." Right? I want to ensure that my own personal data, my own personal safety is top notch. And this gives me a chance to have some sort of influence into that. And so, it's a lot of different things. And also, JFrog has afforded me the ability to be out there, right? The big joke is sometimes people say I'm one of the faces of JFrog. I do a lot of our public speaking and webinars and things like that. And I enjoy that. For a person who gets stage fright, I enjoy it. And the thing is there's a lot of different aspects.
But like I said, it also encourages and nurtures employees here to take those steps and keep going. And I also love being able to mentor others like I was mentored when I was younger and bring my perspective. And there's a lot of people here that I love having those conversations with. I love the diversity. I love every little bit of it. And that's one of the things that keeps me here, to be honest. Actually, this is the longest I've ever been at a company. I've been one of those ones that's constantly evolving and going. But this company constantly evolves and goes. And so, it falls in line with my mantra. And it gives me the ability to also excel and exceed with the product and, like I said, work with large companies and work with companies that are making a difference. I always make the joke when people ask me, like, "Yeah, from Chick-fil-A to SpaceX." We have this whole large swath of things that we do. And that's exciting. [0:09:07] SF: Yeah. That's awesome. John Maxwell, I think, said, "People quit people, not companies," I think is the quote. And it's true. I think very few people are going to stay at a company that is up and to the right if they hate their day to day and they can't stand their coworkers. It's just going to be a miserable - maybe some people are willing to do that, but it's going to be a miserable existence. And the great thing about working in technology, especially on the technical side, is you have a lot of options. You don't have to kind of put up with that stuff if you don't want to. And I can tell you seem very passionate about this. I can understand why they have you out there as the face of the company from time to time. Where was JFrog in terms of their size and development eight years ago when you joined versus where it is today? [0:09:51] BM: Oh, absolutely. When I joined the company eight years ago, I mean, I was employee number 123. And now we're up to almost 2,000 employees.
We were at like 35 million in revenue that year when we announced it. And since then, you can see where we're at now, heading towards massive revenue and scale. When I joined, we were just shy of, I think, about 2,000 customers utilizing our platform, which was exciting. And now we're almost at 8,000. The thing is, is that over time, I've watched this grow and expand and open different locations around the world. It's fun to be on the roller coaster as it's going. And in this case, the roller coaster doesn't have an end. It keeps going. But it's got thrills, and chills, and ups and downs. But the thing is, though, is that you're there constantly. And watching JFrog grow, and being there in the front seat with some amazing people that have gone along for the ride with you, and having that shared trench story. You're in the trenches together. Times are tough, times are good. For me, it's been exciting watching the growth. And the thing is, is we've adapted. And we've always been kind of at the bleeding edge of what's next in terms of that. And so, we're able to meet the market where it is. In some cases, we might be a little ahead of the curve, but we're laying the groundwork, right? And that's the thing is, is now, even for me, jumping into this whole idea of JFrog ML and MLOps. And it's exciting. Every company is going to have AI eventually integrated at some point, or some sort of machine learning, into their application. It's already starting. It's been starting for a while. And I look at where that is compared to where DevOps was years ago, where there's this massive playing field of all different technologies. And with JFrog, one of the things that also attracted me was the universality that we've actually embedded into our messaging, which I've equated to us being the perfect conveyor belt. Right? Raw materials on one side.
That's the raw code, the third-party transitive dependencies, and some sort of manufactured product at the far end. And in between, we provide that layer of consistency. And the thing is you have all these different tool sets. And the thing is, is that I always had this map that I always show. I'm like, here's the terrifying realm of DevOps. Every single tool in every single category. From ERP, to CI/CD, to security. And then platform engineering came in, right? It was like, "Okay, let's start regulating this down into some sort of consolidated platform." And it's the same thing going on right now with ML. You look at all the different tool sets. All the different things for tuning, and RAG, and all these things. And we're trying to address that same level of market. For me, watching the company evolve and say, "Hey, you know what? We want to be that layer. We want to be that base foundation. The pillar of technology that you build your software on." And whatever you choose - because every software company is different. And that's exciting too, by the way, the fact that this company has evolved with that and preached the idea of: we're going to help make it more efficient, more effective, make it more secure. We're going to be able to deliver your software better. You're going to be able to go ahead and update those devices in the IoT realm. Now we're saying, "Okay. You know what? We're going to bring compliance. We're going to bring speed and accuracy and efficiency now to ML and MLOps." Because the thing is, let's face the hard facts, 85% of the ML technologies out there being developed never make it to production. And how do you address that? That number should be lower, right? It should be half that or even less than that. And you should be able to say, "Let's build something. Let's evolve it."
Like I said, for me, JFrog has been an extension of that, of that natural trend of the actual industry, and our ability to be that consistent base layer that allows them to do what they need to do as a company and ensures that they have a consistent set of tools, accuracy, and efficiency. And I'm fortunate enough to work with people that want to provide that to them. [0:13:43] SF: Yeah. I think to your point, there's so much new tooling essentially going on in the ML space right now. I was looking at, even just with agents, the agent market landscape chart the other day. I think it was maybe from Menlo Ventures or one of the VC firms. There were easily 200 logos on this thing. And agents are like the new, new thing in Gen AI. If you expand that to the entire stack, it's like thousands. I work in the space and it's hard for me to even stay on top of it, let alone somebody who's just thinking, "Oh, maybe we should kick off an AI project." Where do you even begin? [0:14:17] BM: And even now, right? Even those are starting to segment. Even those layers of that landscape are starting to segment. That's the thing is, is that, initially, when people have this - there's always a wave, right? There's always this massive umbrella of what something is. And then inside of that umbrella, things start to segment and factions start to come about. And like I said, agents, in my opinion, are the new battlefield. When you look at it, everybody's vying for the perfect thing. And the thing is, is that when you look at it, it's like you said. This is the way I always look at it, right: when there's a segment of technology that gets more involved, people see that this is actually a value add. They want to get into this space. Ideas start happening. It's not a bad thing, right? I mean, this happened with the dot-com bubble, right? It was the initial thing that happened where people were like, "Hey, you know what? This is new.
This is exciting. Retail, commerce, education, information." And then it got into these segmented markets of little bit players all over the place. Then it went into the next part, like mobile. Same thing. Now, ML is the same way. It started off with a small consolidation of things, but then people go, "I got an idea." And then the idea starts to spread. And then what you're going to start to see is those 200 logos probably shrink down to 15 or 20 logos. And then eventually work its way down to like five or six major players, right? The way it's going to shake out, there'll be mergers, acquisitions, people will drop off. Some tools will come across as strong, but they're inferior. You know what I mean? There's that natural shake-off. And that's the other part. Like I said, one of the other things I love about this industry is that if you want to look at natural selection - our industry is a living industry. And I love that. The fact that it does change, it does evolve. You don't really see that with CPA work. [0:15:59] SF: Yeah. And I think I've been in the industry long enough, and you have as well, that we've seen multiple cycles of these. Essentially, whenever there's a new sort of shift, there is this massive expansion, essentially, of players. Even if you go back to social - and people who weren't around for this might not remember it - there were endless takes, essentially, on social networks. Facebook overtook Myspace, sort of leading the charge there. But there were a lot of competitive players that had their own takes on that. And now we've really consolidated the market to a handful of players. [0:16:33] BM: I was like, "Do you remember Friendster?" [0:16:35] SF: Yeah, exactly. Friendster couldn't scale, basically. Their MySQL database fell over, and they didn't have the tech talent to scale it. And they basically died. [0:16:44] BM: Right.
And that's the thing is, actually, if you look back at it - I was talking to a friend about this recently. It's so funny you brought this kind of stuff up because I was talking to a friend about this. We always said, actually, if you looked at the design and everything that was around Friendster, it was actually a superior platform in terms of usability and design. But like you said, the back-end of it couldn't support the heavy load it was under. They had a bunch of brilliant front-end people, a bunch of brilliant UX and UI and interaction people. But on the other side of it, they couldn't compete. And the thing is - by the way, remember, that was early. There was no real cloud scale. There was no real Kubernetes. There was no way to provide an infrastructure without physical servers. I mean, I make the joke of trying to explain to younger engineers, I'm like, "You don't understand. We had this one startup where we needed servers. And I literally went to some really sketchy location in Fremont, to some warehouse where some guys sold me 10 servers, and I had to scrape the DOJ stickers off the servers and wipe them out and bring them back to my server location, because that's all we could afford." And now it's like, I can go in and auto-scale anything I want. I mean, it's amazing. I've always been a big fan of the adoption of things like Cloud Native and all that. And like I said, one of the things I really enjoyed about coming here to JFrog was we're governing board members of the Cloud Native Computing Foundation, helping define that roadmap for all these companies that have that successful scale. Not to fall into the Friendster trap of tipping over at that one-million-and-one user, right? The plus one that just killed it all. But on the other side of it too, we worked with a guy - we have this guy, his name is Rimusz. Rimusz is one of the co-creators of Helm. He works here. Amazing, right? It's like tech royalty, in my opinion.
Other people are like, "Oh, that's cool." I'm like, "That's super cool." I get to see the guy and ask, "What was your inception? What were you thinking when this was coming about? How did this orchestration come to be?" I had so many questions. I was like a fanboy. [0:18:39] SF: That's awesome. I mean, I think that's one of the great things about working for some of these larger tech companies in the Bay Area. It doesn't have to be in the Bay Area. But you do tend to run into some of these people who have invented technology that's widely adopted as an industry standard, and they're just there as an employee, and you can talk to them and pick their brain. I want to transition a little bit to get into some stuff on DevSecOps. Just to start, for those that are maybe less familiar with the space, how do you define DevSecOps? And where does it kind of fit into a modern software lifecycle? [0:19:12] BM: Oh, absolutely. It's essential, in my opinion, right? The thing is, is if you're not adopting this ability of constant - here's the thing. Whenever I discuss DevSecOps, and I've given a lot of talks on this over time, I always start off with just the speed of builds, right? When I started in the industry, we did quarterly builds. We did quarterly releases. I mean, we did builds and we did testing, but we only really shipped software once a quarter. And it was a big announcement: "Hey, we spent 90 days working on this piece, this upgrade to what we're doing." We built it occasionally. We tested it and things like that. And then, of course, everything started to progress, builds sped up, and cycles got faster as automation became more available. And then, of course, in 2006 when we started getting into the whole idea of DevOps, you build it, you run it. And tools like Docker and orchestration and all that. Once again, it was a Wild West show. You looked at all the tools that were out there to do something.
But the thing was, is that there was this essential idea of DevSecOps. Being able to have faster release cycles. Being able to have security built into it, right? Originally, it was just DevOps. And then security came in and said, "You know what? We should really do something to make sure that things are safe, secure, and compliant." And the thing is, is that having these iterative cycles, having the ability to create things that allow you to build more rapidly and deploy more rapidly, is essential, right? I don't see organizations living without it. It's funny, in some cases, you'll see technologies - well, not technologies, but more like trends, like DevOps or others - ebb and flow and change or disappear, and they evolve into the next thing. Look at Agile, right? I mean, Agile came about. And now people are like, "Is Agile dead?" It's all these things. But DevOps and DevSecOps are essential for speed, accuracy, and go-to-market. And having the security behind it ensures your ability to perform for your customers, independent of the industry silo that you're in, by providing not only the best software but the most secure and compliant software. As you know, you can see it in the trends and the numbers. The number of supply chain attacks that compromise people's software has increased. We always go back to SolarWinds. It's a use case that'll be studied for decades. That was a fourth-level transitive dependency in the cycle. You know what I mean? It was deep, deep, deep into the recesses of the actual application. And it caused $100 billion worth of problems around the world. But the thing is, is that companies need to adopt this because it allows them to automate more, do more. Make sure the things that they're doing are correct and they're compliant and safe and secure. Ensuring stability. Ensuring security. Ensuring resiliency. All the words that you hear when people talk about DevSecOps are true.
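As an aside, the "fourth-level transitive dependency" point can be made concrete with a tiny sketch. The package names and graph below are entirely hypothetical; the point is only that a package your application never declares directly still ends up in the build:

```python
# Hypothetical dependency graph: each package maps to the packages it declares.
DEPS = {
    "my-app": ["web-framework", "logger"],
    "web-framework": ["http-client"],
    "http-client": ["compression-lib"],
    "compression-lib": ["bad-pkg"],  # the compromised package, four levels down
    "logger": [],
    "bad-pkg": [],
}

def flatten(root, depth=0, seen=None):
    """Walk the graph, recording the shallowest depth of every reachable package."""
    if seen is None:
        seen = {}
    for dep in DEPS.get(root, []):
        if dep not in seen or seen[dep] > depth + 1:
            seen[dep] = depth + 1
            flatten(dep, depth + 1, seen)
    return seen

reach = flatten("my-app")
# "bad-pkg" never appears in my-app's own dependency list,
# yet it is reachable at depth 4 and ships in the build.
assert "bad-pkg" not in DEPS["my-app"]
assert reach["bad-pkg"] == 4
```

Real resolvers also deal with version ranges and conflict resolution, but the reachability problem is the same, which is why scanning only your declared dependencies misses most of the attack surface.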
The thing is you need to adopt them in a way that works best for your company. And at the same time, it adheres to some sort of level of standards that can also be applied, so that when you bring people in, you're not bringing them into some jarring sort of thing. You're bringing them into something that's similar, or something that falls in line, at least with minimal deviation, with the definition of DevSecOps. [0:22:23] SF: Can you have DevOps today without security? Are these really two independent things? Or should it just be one concept? [0:22:30] BM: No, they should be the same thing. Here's the thing, is I see this a lot. There's a lot of companies where we'll talk security with them and they'll be like, "Well, we need to bring the security team in." And some of the questions I always have when I'm emailing the security team are like: how much delay? How much hindrance does it put on your organization by having a separate security team? The thing is, is that here's an example. Everybody talks about shift left. Where do you put the tooling where it matters most? And where it matters most actually is at the developer level, right? This is your entry level. Now, you're not going to have a security person sitting over their shoulder every time somebody codes. This isn't a bullpen, right? Or like one of those Margin Call-style bullpens where everybody's sitting around in a cube working and there's a security guy behind them going, "No, you shouldn't do that." "Oh, you know what? That's actually an incorrect algorithm you're doing there." "Oh, you could be susceptible. You might open yourself up to a SQL injection." Nobody's doing that over your shoulder. Being able to have DevSecOps as part of it means security integrated in at every phase. Here's the thing. It's not a point solution. Security should never be a point solution. It should be an iterative solution that goes through the entire SDLC, from the developer level. We even have here a product called Curation, right?
Which is like a firewall for your organization to intercept requests. And when those requests come in, there are rules and definitions set by companies that say: this is a compliance issue; this is a longevity issue; is it operationally a risk; are there malicious packages? But it should go all the way down to runtime. And the thing is, is that every segment along the way is susceptible in one form or another to some sort of security attack. When people say, "Is security separate from DevOps?" I say no. Because if it is a single point of failure, then you're failing. Because the thing is, is that there's a lot more space in that spectrum of the SDLC, the software development lifecycle. And security should be attributed to every segment of the SDLC, from shift left as part of the developer experience, down to even code, right? Being able to actually go through and say - in Git, we had a big announcement with GitHub this year around things like auto-fix. How do you auto-fix potential security threats and issues? We worked with them on that. Copilot integration. How do I query and find out, before I even do my job, if something I want to use to build my software, one of these libraries that's essential - 85% of what I use as a developer is written by someone I don't know, right? These third-party transitive dependencies. Is that going to affect me? The CI process, it should be part of that too, right? Constantly iterating through. And as you build, it should be doing security checks. Looking for things like secret detection. Are my applications configured correctly? Did I forget to enable TLS on a connection that I'm making through the service? It should be part of the QA process, even. Even as QA is QA-ing it, they should also be security testing it, too. And even when it gets to production, you should give one little chef's kiss of security before you deploy it.
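The build-stage checks mentioned here - secret detection, and catching a connection where TLS was left off - can be sketched as a small scan step in CI. This is an illustrative toy, not JFrog's product; the patterns and file names are made up, and real scanners use far richer rule sets:

```python
import re

# Toy rules for the two checks described: leaked secrets and non-TLS endpoints.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded API key": re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{20,}['\"]"),
}
# Flag plaintext-HTTP endpoints, ignoring local development addresses.
PLAINTEXT_URL = re.compile(r"http://(?!localhost|127\.0\.0\.1)\S+")

def scan_text(name, text):
    """Return human-readable findings for one file's contents."""
    findings = []
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append(f"{name}: possible {label}: {match.group()[:12]}...")
    for match in PLAINTEXT_URL.finditer(text):
        findings.append(f"{name}: non-TLS endpoint: {match.group()}")
    return findings

# A CI step would run this over the repo and fail the build on any finding.
config = "endpoint = http://payments.internal/api\napi_key = 'abcdef0123456789abcdef'"
issues = scan_text("service.cfg", config)
assert any("non-TLS endpoint" in i for i in issues)
assert any("API key" in i for i in issues)
```

The value of running this as an automated CI gate, rather than a manual review, is exactly the point made above: the check happens on every build, long before a separate security team would ever see the code.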
And then once it's actually been deployed to, say, something like a Kubernetes set or node, you should be constantly monitoring that too. The thing is, we're a CNA, right? At JFrog, we're a CNA, a CVE Numbering Authority, which I adore. I think it's one of the coolest things we do. And we have this massive research team. I love going through - it's such a nerd thing. I love going through their research site and seeing what they've discovered. And you'd be shockingly surprised how much security has now become a normal part of the scene, right? Which it should have always been. It should have never been those guys in the black glasses with the black hats and the shirts that hang out in a room and come out of nowhere and say, "You need to fix that because your code is bad." And the thing is, is that there aren't enough humans. Usually, it's a hundred developers to like one security person. It's a crazy ratio. The thing is, security should be part of it. It should be automated. It should be behind the scenes. Plus, it allows the security people to focus on what's important as opposed to a bunch of just minutiae that's out there. They get inundated by this stuff. They're flooded by CVEs constantly. How do you make sure that they're focusing on the right ones? How do you figure out if something is contextually relevant to the issue that's actually being brought up by, say, a vulnerability? How do you know that you're being affected by it? [0:26:41] SF: Yeah, I think one of the other challenges as well when you have security, and I think this goes for privacy as well, is these sorts of separate teams that are, like, satellited in at the end of a launch to make sure that they bless the thing. It becomes somewhat of an antagonistic relationship as well, where they can be seen as sort of the department of no, because they're coming in at the very last point after people have invested all this time to say, "No, you need to go back and redo these types of things."
But just to play sort of devil's advocate with this: by moving releases to DevOps, to the engineering team, to those that are building, are we also moving some parts of security there? Are we overloading the engineering teams, essentially, with too much responsibility by also giving them security to think about? [0:27:26] BM: You know what? I'm glad you brought that up, right? Actually, that's the thing that we - most organizations, most people I've talked with over time in this industry that are friends of mine and other colleagues - and I've been to so many conferences and we've had this debate, "How much is too much?" And my opinion - and this might be controversial - but my opinion is it's never too much. But it needs to be done intelligently. Okay? It's not just, is there an issue? This is the nuance. Because the thing is, say I'm a developer. Now, security, you're telling me I'm a security engineer besides being a software engineer. Not only do I have to sit there and come up with creative ideas and aspects behind the features and things I'm trying to create, but now you want me to be a security guy? Well, how do you do it in a way that's not intrusive? How do you do it in a way that it feels almost gamified? But also, too, in a way that helps the coder actually become a better coder? And that's the thing. It's a fine, nuanced line you have to ride. And I believe it's every developer's responsibility to do this. I mean, back in the day, we didn't have things like SAST, right? That lets me know that you're a bad coder. You did something terrible, right? You shouldn't have done it that way. It would usually take some sort of pen testing. It would take some sort of automated testing maybe down the road, or maybe a physical test. I mean, I remember the days when we didn't have the automation. We literally had to go through and record the clicks on the website you were doing.
And then it would do the injections and testing - it was a very manual process. By giving the developer the ability to look at their IDE, the world that they live in, you give them all the potential tools that they need to do their job in their world, their house, right? I mean, your IDE is your house. You're in your house. You have different rooms in your house. You have your source code. You have the code you're working on. You have the dependency libraries that you're monitoring. You have all these things. But what if you were able to add that layer of protection to highlight things for you? Take the ambiguity out of it? And that's the beauty of being able to use tools like Copilot, right? With Copilot, the stuff we're doing and we did with GitHub, the best part about this is, as a developer, I can go in and say, "Show me a better way. Or maybe bring up and show me some of the code." And then what they came to us for at JFrog was, "Well, what kind of libraries can I use? What are the safe and compliant libraries? Why don't I query in advance? Why don't I go find these things in advance so that it will reduce the cognitive load I have to spend in the security realm?" And that's the thing - when it comes to it, I think there's never enough, but it has to be done in a way that it's not like suddenly - I love that phrase. What did you call it? The Ministry of No? Or the Department of No? [0:30:07] SF: Yeah, Department of No. [0:30:08] BM: We're going to go with the Ministry of No though. I like that. Because that sounds like a very - but the idea is, like I said, when we get back to it: you can't have a security person hovering over your shoulder, creepily whispering in your ear, "You shouldn't do that." That could be bad, right? This is one of those things that I've shown developers. Like I said, that's my background. I started off as a coder and have done everything. And like I said, I wish I had half the tools.
I always go to my IDE. I bring up VS Code, which has always been my IDE of choice. Anyway, I go in and I show you, "Look, I can still do my job." But at the same time, I now have a bunch of rich information that allows me to do it, number one, better, right? It takes a lot of the time that it would take me to write certain algorithmic functions and stuff and does it for me, so I'm not just missing a parenthesis or whatever. And then on the other side of this, making sure that the things I need to do my job properly - the 85% to 90% of things I consume - are safe, secure and compliant. And to put a number on it, it's 100 times more expensive to find a security issue, or some sort of potential threat or nefarious component, in production than it is at the developer level. [0:31:24] SF: Given that we have growing interest in DevSecOps, and also no shortage of security tools in the market and more secure ways of developing software, the problems or challenges that companies face with security incidents and data breaches are not only not going away, they're actually getting worse. Why do you think there's still this problem despite better tools and practices being in place? [0:31:52] BM: It's interesting, because one of the things that we addressed here - and this is actually when we were working on it a couple of years ago, my eyes lit up when I heard what we were going to do and I looked at it. Because one of the problems is the deluge, right? Yes, this is a systemic problem. It's been a problem for decades, by the way, right? This is not new. It's just become more prevalent. And I think the thing is, I don't think we had the tools to see, at the time, the level of potential exposure that companies were facing. And I think that's the reason why the numbers are so high in a lot of cases. I just think we're more aware, right? We're now more aware. The problems have happened.
The knee-jerk reactions are out of the way now. Now we're on to standards, right? When you're looking at things, like I said, like SolarWinds or Log4j, right? Log4j - I mean, let's go back and look at that for a minute, right? It was the darling of a library. Everybody knew Log4j. And then suddenly, it became the most toxic mess on the planet, you know? And the thing is, the software bill of materials standard came from that and from SolarWinds, right? The government's reaction was like, "Shoot. We've got to do something about this." I won't say what I actually said - I curse like a trucker sometimes. But this is the thing we needed to react to. And when we look at this, it's not that it suddenly increased. I just think we're more aware of it. We have data to back that. And the thing is, what made me excited is this constant deluge that developers face. We started talking to customers and we started looking at reports. Like, CVEs aren't treated as that important in a way, because there are so many. It wasn't that they're not important. It's just that you have an avalanche of CVEs coming at you. How do you pick the ones that you want to address? And what we did is we created this thing called contextual analysis. And with contextual analysis, we actually look at the actual threat parameters of, say, an issue. And we ask, "Are those parameters, those conditions, being met that would allow the potential nefarious exploit?" Right? And this cuts down the amount of actual CVEs and the amount of potential threats and security issues that the companies we work with have to go after. I showed one customer - they gave us a Docker image, I ran it through our tooling, and it came back with like 458 CVEs. Something ridiculous. And we're like, "How long is it going to take you to go through that?" And that's just to find out if you're being affected - not even at what level you're being affected. But are you being affected?
And they go, "Ah, probably not. We'll probably go for the critical ones, the ones with a CVSS score of 9.7 and above." And I said, "What if I could tell you, with certainty, that 92% of these are not affecting you? The conditions for those exploits are not being met." Now you have a small segment of security threats that you can address, and that's manageable. Now, like I said, that's the reason I said even if there's an increase in security vulnerability attacks, it doesn't mean you're being affected by them. Because in most cases, it's a singular condition, right? It's a condition that gets met. Was it a parameter? A variable? Something that's passed into it? Whatever it is that causes the exploit. Or is it chained to the exploit? It doesn't matter. But the thing is, yeah, there's a sheer scary number out there. That's terrifying. A 240-something percent increase year-over-year. It's something stupid. You go, "That's terrible." I mean, you're like, "What?" You know what though? Just because there's an exploit doesn't mean you're being affected by it. It's not going to affect you directly. That's the thing - when I look at this, I said, "Yeah, you know what? Those numbers are terrifying." But here's the thing, too, by the way, just quickly: it's not only are you being affected by it. But if you are being affected by it, how long have you been affected by it? Also, too, what other products in your product line might be affected by this? How do you have an audit trail? How do you have accountability behind those components and pieces you use - to effectively protect your company, to ensure that you have been protecting it, or to mitigate the PR disaster that might come when somebody finds out that you're being affected by this and X number of millions of user records have just been exposed? Right? These are terrifying threats that companies think about all the time.
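The contextual analysis idea Bill describes - only surface a CVE when its exploit preconditions actually hold in your environment - can be sketched in a few lines. This is a hypothetical toy, not JFrog's actual engine; the CVE IDs, condition keys, and environment flags are all made up for illustration:

```python
# Toy sketch of contextual CVE triage: each CVE carries the preconditions
# its exploit needs; we keep only the CVEs whose conditions are all met
# by the environment actually being scanned.

def is_applicable(cve, environment):
    """A CVE is only actionable if every exploit precondition holds."""
    return all(environment.get(key) == value
               for key, value in cve["conditions"].items())

def triage(cves, environment, min_cvss=0.0):
    """Filter to applicable CVEs above a severity floor, worst first."""
    applicable = [c for c in cves
                  if c["cvss"] >= min_cvss and is_applicable(c, environment)]
    return sorted(applicable, key=lambda c: c["cvss"], reverse=True)

# Hypothetical data: three reported CVEs, but only one has its exploit
# conditions met in this particular environment.
cves = [
    {"id": "CVE-A", "cvss": 9.8, "conditions": {"jndi_lookup_enabled": True}},
    {"id": "CVE-B", "cvss": 9.7, "conditions": {"runs_as_root": True}},
    {"id": "CVE-C", "cvss": 5.0, "conditions": {"debug_endpoint_exposed": True}},
]
env = {"jndi_lookup_enabled": False, "runs_as_root": True,
       "debug_endpoint_exposed": False}

actionable = triage(cves, env)
print([c["id"] for c in actionable])  # -> ['CVE-B']
```

The point of the sketch is the shape of the filter: the raw CVE count (three here, 458 in Bill's anecdote) shrinks to the handful whose conditions are actually met, which is what makes the remainder worth a human's time.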
And like I said, just because there's a sheer number of increased attacks on the supply chain doesn't mean it's affecting you, but you should be prepared for it. [0:36:12] SF: Yeah. So I know that JFrog works with some of the largest banks in the US. When we think about DevSecOps and regulated environments, what tooling does JFrog provide those companies that helps solve some of these problems we're talking about? [0:36:29] BM: Oh, absolutely. And that's actually some of the things we were just talking about, right? We actually provide various different levels. And our banks that utilize our platform use large swaths of what we do. Our security product, JFrog Security, is actually segmented into four different products. We have our standard JFrog Xray, which is different - by the way, compared to any other security tool, most security tools are action-based. You need to do something to get some sort of results back to enact some sort of action plan to fix that. With our Xray product, because of the way Xray is symbiotically linked to Artifactory, it's constantly scanning. It's constantly evaluating. Also, by the way, remember, it's not just one security facet that you should be worried about, right? It's not just CVEs or CVSS, right? There's malicious packaging on top of that. There's licensing and compliance. Read those licenses, right? You'd be surprised at how many crappy licenses there are out there, right? Made-up licenses that say, use us and all your code belongs to us. Small print, right? And then on top of that, there are also things like operational risk. How old or outdated or abandoned are the libraries you're using? Over 80% of the libraries that are out there - whether Python, npm, whatever - are old, abandoned or outdated. And I mean, that affects you too.
If you have an issue with a library that you're utilizing that hasn't been updated since 2011, there's a good possibility that you're going to have to hire a company to rebuild it for you with the fix in place - there are a couple of companies out there that do that - or you're just going to have to deal with it, because it's never going to get fixed. You can write rules on that too. Now, we also have Advanced Security, and Advanced Security allows us to go through and filter through the actual CVEs that are affecting companies, right? Are you being affected by this potentially nefarious moment? Also, too, are you doing anything in terms of secrets? Are you exposing them? I'll give you an idea: I downloaded a Docker image from Docker Hub, pulled it down, ran it through our security product, and found out I had access to this developer's AWS keys, his GitHub, everything. I had access. I contacted the guy and he's like, "That's impossible. It's clean." I go, "Did you check your root history?" And you go into the image and you type in history, and it had everything. Everything. And he's like, "Oh, my God." On top of that, we also tell you if there are services or applications that could be potentially threatening in, say, an image that you're building. We also do infrastructure-as-code analysis. I messed up once. I was learning Terraform. I did all this stuff. I made my first one and I was patting myself on the back. And none of the ACLs, none of the security actions I put in were there. And when I ran it through, I looked, and I had authorization equals none. Totally rookie mistake. But our product found that and let me know. I was like, "Oh, I'm not crazy." But we also provide the frontline defense on this, right? We have Curation, which is like a firewall. Because Xray and our Advanced Security kick in once those binaries, those libraries, those potentially nefarious components are already inside your environment - potentially zero-days, right?
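To make the leaked-credentials anecdote concrete: `docker history --no-trunc` replays the commands that built each image layer, so secrets baked in via `ENV` or `ARG` show up in plain text. Here's a rough sketch of scanning that output for leaked credentials - the regex patterns are illustrative only (real secret scanners ship far larger rule sets), and the sample layer strings are made up:

```python
import re

# Illustrative patterns only - real scanners cover many more credential shapes.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_secret": re.compile(r"(?i)(password|secret|token)=\S+"),
}

def scan_layers(layer_commands):
    """Return (layer_index, finding_type) pairs for anything that looks leaked."""
    findings = []
    for i, cmd in enumerate(layer_commands):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(cmd):
                findings.append((i, name))
    return findings

# In practice, the input would come from the image's layer history, e.g.:
#   docker history --no-trunc --format '{{.CreatedBy}}' <image>
layers = [
    "/bin/sh -c #(nop) ENV AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE",
    "/bin/sh -c pip install -r requirements.txt",
    "/bin/sh -c #(nop) ENV DB_PASSWORD=hunter2",
]
print(scan_layers(layers))  # -> [(0, 'aws_access_key'), (2, 'generic_secret')]
```

The takeaway is the one from the story: deleting a secret in a later layer doesn't remove it from the history, so the build commands themselves have to stay clean (e.g. by using build-time secret mounts instead of `ENV`).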
Our Curation product acts as a firewall and intercepts the request for that library. It says, "You want to bring this library in. But what I'm going to do first is a dry run to see what's coming in, and stop it before it begins." And then we have companies now using Runtime, which we released this year. In other words, now we can trace that security issue from before it comes in, while it's in - giving the developer everything they need in the IDE with SAST and all that - all the way down to: is this thing in my runtime environment right now? And how much of a potential threat is it, right? This is the stuff that we're bringing to banks. And the thing is, having a single holistic security solution means you're not trying to correlate data after the fact, right? This is correlated data in the same platform, allowing you to have that consistency. People ask me, if I were to distill everything we do at JFrog down to one word, I would always say consistency. We provide a level of consistency to organizations so they can reduce their tooling, which means they don't have tool sprawl, which also means fewer security gaps and holes. The more tools you have, to be honest, the more holes you have, because those tools aren't interoperable, right? They don't work together. You have to do manual correlation. And any time - I'm sorry to say this - we get humans involved, unless it's an innovative kind of thing, we're actually one of the linchpins in something falling apart. [0:41:02] SF: Yeah. You mentioned the guy who accidentally had a bunch of credentials in image history. And I think that's one of those things where you read about some data breach that was due to weak credentials or something like that. And you're like, "Oh, my God. How could these people be so dumb?" But it's such an easy mistake to make as people.
And like you said, a lot of times people are the linchpin in this. I guess in terms of security issues that relate to social engineering, or a person being the failure point, are there things that JFrog or DevSecOps practices help prevent there? [0:41:39] BM: Well, it's funny you just said that, right? I had a chance to meet Kevin Mitnick when he was alive, right? One of my prized possessions in my home is on my shelf - I even have it on a little stand. It's one of his lock-pick business cards. I don't know if you ever saw those. Because Kevin Mitnick was the quintessential social engineering hacker. I mean, he started with the bus system of LA when he was, what, 11? And then worked his way up. And the whole idea is most of his hacks were done over the phone. That's a harder thing. We don't prevent that. I mean, let's be honest. We're the code side. We're the ones and zeros, the binary side. Organizations, when it comes to the social engineering aspect, that comes down to - we've all taken those courses. Are you compliant? Sally sent you an email with a request that says, "Hey, I need your social security number, your mom's maiden name and your first dog." Right? It's like, "Oh, I'll answer that." Those kinds of things. Or somebody calls and - all right, I decided for some reason to start watching Hackers while I was working out this morning. And he's like, "Oh, yeah. Look at the little box next to the computer. My BLT drive crashed and I'm AWOL on this." And I'm laughing so hard watching it. 1995, right? But the thing is, when it comes to social engineering, that's more about education. That's more about concise guidelines, putting processes in place and things like that. We do it from the automated side. We're the robotics side of the house, right? We're the automated thing. We're the people who give you the notifications that you're doing something terrible and you should fix it.
[0:43:17] SF: Hackers and Kevin Mitnick. We're going back in time. You've really taken me back to my early days on the internet. [0:43:25] BM: I still have a Free Kevin sticker in my office at home. [0:43:28] SF: Awesome. Well, returning to what you were talking about - how one of the things that separates or differentiates JFrog is this more proactive approach versus reactive. Now you're making investments into ML, as many, many companies are. How does that potentially help with being proactive? We're entering this era of agentic AI and agentic systems. Suddenly, could we actually have some of these security tools not only monitoring and alerting, but going and fixing the problems? [0:43:58] BM: I'm glad you said that, because actually this is one of the things that we're doing with the - we just introduced it. It was actually an acquisition, like we talked about before. It was a company called Qwak. And now it's our JFrog ML product. We also just released FrogML, which is an SDK to help around this. But the thing is, what we've done is we've taken our knowledge of, say, DevSecOps and our ability to do this, and applied it to MLOps. We're like, "No, this is just an extension of DevSecOps." It really is. When you draw the parallels between standard software engineering and ML engineering, yes, they're different, right? But there are still similarities, right? You're still using Python or R to do your data analysis, your EDA, and all that kind of stuff. You're pulling in data sources that are out there, right? Hugging Face. Look at Hugging Face - a repository of millions of models, LLMs and things like that. We proxy those in our company already. We did this before the acquisition even. And we use our security product to scan for things like malicious packaging, licensing, and things like that. Making sure that you're not bringing in bad data sets.
You don't want to train your models on something nefarious, something malicious, something that could cause a skew of biases or hallucinations in the data. But also, too, remember, these things are built on Docker. They're actually deployed - I mean, whether you're working with a GPU or a standard CPU. And then being able to go down and have some level of accountability. I mean, we talked about SBOMs a little bit before, right? The reaction was the software bill of materials. Now there's the ML-BOM, right? The idea of: what did I use to train this model, right? There's going to be accountability behind that too. And I think the industry is now taking it more seriously. And so for us, we're providing tool sets that are proactive. We're building proactive model security in this case. We just released our machine learning repository so that people can customize, build, tune, and have a place where they can version models, as opposed to shoving these things into an S3 bucket, which makes no sense. We're going back to the days of how we stored third-party transitive dependencies before products like Artifactory came out, right? And the idea is simple. Now we need to take those same approaches. ML is in the Wild West phase, like DevOps was in the early days. Look at the number of tool sets. You talked about that. Just Google "ML tool sets" or ML anything. And you get those maps like we used to have - remember the maps of DevOps where it's like, here are all the quadrants of crazy crap, and there are logos you can't read because they're the size of a dot, right? These kinds of things. And now we have the same thing with ML. [0:46:47] SF: Yeah, absolutely.
And also, I think as much as some of these technologies can help automate and even automatically fix some security issues, we're also introducing potentially a lot of security vulnerabilities, especially as - it is the Wild West. We have so many new toolstacks. They're all going to have these dependencies. It could open up a lot of potential supply chain attack scenarios as well. There are tools going to market where security is not job one for the teams building them. They're just like, "Hey, I need to get something out there in order to meet demand and stay on top of the hype cycle." [0:47:24] BM: Yeah, absolutely. And the thing is, now we're going to go to the next phase of this, right? Which is accountability. And with the accountability side, people are going to start demanding more. And when I say more, I mean more information on what comprises the things that we utilize and interact with. Look at some of the things that are going on even at OpenAI, right? With ChatGPT-4 and now 5. The iteration of the models. I know that we've already gone past - I don't think they've told us everything on sentience yet. But the thing is, yeah, not only is it preemptive and not only is it iterative through the process, but I think we're heading into the accountability phase, which is: we want to know how the soup's made. People are going to want to know, from a legal perspective, what kind of training data has been attributed to the model I'm interacting with, right? There are still a lot of things that I think are shaking out. It's the same thing with DevOps, right? And then DevSecOps. And then we started getting into ML and MLOps. And what's the next phase? I'm excited. I don't know. I've got some ideas. But I like the idea that I don't know. And I hate to say it, but something is going to have to happen to catalyze that next step. It always does.
[0:48:44] SF: Well, Bill, I want to thank you so much for being here. This was really fascinating. I love your passion. It's really energizing. [0:48:51] BM: Thank you for having me. I really appreciate it. Like I said, we've all been through a lot in our careers, right? And the thing is, I love the ambiguity. I love the future. I love the uncertainty. I love a little bit of that chaos engineering that our life is. Yes, I'm glad to be here and I appreciate it. And this has been a really fun talk. You ask some really good questions, and I appreciate it. [0:49:13] SF: All right. Well, thank you so much. Cheers. [0:49:14] BM: Cheers. Have a great day. [END]