May 02, 2023

Delivering Petabyte Scale Storage with Scality | Episode #64


In this episode I talk with Paul Speciale, CMO at Scality. Scality's software-based storage delivers billions of files to five hundred million users daily with 100% availability. They make standard x86 servers scale to hundreds of petabytes and billions of objects. Scality has transformed the way organizations store and manage their data with their flagship product, Scality RING, and their more recent launch, Scality ARTESCA. Paul and I discuss the company's history, the adaptive storage approach, and how Scality partners with industry giants like HPE, Veeam, Dell, and Cisco. The episode also delves into the growing threat of #ransomware and how Scality's expanded capabilities can help combat it.


Breaking through the #scalability barrier with purpose-built Object Storage for increased availability, durability and performance!


The company was founded in 2009 in France and is headquartered in the European Union (EU).

☑️  Support the Channel by buying a coffee? - https://ko-fi.com/gtwgt  

☑️  Technology and Technology Partners Mentioned: Veeam, OpenStack, Swift, Object Storage, S3, AWS, HPE, DELL, Cisco, VMware  

☑️ Web: https://scality.com ☑️ Crunchbase Profile: https://www.crunchbase.com/organization/scality

☑️ Interested in being on #GTwGT? Contact via Twitter @GTwGTPodcast or go to https://www.gtwgt.com ☑️ Subscribe to YouTube: https://www.youtube.com/@GTwGTPodcast?sub_confirmation=1

Web - https://gtwgt.com Twitter - https://twitter.com/GTwGTPodcast Spotify - https://open.spotify.com/show/5Y1Fgl4DgGpFd5Z4dHulVX Apple Podcasts - https://podcasts.apple.com/us/podcast/great-things-with-great-tech-podcast/id1519439787

☑️  Music: https://www.bensound.com

Transcript


"They were really one of the first two or three vendors that came out with this idea to do scale-out object storage, but I think they were ahead of the pack in thinking: let's really decouple from the platform."

Hello and welcome to episode 64 of Great Things with Great Tech, the podcast highlighting companies doing great things with great technology. My name's Anthony Spiteri, and in this episode we're talking to a company dealing in petabyte-scale software-defined storage, revolutionizing the way organizations store and manage their data; a company transforming data management and infrastructure across industries while focusing on combating the growing threat of ransomware. That company is Scality, and I'm speaking to Paul Speciale, CMO at Scality. Welcome to the show, Paul.

Thanks, Anthony. It's a pleasure to be here.

Excellent. So, just before we get to a topic that I love talking about, which is software-defined storage: as a reminder, if you love Great Things with Great Tech and would like to be on future episodes, you can click on the link in the show notes or go to gtwgt.com and register your interest. All episodes of GTwGT are available on all good podcasting platforms (Google, Apple, Spotify), all hosted and distributed by Spotify Podcasts. And go to YouTube, find GTwGT Podcast, hit that like and subscribe button, and you will get all future episodes as well as being able to see all previous 63 episodes.
With that, Paul, welcome to the show. Let's start with a bit about your background, and the fact that you've been at Scality for two-thirds of its life, basically. Let's start with you, and then we'll get into the history of Scality.

Sure. To boil my career down: I've been in the tech industry now for almost 30 years, I'm afraid to say. I focused the first 10 years on data management and database companies, but for the last 20 I've been really focused on data storage, with a little bit of cloud mixed in as well. My roles started as a developer (I actually was a C++ developer), then I became a product manager, I was the Chief Product Officer here, and now I'm the Chief Marketing Officer. So I've kind of run the gamut. But you end up realizing it's all very interconnected anyway, so each role built on the previous one.

Excellent. And the fact that you started out as a coder is also interesting, right? It points to some great evolution. Being a coder, then a product manager, and now CMO points to the intrinsic value of all that knowledge you've built up over your 30-year career.

It always helps, yeah.

There you go. So, Scality: a company born out of Paris, France in 2009. Let me get a bit of background on the founding of the company itself and why it was founded. I think it was four co-founders, back in 2009?
It's actually five founders. They were all based in Paris, France, and it turns out they were all already affiliated through a previous company: they were working on email security software, especially for service providers, so they got to know the service provider space very well. They started working with the big ones, like Orange in France, Telenet in Belgium, and Comcast in the US. It was actually those companies that started saying to them: hey, look, we've got a big data storage problem. And that's the spark that started them thinking about whether they could do something innovative in software-defined storage. So indeed it started in Paris, France, and it's become kind of a mini-global company since.

Yeah, and the company is now kind of co-headquartered out of the US and France, so it's quite interesting in that sense. The company started in a period when software-defined storage, SDS, was all the rage. And you said the founders had a really solid foundation in the service provider space. I'm actually interested: what was the nature of the software before that, in terms of what they were working on?

Well, they were working on anti-spam solutions, so they really understood this whole scalability problem in terms of webmail-scale workloads. They understood things like distributed systems; that was a lot of core technology. If you build a system like this, you have to bring together a lot of domains: it's data storage, it's networking, it's RESTful protocols; there's so much that comes together. So they were able to borrow that expertise, but of course the team broadened out and became more expert in data storage. The core idea really was to provide scalable, petabyte-scale storage, and to do it on open systems, on commodity platforms. Even up until that time, you had storage systems that were just proprietary, custom hardware and software.
The innovation was to decouple that: to really do this in software and give the customer freedom of choice on the platform.

Yeah, I think that's a good summary of what software-defined storage was trying to achieve back in the day. We are talking a while ago now, and we'll talk a little bit about where the industry is today with regard to SDS. But back then, if I think back to that time frame, we had Nutanix starting to come out; VMware were doing their own thing with vSAN, but that was more the 2012-13 period. We had an inkling something was happening. By my reckoning, Scality was almost one of the first to really come out and say: we're going to be software-defined, we're going to truly decouple our software from our hardware. Am I right in saying that, given the timeline I'm working out?

You're absolutely right. They were really one of the first two or three vendors that came out with this idea to do scale-out object storage, but I think they were ahead of the pack in thinking: let's really decouple from the platform. They went pretty far. This is in 2010, before I joined: they were actually delivering the software as Linux packages, so the customer could say, I'm running CentOS, I'm running Red Hat, I'm running some other distro; they could pick their distro and pick their hardware. So they really were software-defined. You had other vendors (in fact, I was with one of them, called Amplidata) where we said: it's software, but it's targeted at certain fixed hardware. So Scality really pushed the envelope in giving the customer freedom of choice. And it's now 13 years later, and we still look at why people buy; the top two or three reasons still include: I don't want to be vendor-locked. I just don't want to be vendor-locked; I want the choice of my platform today, and even in the future as I grow. And we've had that happen a few times, where people jumped to a different hardware vendor as they scaled the system.

Yeah. If I think back to those days, when I was deep in the weeds architecting hosting platforms, I remember the monolithic storage systems we had to deal with and the problems that arose with them. It was always problematic.
We were always looking for the next "what's next" type of solution, and to that end, scalability and ease of use were huge. And it was even better if you could take a storage system you'd already invested in and repurpose it in some way. I think that's a big benefit of being hardware-agnostic and software-defined: you give your customers freedom of choice, not only to choose the hardware going forward as they design new platforms, but maybe to realize the investment in older hardware and keep it going a little bit longer. Is that what you see with Scality as well, in terms of your customer base?

Yeah, we do. In fact, I have an interesting anecdote: since I was with Amplidata, and they were later acquired, we went back to one of the customers I had in Japan who had the old hardware and asked, can the Scality RING run on these old servers? And the answer was yes. So it really provided investment protection. But the other thing you see with customers is that they want to know they can make choices at any point. For example, a lot of vendors force the obsolescence of the hardware platform: they'll say, in three years or five years, move to our new model, or we're going to start charging you higher support fees. That goes away, because we don't control the hardware, so customers have full control. Do they want to run it for five, six, seven years? And then you're growing organically as well: there's no disruption as you pick the next generation of servers you want to run on. So it's organic growth, not only across your choice of vendor but even across different generations of servers.

Yeah, I get that. And I think one of the challenges in the early days, going back (and maybe I've repressed a few of those memories, because that's what we do, right?), was that the promise of scalability was always there, but the reality of the hardware at the time, specifically the disks, and how you dealt with the physics, was something else:
SSD wasn't front and center yet, and NVMe couldn't really even be bought. So in those early days, how did Scality deal with that? There's obviously got to be some smarts in the software to get around the inherent problems of physics.

Yeah, there have been some real changes. You're reminding me of the day I started working on this problem in 2010. We were dealing with 10 and 20 gigabyte drives, so the design point, even for the Scality RING at the time, was to think about scaling to tens of thousands of nodes, because to make multiple petabytes out of these small boxes you had to start thinking about those numbers. Today it's an entirely different world: you can get a petabyte in a single chassis. So running something on two or three servers might make perfect sense for some customers, but you really need to design things that are innately scalable and don't have a single point of failure; and that's not only for scale, it's for high availability. And then you need to put wrappers on it to make it, like you said, easy to manage, because ultimately you're going to have people that need to run storage systems but also need to manage them alongside their networks and their applications. So putting that thinking to work at an operational scale was really the thing they had to come up with.

Yeah. And scalability and resiliency typically didn't go hand in hand in the early days; you almost had to have one or the other, from memory. And a lot of hardware vendors, specifically, I think, the ones that went for a specific appliance, tying everything together, really struggled to achieve that perfect triangle of performance, scalability, and reliability. So how did Scality get over that hurdle?

Well, I don't know if you remember, but back in that 2009-2010 time frame there was also the emergence of what we now think of as commonplace, which is erasure coding. We all started seeing the end of the reliability of RAID 5 and RAID 6 for large-scale systems. It works perfectly well when you have a few dozen drives, but now think about a system that has thousands of drives: things are going to fail all the time, and you don't want to be constantly in a degraded mode, rebuilding things. So we started pioneering the idea of doing resiliency over these different protocols, using erasure codes, variable parity schemes, and even geographic spreading of the data so that you could have site failure tolerance. Those were the innovations that were intrinsic to the system from the beginning: the ability to have thousands of nodes participating with thousands of drives, and to do this geo-spread erasure coding so that things can fail and you simply expect them to fail. That's okay: if they fail, you have enough redundancy in the system, and, by the way, we can self-heal. If a drive fails, we'll just rebuild the missing data onto another drive in the cluster.

That's right, because a lot of storage systems were based on ZFS and that sort of technology as well, and the painfulness of a single drive failure, a rebuild on a zpool, and all that kind of stuff was, like you said, painful. So we needed a better way to tolerate that at scale. You're right: I remember specific systems degrading for months while the rebuild was happening, and it was just untenable, but you had to deal with it at the time. So I think that's where a lot of the software-defined object storage and erasure coding came into play, and people realized there was a better way.

You end up realizing you just expect failures to happen. It's kind of the hyperscaler model: there are always going to be component failures, and it's a normal mode of operation. By the way, it shouldn't even be an emergency: you don't need an admin standing by to replace a drive if something happens. You just let it happen and you work around it.

That's it, that's it.
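To make the erasure-coding idea Paul describes concrete, here is a deliberately simplified sketch: a single XOR parity chunk over k data chunks, so any one lost chunk can be rebuilt from the survivors. This is an illustration of the principle only, not Scality's implementation; production systems use Reed-Solomon-style codes with several parity chunks (surviving multiple simultaneous failures) and can spread chunks across sites.

```python
# Minimal erasure-coding sketch: k data chunks plus one XOR parity chunk.
# Real object stores use Reed-Solomon-style codes with multiple parity
# chunks; one XOR parity only survives a single loss, but the rebuild
# ("self-heal") idea is the same.

def make_parity(chunks: list[bytes]) -> bytes:
    """XOR all chunks together to produce one parity chunk."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover the single missing chunk: XOR parity with all survivors."""
    return make_parity(surviving + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]        # k = 3 data chunks on 3 drives
parity = make_parity(data)                 # parity stored on a 4th drive

lost = data[1]                             # pretend drive 1 fails
recovered = rebuild([data[0], data[2]], parity)
assert recovered == lost                   # the missing data is rebuilt
```

The storage overhead here is k+1 chunks for k of data; variable parity schemes, as mentioned above, tune that ratio (k data + m parity) to trade capacity against how many concurrent failures a cluster or site layout must tolerate.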
So, in terms of products: you've got the Scality RING, and then we'll talk about ARTESCA a little later on. Has RING been there from the start as the product name?

It has. The name comes from the underlying topology. There was actually a distributed peer-to-peer protocol called Chord that MIT published back in the 2000s, which describes a peer-to-peer ring technology with a circular key space. That's the origin of the name, and it's always been the product name. The product launched in 2010 and it's been our flagship product since. It started, I should say, as a pure object store, but at some point (we may want to talk about this) we also introduced a distributed file system right into the object store, so today it's presented as both file and object storage.
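For background on that naming: in a Chord-style design, nodes and keys are hashed onto one circular key space, and each key is owned by the first node at or after its position on the ring. Below is a minimal sketch of that lookup idea, illustrative only; it glosses over Chord's finger tables, replication, and everything else a production peer-to-peer ring does.

```python
import hashlib
from bisect import bisect_right

RING_BITS = 32  # a small circular key space: 0 .. 2**32 - 1

def ring_position(name: str) -> int:
    """Hash a node or object name onto the circular key space."""
    digest = hashlib.sha256(name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % (2 ** RING_BITS)

class Ring:
    """Toy Chord-style ring: each key belongs to its clockwise successor node."""

    def __init__(self, nodes: list[str]):
        self.positions = sorted((ring_position(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        pos = ring_position(key)
        # First node at or after the key's position, wrapping around the ring.
        idx = bisect_right(self.positions, (pos, "")) % len(self.positions)
        return self.positions[idx][1]

ring = Ring([f"node-{i}" for i in range(6)])
print(ring.owner("bucket/object-12345"))   # deterministic placement

# Adding a node only remaps the keys between it and its predecessor on the
# ring, which is why peer-to-peer rings can grow organically without
# rebalancing the whole system.
```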
And back in those days, would it be fair to say the target was VMware virtualization platforms? Or, even back then, were you looking at larger sets of data to store? Because even the concept of big data, and the growth of data itself, is relatively new in terms of its scale; back then I think we were mostly interested in how much performance and scalability we could get into our virtual platforms with our storage.

I would say that VMware was always a factor; there was always this thinking that virtual machines needed data storage. But honestly, the use cases that drove object storage more at the beginning were images and videos. I mentioned email: that was one specific use case the service providers wanted, because they were still hosting their own email services for their consumers. But then you had photo sharing services, video sharing services, backup for consumers. That's what the service providers wanted to build, and they didn't want to build it on EMC Symmetrix; they wanted to build it on something agile, designed for billions of objects. Just to give a sense of scale: today we have one service provider with 220 billion unique objects in one RING. That's an entirely different level of scale than you're going to get from the previous technology. So the use cases that drove it were these rich-media workloads, because that's where you start getting petabytes, and then you could start thinking about service providers that want to host backup-as-a-service or storage-as-a-service offerings. That's really where the roots of this lie.

That's a good way to approach it, right? Because if you think back to those 2010-to-2015 days, the size of a virtual machine maybe got up to 100 gigs; the storage in those machines wasn't dramatic. So you targeting those higher tiers of storage, from what I'm putting together, gave you the ability to get ahead of the game in terms of the architecture, the design, the actual performance aspect of it.

Remember also that at this time we already had a generation of scale-out NAS devices, and they were focused on things like media and entertainment storage, internal editing and transcoding type use cases. But that's a certain level of internal scale. As soon as you put it on the cloud, you start realizing that now you've got a hundred thousand users and, like I said, a few billion objects. It's a different level of scale, and that's what we always focused on. So the sweet spot for RING really became people that had a petabyte or more but were quickly thinking about getting to tens of petabytes. That's the level of scale where this system really made sense, and it still does today. And now the use cases have just expanded, because at the time we weren't thinking medical imaging, we weren't thinking government surveillance. There's just so much more today that makes sense at this very large scale.

Yeah, absolutely. And going back to the whole concept of software-defined storage, SDS: it was obviously a buzzword for a number of years. It arrived, and then VMware jumped on it, Nutanix jumped on it, everyone did. If I think about a parallel today, over the last four years: it was crypto, and now we've got AI happening at the
moment. There are always these buzzwords that occur in tech, and SDS was definitely one of those. So how do you see the evolution from where it was back then to where it is today: settled and fairly mature in terms of its market, where people don't even really talk about it as a leading descriptor of what they're doing, but it's still very important?

My way of categorizing it is the following. We had block, file, and object SDS (software-defined storage), and we were playing in this arena of object SDS. But I'll give you one other thing that really happened to move this into an accelerated form. At the time the RING launched, the acceptance of the Amazon S3 protocol as the de facto standard was not clear. Many of us, including Scality, many vendors, were looking at what the default protocol for object storage would be. You remember the old Swift protocol; EMC had Centera, with its own API; you had SNIA, the storage networking industry association, which came up with its own, called CDMI. Around 2013-14 you start realizing that people are embracing S3, and moreover the application vendors started using it to interface with object storage. So you started seeing this emergence of people saying: aha, S3 is here to stay. Amazon S3 started getting really big, and I'll tell you, that was really a spark that made object SDS accelerate, because now applications were written to it and ISVs, independent software vendors, embraced it.

Yeah, Swift, right, I'd forgotten about that. I remember back in the day when I launched our first object storage platform, which wasn't that great; it was based on Ceph, but we had to offer both protocols, Swift and/or S3. And that today has gone the way of the dodo.

Exactly. And we've supported them all, as well as Azure Blob; there have been so many choices in all this. But this one's here to stay, and with vendors like Veeam now really fully embracing it, it makes complete sense.
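That de facto standardization is visible in practice: most S3-compatible stores can be reached with the stock AWS SDKs simply by overriding the endpoint. Here is a hedged sketch with Python's boto3; the endpoint URL, credentials, and bucket name are placeholders for illustration, not Scality specifics.

```python
import boto3

# The same SDK that talks to Amazon S3 can talk to any S3-compatible
# object store; only the endpoint and credentials change. All values
# below are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.storage.example.internal",  # on-prem endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, object storage")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```

This is exactly why ISV adoption snowballed: an application written against the S3 API gains every compatible backend, on-prem or cloud, for the cost of a configuration change.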
Absolutely. It's become... we actually talk about that a lot now, in terms of the future of object storage. It's not just about cloud, or a place to put your archival storage; it's primary use cases, which I've argued as well. And when I've said that, in my head I've been thinking: well, it's actually been a primary use case for the longest time, because you think about Nutanix and a few others that were out there; it was all based on object storage under the surface, right? So it's always been there. Just talk a little bit more about the Scality RING and its capabilities, and then we'll move on to ARTESCA.

No problem. So again, the RING was designed to be this very scalable but also very flexible SDS solution for object storage. The early adopters in the service provider space certainly embraced it to build their cloud services, but the company realized that to really penetrate the market in a bigger way we had to go after the enterprise. There was always going to be this enterprise space that was going to build a private cloud and have its own internal workloads. But what's the problem? At the time, they were still very much stuck with older legacy applications based on file protocols. So we made the rather big decision to actually integrate a file system into the object store, and that was a unique decision: the other vendors in the space decided to plug in a gateway technology, so you have NFS or SMB speaking to the object store. But now you've got POSIX built into the object store kernel, so you can present NFS, SMB, or S3 on an equal footing; they can all scale out in capacity and in performance. And that really got things rolling: now you're in the enterprise, you can start talking to people about legacy backup over SMB (all the vendors supported that), and it creates this bridge to object storage. Now they're comfortable with the solution, they see the scale, and they start onboarding other apps. That was really the promise of it. So today the RING is scale-out file and object storage. It's super rich in the S3 API, and it actually emulates Amazon in its identity management model. There's this whole idea of IAM: if you know how to administer users, accounts, groups, roles, and policies in Amazon, it's exactly the way it looks in RING.
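For readers who haven't used the AWS IAM model Paul is referring to: access is expressed as JSON policy documents attached to users, groups, or roles. Below is a minimal, generic example of the model, a standard S3-style read-only policy, not a Scality-specific document.

```python
import json

# An IAM-style policy document: allow read-only access to one bucket.
# A store that emulates the AWS identity model accepts documents shaped
# like this, so existing AWS administration knowledge carries over.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::demo-bucket",      # bucket-level actions
                "arn:aws:s3:::demo-bucket/*",    # object-level actions
            ],
        }
    ],
}

print(json.dumps(read_only_policy, indent=2))
```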
So this whole security, authentication, and access control layer looks identical. And from a file system perspective, it's what you know: you mount NFS, you mount SMB, exactly the same. The system is very, very feature-rich, and we're investing a lot in both sides. Today it can do distributed, replicated deployments; it can start small and grow online; it does online growth and online updates. The system is just hugely reliable, and it now has a 13-year track record of running at massive scale: 100-petabyte-plus in a single system.

Yeah, because you're up to version nine, from memory?

We are at version nine. We introduced it at the end of last year. We follow a tech-train model, so it'll conclude with a long-term support version this year, and then it's on to RING 10. So yeah, we have a roadmap going forward to 2030.

That's amazing, and a great cadence as well: from 2009, first released in 2010, so nine major versions with significant feature upgrades over 13 years. That's pretty impressive. So in version nine, what would you say are the biggest new features?

Well, we've been super focused on this whole ransomware problem: everything from object storage immutability at the API layer, things like S3 Object Locking. That's been really, really important for the backup use case; backup is more and more sensitive to ransomware. We've done multi-target replication, so people can have a home system and one or more remote disaster recovery sites; that's been in really high demand. And the ability to make operations easier: simplified consoles to manage distributed systems in an easier way, from a single pane. And now you have this concept of the edge starting to roll in, so people want
to have ARTESCA on the edge and RING in the center. How do I manage that from a single console? Those are the focus areas.

So actually, one of the questions I was going to ask is: what's the link between RING and ARTESCA?

The real common thread is that they're both S3 API products. But what we realized with RING is that it's this very high-end, flexible product; what we needed was something very friendly for the mid-range, very, very simple. And one of the ways to simplify is to do a little bit less, and one of the big things we chose not to do was a file system. That ends up adding a lot of extra work, a lot of extra baggage, to administering the product. So ARTESCA is pure object storage. We did start architecting parts of RING in containers and deploying on Kubernetes, but ARTESCA takes that a step further: it's completely cloud-native, fully microservices-based, and it deploys on Kubernetes. So it's the future in terms of the architecture; eventually that can become more and more of the architecture of RING, at least at the top end. They differ at the storage layer, but the main shared technology is really S3: we were one of the first to actually put an S3 open-source server into the open-source domain, and both of our systems use the same tech in terms of their S3 API capabilities. We leverage it in both systems.

So is ARTESCA only deployable in a containerized environment, or are there other options?

It's software-only today, and we're also starting to roll it out as an OVA for VMware. So there are two choices, and, a little hint, there may be more options coming in the next few weeks.

Excellent, good stuff. I was wondering what the differences were, because obviously it's object storage throughout, but you've effectively taken away a little bit of the overhead, and maybe complexity isn't even the right word; you've basically simplified the RING to broaden the appeal out to a bigger mindshare, right?

Yeah. The other fundamental difference, of course, is that you've got this distributed peer-to-peer system under RING, so it starts on a cluster; it's never going to run on one host or one server. ARTESCA can. The way it does that is with a slightly different data durability model: it can do local data protection and distributed data protection. So it understands that it might be deployed as a singleton and needs to be reliable in its own right, but then it can grow and change its durability scheme to a distributed one.

I understand. So, I read somewhere that it was co-designed with HPE, but I know that's not directly accurate. Just talk a little bit about the partnership with Hewlett Packard Enterprise, in terms of how that came to be.

With HPE, we've been engaged with them in
increasing ways since about 2014. That was when we started the partnership: we did an agreement for them to resell RING, so today if you buy object storage from HPE, it's Scality RING. They made an investment in the company through their Pathfinder ventures arm in 2016, and we continued to engage. Then there was a time, around 2019, when they said they would like an object store that runs on an all-flash version of the HPE Apollo. That was really the collaboration: we built the software, they provided the first platform, and they were sort of our inaugural launch partner, I would say, for ARTESCA. So we offered it not only on Apollo (the big one and the small one) but also on the DL servers, all-flash and hybrid models. They saw it as a different offering: a lightweight object store they could offer for some of these mid-market use cases.

Very good. Just as a bit of a segue, on the general state of storage: the paradigm of
where we were 10 years ago versus today, with NVMe being where it is, and even more advanced storage out there: how has that helped Scality achieve what it's wanted to achieve, the reliability, scalability, and ease of use?

Flash has been a constant thing for us in one way, even since the very beginning: RING has always used a small component of flash for metadata storage. Whether it's POSIX metadata or S3 metadata, we use it to accelerate the metadata store and the lookup operations. We went a little further and started to use flash very aggressively for internal indexes: we containerize the data on disk, and we need an index that helps us locate where the files are; all of that's in flash. So we do everything we can to shield the spinning disks from I/Os until we actually need the data. In fact, we can deploy RING in a very high-performance way: if you need a lot of throughput, we'll still in all likelihood recommend spinning disk, with a small amount (say one or two percent of the overall capacity) on flash and the rest on spinning disk. But now you have the ability to go further with flash. Say you have an all-flash server: what can we do? We can make small I/Os quicker, we can make random I/Os quicker, and that started becoming important when we started seeing analytics workloads; more and more things like Splunk and Apache Spark came at us. Now you're not always going to have multi-megabyte video data; you're going to have small log files, you're going to have event streams. That's where all-flash comes in, and both RING and ARTESCA can take advantage of it. But again, we're not saying HDD is gone; it still works really well for the workloads where you need aggregate throughput or streaming sequential I/O.

Absolutely, that's right. And you've pointed to a part of the industry that's now hugely growing in importance: log analytics. That's all part of the ransomware conversation, and we're going to have a longer conversation on that shortly. So being able to leverage the technology of the disks as it moves through flash and NVMe, for those smaller, billions-of-objects use cases, is setting yourselves up for a really good spot, like you say, heading into 2030, which isn't too far away.

No, it's scary, it's not far away. And I should say the ARTESCA durability model, this idea that you do local and distributed erasure coding, was designed with the fact that high-density flash was coming at us. To rebuild a disk very, very quickly, you don't want to always do network I/Os; you want to be able to do it locally, and that's the way we're optimizing ARTESCA.

Good stuff. Hey, just talk a little bit about the partnership with service providers; that obviously interests me. I know that you guys do very well in the
service provider space, and I know that you've just released an object storage extension for VMware Cloud Director as well, so you're really focusing on that area. Talk about how service providers are important to you in terms of partnerships.

Absolutely. As we talked about, we've always historically had these relationships with the mega service providers, the Comcasts and the Charters of the world. Moreover, what's happening now is that you're starting to see this mid-market of service providers: they want to serve a regional audience, or they want to serve a specific vertical audience. In Europe, what we're seeing is that service providers want to be a sovereign cloud. There's always been this concern about being too dependent on foreign technology, so they want to ensure that the data stays local, for example. That's the market we're serving, and they've been the ones demanding that we do more and more integration with vCloud Director. vCloud now has an object storage extension that lets us be managed through the vCloud console; you can actually provision the storage through it. We've done all the different phases of work it took to get there, but really the embrace we're seeing now is from this mid-market service provider that's building their own cloud services, usually for a specific B2B audience.

Good stuff. And again, you're offering a bit of flexibility in terms of the protocol, whether you've got RING or whether you've got ARTESCA. That's big as well.
In terms of other partnerships: obviously you've got use cases for the backup industry, and it's clearly broader than that, but who are the key partners you work with overall?

Let me divide them into a couple of categories. On the platform side, we mentioned HPE, but they are not the only ones: we've partnered with Cisco for many years (they resell us on their UCS line of servers), plus Supermicro, Lenovo; there are a lot of hardware platform partners we work with. On the cloud side, we've been very close with Microsoft Azure; we've done a lot of collaborative work with them, everything from object storage for Azure Stack to building an S3-to-Azure-Blob translator, and that's an ongoing relationship. Then on the ISV side, we've been really busy with the leading data protection vendors like Veeam, but also with Veritas, Commvault, and Cohesity. The analytics side is the burgeoning one for us: it took us a long time, but we've done all of the validation with Splunk, in multiple deployment models by the way (single-site, stretched, and replicated). That was a lot of work to become validated, and we've had some huge joint deployments with them. But there's also Cloudera, there's Micro Focus; there are a lot of other vendors in that space. The last one I'll call out is the medical imaging community. This is an industry that generates tons of data, so think about all the picture archiving (PACS) vendors and vendor-neutral archives. They like the RING because they can store petabytes of data for a patient's lifetime, so it's an ideal solution for that kind of data.

That's good, and I think that shows the broadness of your appeal. A lot of vendors will focus on particular verticals, but you have this broad appeal, which I think is part of your advantage. Right, let's talk about ransomware; that's a big thing. You mentioned the partnership with
Veeam, and obviously, me being a Veeam guy, it would be remiss of me not to talk about immutability and object storage, and the way that, not only from a backup perspective but also from a primary workload perspective, we're protecting against cybercrime and ransomware attacks, and the sophistication of what's coming. How is Scality dealing with that on the SDS side?

This has become a real focus area for us over the last year and a half to two years. We did a lot of early work in making sure the systems were hardened, but it became clear to us that this whole area of ransomware protection behind a solution like Veeam is something where we could really add value. You have high-end customers that fit with the RING, but you have an entire mid-market of customers that can also use a solution like ARTESCA: start with 50 or 100 terabytes of data and then grow from there. That's not something the RING could have addressed before. So we started hardening the product for this use case with things like object lock capabilities, so you get immutability: you can actually say, lock the data for 60 days, 90 days, six months, whatever you want it to be. But we went further: we started adding retention policies, and we added compliance mode, where once you set the locks you can't override them even if you're the super admin, so you're starting to address the internal malicious threat that may come about.

Very important. That's very important, and something I'd almost overlooked.
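In S3-compatible terms, the immutability Paul describes is the S3 Object Lock API. Here is a hedged sketch of what locking a backup object in compliance mode looks like with boto3; the endpoint, bucket, key, and 90-day window are placeholder values, and Object Lock must be enabled when the bucket is created.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Placeholder endpoint for an S3-compatible store; assumes credentials
# are configured via the environment or AWS config files.
s3 = boto3.client("s3", endpoint_url="https://s3.storage.example.internal")

# Object Lock must be enabled at bucket creation time.
s3.create_bucket(Bucket="backups", ObjectLockEnabledForBucket=True)

# Write a backup object that cannot be deleted or overwritten for 90 days.
# COMPLIANCE mode means nobody -- not even a super admin -- can shorten or
# remove the retention before it expires (unlike GOVERNANCE mode, which
# privileged users can bypass).
s3.put_object(
    Bucket="backups",
    Key="job-0001.vbk",
    Body=b"...backup data...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
)
```

That compliance/governance distinction is exactly the internal-threat point made above: governance-mode locks protect against accidents, while compliance-mode locks also protect against a compromised or malicious administrator.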
But more than that, you can harden the operating system: you can make it impossible to log in as root, you can lock down network ports, you can slim down the OS so you don't have as many CVEs in the packages. There's so much work you can do here: multi-factor authentication, encryption of the data. All of that is in all the products right now, and it's something we've worked on very closely with many vendors, but I would really say Veeam the most. We were one of the inaugural launch partners for Veeam v12, so we started testing it last year. We support direct-to-object storage, we support the SOS API, and now we've gone further: we've made simplified installers for Veeam, we've done security policies for Veeam, and we've started performance tuning for Veeam.

Yeah, and I think that's the future in terms of what we've been talking about at the company: really focusing on object storage as a primary workload,
not only for those landing-zone type backups but also through a life cycle of capacity and archive tiers. So it's cool that you guys are doing that with us. And we can't talk about it today, because this episode is going live before VeeamON 2023, but there are going to be some really good announcements there as well, so I'm looking forward to that.

To finish off, with the couple of minutes we have left: how are you going to continue to innovate and disrupt the market moving forward, in a market where I feel like storage has become kind of table stakes? It's still very important for a lot of people, but platforms like Kubernetes coming into play have kind of taken the focus away from storage. How do you guys remain there or thereabouts in this new world over the next four or five years?

We look at a lot of different inputs, but above all the customer's voice, and the customer does have a long-term view about how they want to use their data. You can't say the storage is strategic (although it is); it's the data that's strategic.

Absolutely.

So what do we hear? We hear three themes. Number one: data is going outside of the traditional data center, going to the edge. If you think about a lightweight object store, it's ideal for edge deployments: it can run on a VM, it can run on Kubernetes, and that's really perfect. And we're starting to see this pattern of hundreds or thousands of edge locations that collect data but need to return some amount of that data back to a central data center; you can see the ARTESCA-to-RING combination in that. So that's one pattern, and making the management of these big federated systems easier is one of our strategic directions; it's something we're working on very actively.

The second one is life cycle management of the data, from hot to cold. Data comes in, it's hot: where do you want to put it? You want to put it on flash. You might want to retire it at some point to a more cost-effective, high-density flash like QLC; over time, maybe it makes sense to put it on spinners. But it goes further: you've also got cloud storage, you've got cold storage in the cloud, and you've still got banks running on tape. So this idea of a single S3 endpoint that manages your data across these different temperatures and locations is something you can innovate in, so that's another thrust for us.
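That hot-to-cold tiering is what the S3 lifecycle API expresses. A hedged sketch with boto3 follows; the bucket name, storage-class names, and day thresholds are illustrative, and the storage classes an S3-compatible store actually offers (and what media they map to) vary by implementation.

```python
import boto3

# Placeholder endpoint for an S3-compatible store; assumes credentials
# are configured via the environment.
s3 = boto3.client("s3", endpoint_url="https://s3.storage.example.internal")

# Tier objects as they age: hot tier on ingest (the default class), then
# colder and cheaper tiers, then expiry. Class names and day counts are
# illustrative; an on-prem store maps classes to its own media (QLC
# flash, spinners, cloud, tape, ...).
s3.put_bucket_lifecycle_configuration(
    Bucket="demo-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "hot-to-cold",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # warm tier
                    {"Days": 180, "StorageClass": "GLACIER"},     # cold tier
                ],
                "Expiration": {"Days": 2555},  # ~7 years, then delete
            }
        ]
    },
)
```

The point of the single-endpoint idea is that the application never changes: it keeps reading and writing the same keys while the store migrates the bytes between temperatures behind the scenes.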
I'll throw one more out, at the risk of overloading things: you're storing billions of files, and what good is that if you can't find the data? So some blending of intelligent search and query with data storage makes a lot of sense going forward.

Amazing. Well, the future looks bright for you guys. I really love what you're doing: you're mature, you're set, you've got a great reputation in the market, and I'm looking forward to what Scality is going to do over the next five to seven years as we get into 2030, as scary as that is. Hey Paul, thanks for being on the show. Just as a final reminder, if you would like to be on the show, please reach out to @GTwGTPodcast, find me at @anthonyspiteri, or go to gtwgt.com. And with that, I'd like to thank Paul and Scality for being on episode 64 of Great Things with Great Tech. Thank you.