Webinar Recap – Three Ways to Slash Your Enterprise Cloud Storage Cost

The above is a recording, and what follows is a full transcript of the webinar, “Three Ways to Slash Your Enterprise Cloud Storage Cost.” You can download the full slide deck on Slideshare.

My name is Jeff Johnson. I’m the head of Product Marketing here at Buurst.

In this webinar, we will talk about three ways to slash your Enterprise cloud storage cost.

Companies trust Buurst for data performance, data migration, data availability, and data control and security. What we are here to talk about today is data cost control. Think about the storage vendors out there: storage vendors want to sell more storage.

At Buurst, we are a data performance company. We take that storage, and we optimize it; we make it perform. We are not driven or motivated to sell more storage. We just want that storage to run faster.

We are going to take a look at how to avoid the pitfalls and the traps the storage vendors use to drive revenue, how to prevent being charged or overcharged for storage you don’t need, and how to reduce your data footprint.

Data is increasing every year. 90% of the world’s data has been created over the last two years, and that data is doubling every two years. Today, IT budgets are shifting. Data centers are closing as companies try to leverage cloud economics, and IT is going to need to save money at every single level of the organization by focusing on this data, especially data in the cloud.

We say now is your time to be an IT hero.

There are three things that we’re going to talk about in today’s webinar.

We are going to look at all the tools and capabilities you have in on-premises solutions, what it takes to move those into the cloud, and which of those capabilities already exist in cloud-native storage.

We’ll also take a look at reducing the total cost of acquisition. That’s pure Excel-spreadsheet math: which cloud storage to use, and which options don’t tax you on performance. Speaking of performance, we’ll also look at reducing the cost of performance, because some people want to maintain performance while spending less.

I bet we could even figure out how to get better performance at a lower cost. Let’s get right down into it.

Reducing that cost by optimizing Enterprise data

We think about all of these tools and capabilities that we’ve had on our NAS, on our on-premise storage solutions over the years. We expect those same tools, capabilities, and features to be in that cloud storage management solution, but they are not always there in cloud-native storage solutions. How do you get that?

Well, that’s probably pretty easy to figure out. The first one we’re going to talk about is deduplication, specifically inline deduplication. Files are compared block by block to see which blocks can be eliminated, leaving just a pointer in their place. To the end-user, it looks like they still have the file, but the duplicate data is only stored once.

In most cases, deduplication reduces data storage by 20 to 30%, and that becomes even more critical with cloud storage.

The next one we have is compression. With compression, we reduce the number of bits needed to represent the data. Typically, we can reduce storage cost by 50 to 75%, depending on how compressible the files are, and this is turned on by default in SoftNAS.

The last one we want to talk about is data tiering. 80% of data is rarely used past 90 days, but we still need it. With SoftNAS, we have data tiering policies, or aging policies, that can move data from more expensive, faster storage to less expensive storage, and eventually all the way down to ice-cold storage.

We could gain some efficiency in this tiering, and for a lot of customers, we’ve reduced their Enterprise cloud storage cost with an active data set by 67%.

What’s crazy is what happens when we add all of these together. If I take 50 TB of storage at 10 cents per GiB, that’s about $5,000 a month. If I dedupe that by just 20%, it comes down to $4,000 a month. If I then compress it by 50%, I can get it down to $2,000 a month. And if I tier that with 20% SSD and 80% HDD, I can get down to $1,000 a month, reducing my overall cost by 80%, from $5,000 to $1,000 a month.
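To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The percentages come from the example above; the HDD tier price is an assumption added to roughly reproduce the $5,000, $4,000, $2,000, and ~$1,000 progression, not a quote.

```python
# Back-of-the-envelope cloud storage cost model using the round numbers above.
GIB_PER_TB = 1000                       # treat 50 TB as 50,000 GiB for round numbers

data_gib = 50 * GIB_PER_TB
ssd_price, hdd_price = 0.10, 0.045      # $/GiB-month; HDD tier price is an assumption

raw = data_gib * ssd_price                          # ~$5,000/month
after_dedupe = raw * (1 - 0.20)                     # 20% deduplicated  -> ~$4,000
after_compression = after_dedupe * (1 - 0.50)       # 50% compressed    -> ~$2,000

stored_gib = data_gib * (1 - 0.20) * (1 - 0.50)     # what actually lands on disk
tiered = stored_gib * (0.20 * ssd_price + 0.80 * hdd_price)   # 20% hot / 80% cold

print(raw, after_dedupe, after_compression, round(tiered))    # ~5000 4000 2000 ~1120
```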

Again, not everything is equal out in the cloud. With SoftNAS, obviously, we have dedup, compression, and tiering. With AWS EFS, they do have tiering – great product. With AWS FSx, they have deduplication but not compression and tiering. Azure Files doesn’t have that.

Actually, with AWS EFS Infrequent Access storage, they charge you to write to and read from that cold tier. They charge a penalty to use the data that’s already in there. Well, that’s great.

Reducing the total cost of acquisition: just use the cheapest storage

Now I see a toolset here that I’ve used on-premises. I’ve always used dedupe on-premises. I’ve always used compression on-premises. I might have used tiering on-premises, but there it’s really about different disk types, like NVMe, and that’s great.

I see the value in that, but TCA is a whole different ball game here. It’s self-managed versus managed. It’s different types of disks to choose from. Like I said earlier, it’s just Excel-spreadsheet stuff: what do they charge, what do I pay, and who has the lowest cost.

We look at this in two different buckets. We have self-managed storage, like NVMe disks and block storage, and we have managed storage as a service, like EFS, FSx, and Azure Files.

If we drill down that a little bit, there are still things that you need to do and there are things that your managed storage service will do for you. For instance, of course, if it’s self-managed, you need to migrate the data, mount the data, grow the data, share the data, secure the data, backup the data. You have to do all those things.

Well, what are you paying for? Because even if I have a managed storage service, I still have to migrate the data. I have to mount the data. I have to share and secure the data. I have to recover the data, and I have to optimize that data. What am I really getting for that price?

On price: for block storage, AWS is 10 cents per GiB per month, and Azure is 15 cents per GiB per month. For the things I’m trying to offload, like securing, migrating, mounting, sharing, and recovery, I’m still going to pay 30 cents for EFS, three times the price of AWS SSD; or 23 cents for FSx; or 24 cents for Azure Files. I’m paying a premium for the storage, but I still have to do a lot at the management layer.

If we dive a little bit deeper into all that: EFS is really designed for NFS connectivity, so my Linux clients. AWS FSx is designed for Windows clients with CIFS/SMB, and Azure Files likewise serves CIFS/SMB. That’s interesting.

If I’m on Amazon and have both Windows and Linux clients, I have to have both an EFS file system and an FSx file system. That’s fine. But wait a second. This is a shared access model. I’m in contention with all the other companies who have signed up for EFS.

Yeah, they are going to secure my data, so company one can’t access company two’s data, but we’re all in line for the contention of that storage. So what do they do to protect me and to give me performance? Yeah, it’s shared access.

They’ll throttle all of us, but then they’ll give us bursting credits and bursting policies. They’ll charge me for extra bursting, or I can just pay for increased performance, or I can just buy more storage and get more performance.

At best, I’ll have an inconsistent experience. Sometimes I’ll have what I expect. Other times, I won’t have what I expect – in a negative way. For sure, I’ll have all of the scalability, all the stability and security with these big players. They run a great ship. They know how to run a data center better than all on-premises data centers combined.

But we compare that to self-managed storage. Self-managed, you have a VM out there, whether it’s Linux or Windows, and you attach that storage. This is how we attached storage back in the ‘80s or ‘90s, with a client-server with all its attached storage. That wasn’t a very great way to manage that environment.

Yeah, I had dedicated access, consistent performance, but it wasn’t very scalable. If I wanted to add more storage, I had to get a screwdriver, pop the lid, add more disks, and that is not the way I want to run a data center. What do we do?

We put a NAS in between all of my storage and my clients. We’re doing the same thing with SoftNAS in the cloud. With SoftNAS, we have an NFS protocol, CIFS protocol, or we use iSCSI to connect just the VMs of my company to the NAS and have the NAS manage the storage out to the VMs. This gives me dedicated access to storage, a consistent and predictable performance.

The performance is dictated by the NAS. The bigger the NAS, the faster the NAS. The more RAM and the more CPU the NAS has, the faster it will deliver that data down to the VMs. I will get that Linux and Windows environment with scalability, stability, and security. Then I can also make that highly available.

I can have duplicate environments that give me data performance, data migration, data cost control, data availability, data control, and security through this complete solution. But you’re looking at this and going, “Yeah, that’s double the storage, that’s double the NAS.” How does that work when you’re talking about Excel spreadsheets kind of data?

Alright. We know that EBS storage is 10 cents per GiB per month and EFS storage is 30 cents per GiB per month. The chart expands with every additional terabyte I have in my solution.

If I add a redundant set of block storage and a redundant set of VMs, then turn on dedupe and compression, and then turn on tiering, the price of the SoftNAS solution is so much smaller than what you pay for storage that it doesn’t affect the storage cost that much. This is how we’re able to save companies huge amounts of money per month on their storage bill.
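As a rough sanity check on that claim, here is a small sketch comparing a redundant, self-managed NAS layout against a managed file service at the per-GiB prices quoted above. The NAS instance cost and the 10 TB working set are placeholder assumptions for illustration, not SoftNAS pricing.

```python
# Illustrative comparison for 10 TB of active data (all figures are assumptions except
# the $0.10 EBS and $0.30 EFS per-GiB prices quoted in the talk).
data_gib = 10 * 1000
ebs_price, efs_price = 0.10, 0.30      # $/GiB-month
nas_vm_monthly = 600                   # assumed cost of one NAS VM per month

efs_cost = data_gib * efs_price        # managed file service, data stored as-is

# Two NAS nodes for HA, each with its own copy of the block storage,
# but dedupe (20%) and compression (50%) shrink what actually lands on disk.
stored_gib = data_gib * (1 - 0.20) * (1 - 0.50)
self_managed = 2 * (stored_gib * ebs_price + nas_vm_monthly)

print(f"EFS: ${efs_cost:,.0f}/mo   redundant self-managed NAS: ${self_managed:,.0f}/mo")
# -> EFS: $3,000/mo   redundant self-managed NAS: $2,000/mo
```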

This could be the single most important thing you do this year because most of the price of a cloud environment is the price of the storage, not the compute, not the RAM, not the throughput. It’s the storage.

If I can reduce and actively manage, compress, optimize that data and tier it, and use cheaper storage, then I’ve done the appropriate work that my company will benefit from. On the one hand, it is all about reducing costs, but there is a cost to performance also.

Reducing the Cost of Performance

No one’s ever come to me and said, “Jeff, will you reduce my performance?” Of course not. Nobody wants that. Some people want to maintain performance and lower costs. We can actually increase performance and lower costs. Let me show you how that works.

We’ve been looking at this model throughout this talk. We have EBS storage at 10 cents with a NAS, a SoftNAS between the storage and the VMs. Then we have this managed storage like EFS with all of the other companies in contention with that storage.

It’s like me working from home, on the left-hand side, and having a consistent experience to my hard drive from my computer. I know how long it takes to boot. I know how long it takes to launch an application. I know how long it takes to do things.

But if my computer is at work in the office and I have to hop on the freeway, I’m in contention with everybody else who’s going to work and also needs to get to the hard drive in their office computer. Some days the traffic is light and fast, some days it’s slow, some days there’s a wreck and it takes twice as long to get there. It’s inconsistent. I’m not sure what I am paying for.

If we think about what EFS does for performance, and this is based on their website, you get more throughput the more storage you have. I’ve seen ads and blog articles from a lot of developers about this.

They say, “If I need 100 MB/s of throughput for my solution and I only have one terabyte worth of data, I’ll put an extra terabyte of dummy data out there on my share so that I can get the performance I want.” That’s another terabyte at 30 cents per GiB per month that I’m not even going to use, just to get the performance I need.
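That workaround has a very real monthly price tag. A quick sketch, using the $0.30 per GiB figure quoted above:

```python
# Cost of padding an EFS file system with a "dummy" terabyte just to buy burst throughput.
dummy_gib = 1024          # one padded terabyte, in GiB
efs_price = 0.30          # $/GiB-month, as quoted above
print(f"${dummy_gib * efs_price:,.2f} per month for data you never read")   # -> $307.20
```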

Then there’s bursting, then there is throttling, and then it gets confusing. We are so focused on delivering performance. SoftNAS is a data-performance company. We have levels or scales of performance, 200, 400, 800, to 6,400. Those relate to throughput, so the throughput and IOPS that you can expect for the solution.

We are using storage that’s only 10 cents per GiB on AWS. It’s dedicated performance: you determine the performance you need and then buy that size. On Azure, it’s a little bit different. Their denominator for performance is vCPUs. A 200 is 2 vCPUs. A 1,600 is 20 vCPUs. Then we publish the IOPS and throughput that you can expect to have for your solution.

So, to reduce the cost of performance: use a NAS to deliver the storage in the cloud. Get predictable performance. Use attached storage behind a NAS. Use a RAID configuration. You can tune read and write cache, either through the types of disks you use or, on the NAS, through the amount of RAM you give it.

Pay for performance. Don’t pay more for the capacity to get the performance. We just took a real quick look at three ways to slash your storage cost – optimizing that storage with dedupe, compression, and tiering, making less expensive storage work for you, right, and then reducing the cost of performance. Pay for the performance you need, not for more storage to get the performance you need.

What do you do now? You could start a free trial on AWS or Azure. You can schedule a performance assessment, where you talk with one of our dedicated people who do this 24/7 and look at how to get you the most performance at the lowest price.

We want to do what’s right by you. At Buurst, we are a data-performance company. We don’t charge for storage. We don’t charge for more storage. We don’t charge for less storage. We want to deliver the storage you paid for.

You pay for the storage from Azure or AWS. We don’t care if you attach a terabyte or a petabyte, but we want to give you the performance and availability that you expect from an on-premises solution. Thank you for today. Thank you for your time.

At Buurst, we’re a data-performance company. It’s your time to be this IT hero and save your company money. Reach out to us. Get a performance assessment. Thank you very much.   

Learn the new rules of cloud storage

SoftNAS is now Buurst, and we’re about to change the enterprise cloud storage industry as you know it.

Watch the recording of our groundbreaking live webinar announcement on 4/15/20 and learn how:

  • To reduce your cloud storage costs, saving up to 80% while increasing performance (yes, you read that right!)
  • Applying configuration variables will maximize data performance, without storage limitations
  • Companies such as Halliburton, SAP, and Boeing are already taking advantage of these rules and effectively managing Petabytes of data in the cloud

Who should watch?

  • Cloud Architects, CIO, CTO, VP Infrastructure, Data Center Architects, Platform Architects, Application Developers, Systems Engineers, Network Engineers, VP
    Technology, VP IT, VP BI/Data Analytics, Solutions Architects
  • Amazon Elastic File System (EFS) customers, Amazon FSx customers, Azure NetApp Files customers, Isilon customers
Get your On-Premises NAS in the Azure Cloud

 “Get your On-Premises NAS in the Azure Cloud”. Download the full slide deck on Slideshare

Looking to transition your enterprise applications to the highly-available Azure cloud? No time/budget to re-write your applications to move them to Azure? SoftNAS Cloud NAS extends Azure Blob storage with enterprise-class NAS file services, making it easy to move to Azure. In this post, we will discover how you can quickly and easily:

– Configure Windows servers on Azure with full Active Directory control
– Enable enterprise-class NAS storage in the Azure cloud
– Provide disaster recovery and data replication to the Azure cloud
– Provide highly available shared storage

My name is Matt Blanchard. I am a principal solutions architect. We’re going to talk about some of the advantages of using Microsoft Azure for your cloud storage, and help you make plans to move from your on-premises solution today into the cloud of tomorrow.

This is not a new concept; it’s a trend we’ve seen for the last several years. The build-versus-buy question comes down to economies of scale. We get a great economy of scale when we buy capacity from an OpEx partner and use that partnership to advance our IT needs. We get a low economy of scale if we have to invest our own money to build up the information systems and buy large SANs, networking, storage networks, and so forth. Hosting and building all of that out myself takes a lot of capital investment. This is the paradigm.

On-premise vs the cloud architecture.

A lot of the things that we have to provide for ourselves on-premises are assumed and given to us in the cloud, such as Microsoft Azure giving us full-fledged VMs running inside of our Azure environment and accessing our SoftNAS virtual SAN. We are able to give you network access control over all your storage needs in one small, usable package.

On-premise vs in the cloud

I don’t have to build my own data center. I can have all my applications running as services in the cloud, versus running them physically on-premises and having to maintain the hardware and data sets myself.

Think about rebuilding applications for the next generation of databases, or installing the next generation of server componentry that may not have the correct driver sets for our applications, and having to rebuild all of those things. It makes moving your architecture forward quite tedious.

However, when we start to blur those lines and move into, let’s say, a hosting provider or cloud services, those dependencies on the actual hardware devices and the physical device drivers start to fade away, because we’re running these applications as services and not as physically supported architectures.

This movement towards Azure in the cloud makes quite a bit of sense whenever you start looking at the economies of scale, how fast we could grow in capacity, and things like bursting control whenever we have large amounts of data services that we’re going to have to supply on-demand versus things that we have on a constant day-to-day basis.

Say we are a big software company or a big game company that’s releasing the next new Star Wars game (I’ll have to trademark that or something in my conversation). It might be some sort of online game that needs extra capacity for the first weekend out, just to support all the new users who are going to be accessing it.

This burst ability and this expandability into the cloud make all the sense in the world, because who wants to spend money on hardware to build out infrastructure for something that may or may not continue to be that large of an investment in the future? In the cloud, we can scale that down over time or scale it up over time, either way; maybe we undersized our build. You can think of it in that way.

It really makes sense, this paradigm shift into the cloud mantra.

Flexible, Adaptable Architecture

At Buurst, we’ve built our architecture to be flexible and adaptable inside of this cloud architecture. We’ve built a Linux virtual machine; it’s built on CentOS and runs ZFS as its file system. We run all of our systems on open, controllable systems. We have staff on-site who contribute to these open-source projects, CentOS and ZFS, to make them better, and we contribute a lot of intellectual property to help advance these technologies into the future.

We, of course, use HTML5 for our admin UI, we have PHP, and Apache is our web server. We have all these open systems so we can take advantage of the great open-source community out there on the internet.

We integrate with multiple different service providers. If you have customers that are currently running in AWS or CenturyLink Cloud and they are looking to migrate into Azure and make a change, it’s very easy for us to come in and help with that data migration, because inserting a SoftNAS instance into both of those service providers and then simply migrating the data is a very simple and easy task.

We really do take in feedback. We want to be flexible. We want to be open. We want all of our data resources to support multiple use cases. We offer a full-featured NAS service that does all of these things in the data services tab.

We can do block replication, inline deduplication, caching, storage pools, thin provisioning, writable snapshots, and snap clones. We can do compression and encryption. With all of these different offerings, we are able to give you a single packaged NAS solution.

Once again, all the things you might expect to come back to, like, “I’m going to have to implement all of that stuff. I’m going to have to buy all this different componentry and insert it into my hardware,” those are things that are assumed, included, and given to you directly in our NAS solution.

How does SoftNAS work?

To be very forthcoming, it’s basically a gateway technology. We are able to present storage capacity whether it’s a CIFS/SMB share for Windows users and Windows file shares, an NFS share for Linux machines, an iSCSI block device, or an Apple Filing Protocol (AFP) share for whole-machine backups.

If you have end-users or end devices that need storage repositories over multiple different protocols, we can store that data in, say, Azure Blob storage or even a native Azure storage device. We then translate those file protocols into an object protocol, which is not their native language. We don’t speak object over a normal SMB connection, but we do speak native object directly into Azure Blob. We offer the best of both worlds with this solution.

It’s the same with native block devices: we have a native block protocol that talks directly to Azure disks attached to these machines. We are able to create flexible containers that make data unified and accessible.

SoftNAS Cloud NAS on Azure

What we’re basically going to do is we’re going to present a single IP point of access that all of these file systems will land on. All of our CIFS access, all of our NFS exports, and all of the AFP shares will all be enumerated out on a single SoftNAS instance and they will be presented to these applications, servers, and end-users.

The storage pools are nothing more than conglomerations of disks that have been offered up by the Microsoft Azure platform. Whether it’s Microsoft Blob or it’s just native disks, if it’s even another type of object device that you’ve imported into these drives, we can support all of those device types and create storage pools of different technologies.

And we can attach volumes and LUNs that have shares of different protocols to those storage pools so it allows us to have multiple different connection points to different storage technologies on the backend. And we do this as a basic translation and it’s all seamless to the end-user or the end device.

NFS/CIFS/iSCSI Storage on Azure Cloud

Here are a couple of use cases where SoftNAS and Azure really make sense. I’m going to go through these and talk about the challenge. The challenge: a company needs to quickly SaaS-enable a customer-facing application on Azure, but the app doesn’t support blob. They also need LDAP or Active Directory integration for that application. What would the solution be? One option would be rewriting your application to support blob and AD authentication, and it is highly unlikely that would ever happen.

Instead of rewriting that application to support blob, continue to do business the way you always have. That machine needs access via NFS, fine. We’ll just support that via NFS through SoftNAS.

Drop all that data on a Microsoft Azure backend, store it in blob, and let us do the translation. It’s very simple: all of our applications, on-premises or in the cloud, get direct access to whatever data resources they need, presented with any protocol listed, via CIFS, NFS, AFP, or iSCSI.

Disaster Recovery on Azure Cloud

Maybe you have a big EMC array at your location that you have several years of support left on. You need to be able to meter the use of it, but you need a simple integration solution. What would the solution be?

It would be very easy to spin up a SoftNAS instance on-premises, directly access that EMC array, and use it as data resources for SoftNAS. We can then present those data repositories to your application servers and end-users on site, and replicate all that data into Microsoft Azure using SnapReplicate.

We would have our secondary blob storage in Azure and we’d be replicating all that data that’s on-premise into the cloud.

What’s great about this solution is that it becomes a gateway. When I get to the end of support on that EMC array and someone says, “We need to go buy a new array, or we need to renew support for that array,” well, we’ve got this thing running in Azure already, so why don’t we just cut over? It’s the exact same thing running in Azure. We could just start directing our application resources to Azure. It’s a great way to get you moving into the cloud and get a migration strategy moving forward.

Hybrid on-premises storage gateway to Azure Cloud

The last one is hybrid on-premises usage, and I alluded to this one earlier with the burst-to-cloud type of thing. This is a company that has performance-sensitive applications that need a local LAN, and they need off-site protection or capacity. The solution would basically be to set up replication to Azure and use that to expand capacity. Whenever they run out of space on-premises, we would be able to burst out into Azure and create more and more virtual machines to access that data.

Maybe it’s a web services account that has a web portal UI or something like that that just needs a web presence. Then we’re able to run multiple copies of load-balanced web servers, all accessing the same data on top of Microsoft Azure through the SoftNAS Azure NAS storage solution.

Best Practices Learned from 2,000 AWS VPC Configurations

Best Practices Learned from 2,000 Amazon AWS VPC Configurations. Download the full slide deck on Slideshare.

The SoftNAS engineering and support teams have configured over 2,000 Amazon Virtual Private Cloud (VPC) configurations for SMBs to Fortune 500 companies. In this guide, we share the lessons learned in configuring AWS VPCs, including undocumented guides, tips, and tricks.

Amazon’s Virtual Private Cloud enables you to launch Amazon Web Services (AWS) resources, like EC2 instances, into a virtual network that you’ve defined. They are flexible, secure and a core building block of AWS deployments.

In this Guide, we covered:

  • How do packets really flow in an Amazon VPC?
  • Common security group misconfigurations.
  • Why end-points are good things.
  • To NAT or not?
  • VPNs and VPCs: a good thing?
  • Best practices for AWS VPC management

We’ve configured over 2,000 Amazon AWS VPCs

In this post, we’ll be talking about some of the lessons that we’ve learned from configuring over 2,000 VPCs for our customers on Amazon Web Services (AWS). Some of the customers we’ve configured VPCs for are listed here.

We’ve got a wide range of experience in both the SMB and the Fortune 500 market. Companies like Nike, Boeing, Autodesk, ProQuest, and Raytheon have all had their VPCs configured by SoftNAS.

Just to give you a brief overview of what we mean by SoftNAS. SoftNAS is the product that we use for helping manage storage on AWS. You can think of it as a software-defined NAS. Instead of having a physical NAS as you do on a traditional data center, our NAS is software-defined and it’s based fully on the cloud with no hardware required. It’s easy to use.

You can get up and running in under 30 minutes and it works with some of the most popular cloud computing platforms so Amazon, VMware, Azure, and CenturyLink Cloud.

What is an AWS VPC or a Virtual Private Cloud?

It can be broken down in a couple of different ways, and we’re going to break this down from how Amazon Web Service looks at this.

It’s a virtual network that’s dedicated to you. It’s essentially isolated from other environments in the AWS Cloud. Think of it as your own little mini data center inside of the AWS data center.

It’s a location where you launch resources and logically group them together for control. It gives you configuration flexibility: you can use your own private IP addresses, create different subnets and routing, decide whether to allow VPN access in, decide how internet access goes out, and configure different security settings from a security group and access control list point of view.

 The main things to look at that I see are around control.

  • What is your IP address range? How is the routing going to work?
  • Are you going to allow VPN access?
  • Is it going to be a hardware device at the other end?
  • Are you going to use Direct Connect? How are you going to architect your subnets?

These are all questions. I’m going to cover some of the tips and tricks that I have learned throughout the years. Hopefully, these will be things that help everyone, because there is not really a great AWS VPC book or definitive guidance out there. It’s just a smattering of different tidbits and tricks from different people.

There are security groups and ACLs, as well as some specific routing rules. Some features are available only in VPCs. You can configure multiple network interfaces.

You can set static private IPs so that you don’t ever lose that private IP when the machine is stopped and started, and certain instance types, such as the T2s and the M4s, can only be launched within a VPC.

This is the way that you could perform your hybrid cloud setup or configuration. You could use Direct Connect, for example, to securely extend your premise location into the AWS Cloud, or you could use your VPN connection on the internet to also extend your premise location into the cloud.

You can peer the different VPCs together. You can actually use multiple different VPCs and peer them together for different organizational needs. You can also peer them together with other organizations for access to certain things — think of a backend supplier potentially for inventory control data.

Then there are endpoints and flow logs that help you with troubleshooting. For those of you who have a Linux background or any type of networking background, think of flow logs like tcpdump or Wireshark: the ability to look at the packets and how they flow can be very useful when you’re trying to do some troubleshooting.

Amazon AWS VPC Topology Guidance 

Just some AWS VPC topology guidance, so hopefully you’ll come away with something useful here. A VPC is used in a single region but can span multiple availability zones.

It will extend across at least two zones because you’re going to have multiple subnets. Each subnet lives in a single availability zone. If you configure multiple subnets, you can configure this across multiple zones. You can take the default or you can dedicate a specific subnet to a specific zone.

All the local subnets can reach each other and route to each other by default. The subnet sizes can be from a /16 to a /28 and you can choose whatever your IP prefix is.

How can traffic access the AWS VPC environment?

How can traffic access the virtual private cloud environment? There are multiple different gateways involved. What do these gateways mean and what do they do? You hear these acronyms IGW, VGW, and CGW. What does all this stuff do?

These gateways generally are provisioned at the time of VPC creation, so keep that also in mind. The internet gateway is an ingress and egress for internet access.

You can essentially in your VPC point to specific machines or different routes to go out over the internet gateway to access resources outside of the VPC or you can restrict that and not allow that to happen. That’s all based on your organizational policy.

A virtual private gateway (VGW) is the AWS side of a VPN connection. If you’re going to have VPN access to your VPC, this is the gateway on the AWS side of that connection, and the customer gateway (CGW) is the customer side of the VPN connection for a specific VPC.

On the VPN side, you have multiple options. I mentioned Direct Connect, which essentially gives you dedicated bandwidth to your VPC. If you want to extend your on-premises location up into the cloud, you can leverage Direct Connect for high-bandwidth, lower-latency connections. Or, if you just want to stand up a connection faster and don’t necessarily need that level of throughput or performance, you can simply set up a VPN channel.

Most VPN vendors like Cisco and others are supported and you can easily download a template configuration file for those major vendors directly.

Amazon Web Services (AWS) VPC Packets Flow

Let’s talk a little bit about how the packets flow within an AWS VPC. This is one of the things that I really wish I had known earlier on when I was first delving into configuring SoftNAS instances inside of VPCs and setting up VPCs for customers in their environments.

There is not really great documentation out there on how packets get from point A to point B under specific circumstances. We’re going to come back to this a couple of different times, but keep in mind that we’ve got three instances here, instance A, B, and C, installed on three different subnets, as you can see across the board.

How do these instances communicate with each other?

Let’s look at how instance A communicates to instance B. The packets hit the routing table. They hit the node table. They go outbound to the outbound firewall.

They hit the source and destination check that occurs, and then the outbound security group is checked. Then, on the receiving side, the inbound security group, source and destination check, and firewall are checked.

This gives you an idea if you make different configuration changes in different areas, where they actually impact, and where they actually come into play. Let’s just talk about how instances would talk B to C.

Go back to the first diagram. We’ve already shown how A would communicate with B. How do we get over here to this other network? What does that actually look like from a packet flow perspective?

This is how it looks from an instance B perspective to try to talk to instance C, where it’s actually sitting on two subnets and the third instance (instance C) is on a completely different subnet.

It actually shows how these instances and how the packets would flow out to a completely different network and this would depend on which subnet each instance was configured in.

Amazon AWS VPC Configuration Guide

Here are some of the lessons that we’ve learned over time. These are personal lessons I have learned, the things I wish somebody had handed me on a piece of paper on day one: what would I want to have known going into setting up different VPCs, and what are some of the mistakes I’ve made along the way?

Organize AWS Environment

Number one is to tag all of your resources within AWS. If you’re not doing it today, go do it. It may seem trivial, but when you start to get into multiple machines, multiple subnets, and multiple VPCs, having everything tagged so that you can see it all in one shot really helps not make big mistakes even bigger.

 Plan your CIDR block very carefully. Once you set this VPC up, you can’t make it any bigger or smaller. That’s it, you’re stuck with it. Go a little bit bigger than you think you may need because everybody seems to really wish they hadn’t undersized the VPC, overall. Remember that AWS takes five IPs per subnet. They just take them away for their use. You don’t get them. Avoid overlapping CIDR blocks. It makes things difficult.

Save some room for future expansion, and remember, you can’t ever add any more. There are no more IPs once you set up the overall CIDR.
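To get a feel for how fast those reserved addresses and subnet splits consume a CIDR block, here is a small sketch using Python’s standard ipaddress module. The /16 VPC and /24 subnet sizes are arbitrary example values.

```python
import ipaddress

# Example: a /16 VPC carved into /24 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")      # 65,536 addresses total
subnets = list(vpc.subnets(new_prefix=24))     # 256 subnets of 256 addresses each

AWS_RESERVED_PER_SUBNET = 5                    # addresses AWS reserves in every subnet
usable_per_subnet = subnets[0].num_addresses - AWS_RESERVED_PER_SUBNET

print(f"{len(subnets)} subnets, {usable_per_subnet} usable IPs in each")
# -> 256 subnets, 251 usable IPs in each
```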

AWS Subnet Your Way to Success

Control the network properly. What I mean by that is use your ACLs and use your security groups. Don’t be lazy with them. Control all those resources properly. We have a lot of resources and flexibility right there within the ACLs and the security groups to really lock down your environment.

Understand what your AWS subnet strategy is.

Is it going to be smaller networks, or are you just going to hand out class Cs to everyone? How is that going to work?

If your AWS subnets aren’t associated with a very specific routing table, know that they are associated with the main routing table by default and only one routing table is the main. I can’t tell you how many times I thought I had configured a route properly but hadn’t actually assigned the subnet to the routing table and put the entry into the wrong routing table. Just something to keep in mind — some of these are little things that they don’t tell you.

I’ve seen a lot of people configure things by aligning their subnets to different tiers. They have the DMZ tier, the proxy tier, and so on: subnets for load balancing, subnets for applications, and subnets for databases. If you’re going to use RDS instances, you’re going to have to have at least three subnets, so keep that in mind.

Set your subnet permissions to “private by default” for everything. Use Elastic Load Balancers for filtering and monitoring frontend traffic. Use NAT to gain access to public networks. If you decide that you need to expand, remember the ability to peer your VPCs together.

Endpoint configuration

Also, Amazon has endpoints available for services that exist within AWS such as S3. I highly recommend that you leverage the endpoint’s capability within these VPCs, not only from a security perspective but from a performance perspective.

Understand that if you try to access S3 from inside of the VPC without an endpoint configured, it actually goes out to the internet before it comes back in so the traffic actually leaves the VPC. These endpoints allow you to actually go through the backend and not have to go back out to the internet to leverage the services that Amazon is actually offering.
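For reference, creating an S3 gateway endpoint can be scripted. Here is a minimal boto3 sketch; the VPC ID, route table ID, and region are placeholders, and the endpoint must target the route tables whose traffic should reach S3 privately.

```python
import boto3

# Minimal sketch: attach an S3 gateway endpoint to a VPC so S3 traffic stays on the
# AWS network instead of leaving through the internet gateway. IDs are placeholders.
ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",              # your VPC
    ServiceName="com.amazonaws.us-west-2.s3",   # the S3 service in the same region
    RouteTableIds=["rtb-0123456789abcdef0"],    # route tables that should use the endpoint
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```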

Control Your Access

Do not set your default route to the internet gateway. That means everybody is going to be able to get out. In some of the default wizard settings that Amazon offers, this is the configuration, so keep it in mind: everyone would have access to the internet.

Do use redundant NAT instances if you’re going to go with the instance mode, and there are some CloudFormation templates that exist to make this really easy to deploy.

Always use IAM roles. It’s so much better than access keys. It’s so much better for access control. It’s very flexible. Just in the last 10 days or so, you can now actually attach an IAM role to a running instance, which is fantastic and even easier to leverage, since you no longer have to deploy new compute instances just to attach and set IAM roles.

How does SoftNAS use Amazon VPC?

How does SoftNAS actually fit into using AWS VPC and why is this important?

We have a high-availability architecture leveraging our SNAP HA feature, which provides failover across zones, so multi-AZ high availability. We leverage our own secure block replication using SnapReplicate to keep the nodes in sync, and we can provide a no-downtime guarantee within Amazon if you deploy SoftNAS with the multi-AZ configuration in accordance with our best practices.

Cross-Zone HA: AWS Elastic IP

This is how this looks and we actually offer two modes of high availability within AWS. The first is the Elastic IP-based mode where essentially two SoftNAS controllers can be deployed in a single region each of them into a separate zone.

They would be deployed in the public subnet of your VPC and they would be given elastic IP addresses and one of these elastic IPs would act as the VIP or the virtual IP to access both controllers. This would be particularly useful if you have on-premises resources, for example, or resources outside of the VPC that need to access this storage, but this is not the most commonly deployed use case.

Cross-Zone HA: Private Virtual IP Address

Our private virtual IP address configuration is really the most common way that customers deploy the product today; at this point, probably 85 to 90-plus percent of our deployments use this cross-zone private approach, where you deploy the SoftNAS instances in the private subnet of your VPC.

They’re not sitting in the public subnet. You pick any IP address that exists outside of the CIDR block of the VPC to serve as the virtual IP for high availability, and then you just point your NFS mounts or map your CIFS shares to that private virtual IP that lives outside the CIDR block of the overall VPC.
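A quick way to sanity-check the virtual IP before configuring HA in this private mode is to verify it falls outside the VPC’s CIDR block; here is a minimal sketch with example addresses:

```python
import ipaddress

# The HA virtual IP used by NFS/CIFS clients must fall OUTSIDE the VPC's CIDR block.
# The CIDR and candidate address below are example values only.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
candidate_vip = ipaddress.ip_address("172.30.0.10")

if candidate_vip in vpc_cidr:
    raise ValueError(f"{candidate_vip} is inside {vpc_cidr}; pick an address outside the VPC CIDR")
print(f"{candidate_vip} is outside {vpc_cidr}; OK to use as the HA virtual IP")
```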

SoftNAS AWS VPC Common Mistakes

Some common mistakes that we see when people have attempted to deploy SoftNAS in a high availability architecture in VPC mode. You need to deploy two ENIs or Elastic Network Interfaces on each of the SoftNAS instances.

If you don’t catch this right away when you deploy it…Of course, these ENIs can be added to the instance after it’s already deployed, but it’s much easier just to go ahead and deploy the instances with the network interface attached.

Both of these NICs need to be in the same subnet. If you deploy an ENI, you need to make sure that both of them are in the same subnet. We do require ICMP to be open between the two zones as part of our health check.

The other problem we see is that people are not providing access to S3. As part of our HA, we provide a third-party witness, and that third-party witness is an S3 bucket. We therefore require access to S3, which means you need an S3 endpoint or outbound access from your data infrastructure.

For Private HA mode, the VIP IP must not be within the CIDR of the VPC in order to overcome some of the networking limitations that exist within Amazon. Taran, I’m going to turn it back over to you. That concludes my portion of the presentation.

We suggest you have a look at our “AWS VPC Best Practices” blog post, in which we share a detailed look at best practices for configuring an AWS VPC and common VPC configuration errors.

SoftNAS Overview

Just to give everyone a brief overview of SoftNAS Cloud: basically, we are a Linux virtual appliance that’s available on the AWS Marketplace. You can go to SoftNAS on AWS, spin up an instance, and get up and running in about 30 minutes. As you can see in the image, our architecture is based on ZFS on Linux. We have an HTML5 GUI that’s very accessible and easy to use. We work on a number of cloud platforms including AWS, and we support both Amazon S3 and Amazon EBS as backing storage.

AWS NAS Storage Solution

SoftNAS offers AWS customers an enterprise-ready NAS capable of managing your fast-growing data storage challenges, including AWS Outposts availability. Dedicated features from SoftNAS deliver significant cost savings, high availability, lift-and-shift data migration, and a variety of security protections.

SoftNAS AWS NAS Storage Solution is designed to support a variety of market verticals, use cases, and workload types. Increasingly, SoftNAS is deployed on the AWS platform to enable block and file storage services through Common Internet File System (CIFS), Network File System (NFS), Apple Filing Protocol (AFP), and Internet Small Computer System Interface (iSCSI). Watch the SoftNAS Demo.

How To Reduce Public Cloud Storage Costs

How To Reduce Public Cloud Storage Costs. Download the full slide deck on Slideshare

John Bedrick, Sr. Director of Product Marketing Management and Solution Marketing, discussed how SoftNAS Cloud NAS is helping to reduce public cloud storage costs. In this post, you will get a better understanding of data growth trends and what needs to be considered when looking to make the move into the public cloud.

The amount of data that businesses are creating is staggering; it’s doubling roughly every 18 months. This is an unsustainable long-term issue when you compare it to how slowly IT budgets are growing.

IT budgets on average are growing maybe about 2 to 3% annually. According to IDC, by 2020, which is not that far off, 80% of all corporate data growth is going to be unstructured (emails, PDFs, Word documents, images, etc.), while only about 10% is going to come in the form of structured data like databases, for example SQL databases, NoSQL, XML, JSON, etc. Meanwhile, by 2020 we’re going to be reaching 163 zettabytes worth of data, at a pretty rapid rate.

If you compound that with some brand-new sources of data that we hadn’t really dealt with much in the past, it’s really going to be challenging for businesses to try to control and manage it all when you add in things like the Internet of Things and big data analytics, all of which create gaps between where the data is produced and where it’s going to be consumed, analyzed, and backed up.

Really, if you look at things even from a consumer standpoint, almost everything we buy these days generates data that needs to be stored, controlled, and analyzed – from your smart home appliances, refrigerators, heating, and cooling, to the watch that you wear on your wrist, and other smart applications and devices.

If you look at 2020, the number of people that will actually be connected will reach an all-time high of four billion and that’s quite a bit. We’re going to have over 25 million apps. We are going to have over 25 billion embedded and intelligent systems, and we’re going to reach 50 trillion gigabytes of data – staggering.

In the meantime, data isn’t confined merely to traditional data centers anymore, so there’s a growing gap between where it’s stored and where it’s consumed, and the preferred place for data storage is no longer going to be your traditional data center.

Businesses are really going to be in a need of a multi-cloud strategy for controlling and managing this growing amount of data.

If you look at it, 80% of IT organizations will be committed to hybrid architectures, and this is according to IDC. In another study, the “Voice of the Enterprise” from the 451 Research Group, it was found that 60% of companies will actually be moving to a multi-cloud environment by the end of this year.

Data is created faster than the IT budgets grow

Data is being created faster than IT budgets are growing. You can see from the slide that there’s a huge gap, which leads to frustration within the IT organization.

Let’s transition to how we address and solve some of these huge data monsters that are gobbling up data as fast as it can be produced and creating a huge need for storage.

What do we look for in a public cloud solution to address this problem?

Well, some of these have been around for a little while.

Data storage compression.

Now, for those of you who haven’t been around the industry for very long, data storage compression basically removes the unneeded “whitespace” between and within data so it can be stored more efficiently.

If you compress the data that you’re storing, then you get a net benefit of savings in your storage space, and that, of course, immediately translates into cost savings. Now, how much you save depends on the types of data that you are storing.

Not all cloud solutions, by the way, include the ability to compress data. One example that comes to mind is a very well-promoted cloud platform vendor’s offering that doesn’t offer compression. Of course, I am speaking about Amazon’s Elastic File System, or EFS for short. EFS does not offer compression. That means you either need a third-party compression utility to compress your data prior to storing it in the cloud on EFS or solutions like EFS, which can lead to all sorts of potential issues down the road, or you need to store your data in an uncompressed format; and of course, if you do that, you’re paying unnecessarily more money for that cloud storage.

Deduplication

Another technology is referred to as deduplication. What is deduplication? Deduplication is exactly what it sounds like: the elimination or reduction of data redundancies.

If you look at all of the gigabytes, terabytes, and petabytes that you might have of data, there is going to be some level of duplication. Sometimes it’s a question of multiple people who may be even storing the exact same identical file on a system that gets backed up into the cloud. All of that is going to take up additional space.

If you’re able to deduplicate the data that you’re storing, you can achieve some significant storage-space savings, which translate into cost savings, subject of course to the amount of repetitive data being stored. Just as I mentioned previously with compression, not all solutions in the cloud include the ability to deduplicate data. As in the previous example about Amazon’s EFS, EFS also does not include native deduplication.

Either, again, you’re going to need a third-party dedupe utility prior to storing it in EFS or some other similar solution, or you’re going to need to store all your data in an un-deduped format on the cloud. That means you’re, of course, going to be paying more money than you need to.

Object Storage

Much more cost-effective

Let’s just take a look at an example of two different types of storage at a high level. What you’ll take away from this image, I hope, is that you will see that object storage is going to be much more cost-effective, especially in the cloud.

Just a word of note: all the prices that I am displaying in this table come from the respective cloud platform vendors and reflect West Coast pricing. They offer different prices based on different locations and regions; in this table, I am using West Coast pricing. What you will see is that the higher-performing public cloud block storage is considerably more expensive than the lower-performing public cloud object storage.

In the example, you can see ratios of five or six or seven to one, where object storage is that much less expensive than block storage. In the past, what people would typically use object storage for was less active data; sort of a longer-term strategy. You can think of it as maybe akin to the more legacy-type drives that are still being used today.

Of course, what people would do is put their more active data in block storage. If you follow that, and you’re able to make use of object storage in a way that’s easy for your applications and your users to access, then that works out great.

If you can’t, well, most solutions out in the market today are unable to access cloud-native object storage directly, so they need something in between to get the benefit of it. Similarly, getting cloud-native access to block storage also requires some solution, and there are a few out in the market; of course, SoftNAS is one of those.

High Availability With Single Storage Pool

Relies on The Robust Nature of Cloud Object Storage

If you’re able to make use of object storage, what are some of the cool things you can do to save more money besides using object storage just by itself? A lot of applications require high availability. High availability (HA) is exactly what it sounds like: maintaining the maximum amount of uptime for applications and access to data.

The ability to have two compute instances access a single storage pool, where both share access to the same storage, has existed in the past on legacy on-premises storage systems, but it hadn’t been fully brought over into the public cloud until recently.

If you’re able to do this as the diagram shows, having two compute instances access an object-storage pool, that means you’re relying on the robust nature of public cloud object storage. The SLAs for public cloud object storage are typically at least 10 or more 9s of uptime. That would be 99.99999999% or better, which is pretty good.

The reason you would have two compute instances is that the SLAs for compute are not the same as the SLAs for storage. You can have your compute instance go down in the cloud just like you could on an on-premises system, but at least your storage would remain up, using object storage. If you have two compute instances running in the cloud and one of those, we’ll call it the primary node, were to fail, then the failover would go to the second compute instance, the secondary node as I refer to it in this diagram, and it would pick up.

There would be some amount of delay switching from the primary to the secondary. That will be a gap if you are actively writing to the storage during that period of time, but then you would pick back up within a period of time, we’ll call it less than five minutes, for example, which is certainly better than being down for the complete duration until the public cloud vendor gets everything back up. Just remember that not every vendor offers this solution, but it can greatly reduce your overall public cloud storage cost, roughly by half. If you don’t need twice the storage for a fully highly available system and you can do it all with object storage and just two compute instances, you’re going to save roughly 50% of what the cost would normally be.
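As a rough sketch of that 50% claim, compare two nodes that each keep a full copy of the data against two nodes sharing a single object-storage pool. The 20 TB data size, object storage price, and compute cost below are placeholder assumptions for illustration.

```python
# Rough comparison of the two HA layouts described above (illustrative figures only).
data_gib = 20 * 1000                  # 20 TB of data
object_price = 0.026                  # $/GiB-month, assumed object storage price
compute_node_monthly = 300            # assumed cost of one compute instance per month

# Traditional HA: two nodes, each with its own full copy of the data.
dual_copy = 2 * (data_gib * object_price + compute_node_monthly)

# Shared-pool HA: two nodes sharing a single object-storage pool.
shared_pool = data_gib * object_price + 2 * compute_node_monthly

print(f"dual copy: ${dual_copy:,.0f}/mo   shared pool: ${shared_pool:,.0f}/mo")
# The storage portion of the bill is cut in half; only the second compute node is added.
```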

High-speed WAN optimization

Bulk data transfer acceleration

The next area of savings is one that a lot of people don’t necessarily think about when they are thinking about savings in the cloud and that is your network connection and how to optimize that high-speed connection to get your data moved from one point to another.

The traditional way is filling lots of physical hard drives or storage systems, putting them on a truck, and having that truck drive over to your cloud provider of choice, then taking those storage devices and physically transferring or mounting the data into the cloud. That can be very expensive and filled with hidden costs. Plus, you run the risk of your data getting out of sync between the originating source in your data center and the ultimate cloud destination, all of which can cost you money.

Another option, of course, is to lease high-speed network connections between your data center or data source and the cloud provider of your choice. That can also be very expensive. A 1G or 10G network connection is pricey, and if the data transfer takes longer than it needs to, you keep paying for those leased lines longer than you would want.

The last option, transferring your data over slower, noisier, more error-prone network connections, especially in some parts of the world, is going to take longer due to the quality of the connection and the inherent nature of the TCP/IP protocol. If the data has to be retransmitted because of errors, drops, noise, or latency, the process becomes unreliable.

Sometimes the whole transfer has to start over from the beginning, so all of the previous time is lost. All of that results in time-consuming effort that winds up costing your business money, and all of those factors should be considered.
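As a rough illustration of why the leased-line option gets expensive, here is a back-of-the-envelope Python calculation of how long a bulk transfer ties up a line. The data size, link speeds, and utilization figure are assumptions chosen only to show the shape of the math.

```python
# Back-of-the-envelope transfer-time math for the leased-line option above.
# Data size, link speeds, and utilization are illustrative assumptions.

def transfer_days(data_tb: float, link_gbps: float, utilization: float = 0.7) -> float:
    """Days to move data_tb terabytes over a link_gbps line at a given utilization."""
    bits = data_tb * 1e12 * 8                    # decimal terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 86400

for link in (1, 10):
    print(f"100 TB over a {link} Gbps line: ~{transfer_days(100, link):.1f} days")
```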

Automated Storage Tiering of Aged Data

Match Application Data to Most Cost-effective Cloud Storage


The next option I'm going to talk about is an interesting one. It assumes that you can make use of both object storage and block storage together, creating tiers of storage where the high-speed, higher-performing block storage is one tier and other, less performant and less expensive storage makes up the lower tiers.

If you can have multiple tiers, where only your most active data lives on the most expensive, highest-performing tier, then you save money as long as you can move data from tier to tier. A lot of solutions on the market today do this via a manual process, meaning that a person, typically somebody in IT, looks at the age of the data and moves it from one storage type to another, and then to another.

If you have the ability to create aging policies that can move the data from one tier to another tier, to another tier, and back again, as it’s being requested, that can also save you considerable money in two ways.

One way is, of course, that you only store the data on the tier of storage that is appropriate at a given time, so you save money on your cloud storage. Also, if the movement is automated, you save money on the labor that would otherwise have to move the data manually from tier to tier. It can all be policy-driven, so you save on that labor as well.
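As a point of comparison, and not how SoftNAS implements its own aging policies, the same demote-by-age idea exists natively in object storage. Here is a hedged boto3 sketch of an S3 lifecycle rule; the bucket name and day thresholds are assumptions for illustration.

```python
# Cloud-native analogue of age-based tiering (NOT SoftNAS's aging policies).
# Bucket name and day thresholds are assumptions for illustration.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-tiering-bucket",                 # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-out-cold-data",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},            # apply to all objects
                "Transitions": [
                    {"Days": 90,  "StorageClass": "STANDARD_IA"},  # warm tier
                    {"Days": 180, "StorageClass": "GLACIER"},      # ice-cold tier
                ],
            }
        ]
    },
)
```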

These are all areas you should consider to help reduce your overall public cloud storage expense.

Here is what SoftNAS can offer to help you save money in the public cloud.

Save 30-80% by reducing the amount of data to store  

SoftNAS provides enterprise-level cloud NAS Storage featuring data performance, security, high availability (HA), and support for the most extensive set of storage protocols in the industry: NFS, CIFS/SMB-AD, iSCSI. It provides unified storage designed and optimized for high performance, higher than normal I/O per second (IOPS), and data reliability and recoverability. It also increases storage efficiency through thin-provisioning, compression, and deduplication.

SoftNAS runs as a virtual machine, providing a broad range of software-defined capabilities, including data performance, cost management, availability, control, backup, and security.

Webinar: Consolidate Your Files in the Cloud


Consolidate Your Files in the Cloud. You can download the full slide deck on Slideshare.

Consolidating your file servers in AWS or Azure cloud can be a difficult and complicated task, but the rewards can outweigh the hassle.

Consolidate Your Files in the Azure or AWS Cloud

In this blog, we will cover the housekeeping and an overview of what we're talking about here: file server and storage consolidation. Everyone is on a cloud journey, and where you are on that journey varies from client to client, depending on the organization and maybe even on which parts of the infrastructure or applications are moving over.

The migration to the cloud is here and upon us, and you're not alone in it. We talk to many customers. SoftNAS has been around since 2012; we were born in the cloud, that's where we cut our teeth, and that's where we built our expertise. We've done approximately 3,000 AWS VPC deployments. There's not a whole lot in the cloud that we haven't seen, but at the same time, I'm sure we will see more, and we do, every day.

What you're going to find right now as we talk to companies is that storage is always increasing. I was at a customer's site about a month ago, a major health care company. Their storage grows 25% every year, so it roughly doubles every refresh cycle.

This capacity growth is with all of us. The projections say that 94% of workloads will be in the cloud and that 80% of enterprise information, in the form of unstructured data, will be there as well.

A lot of the data analytics and IoT that you're seeing now is being built in the cloud.

Four or five years ago, companies talked about cloud migration mostly in terms of reducing CAPEX. Now, it's become much more strategic. The first question we help them work through is:

 

“Where do we start?”


If I'm an enterprise organization looking to move to the cloud, where do I start? Or maybe you're already in the cloud but looking to solve other issues.

The first use case we want to talk about, as you saw in the title, is file server and storage consolidation. What we mean by that is that enterprise organizations have file servers and storage arrays that, as I like to put it, are rusting out.

What I mean by that falls into roughly three scenarios.

One, you could be coming up on a refresh cycle because your company is on a normal refresh schedule, whether due to a lease or simply the overall refresh policy in the budget.

Two, you could be coming up on the end of the initial three-year maintenance that you bought with that file server or storage array and getting ready for the fourth year, and if you've been around this game long enough, you know that the fourth and subsequent years are always hefty renewals.

Three, you may be getting to the stage where end of service or end-of-life (EOL) is happening for that particular hardware.

What we want to talk about and show you here today is how to lift that data into the cloud, and how we move that data to the cloud so you can start using it.

That way, when you get to that refresh, there is no need to refresh those particular file servers and/or storage arrays, especially where you've got secondary and tertiary data that you're required to keep around.

I’ve talked to clients where they’ve got so much data it would cost more to figure out what data they have, versus just keeping it. If you are in that situation out there, we can definitely help you with this.

The ability to take that file server or storage array, move its data to the cloud, and use SoftNAS to access that data is what we're here to talk to you about today, and how we can solve it.

This can also apply to DR situations, and even to whole data center moves, anytime you're looking to make a start in the cloud or you've got old gear sitting around.

You're looking at the refresh and trying to figure out what to do with it?

Definitely give us a ring here with that too.

As we talk about making the case for the cloud here, the secondary and tertiary data is probably one of the biggest problems that these customers deal with because it gets bigger and bigger.

It's expensive to keep on-premises, and you have to migrate it. Anytime you do a refresh or buy new gear, you have to migrate this data, no matter what toolsets you have; you've got to migrate every time you go through a refresh.

Why not just migrate once, get it done with, and simply add more as needed over time?

Now, the cloud is much safer, easier to manage, and much more secure.

A lot of the gaps in knowledge we've had in the past about what goes on in the cloud around security have been taken care of.

SoftNAS: the only solution that makes the cloud perform as well as on-premises


What you'll find with SoftNAS, and what makes us very unique, is that our customers keep telling us, "By using your product, I'm able to run at the same speed, or see the same customer experience, in the cloud as I do on-prem." A lot of that is because we're able to tune our product, a lot of it is because of the way we designed our product, and more importantly, there are smart people behind it who can help make sure your experience in the cloud is the same as you have on-prem.

Or if you're looking for something less than that, we can help you with that piece too. What you're going to see us offering covers migration and movement of the data, speed, performance, and high availability, which is a must, especially if you're running any type of critical application or if this data simply has to be highly available.

You're also going to see that we're scalable; we can move all the way up to 16 PB. We've got compliance covered, so anyone on your security team will be happy to hear that we take care of security and encryption of the data as well, in a way that works seamlessly for you.

Virtual NAS Appliance

Tune to your performance and cost requirements


SoftNAS is a virtual NAS appliance, whether it's running in your cloud platform or sitting on-prem in your VMware environment. We are storage agnostic, so we don't care where that storage comes from. If you're in Azure, that's anything from cool blob all the way to premium; if you're on-prem, it's whatever SSDs or hard drives are connected to your VMware system.

As much as we are storage agnostic on the backend, on the front end, we’re also application agnostic. We don’t care what that application is. If that application is speaking general protocols such as CIFS, NFS, AFP, or iSCSI, we are going to allow you to address that storage.

That gives you the benefit of being able to utilize backend storage without having to make API calls to that backend storage. It helps customers move to the cloud without having to rewrite their applications to be cloud-specific, whether that's AWS or Azure. The SoftNAS NAS filer gives you access to backend storage without needing to talk to the APIs associated with that backend storage.
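To illustrate that point, here is a minimal Python sketch contrasting ordinary file I/O against a mounted share with a direct object-storage API call. The mount path and bucket name are hypothetical.

```python
# With a NAS front end, the application just uses ordinary file I/O against a
# mounted share (hypothetical mount point /mnt/shared):
with open("/mnt/shared/reports/q3.csv", "w") as f:
    f.write("region,revenue\nwest,1200\n")

# Without a NAS front end, the same write means calling the object store's API
# directly and changing the application code (hypothetical bucket name):
import boto3

boto3.client("s3").put_object(
    Bucket="example-app-bucket",
    Key="reports/q3.csv",
    Body=b"region,revenue\nwest,1200\n",
)
```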

Tiered Storage Across Multi-cloud Storage Types

There are real benefits that come from using SoftNAS as a frontend to your storage. Since I've been at this company, our customers have been asking us for one specific thing: can we get a tiering structure across multiple storage types?


This is regardless of the backend storage: "I want my data to move as needed." We go into many environments, and what we see is that probably about 10% to 20% of the data is considered hot, with the other 80% to 90% being cool if not cold data.

When customers know their data lifecycle, they can eventually save money on their deployment. We have customers who come in and say, "My data lifecycle is: for the first 30 to 60 days, it's heavily hit. In the next 60 to 90 days, somebody may or may not touch it. Then in the next 90 to 120 days, it needs to move to some type of archive tier." SoftNAS gives you the ability to do that by setting up smart tiers within that environment. Based on the aging policy associated with the blocks of data, it migrates that data down tier by tier as need be.

If you’re going through the process, after your first 30 to 60 days, the aging policy will move you down to tier two. If afterward that data is still not touched after 90 to 120 days, it will move you down to an archive tier, giving you the cost savings that are associated with being in that archive tier or being in that lower-cost tier two storage.

The benefit also is that, just as data can migrate down these tiers, it can migrate back up. Say you get into a scenario where you're going through a review, and this is data that has not been touched for a year.

Whether it's a tax review or some other type of audit, as that data gets touched it will first move from tier three back up to tier two. If that data continues to be touched, it will move all the way back up to tier one, and the aging policy will then start working it back down again.
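Here is a deliberately simplified Python sketch of that demote-and-promote behavior. The tier names and day thresholds are assumptions for illustration; the real SoftNAS aging policies are configured in the product rather than hand-coded.

```python
# Simplified sketch of demote-by-age and promote-on-access tiering.
# Tier names and thresholds are assumptions, not SoftNAS's implementation.
from dataclasses import dataclass

TIERS = ["tier1_hot", "tier2_warm", "tier3_archive"]
DEMOTE_AFTER_DAYS = {"tier1_hot": 60, "tier2_warm": 120}   # assumed thresholds

@dataclass
class Block:
    name: str
    tier: str
    days_since_access: int

def age(block: Block) -> None:
    """Demote a block one tier once it has sat untouched past the threshold."""
    limit = DEMOTE_AFTER_DAYS.get(block.tier)
    if limit is not None and block.days_since_access > limit:
        block.tier = TIERS[TIERS.index(block.tier) + 1]

def touch(block: Block) -> None:
    """On access, promote the block one tier and reset its age."""
    block.days_since_access = 0
    idx = TIERS.index(block.tier)
    if idx > 0:
        block.tier = TIERS[idx - 1]

b = Block("audit-2019.dat", "tier3_archive", days_since_access=400)
touch(b)          # an audit pulls it back up one tier
print(b.tier)     # -> tier2_warm
```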

Your tier three could be multiple things. It could be object storage, or it could be EBS magnetic or cold HDDs. Your tier two, depending on the platform you're on, could be EBS Throughput Optimized, GP2, or magnetic, and so on.

Your hot tier could be GP2, provisioned IOPS, premium disk, or standard disk; it depends on what your performance needs are.
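As a rough cheat sheet, the kind of mapping described above might look like the following. These pairings are illustrative examples only, not prescriptions; the right choice depends on platform and workload.

```python
# Illustrative tier-to-storage-type pairings; not a prescription.
TIER_EXAMPLES = {
    "tier1_hot":     ["EBS Provisioned IOPS (io1/io2)", "EBS gp2/gp3", "Azure Premium SSD"],
    "tier2_warm":    ["EBS Throughput Optimized (st1)", "Azure Standard SSD"],
    "tier3_archive": ["EBS Cold HDD (sc1)", "Object storage", "Azure Standard HDD"],
}

for tier, options in TIER_EXAMPLES.items():
    print(f"{tier}: {', '.join(options)}")
```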

High Availability (HA) Architecture

SoftNAS has patented high availability (HA) on all platforms that we support. Our HA is patented on VMware, it’s patented within Azure and also within AWS.


What happens within that environment is that a virtual IP sits between two SoftNAS instances, and each of the two SoftNAS instances has its own data stores associated with it.

There is a heartbeat between those two instances, and the application talks to the virtual IP between them. If an issue or failure occurs, your primary instance shuts down and service moves to your secondary instance, which then becomes your primary instance.

The benefit of that is that it’s a seamless transition for any kind of outage that you might have within your environment.

It's also structured according to the providers' best practices, which means placing those instances in an availability set or in different availability zones, so that you can take advantage of the SLAs associated with the provider.
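For a feel of the pattern, here is a heavily simplified heartbeat-and-takeover sketch in Python. It is not SoftNAS's actual HA implementation; the hostname, port, thresholds, and takeover hook are placeholders.

```python
# Heavily simplified heartbeat/virtual-IP failover pattern (not SoftNAS's HA).
# Hostname, port, and the takeover hook are placeholder assumptions.
import socket
import time

PRIMARY = ("primary.internal", 22)   # hypothetical heartbeat target
CHECK_INTERVAL_S = 5
MISSES_BEFORE_FAILOVER = 3

def primary_alive(addr, timeout=2.0) -> bool:
    """Treat a successful TCP connect as a heartbeat response."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def take_over_virtual_ip() -> None:
    # Placeholder: a real system would move the virtual IP / route to the
    # secondary node and attach the shared storage pool here.
    print("Secondary promoting itself: taking over the virtual IP")

misses = 0
while True:
    if primary_alive(PRIMARY):
        misses = 0
    else:
        misses += 1
        if misses >= MISSES_BEFORE_FAILOVER:
            take_over_virtual_ip()
            break
    time.sleep(CHECK_INTERVAL_S)
```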

SoftNAS Disaster Recovery (DR) and High Availability (HA)


In this scenario, you have your HA setup within the California region. Within the California region, you have the system set up with SnapReplicate and HA, because that's what's needed within that region.

It allows you to fail over in case any issue happens to an instance itself. In the Azure environment, by placing it within an availability set, neither of those instances will exist on the same hardware or the same infrastructure, which allows you to get five 9s' worth of durability.

Within AWS, it's structured so that you can do the same by using availability zones, which give you application durability of up to five 9s there as well. Up until a year ago, you could say that an availability zone or a region had never gone down for any of the providers. But about a year ago, and about a month apart from each other, AWS had a region go down and Azure also had a region go down. A customer came to us and asked for a solution around that.

The solution we gave them was DR to a region entirely outside of that availability zone or region. That's what this next picture shows: although you have SnapReplicate within the region to protect you, you also have DR replication hosted entirely outside the region to ensure that your data is protected in case a whole region fails.
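For comparison, the cloud-native analogue of that cross-region idea on AWS is S3 replication between buckets in different regions (this is not SnapReplicate). Here is a hedged boto3 sketch; the bucket names and IAM role ARN are hypothetical.

```python
# Cloud-native analogue of cross-region DR replication (NOT SoftNAS SnapReplicate).
# Bucket names and the IAM role ARN are hypothetical.
import boto3

s3 = boto3.client("s3")

# Replication requires versioning on the source bucket.
s3.put_bucket_versioning(
    Bucket="example-prod-us-west-1",
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket="example-prod-us-west-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-replication-role",
        "Rules": [
            {
                "ID": "dr-to-other-region",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::example-dr-us-east-2"},
            }
        ],
    },
)
```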

Automated Live Data Migration with continuous sync


The other thing our customers have been asking for: as we come into their environments, they have tons of data, and the goal is to move to the cloud, either as quickly as possible or as seamlessly as possible.

They’ve asked us for multiple solutions to help them to get that data to the cloud. If your goal is to move your data as quickly as possible to the cloud — and we’re talking about Petabytes, hundreds of Terabytes of data — your best approach at that particular point in time is taking the FedEx route.

Whether it’s Azure Data Box or AWS Snowball, being able to utilize that to be able to send the data over to the cloud and then import that data into SoftNAS makes it easier for you to be able to manage the data.

That is going to be a cold cut-over. It means that at some point, on-prem, you have to stop that traffic and say, "This is what counts as enough. I'm going to send this over to the cloud, then populate it and start over with my new data set in the cloud and run that way."

If you're looking for a single cut-over and we're not talking about petabytes' worth of data, the way we explain it to customers is Lift & Shift. By using SoftNAS and its Lift & Shift data migration capability, you can do a warm cut-over, where data still running on your legacy storage servers is copied over to the SoftNAS device in the cloud.

Then, once that copy is complete, you just roll over to the new instance in the cloud. SoftNAS has congestion algorithms in UltraFAST that allow that data to move over highly congested lines.

What we’ve seen within our testing and within different environments is that we could actually push data up to 20X faster by using UltraFAST across lines.

This is where the decision comes in. Are you cold-cutting that data and sending it over to the cloud via Azure Data Box or Snowball, or is it possible for you to use Lift & Shift and do a warm cut-over to the SoftNAS device you would have in the cloud?
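A rough way to frame that decision is to compare the shipping turnaround of a transfer device against the network transfer time. The sketch below uses placeholder figures, including an assumed baseline for how poorly plain TCP behaves on a lossy long-haul line and the 20X improvement quoted above.

```python
# Rough decision helper: ship a device (cold cut-over) vs. sync over the wire
# (warm cut-over). All figures are illustrative assumptions.

def network_days(data_tb: float, link_gbps: float, effective_utilization: float) -> float:
    """Days to push data_tb terabytes over a link at a given effective utilization."""
    bits = data_tb * 1e12 * 8
    return bits / (link_gbps * 1e9 * effective_utilization) / 86400

DATA_TB = 100          # assumed data set size
SHIP_DAYS = 10         # assumed door-to-door turnaround for a shipped device

# Assume plain TCP on a lossy, high-latency 1 Gbps line achieves only ~4% of
# line rate, and that a ~20x improvement brings that to roughly 80%.
plain = network_days(DATA_TB, 1, 0.04)
accelerated = network_days(DATA_TB, 1, 0.80)

print(f"Ship a device (cold cut-over):       ~{SHIP_DAYS} days")
print(f"1 Gbps, plain TCP:                   ~{plain:.0f} days")
print(f"1 Gbps, accelerated (warm cut-over): ~{accelerated:.0f} days")
```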

 

SoftNAS Lift and Shift Data Migration

SoftNAS's cost-effective Lift & Shift data migration solution allows you to move data to the cloud for economies of scale and disaster recovery, all while reducing data storage expenses.

 

Lift & Shift Data Migration

It's a key feature of SoftNAS Cloud NAS, enabling users to migrate data from one platform to another, whether from on-premises to the cloud or between different cloud providers, while maintaining continuous synchronization. SoftNAS® is a cloud data orchestration product focused on simplifying pain points within the marketplace.

As businesses continue to look for ways to increase efficiency and improve their bottom line, more and more look to the cloud. With increased flexibility, and the ability to cut out the high cost of hardware and hardware maintenance, the cloud is seen as the solution of the future, even though many are uncertain how to implement it.

Buurst hopes to make navigating the cloud a great deal easier, so that organizations can leverage the simplified business continuity strategies, and reduce infrastructure, maintenance, and service costs, without requiring advanced software and platform training.