DP-300 Microsoft Azure Database – Configure Azure SQL Database resource for scale and performance

July 6, 2023

1. Configure Azure SQL Database for scale and performance

In this section we'll be looking at how to configure Azure SQL Database for scale and performance. So let's go to Azure SQL and create a new SQL database. There are three different deployment options. The first is the single database: it contains just one database, and it's a great fit for modern cloud-born applications. So if you're starting from scratch and you're going to have things in the cloud, this is where you can store the information. The storage is between half a terabyte and four terabytes, depending on how you provision it.

That is, unless you go into Hyperscale, where you can go up to 100 terabytes and maybe beyond. You have the option of serverless compute or provisioned compute, whichever one you want, and it's fairly easy to manage. Now, if you've got multiple databases, then you might want to consider an elastic pool. We'll have a look, I think in a couple of videos' time, at when you would use an elastic pool: what are the requirements, and when is it a good idea? There is also a fairly new option called the database server, used to manage groups of single databases and elastic pools. We won't actually be getting into that in this particular course, as there aren't any requirements in the DP-300 certification about it. So we're going to have a single database, and I'm going to click Create.

So you can set up your resource group. This could be an existing resource group, or you can create a new one just here. A resource group is just a container for all of the resources for a particular project, or maybe multiple projects. The advantage of having a resource group is that, at the end, if you don't want the project any more, you just delete the resource group and it deletes everything associated with the project.

You can also give it a database name, and you can see it's got to have at least one character and a maximum of 128 characters, and it shouldn't contain reserved words or certain special patterns. And then there's the server; I previously set up a server, and this was the dialog box, if you remember, for setting it up. The database name needs to be unique within the server. You can also set up an elastic pool at this point; as I say, we'll talk about that more in a couple of videos' time. And you can also set up your storage redundancy.
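The naming rules just mentioned can be sketched as a quick check. This is a minimal illustration of the rules stated above (1 to 128 characters, no reserved words); the reserved-word set here is a small assumed sample, not the full list Azure enforces.

```python
# Sketch of the database-name rules described above: 1-128 characters,
# no reserved words. The reserved-word set is illustrative only.
RESERVED_WORDS = {"master", "model", "msdb", "tempdb"}  # assumed examples

def is_valid_db_name(name: str) -> bool:
    """Check a proposed database name against the basic rules above."""
    if not 1 <= len(name) <= 128:
        return False
    if name.lower() in RESERVED_WORDS:
        return False
    return True
```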

So how do you want your backups to be stored? Do you want them to be locally redundant, zone-redundant or geo-redundant? And you can see with the word "preview" that some of these are relatively new. Geo-redundant backup storage is the default, and it will be geo-replicated to the paired region. So what's that? Well, we've got all of these regions across the world, and the paired region is simply one specific partner region. So, for instance, East US is paired with West US, and vice versa.

So what's the reason for this? Well, let's suppose that there is a major disaster and West US, East US and other regions are all offline at the same time. Rather than trying to bring resources back everywhere at once, Microsoft might decide that East US comes up first, because the paired region, West US, will probably have much of the same things, and so they concentrate on getting one of the pair up to begin with, with West US maybe coming later. And nearly every region has a paired region in the same sort of geography: China pairs with China, France with France, Europe with Europe. There is one exception, and that is Brazil South, whose regional pair is South Central US.
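The pairings mentioned above can be pictured as a simple lookup. Only the pairs named in the text are included; the real mapping covers every Azure region, so treat this as an illustrative sketch.

```python
# Illustrative lookup of the Azure paired regions mentioned above.
# Only the pairs named in the text are listed; the real mapping
# covers every region.
REGION_PAIRS = {
    "East US": "West US",
    "West US": "East US",
    "Brazil South": "South Central US",  # the one cross-geography pair
}

def paired_region(region: str):
    """Return the paired region used for geo-redundant backups, if known."""
    return REGION_PAIRS.get(region)
```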

You may be wondering why they don't use Brazil Southeast. That was, at the point of recording, a very new region; when I recorded previous courses about Azure, Brazil Southeast wasn't an option. So the only South American region was Brazil South, and it had to have a pair somewhere. So that is where your backup data goes.

Now, you can also configure the amount of compute and storage. Compute is the actual working power, and storage is the amount of data it can retain. I suggest you do configure the database here, because the default costs around $400-450 a month, and that might be more, or less, than you need. So in the next video we're going to talk about all of the various options that you can see on the screen here.

2. vCore-based purchasing model

Now in this video we're going to have a look at the service and compute tier, and you can see that there are six different tiers. For the DTU-based purchasing model there's Basic, Standard and Premium, and for the vCore-based model there's General Purpose, Business Critical and Hyperscale. There is a link if you want to get some idea of what the differences are, but I don't find it very easy to digest immediately. So let's just break this down into the two models: vCore and DTU. With vCore, we can specify the number of virtual cores, the memory, and the amount and speed of storage. So I can have, as you can see, anywhere at this level from 2 to 80 vCores. Now notice how much the estimated cost goes up and down.

It starts off at about $450 per month for this General Purpose vCore model, and you can see it doubles as soon as I go to four cores; by the time I'm up to 80 cores I'm at around $16,000-17,000. You've also got a database maximum size. Notice that as the vCores go up, this limit changes, and the reason for that is the maximum size is partly dependent on the vCores. It is also dependent on the hardware configuration. The standard hardware configuration nowadays is called Gen5. There are some alternatives, but stick to Gen5 as the balanced memory-and-compute version; you can also have a version which is more focused on compute rather than balance.
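Since the cost roughly doubles from 2 to 4 vCores, the pricing quoted above scales more or less linearly with core count. Here is a back-of-envelope estimator using the ~$450-for-2-vCores figure from the video as an assumed baseline; real Azure prices vary by region, currency and over time, so this is only a rough sketch.

```python
# Rough monthly cost estimator for General Purpose Gen5 compute,
# extrapolating linearly from the ~$450-per-2-vCores figure quoted
# above. Real Azure prices vary by region and over time.
BASELINE_VCORES = 2
BASELINE_MONTHLY_USD = 450.0  # approximate figure from the video

def estimate_monthly_cost(vcores: int) -> float:
    """Linear extrapolation from the assumed 2-vCore baseline."""
    return BASELINE_MONTHLY_USD * vcores / BASELINE_VCORES
```

At 80 vCores this extrapolates to about $18,000, in the same ballpark as the $16,000-17,000 shown in the portal.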

So there are options if you want them, but Gen5 is the standard name that you will hear. With two cores we can get up to 1,024 GB, but if I go all the way up to 80 cores then we can get up to 4,096 GB. And you'll notice the amount of log space allocated is directly proportional to the data maximum size: in fact, it's 30% of it. Now, while you can't necessarily configure them separately, you've also got differences in IOPS (input/output operations per second), the number of concurrent workers or requests that you can have, and the backup retention as well.
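The log-space rule just stated, 30% of the configured maximum data size, is simple enough to express directly:

```python
# The transaction-log allocation described above: 30% of the
# database's configured maximum data size.
def log_space_gb(max_data_size_gb: float) -> float:
    """Allocated log space, per the 30% rule stated in the text."""
    return 0.30 * max_data_size_gb
```

So a 1,024 GB maximum data size gets roughly 307 GB of log space allocated.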

So you can have a maximum of 80 vCores in the Gen5 configuration and a maximum data size of four terabytes. If you want more, then instead of General Purpose you'd be looking at Hyperscale; note, though, that changing from Hyperscale back to something else is not supported. With Hyperscale you can go up to 100 terabytes as standard, and again up to 80 vCores. Now, Business Critical is the one in the middle: that's when you need a high transaction rate and high resiliency, and you can see you can go up to 80 vCores there and up to four terabytes. So which of these three should you be using?
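The decision the next paragraphs walk through can be summarised as a small rule of thumb, using only the thresholds quoted in the text (more than 4 TB means Hyperscale; 1-2 ms latency or high transaction rates mean Business Critical). Treat it as a sketch, not an official sizing tool.

```python
# Sketch of the tier-selection logic described in the text.
def choose_service_tier(max_size_tb: float, needs_low_latency: bool) -> str:
    """Pick a vCore service tier from the rules of thumb above."""
    if max_size_tb > 4:
        return "Hyperscale"         # only tier above 4 TB, up to ~100 TB
    if needs_low_latency:
        return "Business Critical"  # 1-2 ms I/O latency, read replica
    return "General Purpose"        # 5-10 ms latency, most workloads
```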

Well, General Purpose offers scalable compute and storage, and that's for most business workloads. The storage latency, the amount of time it takes to actually retrieve the data, is about five to ten milliseconds. To put that in perspective, that's the same as SQL Server on a virtual machine. However, if you want a higher transaction rate and higher resiliency, then you've got Business Critical; but you can see the difference in the cost. General Purpose starts off at about $450, whereas Business Critical starts at about $1,100. So use Business Critical when you need low-latency input and output.

We're talking not five to ten milliseconds, but one to two milliseconds, or when there are frequent communications between the app and the database. You could also use Business Critical when there's a large number of updates or long-running transactions that modify data. You've got high resiliency, fast geo-recovery and recovery from failures, and also advanced data corruption protection. And you also get a free-of-charge secondary read-only replica. So let's say I was in West US: I could have a free-of-charge secondary read-only replica in East US.

And then when I wanted to read information, I could go to East US, as opposed to constantly going to West US, which is also handling the writes. As for Hyperscale, that is when you need more than four terabytes, up to around 100 terabytes. The advantage of Hyperscale is that the compute is charged at the same rate as the standard Azure SQL database; you can see here the estimated compute cost is around $800-900, and the storage cost is about $0.13 per gigabyte. Now, that is the vCore model. And, as we saw earlier, if I just go back to the Create SQL Database page, you can see we can have geo-redundant, zone-redundant and locally redundant backup storage.

The latter two are cheaper, but only use them if single-region data resiliency is enough; in other words, if something goes wrong in a particular region, that's okay, and you're not going to pay the extra money to have it in more than one region. So that is the vCore model, with its three tiers: General Purpose, Business Critical, and Hyperscale. On screen you can see some of the different hardware configurations that you can have with vCores. And it's interesting to note that there is a direct correlation between the number of vCores that you're allowed and the tempdb maximum data size. Tempdb is where SQL Server stores things temporarily, and the correlation is that there is a new tempdb data file for every vCore.

So when there's one vCore, we have one 32 GB file; when there are two, we have two files totalling 64 GB; and so on, up it goes. So if we have 14 vCores, then our tempdb max data size will be 448 GB. And that's the case for the General Purpose tier, so you can see two vCores gives 64 GB, and the same for the Business Critical tier as well. You'll also notice that there are increases in memory.
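The tempdb scaling rule just described, one 32 GB data file per vCore, can be written down directly:

```python
# tempdb sizing rule described above: one 32 GB data file per vCore,
# so the maximum tempdb data size scales linearly with core count.
TEMPDB_GB_PER_VCORE = 32

def tempdb_max_size_gb(vcores: int) -> int:
    """Maximum tempdb data size for a given vCore count."""
    return TEMPDB_GB_PER_VCORE * vcores
```

One vCore gives 32 GB, two give 64 GB, and 14 give the 448 GB mentioned above.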

Again, memory goes up in line with the number of vCores, as does the in-memory OLTP storage, that is, what it can retain in memory rather than having to go out to an SSD, a solid-state disk. The IOPS (input/output operations per second) also increase, as do the maximum concurrent workers or requests and the maximum number of logins. Some things do remain constant; but generally, while we've seen earlier that there are some limitations on storage at the General Purpose level, when we're talking about tempdb, it goes up.

When we're talking about log size, it goes up. When we're talking about the maximum data size, it goes up as well. So an increase in the number of vCores also increases other things. Now, notice the I/O latency, which I referred to earlier. For General Purpose we have an I/O latency of between five and seven milliseconds for writing and five and ten milliseconds for reading, but for Business Critical this is reduced to one to two milliseconds.

