Advice/Information needed on Sybase raw device(s) - and also a backup/restore question.

I've inherited a Sybase 11.9.2 environment, and after reviewing the setup/layout, I'm confused by a few things.... I'm fairly new to Sybase, but learning quickly. Any thoughts as to this set-up in general or things that would improve the environment (performance, administration or otherwise) are all appreciated. Related to the points awarded for this thread, I am ultimately looking for the answers to the questions below.

The drives that contain the Sybase devices are SSA/18GB in size, but all the devices were created 2GB in size (except for one they obviously screwed up and made 1MB).
-Is there a 2GB device size limit, or some performance factor, explaining why all devices were created this way?
-If there is not (which my research says there is not); Can I condense the number of devices by combining them somehow? ...or can I expand them (make them larger) to better utilize disk space (or at least fix that 1MB mistake)? ...or am I stuck with these devices as they are?
-Should I guard against multiple Sybase devices on the same physical drive? Right now some drives have two or three devices on them, is there a limit? Some guideline? Can my next device be 8GB (for example) in size on the same drive as couple of the 2GB devices (if I can't make the 2GB devices larger)?

Can I convert away from raw devices to file systems?

What is the easiest way to backup this environment? (currently just doing a "dump database" to a tape device) Is there a script or utility of some sort that can be used to backup the database, which makes for easy restore? (i.e. would recreate the devices needed, raw or otherwise, etc.). A kind of "full featured" backup/restore utility?

Can I restore a backup of this DB on a separate machine with an entirely different physical disk structure (8 X 9GB drives)? I'd like to do this to create a test environment. There's obviously enough space - but I don't know about the device creation/layout....how that affects things? If I can do this - what is the easiest way?

Here is some information that I could think to provide. If there are other things that I can provide, just let me know what.

Machine running Sybase has 4GB memory and has quad 466MHz processors.
OS is AIX.
Sybase is using raw devices.
There are nine 18GB drives, with nine volume groups being used to house the devices - each of which is mirrored.

Sorry if this is a bit much - this is my first question, and I wanted to make it a good one. :)

##############################################################################
#
# Configuration File for the Sybase SQL Server
#
# Please read the System Administration Guide (SAG)
# before changing any of the values in this file.
#
##############################################################################

Complicated questions need complicated answers. Don't worry about it. 8-)

>> Is there a 2GB device size limit, or some performance factor, explaining why all devices were created this way?

Many early operating systems had a limit of 2Gb per file. Since your devices are raw partitions rather than file system files, that limit wouldn't have applied directly, but the 2Gb convention carried over to raw devices anyway.

Another factor is that in ASE 11.9.2 each Sybase device has its own I/O queue, and a spinlock (mutex) protecting each queue. This means if you had, say, 32Gb devices, then each spinlock would be dealing with up to 16 times as many concurrent I/Os, which would lead to contention. So the advice in those days was to have many smaller devices rather than few larger devices. Note this issue went away in (I think) ASE 12.5.

This had to be balanced against the limit on the number of devices per database, which is 128 (256 devices total in the dataserver, but only 128 used for any single database) in all versions until ASE 15.0, I think. 2Gb devices means any one database can never be bigger than 256Gb, and no more than 512Gb of databases can live in this dataserver. In ASE 11.9.2 days that was a lot of disk! Today I have friends with more than that amount of disk at home. 8-)

>> If there is not (which my research says there is not); Can I condense the number of devices by combining them somehow? ...or can I expand them (make them larger) to better utilize disk space (or at least fix that 1MB mistake)? ...or am I stuck with these devices as they are?

In your current version: short of a full physical rebuild of your database(s) - bcp all data out, scripts to create all objects, drop database, drop devices, create devices, create database(s), create objects, bcp data in, create indexes - you can't do anything about this, and you're stuck with them as they are.

In ASE 12.5.x (12.5.1, I think) you can increase devices *IF* there is space for them. This is unlikely to be the case with raw partitions since they will correspond to an explicit number of cylinders on your disk spindles. Even ASE 12.5.x won't help you in that case, not without some clever shenanigans at a low level with LVMs etc.
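For what it's worth, the 12.5.x syntax looks something like this (a sketch from memory - device name is made up, and check the 12.5 docs before relying on it):

```sql
-- Grow an existing device by 2Gb (ASE 12.5.1+ only; fails if the
-- underlying raw partition has no free space behind the current extent)
disk resize
    name = "data_dev_03",
    size = "2G"
go
```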

Unless you're pushing that 256Gb size limit or expect to within the next five years, there is no need to worry about this. Leave them as they are. You'll do good things in terms of device I/O spinlock contention by having many small devices.

>> Should I guard against multiple Sybase devices on the same physical drive? Right now some drives have two or three devices on them, is there a limit? Some guideline? Can my next device be 8GB (for example) in size on the same drive as couple of the 2GB devices (if I can't make the 2GB devices larger)?

It depends. How busy are your disks? When you have JBOD (Just a Bunch Of Disks, or standalone) it is fairly easy to overwhelm a single physical disk. Check the specs for your disks - you want to find their "sustained transfer rate". (Ignore the "burst transfer rate"!) Unless you have 15,000RPM SCSIs there, you'll find even modern disks aren't happy above roughly 180 I/Os per second. (Note - number of I/Os, not amount of data.) 5 year old hardware will probably struggle to keep up with even 150. I've seen disks with wait states at even just a "mere" 100 I/Os per second.

How would you tell? You'll need to gather ASE statistics (use "sp_sysmon" and look at the Total I/O per second per device, and add up what that means for each physical disk), and then double-check these against operating system statistics (probably "iostat" or "sar"). If you see any physical disk with persistent wait states or double-digit queue depths, it's too busy. Expect that ASE tempdb and transaction log devices will be the busiest - these are the ones to separate from each other. Multiple data devices on the same physical disk is probably fine unless you have unusual hotspots in your I/O activity.
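A concrete starting point, if it helps (the interval is just an example):

```sql
-- 10-minute sample; the "Disk I/O Management" section reports
-- I/Os per second for each Sybase device
sp_sysmon "00:10:00"
go
```

Then total those per-device figures up for each physical disk, and compare against "iostat 5" (or "sar -d") output gathered over the same window.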

Note that you *can* "move" an ASE device from one raw partition to another using Sybase mirroring. (Do this at a quiet time.) Briefly, you mirror to the new location, then break the mirror (mode = remove, side = primary) which shifts where the device is considered to live. So rebalancing can be done, but it is slow and intrusive (all writes suspended while the mirror is written).
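Roughly, the sequence is as follows (names and paths are hypothetical - verify the syntax in the SAG first):

```sql
-- Mirror the device onto the new raw partition
disk mirror
    name = "data_dev_03",
    mirror = "/dev/rnewlv03"
go
-- Once the mirror is in sync, drop the *old* side; the device
-- is now considered to live on the new partition
disk unmirror
    name = "data_dev_03",
    side = "primary",
    mode = remove
go
```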

>> Can I convert away from raw devices to file systems?

You can, and I bet others here will argue with me, but at your version at least, I think you shouldn't.

ASE 11.9.2 can't guarantee writes to file system devices. All writes go through the file cache, so the O/S tells ASE the write completed while the data may still be sitting only in memory; if you lose the disk or the system at that point, your data is corrupted, possibly beyond repair.

ASE 12.0 and above fix this and give the ability to have guaranteed writes on file. Whether this is a good idea or not is beyond the scope of this question, but let's say it's still controversial. 8-) (I think you shouldn't, and that raw is better, but I'm in a minority on that one.)
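For reference, the guaranteed-write behaviour on file system devices is chosen at disk init time. A sketch (path, name and vdevno are made-up examples; exact option spelling varies by version, so check your docs):

```sql
-- File system device with writes flushed through the O/S cache.
-- Size is in 2K pages here: 1048576 pages = 2Gb.
disk init
    name = "data_dev_09",
    physname = "/sybase/data/data_dev_09.dat",
    vdevno = 9,
    size = 1048576,
    dsync = true
go
```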

>> What is the easiest way to backup this environment? (currently just doing a "dump database" to a tape device)

That's pretty much it!

>> Is there a script or utility of some sort that can be used to backup the database, which makes for easy restore? (i.e. would recreate the devices needed, raw or otherwise, etc.). A kind of "full featured" backup/restore utility?

None shipped by Sybase. You can pay money to third parties, but they're just issuing "dump database" commands under the hood, unless they're doing scripts plus BCP instead.

So long as you have good backups of the master database and your user database(s), you can get back from anything. It's considered good practice to also have copies of some of the system tables in master (sysdatabases, sysdevices, sysusages, syslogins, syssrvroles, sysloginroles) and the ASE configuration file, but these aren't necessary, they just make some things simpler.
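Keeping those copies can be as simple as a small script run after each dump. A sketch in dry-run form (login, server name and paths are placeholders - drop the echo to actually run the commands):

```shell
# Print the bcp commands that would export the key master tables
# in character mode after each "dump database".
for t in sysdatabases sysdevices sysusages syslogins syssrvroles sysloginroles; do
    echo "bcp master..$t out /backups/master_$t.bcp -c -Usa -SSYBSRV"
done
```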

>> Can I restore a backup of this DB on a separate machine with an entirely different physical disk structure (8 X 9GB drives)? I'd like to do this to create a test environment. There's obviously enough space - but I don't know about the device creation/layout....how that affects things? If I can do this - what is the easiest way?

You can do this but you'll wish you hadn't.

If the segment mappings (briefly, which bit of which database goes on which device in which order) don't match exactly, things can get very scrambled. You could end up with data and log mixed (which prevents incremental log backups), and/or with the allocations really messed up (no real effect on your database, but it makes some recovery tasks harder... and it looks ugly). This, by the way, is why it's a good idea to keep those system tables from master: they tell you what your fragments looked like, so you can prevent this happening in any restore.

The absolute minimum requirement for restoring is that the target must be at least as large as the source. However, dump & load can't clean up and rearchitect your database for you; loading into a mismatched layout usually makes things much worse. Remember that at your ASE version there is good incentive to stay on small devices anyway, and with large devices you have fewer options for rebalancing a device from one drive to another.

Your perfect world scenario here is moving to an entirely new disk system with a lovely RAID 1+0 drive, so all load is automatically balanced over all physical disks, with (maybe) a 4Gb device size, on raw partitions, that you migrate to using a full physical rebuild as above... and is running ASE 12.5.3 (probably the best version to be on at time of writing). That actually isn't as much work as it sounds... not counting time taken for application testing, you could do it in a few days.

WOW. Joe, thanks for this excellent, detailed response. I feel bad posting follow-up questions...as I feel like this has already taken more of your time than I wanted it to (but somewhat knew was going to happen). I wish I could make the question worth more than 500 points, as it certainly deserves more IMO (heck, it probably deserves a consulting fee...lol). I'll try to keep the follow-ups short - although I anticipate the answers to be comparatively lengthy. ....sorry.

Given that I do desire to upgrade to 12.5.X ...and that this, upgraded via a full physical rebuild, is indicated as my "perfect world scenario" - what is the best way to go about this? Are there any recommended utilities to perform all the functions that you listed? (hoping that it's not all needing to be done manually.... lol) There are just so many objects that this task seems overwhelming to me.

I think that I would even be able to place it all within a RAID environment, as my current system does have the appropriate RAID controller. ...although (and forgive me - I am not a RAID expert), is moving to RAID 1+0 really benefiting me that much over my current set-up, in which each physical drive is mirrored to another? (which is RAID 1, correct?). I understand that I am sacrificing disk space - but this system has worked well for me in the past in drive failure situations; the other drive just picks right up until I replace the failed drive. I guess I am just leaning towards what I am used to - but I want to do whatever is BEST - not easiest.

A sub-question to this would be: will RAID 10 work just as well? As I understand it, this has some advantages over RAID 01... and more importantly, my system does not list 01, but it does show 10 as supported - unless it is known by another name. My system shows support for raid_0, raid_copy, raid_1, raid_5 and raid_10. ...and again, pardon me, as I know this RAID discussion is a little off topic - but it is related to the bigger picture.

I assume that a full physical rebuild of the DB is also my answer to attempting to create a backup (test) system on an entirely different machine with a different disk structure - since you indicate that otherwise would result in a "wish I hadn't" situation? In fact, if I am understanding all facts as presented - the RAID set-up would avoid much of this, as the number of physical drives and volume groups etc. would be eliminated (outside of making sure there was enough space) from the equation of issues - since I would be joining multiple physical drives to make one, on which I would place all the Sybase devices?

...and on an "in the meantime" thought process - separate from the rebuild option (and upgrading to 12.5.X at the same time), for creating my test environment on another machine: what would be the best way to copy the current 11.9.2 environment onto the other machine - without "wishing I hadn't"? 8-) I sorely need to create a test environment for myself (for a multitude of reasons).

Again, this is my first posted question - sorry it's so long ...had enough of me yet? haha

>> Given that I do desire to upgrade to 12.5.X ...and that this, upgraded via a full physical rebuild, is indicated as my "perfect world scenario" - what is the best way to go about this? Are there any recommended utilities to perform all the functions that you listed? (hoping that it's not all needing to be done manually.... lol) There are just so many objects that this task seems overwhelming to me.

Sybase ships a number of free tools that will help with generating the scripts. The GUI client "Sybase Central" can do this, as can the command-line tool "ddlgen". (I'm not sure from memory if ASE 11.9.x comes with ddlgen, but certainly ASE 12.5.x does, and you can use its ddlgen against the 11.9.2 environment.)

There is also a third-party Perl tool called dbschema.pl which would be useful to you here. It's widely available, just search on its name.

While all these tools can also give you the scripts to recreate the database and devices - you don't actually want to recreate them as is. But if you're going with an object-level rebuild like this, the devices and databases can be any size and shape (so long as they're big enough to actually hold the data!).
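As a rough illustration of the ddlgen route (server, port and database names are placeholders, and the flag details are from memory - check the Utility Guide):

```sh
# Generate DDL for the database and its objects into one script file
ddlgen -Usa -Ppassword -SHOST:4100 -TDB -Nproddb -Oproddb.ddl.sql
```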

>> [RAID]

RAID 0+1 or RAID 10 (similar but not the same, I'll explain in a bit) do provide a big performance gain over RAID 1, and cost exactly the same amount of space as overhead (ie. half). RAID 1 is simple mirroring - each disk is still used standalone, and mirrored to another standalone disk. If there are concerns about load balancing then *any* disk system using standalone disks is an ongoing maintenance issue. Just because we can balance the devices between the disks today does not mean they'll stay balanced, so we'd have to continue to prove it was as balanced as it could be.

It's much easier to let the RAID system automatically do the best balancing for us. RAID 0 and any of the RAIDs that use RAID 0 like RAID 0+1 or RAID 10 stripe the "logical disk" over all available physical disks. I/O is always balanced. However RAID 0 on its own has zero fault tolerance - lose one disk and you lose everything.

RAID 0+1 combines RAID 0 and RAID 1 - it takes a stripe of RAID 0 and mirrors it to another stripe of RAID 0. You get all the performance benefits of RAID 0 and all the redundancy of RAID 1. You can lose any one disk and everything's still ok. Lose two and it's all dead. RAID 10 is the other way around - you take a bunch of disks mirrored in RAID 1, and build a stripe across each mirrored pair. You can lose one disk per pair without bringing it all down.

RAID 5 is loved by IT Managers because it offers some striping performance, and some redundancy, but is cheap because it gives you more usable space. There's an argument (not fully accepted by everyone) that RAID 5 forces a performance hit for writes, and that a database is *always* doing writes, even in a read-only environment (tempdb, for instance).

I don't know enough about your proposed hardware to have an opinion about which is best for you. In all cases the *number* of disks is usually more important than the size per disk. It's better to have 10 x 36Gb drives (if you can find them) than 5 x 72Gb. The more physical disks, the more disk heads to spread I/O between.

>> I assume that a full physical rebuild of the DB is also my answer to attempting to create a backup (test) system onto an entirely different machine with a different disk structure - since you indicate that otherwise would result in a "wish I hadn't" situation? In fact, if I am understanding all facts as presented - the RAID set-up would avoid much of this, as the number of physical drives and volume groups etc. would be eliminated (outside of making sure there was enough space) from the equation of issues - since I would be joining multilple physical drives to make one, on which I would place all the Sybase devices?

Yep, exactly. A rebuild is the only way of getting away from your current disk layouts, and doing it first for a separate test system is excellent practice.

>> ...and on a "in the meantime" thought process - seperate from the rebuild option (and upgrading to 12.5.X at the same time), for creating my test environment on another machine; what would be the best way to copy the current 11.9.2 environment onto the other machine -without "wishing I hadn't"? 8-) I sorely need to create a test environment for myself (for a multitude of reasons).

"Best"? Well, the *easiest* is just to duplicate your current environment exactly. We don't actually care if the underlying disks are different, so long as everything at the Sybase level is the same. ie. it's fine to have fewer disks, so long as you end up with the same number of Sybase devices all of the same size. In fact strictly speaking you don't even have to have this! You could have fewer Sybase devices of a larger size each... but to avoid messing up the database(s), you'll have to make sure the database(s) is/are created with the same fragments in the same size in the same order.
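To illustrate what "same fragments in the same size in the same order" means (all names are made up; sizes are in Mb):

```sql
-- If production was built like this...
create database mydb
    on data_dev_01 = 1024,
       data_dev_02 = 1024
    log on log_dev_01 = 512
go
-- ...then test must repeat the same fragment sizes in the same
-- order, whatever the devices themselves are called:
create database mydb
    on test_dev_01 = 1024,
       test_dev_02 = 1024
    log on test_log_01 = 512
go
```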

You can't use "sp_helpdb" to achieve this, by the way, as it lists all fragments in alphabetical order of device name! You need to look at the master..sysusages table which shows all fragments for a given database (dbid) in order of allocation (lstart).

Another way of saying this is that if you reverse-engineer the DDL to (re)create the database, you want to make sure every clause in that statement is identical when you recreate it in test. We don't care if clauses that were on separate devices before are on the same device now.

Usually I don't find anything to add to your answers, but I think you are wrong in the part about decreasing the number of devices. It is possible to decrease the number of devices without a physical rebuild of the database: you just dump the database out, drop the database, drop the devices, create bigger devices, create the database on the new devices with the same layout of segments, and then load the database back. I have done this several times without any issues.

The tricky part is keeping the same layout of segments in the new database. You can use a DDL generation tool (e.g. Generate DDL in Sybase Central) or write your own based on the information stored in the sysdatabases, sysusages and sysdevices system tables.

That's exactly what I was trying to say with my example of moving from 2Gb to 4Gb devices, but I didn't take it as far as your final, shorter, form because I didn't want to be confusing.

In fact Backup Server will collapse consecutive fragments with the same segment mapping on the same device into one continuous fragment.

That's a bit of a mouthful, but it's saying what both of us have now shown with our examples... that two consecutive fragments of 2000 each can be turned into a single fragment of 4000, provided they both had the same segment mappings (data, log, index, etc).
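In create database terms, that collapse means something like this (made-up names; sizes in Mb):

```sql
-- Original: two consecutive 2000Mb data fragments with the same
-- segment mapping, on separate devices
create database mydb
    on dev_a = 2000,
       dev_b = 2000
    log on log_a = 1000
go
-- Equivalent target for the load: one 4000Mb fragment on a
-- single bigger device
create database mydb
    on big_dev = 4000
    log on big_log = 1000
go
```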


OK guys.... The waters are starting to become a bit more clear - although that may just be an illusion created by the drastic increase in the depth. :) lol

I have a few more questions on what has been indicated, but I feel inclined to at least try to figure them out on my own (no truly better way to learn), as you have provided a very substantial starting point. There are still obviously many things I need to learn. lol. ...but this is why I love my work, and why I love this site.

This is going to be fun!!

....but I will ask this: You indicate that I should use master..sysusages to show all fragments for a given database (dbid) in order of allocation (lstart) to know how to recreate the devices - but when I list the contents of this table, the device names are not included here - how do I know which are which for recreation? ...or am I missing something obvious?

And my next question is, how is it best to leave the status of this question while I find time to try a few things? I don't want it to appear as though I have abandoned the question, but I know it will take me quite some time to play with the information you have provided - and I know with even greater certainty that I will have additional questions once I do... :) I just want to make sure I handle things properly here.
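The device names live in master..sysdevices, and a query along these lines relates the two (the column names are standard pre-15 system-table columns, but treat the query itself as a sketch):

```sql
select u.lstart, u.size, u.segmap, d.name
from master..sysusages u, master..sysdevices d
where u.dbid = db_id("mydb")
  and u.vstart between d.low and d.high
order by u.lstart
go
```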

That second WHERE clause is how you relate fragments to the device they live on. We have to use a "between" join because the fragment might start in the middle of a device, not just at the start. The device is bounded by the low and high ends of the virtual (server-wide) page addresses it contains. Ordering by logical (database-specific) page addresses means we get the fragments in the order they were created.

In terms of question management, EE likes questions to be closed sooner rather than later, since a closed & accepted solution is more useful to other people searching for answers. If your original question has been answered, the question should be closed, and any follow-up raised in a new question. EE also wants you to spend more points on questions. 8-)

You can safely leave the question idle for usually up to a month before a Cleanup Volunteer will ask you to do something about it, though.

I'll just add that the segmap column of the sysusages table tells you what kind of segment (data, log or mixed) is placed on that piece of device. Usually 3 means data, 4 means log and 7 means mixed (data and log on the same device). It's a bitmask related to the syssegments table, which describes the segments existing in a given database.
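Outside the server, those values are easy to decode with a few lines of script. This assumes the default syssegments numbering (system = 1, default = 2, logsegment = 4), which is where the usual 3/4/7 values come from:

```python
# Decode an ASE sysusages.segmap bitmask, assuming the default
# syssegments layout: system = 1, default = 2, logsegment = 4.
SEGMENTS = {1: "system", 2: "default", 4: "logsegment"}

def decode_segmap(segmap):
    """Return the segment names packed into a segmap bitmask."""
    return [name for bit, name in sorted(SEGMENTS.items()) if segmap & bit]

print(decode_segmap(3))  # data only: system + default
print(decode_segmap(4))  # log only
print(decode_segmap(7))  # mixed: data and log on the same fragment
```

User-defined segments get higher bits (8, 16, ...), so anything above 7 means the database has extra segments defined in its own syssegments table.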
