
Database Administration : Scenario 1

You receive a call at 18:15 from a client: someone deleted 10 rows of data in Table A and dropped
Table B 30 minutes ago, and the client demands the data back. Full backup jobs run every week
on Sunday, differential backups run every day at midnight, and transaction log backups run every
30 minutes, starting at midnight sharp.

1. Describe your process step by step to restore this data.

In this particular situation, the deletion and the DROP TABLE happened at roughly 17:45, so we need a point-in-time restore to just before that moment. Without the differential backup we would have to restore the Sunday full backup plus every transaction log backup taken since Sunday, plus a tail-log backup; that is dozens of files, and this is a somewhat dangerous situation, because if any one of those transaction log backup files is unusable (disk corruption, backup failure), we cannot restore past that point in the database's history. The differential backup shortens the chain considerably: we restore the Sunday full backup, the most recent differential backup (last midnight), and the log backups taken since the differential, 00:30 through 17:30, which is 35 files, plus a tail-log backup to cover activity up to 17:44:59.

In any situation that requires a quick turnaround time for restoration, a differential backup is our friend. The more files there are to process, the more time it takes to set up the restore scripts; and the more files we have to work with, the more complex the restore operation, and so, potentially, the longer the database will be down. If we were also able to take differential backups during the day, it would cut down dramatically the number of files involved in any restore. Note too that rolling the whole production database back to 17:44:59 discards every legitimate transaction after that point; it is often safer to restore to a copy of the database and copy the deleted rows and dropped table back into production.

Step-by-step restore:

Step 1: Back up the tail of the log first, so the most recent committed transactions are not lost.

Step 2: Right-click your database and select the following items from the drop-down menus: Tasks >> Restore >> Database.

Step 3: Click the "Timeline" button, select "Specific date and time," and enter your desired date and time in the boxes below, here 17:44:59, just before the deletion. You can also click in the green color bar or use the slider to set the time. Click "OK." As you can see, the full backup, the 12:00 midnight differential backup, the 30-minute transaction log backups, plus the tail-log backup will get us the 17:44:59 data that we want.

Step 4: Click "OK" to start the restore. You will see the progress indicator in the upper left as SQL Server works through the full backup, the differential, and then each of the transaction logs before it finishes.
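The same restore sequence can be scripted instead of driven through the SSMS dialog. A minimal T-SQL sketch, assuming a hypothetical database name (SalesDB), hypothetical backup file paths, and a hypothetical date for the STOPAT time:

```sql
-- Capture the tail of the log first; WITH NORECOVERY leaves the
-- database in the restoring state, ready to accept backups.
BACKUP LOG SalesDB
    TO DISK = N'D:\Backups\SalesDB_tail.trn'
    WITH NORECOVERY;

-- Restore the Sunday full backup, then the midnight differential.
RESTORE DATABASE SalesDB FROM DISK = N'D:\Backups\SalesDB_full.bak' WITH NORECOVERY;
RESTORE DATABASE SalesDB FROM DISK = N'D:\Backups\SalesDB_diff.bak' WITH NORECOVERY;

-- Restore each 30-minute log backup in sequence (00:30 .. 17:30):
RESTORE LOG SalesDB FROM DISK = N'D:\Backups\SalesDB_0030.trn' WITH NORECOVERY;
-- ...repeat for the remaining log backups...

-- Finally apply the tail-log backup, stopping just before the deletion
-- (the date below is a placeholder; the scenario gives only the time).
RESTORE LOG SalesDB
    FROM DISK = N'D:\Backups\SalesDB_tail.trn'
    WITH STOPAT = '2023-01-16 17:44:59', RECOVERY;
```

Scripting the restore is worth practicing in advance: at 18:15 on a bad day, a tested script is much faster than assembling the file list by hand.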

2. Client wants to know who is the culprit. What can you do to find this out?

To find who dropped the table, you can use SQL Server's built-in Schema Changes History
report, which reads the default trace. 1. Open SQL Server Management Studio and connect to the
SQL Server instance. 2. Right-click the SQL Server instance and select Reports -> Standard Reports ->
Schema Changes History. 3. This opens the Schema Changes History report, which lists the DDL
changes on the instance, including who dropped Table B and the timestamp of the drop. Note that the
default trace captures DDL only: it will show the DROP TABLE, but not the DELETE of the 10 rows in
Table A. Finding the author of the row deletions requires reading the transaction log or having an audit
or DDL/DML trigger already in place.
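The same information the report shows can be queried directly from the default trace. A sketch, assuming the default trace is still enabled (it is on by default) and the trace files have not yet rolled over past the time of the drop:

```sql
-- Locate the default trace file and list recent object drops,
-- including who issued them and from which host/application.
DECLARE @trace NVARCHAR(260);
SELECT @trace = path FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime, t.LoginName, t.HostName, t.ApplicationName,
       t.DatabaseName, t.ObjectName
FROM sys.fn_trace_gettable(@trace, DEFAULT) AS t
JOIN sys.trace_events AS te
  ON t.EventClass = te.trace_event_id
WHERE te.name = 'Object:Deleted'   -- the event fired by DROP TABLE
ORDER BY t.StartTime DESC;
```

The default trace is a rolling set of small files, so run this promptly; once the files cycle, the evidence is gone.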

3. You find out database is in simple recovery model. How much data is lost?

In the SIMPLE recovery model the transaction log is truncated automatically at each checkpoint, so
transaction log backups cannot be taken and point-in-time restore is impossible. The 30-minute log
backup jobs cannot have been producing usable log backups, and the most recent restorable point is
the midnight differential. Everything since midnight, roughly 17 hours 45 minutes of changes, is lost.

SIMPLE recovery is only appropriate where that kind of loss is acceptable. Two contrasting examples:

Development server, VLDB, simple file architecture. Here, we have a development machine containing
one VLDB. This database is not structurally complex, containing only one data file and one log file. The
developers are happy to accept data loss of up to a day, and all activity on this database takes place
during the day, with very few transactions happening after business hours. In this case, it might be
appropriate to operate the user database in SIMPLE recovery model and implement a backup scheme
such as the one below. 1. Perform full nightly database backups for the system databases. 2. Perform a
full weekly database backup for the VLDB, for example on Sunday night. 3. Perform a differential
database backup for the VLDB on the nights where you do not take the full database backup; in this
example, Monday through Saturday night.

Production server, 3 databases, complex file architecture, 30 minutes' data loss. In this final scenario,
we have a production database system that contains three databases with complex data structures. Each
database comprises multiple data files split into two filegroups, one read-only and one writable. The
read-only filegroup is updated once per week with newly archived records. The writable filegroups have
an acceptable data loss of 30 minutes. Most database activity on this server takes place during the day.
With the databases operating in FULL recovery model, the backup scheme below might work well. 1.
Perform nightly full database backups for all system databases. 2. Perform a weekly full file backup of
the read-only filegroups on each user database, after the archived data has been loaded. 3. Perform
nightly full file backups of the writable filegroups on each user database. 4. Perform log backups every
30 minutes for each user database; the log backup schedule should start after the nightly full file
backups are complete, and finish 30 minutes before the full file backup processes start again.
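The recovery model can be confirmed, and corrected, with a couple of lines of T-SQL. A sketch (the database name SalesDB and the backup path are placeholders):

```sql
-- Check the current recovery model of every database on the instance.
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch to FULL and take a full backup immediately: the log backup
-- chain only begins once a full backup exists under the new model.
ALTER DATABASE SalesDB SET RECOVERY FULL;
BACKUP DATABASE SalesDB TO DISK = N'D:\Backups\SalesDB_full.bak';
```

Until that first full backup completes, the database still behaves as if it were in SIMPLE recovery, so scheduling the log backup jobs alone is not enough.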

Database Administration : Scenario 2

You have Server A and Server B on different virtual machines on the same underlying VM host,
configured as an AlwaysOn high availability solution for a mixed OLTP and OLAP application. Currently,
all workload runs on Server A (the primary).

1. What recommendations would you give to improve the performance of the database servers?

One recommendation is to offload the OLAP/reporting workload to a readable secondary replica on
Server B, so the OLTP workload on the primary is not competing with large analytical scans. A second
concerns logins after any failover or database move. If two SQL logins with the same name are created
on different machines, the underlying SIDs will be different. So, when we move a database from Server A
to Server B, a database user that had permissions on Server A moves with the database, but its SID
matches no login on Server B and the database user is "orphaned." The orphaned user must be
re-mapped to the corresponding login on Server B before its permissions are valid again. This never
happens for matching Active Directory accounts, since their SID is always the same across a domain.
We should: 1. Audit each and every login; never assume that if a user has certain permissions in one
environment they need the same in another. 2. Fix the user-to-login mappings for logins that exist on
both servers, to ensure no one gets elevated permissions. 3. Perform orphaned-user maintenance:
remove permissions for any users that do not have a login on the server to which we are moving the
database. Don't let these issues dissuade you from performing full restores as and when necessary.
Diligence is a great trait in a DBA, especially in regard to security. If you apply this diligence, keeping a
keen eye out when restoring databases between mismatched environments, or when dealing with
highly sensitive data of any kind, then you'll be fine.
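Orphaned-user maintenance can be sketched in a few lines of T-SQL (the user and login name AppUser is a placeholder):

```sql
-- List database users whose SID matches no login on this server.
SELECT dp.name AS orphaned_user, dp.sid
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
  ON dp.sid = sp.sid
WHERE sp.sid IS NULL
  AND dp.type = 'S'                              -- SQL users only
  AND dp.authentication_type_desc = 'INSTANCE';  -- mapped to a login

-- Re-map an orphaned user to the like-named login on this server.
ALTER USER AppUser WITH LOGIN = AppUser;
```

Running the first query on the secondary after every failover test is a cheap way to catch mapping drift before users do.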

2. What is the critical flaw of this architecture?


The critical flaw is that Server A and Server B are virtual machines on the same underlying VM host.
AlwaysOn can protect against the failure of either SQL Server instance or guest operating system, but
if the shared host fails (hardware fault, storage failure, host reboot), both replicas go down at once
and the "high availability" solution provides no availability at all. The replicas should sit on separate
hosts, ideally in separate racks or sites, so there is no single point of failure. And because high
availability is not a backup strategy, we still need regular full database backups to avoid losing critical
data; a full database backup also includes the complete permission set for the database, since each
user's permissions are stored in the database and associated with the login they use on that server.

Database Administration : Scenario 3

You receive a call from a client that the database is very slow to respond.

1. Describe your process to investigate this performance issue and/or share one of your experiences
troubleshooting this issue.

Practice test restores for your critical databases on a regular schedule; a backup is only useful if you
are 100% sure it is going to work. If the base full backup isn't refreshed regularly, typically on a weekly
basis, the differentials will start to take longer and longer to process.

In my experience, a common cause of a suddenly slow database is trouble in the transaction log. A
fragmented log file, one with an excessive number of virtual log files caused by many small auto-growth
events, can dramatically slow down any operation that needs to read the log. For example, it can cause
slow startup times (since SQL Server reads the log during the database recovery process), slow
RESTORE operations, and more. Log size and growth should be planned and managed to avoid excessive
numbers of growth events, which lead to this fragmentation. Worse, if there is no more space within
the log to write new records and no further space on the disk to allow the log file to grow, the
database effectively becomes read-only until the issue is resolved.

If the root cause of the log growth turns out to be no log backups (or insufficiently frequent ones),
then perform one immediately. An even quicker way to make space in the log, assuming you can get
permission to do it, is to temporarily switch the database to SIMPLE recovery to force a log truncation,
then switch it back to FULL and perform a full backup to restart the log backup chain.
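A sketch of that investigation in T-SQL (the database name SalesDB and the backup path are placeholders; sys.dm_db_log_info requires SQL Server 2016 SP2 or later):

```sql
-- Count virtual log files; thousands of VLFs indicate a badly
-- fragmented log built up from many small auto-growth events.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID('SalesDB'));

-- Check log size and how full it is (runs in the current database).
SELECT total_log_size_in_bytes / 1048576 AS log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

-- If missing log backups are the root cause, take one immediately.
BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_log.trn';
```

If the VLF count is high even after the space issue is resolved, the longer-term fix is to shrink the log once and regrow it in a few large, pre-planned increments.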
