#1
Hi,

Last night my tester had this error in the log from a backup.

Is there any way to tell what the primary key of the record causing the issue is?

Is this related to data in the source or the destination?

Should we do a recover on the table in the source db?

Is there a general explanation of what the stats mean? I'm curious about Databases = 10, as I'm only handling 2 databases with this server, so I'm wondering whether it points to something I'm doing wrong in closing databases, or to something else.

Thanks,

Sue

Log excerpt sent to me
------------------------------------------------------------------------------

2015-05-01 02:00:27.592-10:00 66363234 1704 [$2601/9729] ERROR default Exception Cvt: NexusDB: Key violation.
Operation: BulkCreate
File: "C:\ProgramData\PULS\1.0\DatabaseBackup\_REST.nx1"
Index: "PRIMARYKEY_PtPMHi"
Key: 20885
RefNr: 25769803777
[$2601/9729]
ExceptionData:
ClassName: TnxServerRestructureTaskInfo
Destroying: False
Session:
ClassName: TnxServerSession
Destroying: False
Options:
_CurrentClientInfo: C:\Program Files (x86)\PULS\bin\MeterService.exe;Size:8593408;Date: 2015-04-30 06:51:12;Computer:PULS2;User:PULSSrvDB
Timeout: 10000
ServerEngine:
ClassName: TnxServerEngine
DisplayName: Server Engine
Stats:
License Key Status: Standard Server (non AWE)
Uptime: 0.09:59:40
Sessions: 11
Databases: 10
Transaction Contexts: 10
Cursors: 87
Statements: 18
Executing Statements: 0
Executed Statements: 21,841
Active Folders: 2
Inactive Folders: 1
Active Tables: 16
Inactive Tables: 83
AWE Edition: No
Block Cache Available: 987,136 kbyte
Block Cache Used: 18,840 kbyte
Block Cache Miss: 87,355
Block Cache Hit: 17,935,943
Block Cache Eviction: 0
Transactions Commited: 117,446
Transactions Commited Nested: 10,206
Transactions Rolledback: 3,612
Transactions Rolledback Nested: 3,612
Transactions Deadlocked: 0
Transactions Corrupted: 0
Blocks Read: 87,355
Blocks Written: 111,305
Temporary Storage Total Size: 0 kbyte
Temporary Storage Used Size: 0 kbyte
Temporary Storage Total Written: 0 kbyte
Temporary Storage Total Read: 0 kbyte
State: Started
StateTransition: None
UserName: Genie
Authenticated: False
ConnectedFrom: PULS2
CleanedUp: False
Cancelled: False
Executing: True
2015-05-01 02:00:27.594-10:00 66363234 1704 [$2601/9729] ERROR default Exception Cvt: NexusDB: Key violation.
Operation: BulkCreate
File: "C:\ProgramData\PULS\1.0\DatabaseBackup\_REST.nx1"
Index: "PRIMARYKEY_PtPMHi"
Key: 20885
RefNr: 25769803777
[$2601/9729]
ExceptionData:
ClassName: TnxServerRestructureTaskInfo
Destroying: False
Session:
ClassName: TnxServerSession
Destroying: False
Options:
_CurrentClientInfo: C:\Program Files (x86)\PULS\bin\MeterService.exe;Size:8593408;Date: 2015-04-30 06:51:12;Computer:PULS2;User:PULSSrvDB
Timeout: 10000
ServerEngine:
ClassName: TnxServerEngine
DisplayName: Server Engine
Stats:
License Key Status: Standard Server (non AWE)
Uptime: 0.09:59:40
Sessions: 11
Databases: 10
Transaction Contexts: 10
Cursors: 87
#2
In the BackupController example, the flag KeepIndexes is set to true. What are the advantages and disadvantages of this?

My database error seems to have occurred on an autoinc primary key. Could the flag have something to do with this?

Looking at forum posts, it seems that KeepIndexes should not be set and that indexes are recreated when a table is restored, so I'm confused.

Sue

Sue King wrote:
> Hi,
>
> Last night my tester had this error in the log from a backup.
>
> Is there any way to tell what the primary key of the record causing the issue is?
>
> Is this related to data in the source or the destination?
>
> Should we do a recover on the table in the source db?
>
> Is there a general explanation of what the stats mean? I'm curious about
> Databases = 10, as I'm only handling 2 databases with this server, so I'm
> wondering whether it points to something I'm doing wrong in closing
> databases, or to something else.
>
> Thanks,
>
> Sue
#3
Hello Sue,

On 30/04/2015 07:27 p.m., Sue King wrote:
> In the BackupController example, the flag KeepIndexes is set to true.
> What are the advantages and disadvantages of this?

Creating the backup with KeepIndexes enabled gives you a "working copy" of the database that can be used without any further step. This, of course, means that the indexes have to be built while the backup is created, which makes the process take longer. The backup will also be bigger, up to the size of the original database (it can be smaller if there are lots of deleted records that haven't been reused, since a backup only copies the "live" records).

With KeepIndexes disabled the backup is smaller (how much smaller depends on the number of indexes per table, etc.) and takes less time to generate, which means less strain on the server. But if you want to use the backup, you will have to restore it first so that the database can rebuild the indexes.

There is no good or bad option: it depends entirely on your needs. I have that option disabled by default, but my users can choose to create a backup with indexes if they want.

> My database error seems to have occurred on an autoinc primary key.
> Could the flag have something to do with this?

This shouldn't happen. If you haven't touched the values in that column, it should always stay in sync with the table and you should never have index problems there, at least not key violations. If you do have problems, it could mean that the table is corrupted.

--
Rodrigo Gómez [NDX]
México, GMT-6
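To make the trade-off concrete, here is a minimal sketch of what choosing between the two modes could look like. The type and member names (TBackupController, KeepIndexes, Execute) are stand-ins modelled on the example project mentioned above, not the exact NexusDB API, so treat this purely as an illustration:

```pascal
program BackupSketch;
{$APPTYPE CONSOLE}

uses
  System.SysUtils;

type
  // Placeholder for the backup controller used in the example project;
  // the real class and its members may be named differently.
  TBackupController = class
  public
    TargetFolder: string;
    KeepIndexes: Boolean;
    procedure Execute;
  end;

procedure TBackupController.Execute;
begin
  // In the real example this would drive the server-side backup;
  // here it only reports the chosen options.
  Writeln(Format('Backing up to %s (KeepIndexes=%s)',
    [TargetFolder, BoolToStr(KeepIndexes, True)]));
end;

var
  Backup: TBackupController;
begin
  Backup := TBackupController.Create;
  try
    Backup.TargetFolder := 'C:\ProgramData\PULS\1.0\DatabaseBackup';
    // False: smaller, faster backup; indexes are rebuilt when the table is
    // restored. True: the backup is a directly usable working copy.
    Backup.KeepIndexes := False;
    Backup.Execute;
  finally
    Backup.Free;
  end;
end.
```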
#4
Hi Rodrigo,

At the moment the backups are just backups, and don't need indexes. They are for backup purposes, and for us to look at if there are issues. I guess having the indexes does make it easier for us, as we don't have to worry about which version to use if we need to recover. But as the dbs start to grow, speed is probably more important.

I was worried about there being an issue in the source db. I'll get them to recover the table. In this early version the data is being created but not yet seen by the users, so it doesn't matter very much. I'll have to look at how it is created to see if I can prevent it in future. There is only one thread that accesses this particular table, so there should not be any contention.

Regards

Sue

Rodrigo Gomez [NDX] wrote:
> This shouldn't happen. If you haven't touched the values in that column,
> it should always stay in sync with the table and you should never have
> index problems there, at least not key violations. If you do have
> problems, it could mean that the table is corrupted.
#5
Hello Sue,

> At the moment the backups are just backups, and don't need indexes. They
> are for backup purposes, and for us to look at if there are issues. I
> guess having the indexes does make it easier for us, as we don't have to
> worry about which version to use if we need to recover.

You don't need to worry about this. The recover process uses a stream inside the backup table to recreate the indexes, so they will be the "correct" ones (as of the time of the backup).

My rule of thumb is: how often will you need to use the backup? If it's just once in a while (the usual role of a backup: something went wrong, hopefully only once in a blue moon), then take the backup without indexes. If it's very often, then it might be worthwhile to create the backup with indexes.

Regards,

--
Rodrigo Gómez [NDX]
México, GMT-6
#6
Hello Rodrigo,

I do the backup every night. The idea is that the backup can be copied to a different computer, so that if the PC dies they have a fairly recent db to restore from. This is a method we've been using for years in a different system, and we have used those backups a few times.

It is also a good way to get an up-to-date backup that is suitable to send to me if something odd is going on, without the customer needing to do anything other than find the backup.

With this system we are compressing the backup and copying it to a folder under Public Documents, and that is what should be backed up by the client instead of the actual db.

I think I will drop the KeepIndexes.

Thanks a lot for your comments and ideas,

Sue

Rodrigo Gomez [NDX] wrote:
> My rule of thumb is: how often will you need to use the backup? If it's
> just once in a while (the usual role of a backup: something went wrong,
> hopefully only once in a blue moon), then take the backup without
> indexes. If it's very often, then it might be worthwhile to create the
> backup with indexes.
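For reference, the "compress and copy under Public Documents" step described above can be done with the stock System.Zip unit. The folder names in this sketch are examples only, not the application's actual paths:

```pascal
program CompressBackupSketch;
{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.IOUtils, System.Zip;

const
  // Example paths only; substitute the real backup and destination folders.
  BackupFolder = 'C:\ProgramData\PULS\1.0\DatabaseBackup';
  PublicFolder = 'C:\Users\Public\Documents\PULS\Backups';

var
  ZipName: string;
begin
  TDirectory.CreateDirectory(PublicFolder);  // no-op if it already exists
  ZipName := TPath.Combine(PublicFolder,
    'Backup_' + FormatDateTime('yyyymmdd_hhnn', Now) + '.zip');
  // Zip the whole backup folder into one file that can be copied off the
  // machine or sent along for inspection.
  TZipFile.ZipDirectoryContents(ZipName, BackupFolder);
  Writeln('Backup compressed to ', ZipName);
end.
```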
#7
Sue King wrote:
> It is also a good way to get an up-to-date backup that is suitable to
> send to me if something odd is going on, without the customer needing to
> do anything other than find the backup.

If the "something odd going on" is in any way related to data corruption in the table files, then the backup is useless for investigating the issue.

The backup function essentially creates a new table (normally keeping only the field definitions, stripping out indices, referential integrity and everything else), then, in the context of a snapshot transaction, copies over all records by iterating over the source table using the SAI.

Being a newly created and filled table, whatever issue might affect the original table will obviously not be present in the backup.
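A rough illustration of that copy step follows; the types are toy stand-ins, not NexusDB classes. The point it demonstrates: the backup is a brand-new table built from the source's field definitions plus whatever records could be read, so structural damage in the source files does not travel into it.

```pascal
program BackupCopySketch;
{$APPTYPE CONSOLE}

uses
  System.Generics.Collections;

type
  TRow = TArray<string>;          // one record's field values
  TToyTable = record
    FieldDefs: TArray<string>;    // column definitions only
    Rows: TList<TRow>;            // the "live" records
  end;

function BackupTable(const Source: TToyTable): TToyTable;
var
  Row: TRow;
begin
  // 1. The new table keeps only the field definitions; indexes, referential
  //    integrity and so on are stripped (unless "keep indexes" is requested).
  Result.FieldDefs := Copy(Source.FieldDefs);
  Result.Rows := TList<TRow>.Create;
  // 2. Within a snapshot transaction (not modelled here), the source is
  //    iterated in storage order and every readable record is appended.
  for Row in Source.Rows do
    Result.Rows.Add(Copy(Row));
end;

var
  Source, Backup: TToyTable;
begin
  Source.FieldDefs := TArray<string>.Create('ID', 'Name');
  Source.Rows := TList<TRow>.Create;
  Source.Rows.Add(TArray<string>.Create('1', 'first record'));
  Backup := BackupTable(Source);
  Writeln(Backup.Rows.Count, ' record(s) copied into the new table');
  Source.Rows.Free;
  Backup.Rows.Free;
end.
```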
#8
I hadn't thought of that aspect.

I have had some of the errors perpetuated in the backups so far, and have been able to reproduce them on my system and confirm that the fixes I suggested, like running nxRecover, were appropriate.

I think I'll follow Rodrigo's suggestion of having backups that can be made keeping indexes at the operator's request.

Regards

Sue

Thorsten Engler [NDA] wrote:
> If the "something odd going on" is in any way related to data corruption
> in the table files, then the backup is useless for investigating the issue.
>
> Being a newly created and filled table, whatever issue might affect the
> original table will obviously not be present in the backup.
#9
Even the "keep indexes" option creates a new table (it just doesn't strip the table definition down as far) and then copies the records over.

Depending on the type of damage to the source table, iterating the SAI might miss records, or fail completely. Or you might get a key violation during the backup, which would indicate that the source table is corrupted in such a way that it allowed records with duplicate unique keys to be stored.

Sue King wrote:
> I have had some of the errors perpetuated in the backups so far, and have
> been able to reproduce them on my system and confirm that the fixes I
> suggested, like running nxRecover, were appropriate.
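One way to check for that duplicate-key situation from the application side is to scan the suspect table for key values that occur more than once. Below is a hedged sketch written against the generic TDataSet interface, so it works with whatever NexusDB dataset component is already in use; the caller supplies the open dataset and the key field name, since neither appears in the log excerpt:

```pascal
unit uDuplicateKeyCheck;

interface

uses
  Data.DB;

// Returns how many records carry a key value that has already been seen.
// A result greater than zero matches the "duplicate unique keys" situation
// described above and suggests the source table needs recovering.
function CountDuplicateKeys(DataSet: TDataSet; const KeyField: string): Integer;

implementation

uses
  System.Generics.Collections;

function CountDuplicateKeys(DataSet: TDataSet; const KeyField: string): Integer;
var
  Seen: TDictionary<string, Boolean>;
  Key: string;
begin
  Result := 0;
  Seen := TDictionary<string, Boolean>.Create;
  try
    DataSet.First;
    while not DataSet.Eof do
    begin
      Key := DataSet.FieldByName(KeyField).AsString;
      if Seen.ContainsKey(Key) then
        Inc(Result)               // this key value occurs more than once
      else
        Seen.Add(Key, True);
      DataSet.Next;
    end;
  finally
    Seen.Free;
  end;
end;

end.
```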
#10
Hello Sue,

I also have most of my customers on a nightly backup (or at least that's our recommendation). We actually have a product specifically for that, which uploads the backups to Amazon S3 buckets. First rule of backing up: store your backups somewhere else.

But the point is not how frequently you take backups, it's how often you use them. We also ask for and use backups from our customers quite often (much more often than they need them to restore after a problem) to try new stuff or to track down bugs in the software, but we can live with the extra time spent rebuilding the indexes, and it does make a difference to the size of the backup, which is good for transferring over the internet and costs less to store on S3.

Good luck!

--
Rodrigo Gómez [NDX]
México, GMT-6