Channel: Symantec Connect - Backup and Recovery - Discussions
Viewing all 8938 articles
Browse latest View live

Moving Files from Server 2003 to Server 2008

I need a solution

Hello, I am trying to find out the best course of action. I need to move all the files, along with their permissions, from a server running Windows Server 2003 to a new server running Windows Server 2008. I don't want to make a mistake on the new server; the tool I will be using is Backup Exec 2012. Can anyone advise on the best course of action?
It was suggested that I use Backup Exec 2012 to back up the older server (SDDATA6) to a hard drive, including the permissions in the registry, and then restore to the new server.


NetBackup client silent install

I need a solution

Looking for a solution to install NetBackup 7.6 silently on an AIX box. The solution will be pushed out to multiple AIX servers in the enterprise. We will use BladeLogic to push out the .tar/.gz file, and we need a silent-install method similar to what we have on Windows.

Thanks

noli

autoloader, target specific tape with multiple jobs

I need a solution

Hi all, is it possible to set up a partition of 4 slots and get multiple jobs to hit a specific tape in that partition?

I want my incremental jobs across the span of a week (Mon, Tue, Wed, Thu) to hit one specific LTO-5 tape in that partition of 4 tapes (4 tapes for a 4-week rotation). The following week the jobs should target the second tape in that partition, and so on.

thanks in advance

Chris

Oracle RMAN backup fails to validate after replication

I need a solution

NetBackup 7.6.0.1 running on a NetBackup 5330 appliance.

Linux client - runs an RMAN DB backup to NBU master node A (on the appliance), to advanced disk.

Node A later replicates this backup to Node B in a different city. 

The DBA says the DB backups are good, but the validate fails. I've had them try a validate against Node A and it fails; I've had them try a validate against Node B and it fails.

Has anyone had this issue and since resolved it? Am I replicating the backup from Node A too soon? Should I let the validate run against Node A first, and only then replicate to Node B for the offsite copy?

Please advise.

Query to check size of backed-up folder to be restored

I need a solution

NBU Version : 7.6.0.1

Hello, can anyone help with a query or script to verify and confirm the size of a backed-up folder or file to be restored in NetBackup?
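One possible approach, sketched below under assumptions: `bplist -l` prints a long, ls-style listing of the files in a backup image, so summing its size column gives a rough total for a folder. The client name, policy type, path, and especially the size-column position (field 4 here) are assumptions; verify the column against a sample of your own bplist output before trusting the total.

```shell
#!/bin/sh
# Hypothetical sketch: estimate the size of a backed-up folder before restoring,
# by summing the size column of `bplist -l` output.

sum_sizes() {   # reads bplist -l style output on stdin, prints total bytes
  awk '$4 ~ /^[0-9]+$/ { total += $4 } END { printf "%d\n", total }'
}

# Run against the live command only where a NetBackup client is installed.
# -t 13 = MS-Windows policy type; -R recurses; -l gives the long listing.
if command -v bplist >/dev/null 2>&1; then
  bplist -C myclient -t 13 -R -l "/D/Finance/" | sum_sizes
fi
```

If the totals look wrong, compare one line of your actual `bplist -l` output against the field the awk program sums and adjust the field number.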

Specific Days vs Recurring Days for a calendar schedule

I need a solution

I've done some digging with Google and the 7.5 admin guide, and I can't find an answer to my question. I have my schedules all set up the way I want, and they run correctly. I have a weekly schedule set up to run the 1st, 3rd, 4th, and last Saturday. The monthly is set to the 2nd Saturday.

I have a situation where I need to run an extra copy of the monthly backup. If I go in and add a specific date of 4/18 to the monthly, will the specific day run, or the recurring Saturday of the weekly, or both? I know I can work around it and make it do what I want, but it sure would be nice if I could just go in and add a specific day to run something special when needed. Thanks.

Installing NB7.5 media server in Windows 2003 cluster

I need a solution

Hi,

I'm migrating our servers to NetBackup 7.5 using Data Domain storage. For larger servers (>50 GB) I'm installing the NetBackup media server rather than the agent, because I can then configure them to have their own storage unit, and with the DD Boost agent for Data Domain we get dedupe at source.

We have a Win2003 x64 active/passive cluster running SQL 2008. What's the best way for me to migrate this into NetBackup 7.5? I need to think about both flat-file and SQL backups.

Both servers have more than 50 GB of storage (the active node has well over 1 TB). As far as I can tell, NetBackup 7.5 does not include a cluster-aware option. All I'm concerned with is that if the cluster fails over, I'll still get a good backup. As far as I can see, if I install the NetBackup media server on both physical nodes under their individual names, and set their individual backup policies to back up all local drives, then whichever node is the active cluster node, I'll get the cluster data (the clustered SAN drives) - right?

But what about the SQL data? The SQL cluster hostname, e.g. BISQL1, and its associated IP address could be running on either node. How do I configure that? I need the SQL backup to always be performed by the locally installed client in order to benefit from dedupe at source. If the SQL client is installed on node1 but there's a failover and the database is running on node2, then the backup will read the full 1 TB of SQL data over the LAN - from node2 to node1 - and will take about 10 hours to complete. That's what I want to avoid.

Any suggestions?

Thanks!

VMware Backup Proxy host issue

I need a solution

Good afternoon everyone,

I currently have an open ticket with Symantec on this, but while I'm awaiting their response I would like to see if anyone else has experienced this issue.

Environment:

Master Server: 7.6.0.3 NBU

Client: 7.6.0.3 (Backup Proxy Host)

ESXi v 5.5 update 2

vCenter 5.5 Update 2

I currently receive a status 135 error ("VMware credential validation failed.") while trying to validate the credentials for this particular backup proxy host. I have 17 other backup proxy hosts backing up just fine. My normal slew of tricks to get this to work isn't working... DNS seems fine; I can resolve the FQDN and the host name itself from both the proxy server and the master. I turned up logging, and I see this in the bpvmutil log:

11:34:16.068 [2752.2112] <2> bprd_read_text_file: Received status 131
11:34:16.068 [2752.2112] <2> bpVMutil main: Unable to get server credentials
11:34:16.068 [2752.2112] <2> bpVMutil main: EXIT STATUS 135: client is not validated to perform the requested operation

I also see this in the bpcd log:

11:34:16.068 [2680.1800] <2> run_VMutil_cmd: exit code for the process is h_errno = 135 - An attempt was made to use a JOIN or SUBST command on a drive that has already been substituted.

Any and all help is greatly appreciated.  If you want access to the case number please let me know. 

-Tim


Low Disk Volume alert needed per disk pool, not per master

I need a solution

Can anyone help with getting some type of alert per disk pool, with warning and critical thresholds at something like 80/90 percent?

The Low Disk Volume alert only relates to space available per master. I need something that looks at disk pools, since I have multiple disk pools per master, and some masters share disk pools.
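One direction worth exploring, sketched below under assumptions: `nbdevquery -listdv -U` reports per-disk-volume usage, which a small script can scan for pools past your thresholds. The field labels matched below ("Disk Pool Name", "Use%") are assumptions about the `-U` output format, which varies by release; check them against your own output and adjust the awk patterns, and this is in no way a supported Symantec alerting tool.

```shell
#!/bin/sh
# Hypothetical sketch: flag disk pools past warning/critical fullness
# by parsing `nbdevquery -listdv -U` output.

check_pools() {   # args: warn% crit%; reads nbdevquery -U output on stdin
  awk -v warn="${1:-80}" -v crit="${2:-90}" '
    /^Disk Pool Name/ { pool = $NF }
    /^Use%/ {
      used = $NF + 0
      if (used >= crit)      printf "CRITICAL: %s at %d%%\n", pool, used
      else if (used >= warn) printf "WARNING: %s at %d%%\n", pool, used
    }'
}

# Run against the live command only where NetBackup is installed.
if command -v nbdevquery >/dev/null 2>&1; then
  nbdevquery -listdv -stype AdvancedDisk -U | check_pools 80 90
fi
```

Scheduled from cron per disk type, with the output mailed when non-empty, this would give pool-level alerting independent of the master-level Low Disk Volume alert.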

Accelerator with VMware policy

I need a solution

Windows Master and Media servers, 2008R2

Netbackup 7.6.0.4

DataDomain running DDOS 5.4.4.3, DDboost plugin 3.0.0.1

I have an issue I've put on the back burner for a bit, but I am back to it. In testing Accelerator using the VMware policy type, I consistently see this message...

Info bpbrm(pid=2951988) the client does not support accelerator, switch to regular backup

I am able to run Accelerator with other, non-VM backups to the needed OpenStorage target, but there must be something else VM-policy-related that I am not seeing. The Accelerator attribute is enabled in the policy, and I have the needed licensing. I've been searching for this message as it relates to VM backups, but have not come up with much, and what I do find seems to already be in place. Any direction you could give will be greatly appreciated.

Thank you.

Todd

Suspend scheduling specific clients within a policy

I need a solution

I have a request to suspend NetBackup backups for approximately 30 servers during an upgrade period on those servers. These servers exist in various policies alongside other clients, but I need to suspend only the requested clients. I would normally suspend the policy for the time period, except there are other servers within the policies. Is there a command-line way to suspend only the requested servers for a specific period of time?
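There is no native per-client "suspend" inside a policy, so one common workaround is to remove the clients from their policies with `bpplclients` before the upgrade window and add them back afterwards. The sketch below only generates the commands for review rather than running them; the policy name, client, and hardware/OS strings are placeholders, and you should capture the real hardware/OS values first (e.g. from `bpplclients <policy> -L`).

```shell
#!/bin/sh
# Hypothetical sketch: emit bpplclients delete/add commands for a list of
# clients, so the change can be reviewed before it touches any policy.

gen_cmds() {   # $1 = delete|add; reads "policy client hardware os" lines on stdin
  while read -r policy client hw os; do
    [ -z "$policy" ] && continue
    if [ "$1" = delete ]; then
      printf 'bpplclients %s -delete %s\n' "$policy" "$client"
    else
      printf 'bpplclients %s -add %s %s %s\n' "$policy" "$client" "$hw" "$os"
    fi
  done
}

# Once reviewed, the generated commands could be executed with:
#   gen_cmds delete < clients.txt | sh     (before the upgrade window)
#   gen_cmds add    < clients.txt | sh     (after the upgrade window)
```

Keeping the client list in a file means the same list drives both the removal and the re-add, so nothing gets forgotten after the upgrade.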

Cause of Slow Dedup when Hardware Is Underutilized

I do not need a solution (just sharing information)

I'm sharing this for anyone else who has run across a situation where dedup runs far slower than expected and there are no actual bottlenecks on any of your hardware.

In my situation, I can push more than 500 MB/sec doing a simple file copy of a VHD between a Hyper-V host and my backup server target. But backup job rates do not approach that at all. During backups there are no obvious bottlenecks on the source or target server, not even a single-core CPU bottleneck from dedup.

The short story is that the Hyper-V agent keeps a queue depth of only 1 outstanding IO when reading the source, so the source storage array never sees the IO pressure that would cause it to scale read-ahead and deliver better sequential read throughput. A normal file copy keeps the queue depth at 4; the source storage sees IO pressure and scales read-ahead, so you get much better throughput.

My particular storage allows me to manually specify a large read-ahead, which I am now doing in pre-commands as a workaround. It increases job rates by more than 50% and prevents cases where other IO causes inexplicable variations in job rates. Full details are at the link below.

http://www.symantec.com/connect/ideas/boost-hyper-v-dedup-job-rates-50-or-more-increasing-agent-read-queue-depth-beyond-1

Frankly, I think this should be embarrassing to Symantec; the fact that IO queue depth matters is such a basic storage concept. It's incredibly frustrating to have good hardware completely underutilized because of this software coding idiocy. If they implemented this change, I could probably double or triple my backup rates with some more concurrency. So far, no one at Symantec has cared at all.

Anyway, if after investigation you find this could be a possible cause, please thumbs-up my idea at the link above. Maybe others will find that forcing higher read-ahead on source storage during backup helps job rates. (BTW, you also want to make sure your VHD access is truly sequential, not fragmented, etc.) Maybe someone at Symantec will pay attention. But maybe it's time to look at other products.

BE 2010 R3 barcode question

I need a solution

Hi, I am using BE 2010 R3 on Server 2012.

I want to use my own barcode labels; for my first test I have one LTO4 tape with a barcode label.

The scanned label is 000032L4, and this media is scratch media.

My library has the "enable barcode rule" option checked. Under "Set job defaults" I have 2 rules created, for LTO3 and LTO4 tapes, since we use both, and both are configured as read/write for LTO3 and LTO4.

Here is my question: why does my job not take my tape 000032L4, but instead waits in the queue and asks "Please insert overwritable media into the robotic library using the import command"?

Here are some screenshots.

barcode_in_slot13.png

Enabled barcode rule.png

Job default.png

read-write.png

Restores from Oracle Intelligent Policy backups

I need a solution

Disclaimer: I am not a DBA   :)

We are running NBU 7.6.0.4. For the first time, we are setting up Oracle Intelligent Policies. Working alongside our DBA, I currently have several instances being backed up via a policy that has an instance group assigned to it.

The DBA and I have two questions:

1. Using jnbpSA, is it possible to restore to a host other than the one the backup was taken from, or can the GUI only restore to the original host? We are running the GUI from the client we want to restore to, and putting in the "source client" as the one that did the backup we want to restore, but we don't see that database in the Oracle list on the left, only the ones on the current client.

2. The DBA is asking about the "spfile" (again, I'm not a DBA)... he has done restores from the command line, and he says this file is not being backed up. He has asked me to find out how to make that happen with these backups.
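For the spfile question, one standard Oracle-side answer (independent of how NetBackup drives RMAN, so worth confirming with the DBA for your setup) is to enable controlfile autobackup: with it on, RMAN writes the spfile alongside the controlfile after each backup. The sketch below just writes the RMAN commands to a script file for the DBA to review; connection details are deliberately omitted.

```shell
#!/bin/sh
# Hypothetical sketch: stage the RMAN configuration change that makes RMAN
# back up the spfile (via controlfile autobackup) for the DBA to review/run.

RMAN_SCRIPT=./enable_autobackup.rman
cat > "$RMAN_SCRIPT" <<'EOF'
CONFIGURE CONTROLFILE AUTOBACKUP ON;
SHOW CONTROLFILE AUTOBACKUP;
EOF

# A DBA would run it with, e.g.:
#   rman target / cmdfile=./enable_autobackup.rman
```

After enabling it, the DBA can confirm from RMAN (`LIST BACKUP`) that subsequent backups include a controlfile/spfile autobackup piece.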

Sorry about not having exhaustive details... feel free to ask me any questions you'd like, and be gentle!   :)

Thanks,

Susan

Retire old ACS and add tapes to HP Library

I need a solution

Afternoon,

First let me describe what I want to do, then what we have, and then you can tell me if my plan will work or not! :)

What I want to do:

Retire an old Sun StorageTek library (LTO4), be able to read the tapes that have not yet expired in another LTO4 HP library (24 slots), and be able to use the tapes that are now in the scratch pool in that HP (LTO4) library.

I tried to use one that was in the scratch pool, but it said it was not a unique barcode.

So key points:

Be able to use tapes not expired yet

Use ones in scratch pool in HP library

What I have:

Netbackup 7.1 on Solaris Master Server

Netbackup 7.1 on 2x Solaris Media Servers

1x Sun StorageTek LTO4

1x HP 2024 LTO 4

1x HP MSL 6000 LTO3

My plan (from reading other articles):

Right-click all the tapes in the library (a couple of hundred still inside) and eject. They won't actually eject; I'll manually climb into the library to remove them.

Remove the ACS from the NBU console (and maybe power the Sun library off, if I'm feeling brave).

Add the tapes that were scratch to the HP library and run an inventory so they can be used for backups.

Add a not-yet-expired tape and see if NetBackup can see the images it should have on it...

Your turn: does that sound like a reasonable plan, or am I missing something?

Thanks in advance.

PS: I won't be doing this for a few weeks, so I won't be able to try anything or mark solutions etc. until a little while down the road.


PC Backup Solutions

I need a solution

Hi all,

Does anyone know of a backup solution for PCs that can take backups on an incremental basis?

Upon restore the folders were empty

I need a solution

Hi,

We have been backing up the same backup sets every day, as in the screen below: a full backup of the C and D drives and all of their folders and subfolders.

Daily Backup Setup.jpg

Recently, when we wanted to restore a certain date from a certain folder, we retrieved the tape that held it. We did the inventory and cataloged the tape. But during the selection of folders to obtain the files from, the folders were empty; there are no subfolders to be selected. Refer to the screen below. As you can see, the Profiles, Users, Shared Folder, and Redirects folders should contain subfolders.

Restoration of Folders.jpg

Looking at the size of the media, it's approximately the same as the used space on the server.

Media Properties.jpg

Any explanation for why this is happening? Or any further advice on how we can ensure everything that we backed up can be restored as-is?

Other info: Windows 2008 using Symantec BE 2010 R3.

bprestore to different client

I need a solution

Hi,

I need your help, please.

I need to restore a folder, D:\Fiancial, from server_A to a different location on a different server, Server_B, overwriting the folder if it already exists on the destination.

Also, is it possible for bprestore to restore from the last backup?

The media and master servers are Windows 2008 R2, version 7.5.0.6.

NetBackup is NetBackup Enterprise Server.

The source and destination client servers are Windows 2003.

Thanks in advance!
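A redirected restore like this is usually done with `bprestore` plus a rename file, sketched below under assumptions: the master name and the destination path `/E/Restored/...` are placeholders, the `/D/...`-style path form and the `change X to Y` rename syntax should be verified against the 7.5 bprestore documentation, and note that restoring to a different client also requires the appropriate altnames/No.Restrictions setup on the master. By default bprestore overwrites existing files (use `-K` to keep them), and with no `-s`/`-e` date range it restores from the most recent backup.

```shell
#!/bin/sh
# Hypothetical sketch: redirected restore of one folder from server_A to
# Server_B via bprestore and a rename file.

# Build the rename file mapping the source path to the destination path.
RENAME_FILE=./rename.txt
cat > "$RENAME_FILE" <<'EOF'
change /D/Fiancial to /E/Restored/Fiancial
EOF

# Run the restore only where a NetBackup client/server is installed.
# -t 13 = MS-Windows policy type; -C = source client; -D = destination client.
if command -v bprestore >/dev/null 2>&1; then
  bprestore -S master01 -C server_A -D Server_B -t 13 \
    -R "$RENAME_FILE" -L ./restore.log "/D/Fiancial/"
fi
```

The progress log named by `-L` is the place to look if the restore queues or fails.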

Email notifications without BLAT

I need a solution

Hello,

I am trying to get email notifications from NetBackup 7.5. My supervisor does not want me to download BLAT, so I wrote a VBScript that uses CDO to send emails. I wrote a line in nbmail.cmd that runs the script with parameters:

cscript mailtest.vbs %1 %2 %3

The parameters I used are the ones defined in nbmail.cmd. I have tested running nbmail.cmd by itself on the server and it works; I have gotten emails from it. But when NetBackup should send it, I get nothing. I have configured the Universal Settings on the clients to include my email address and have tried both "Server sends mail" and "Client sends mail". I ran a backup and even saw it complete, but I got no email.

Note: the scripts I configured are all on the master server, but I added them to the client servers as well to test.

Any help would be much appreciated.

BackupExec 2014 - Can't reduce time in Cancel Job option below 241 hours

I need a solution

Hello, I have a new Backup Exec 2014 server with SP2 and Hotfix 227745 installed.

The problem I'm having is that on certain jobs I'm trying to have the differential job cancel if it runs over 9 hours. In the differential job window, in the Schedule section of the job itself, I see the checkbox "Cancel the job if it is still running X hours after its scheduled start time".

If I check the checkbox, it prefills the hours box with 241. I change it to 9 and click OK, but before the window closes I see the value change back to 241. This happens every time. It will accept a value higher than 241, and that will "stick", but it won't accept a value lower than 241.
