
older | 1 | .... | 353 | 354 | (Page 355) | 356 | 357 | .... | 447 | newer


    I need a solution

    Hi,

    We are using an AIX 7.1 server with NetBackup 7.5. Only the root user has access to NetBackup for backup and recovery. Root's umask is set to 027, which by rights should prevent any NetBackup-generated files from being world-writable. However, we found a few CatalogBackup.lck files with world-writable permissions in the /usr/openv/netbackup/db/images/... directory. Could anyone suggest a permanent solution, since our audit requirements do not allow any world-writable files?

    Thank You.
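    A umask only masks the mode a process passes to open()/creat(); if the application chmods the file afterwards, the umask is bypassed, which may be what happens with these lock files (an assumption, not confirmed behaviour). As an audit stopgap, a periodic sweep can clear the world-write bit. A minimal Python sketch, demonstrated in a sandbox (running it as root over /usr/openv/netbackup/db/images from cron is the assumed production use):

```python
import os
import stat
import tempfile

def strip_world_write(root):
    """Walk `root` and clear the world-write bit on any file that has it.
    Returns the paths that were fixed."""
    fixed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & stat.S_IWOTH:                    # world-writable?
                os.chmod(path, mode & ~stat.S_IWOTH)   # clear only o+w
                fixed.append(path)
    return fixed

# Sandbox demonstration of the offending permission being repaired.
demo = tempfile.mkdtemp()
lck = os.path.join(demo, "CatalogBackup.lck")
open(lck, "w").close()
os.chmod(lck, 0o666)                                   # simulate the world-writable lock file
fixed = strip_world_write(demo)
print(fixed)                                           # the .lck path is reported
print(oct(stat.S_IMODE(os.stat(lck).st_mode)))         # 0o664: o+w cleared, owner/group intact
```

    This treats the symptom only; the permanent fix would have to come from whatever creates the .lck files with that mode.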


  • 04/23/15--20:59: The date of end of life
  • I need a solution

    I was told that an issue in BE 2014 was fixed in BE 2015. May I know the end-of-life date of BE 2014? Will Symantec continue to provide patches for BE 2014 SP2? Thanks


  • 04/24/15--07:14: BackupExec Job Log for ESX
  • I need a solution

    Gents,

    Can you please explain why you create a separate backup set for each virtual machine in the job log?

    There is no summary section showing how many GB were processed by this one backup job.

    That is, if I need to know how many GB were in the backup, I have to open each set entry, open each 'Backup Set Summary', and add up the processed bytes.
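    Until the job log gains a roll-up section, the per-set totals have to be added by hand; if the 'Backup Set Summary' figures can be copied out, a few lines do the arithmetic. A toy sketch with hypothetical VM names and byte counts (the field names are illustrative, not BE's actual log schema):

```python
# Hypothetical per-VM "Backup Set Summary" byte counts copied out of a job log.
set_summaries = [
    {"vm": "vm-web01", "processed_bytes": 52_428_800_000},
    {"vm": "vm-db01",  "processed_bytes": 214_748_364_800},
    {"vm": "vm-app01", "processed_bytes": 10_737_418_240},
]

total = sum(s["processed_bytes"] for s in set_summaries)
print(f"{total / 1024**3:.1f} GiB processed in this job")   # 258.8 GiB
```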


  • 04/24/15--07:32: Scheduling Issues
  • I need a solution

    Hi All,

    I need your input on an issue I am currently facing related to scheduling.

    Setup:-

    Master Server [linux 2.6]

    Media Server [Linux 2.6]

    NBU Version [7.6.0.4] on both master and media server.

    Admin console installed on [Windows 2008 R2 enterprise Machine]

    Issue: I need to understand which time zone the backup schedules follow. I configure the backups through an Admin Console installed on a Windows machine located in EST, while the actual master and media servers are in the CST time zone. When I configure a backup policy through this console, which time zone will my policies follow?

    Options 

    1) MASTER SERVER time zone which is CST

    2) Admin console Server which is EST.

    I am a bit confused, as I see schedules triggering at seemingly random times. Please let me know your suggestions.
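    For what it's worth, my understanding (worth confirming against the NetBackup documentation) is that schedule windows are evaluated on the master server, in the master server's time zone; the Admin Console only displays them. The one-hour EST/CST skew that makes start times look random is easy to see with stdlib time zones:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# The same "18:00" window start is a different absolute time in each zone.
start_cst = datetime(2015, 4, 24, 18, 0, tzinfo=ZoneInfo("America/Chicago"))
start_est = datetime(2015, 4, 24, 18, 0, tzinfo=ZoneInfo("America/New_York"))

print(start_cst.astimezone(timezone.utc))  # 2015-04-24 23:00:00+00:00
print(start_est.astimezone(timezone.utc))  # 2015-04-24 22:00:00+00:00
# If the master (CST) governs, a console user in EST sees the job fire at 19:00 local time.
```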



    I need a solution

    Hi all,

    Having some issues with restoring data using DLOCommandu -EmergencyRestore.

    We have previously changed network locations for our NUDF and also upgraded to DLO 7.6.

    When running -

    dlocommandu -emergencyrestore "\\DLOServer\DLO NUDF\COMPANYDOMAIN-user5\.dlo" -W mypassword -AP \\DLOServer\temp -i

    From the original location, or any other location, we receive the following error:

    User share path not found. The user share path format could be different from the path configured in Symantec DLO.

    Anyone know of a solution to this issue?

    Many thanks.



    I need a solution

    Hello,

    Where can I find a customer-facing doc for SSR MS, e.g. how do I create a backup job on the SSR-MS web page?

    I am not looking for a doc on installing SSR-MS.

    Kind regards
    Blacksun



    I need a solution

    I have created a backup job for a Synology NAS device. The full backup runs with no issues, but when it is time for an incremental backup it just performs a full backup again.

    I am currently using Backup Exec 2014. Just curious as to what I may have missed to get this working correctly.

    Dan
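    A common cause (an assumption here, since the job details aren't shown) is that a NAS backed up as a network share gives the incremental method nothing reliable to key on: incrementals need the archive bit or a usable modified time, and when neither is available a run can be promoted to a full. The modified-time method is essentially this, sketched as a toy:

```python
import os
import tempfile
import time

def incremental_candidates(root, last_backup_time):
    """Toy modified-time incremental: select files changed since the last run."""
    picked = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_mtime > last_backup_time:
                picked.append(path)
    return picked

root = tempfile.mkdtemp()
old = os.path.join(root, "old.txt")
new = os.path.join(root, "new.txt")
open(old, "w").close()
open(new, "w").close()
last_backup = time.time()                          # pretend a full just ran
os.utime(old, (last_backup - 86400,) * 2)          # untouched since yesterday
os.utime(new, (last_backup + 60,) * 2)             # modified after the full
print(incremental_candidates(root, last_backup))   # only new.txt is selected
```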



    I need a solution

    Hi There,

    I have BE2014 on the latest version, and my architecture is as follows:

    -Production Environment, with CASO, Deduplication Add-on, and a local dedupe store on this Windows 2012 R2 server (no extra devices connected).

    -DR environment, with a BE managed server out there, and Deduplication Add-on (no extra devices involved out there either). I have a dedupe store shared out on this managed server (shared with the CASO server in Production)

    -At present, there is a 1 Gb LAN connection between the two servers/sites.

    I have my backup jobs defined on the CASO server in production, and the data is being saved to the dedupe store in Production just fine.

    I have a stage added to duplicate the data over to the shared store over in DR, but the jobs are taking forever (too long for the backup window at present).

    Every time I check on it, it is almost always stuck on 'Loading Media - Duplicate'.

    Anyone got any ideas?

    Thanks in advance


    0 0

    I need a solution

    I have a situation where I need to back up 3 TB of compressed data on a regular basis. I have two 5220 appliances, both acting as master/media servers. I really don't want the data sitting in my dedupe pool, so I'm trying to take advantage of the 3.5 TB of AdvancedDisk storage that I have. What I would like to do is have one appliance back up the data and keep it for a week, with a copy on the remote master that stays for another week, about 2 weeks total retention. Any thoughts on how to do this? I thought I could back up to the dedupe pool on the local master, duplicate it to AdvancedDisk, and replicate to the remote master, but that doesn't seem to be working. I know I will have to use the dedupe pool for AIR, but I want that backup to expire immediately if possible.
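    One detail in the plan above is worth pinning down with arithmetic: if the remote copy is made right after the backup (as duplication/AIR does) and is kept only one week, total coverage is still one week, not two. For the second week of coverage, the remote copy needs two weeks of retention from its own creation. A sketch, with the backup time assumed:

```python
from datetime import datetime, timedelta

backup_time = datetime(2015, 4, 24, 20, 0)   # assumed backup start
local_keep  = timedelta(weeks=1)             # copy on the local master
remote_keep = timedelta(weeks=2)             # remote copy: 2 weeks, so coverage
                                             # continues a week past local expiry

local_expiry  = backup_time + local_keep
remote_expiry = backup_time + remote_keep
print(local_expiry)    # 2015-05-01 20:00:00
print(remote_expiry)   # 2015-05-08 20:00:00
```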



    I need a solution

    I am getting the following events from our newly built Windows Server 2012 R2 server with BE 15. I am not sure what is causing this issue, but it appears to be related to BE and performance counters.

    Log Name:      Application
    Source:        Microsoft-Windows-Perflib
    Date:          4/24/2015 11:02:17 AM
    Event ID:      1008
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      XXXXXXX
    Description:
    The Open Procedure for service "BITS" in DLL "C:\Windows\System32\bitsperf.dll" failed. Performance data for this service will not be available. The first four bytes (DWORD) of the Data section contains the error code.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-Perflib" Guid="{13B197BD-7CEE-4B4E-8DD0-59314CE374CE}" EventSourceName="Perflib" />
        <EventID Qualifiers="49152">1008</EventID>
        <Version>0</Version>
        <Level>2</Level>
        <Task>0</Task>
        <Opcode>0</Opcode>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2015-04-24T18:02:17.000000000Z" />
        <EventRecordID>3405</EventRecordID>
        <Correlation />
        <Execution ProcessID="0" ThreadID="0" />
        <Channel>Application</Channel>
        <Computer>XXXXXXX</Computer>
        <Security />
      </System>
      <UserData>
        <EventXML xmlns="Perflib">
          <param1>BITS</param1>
          <param2>C:\Windows\System32\bitsperf.dll</param2>
          <binaryDataSize>4</binaryDataSize>
          <binaryData>02000000</binaryData>
        </EventXML>
      </UserData>
    </Event>

    Log Name:      Application
    Source:        Microsoft-Windows-Perflib
    Date:          4/24/2015 10:53:43 AM
    Event ID:      1023
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      XXXXX
    Description:
    Windows cannot load the extensible counter DLL Backup Exec. The first four bytes (DWORD) of the Data section contains the Windows error code.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-Perflib" Guid="{13B197BD-7CEE-4B4E-8DD0-59314CE374CE}" EventSourceName="Perflib" />
        <EventID Qualifiers="49152">1023</EventID>
        <Version>0</Version>
        <Level>2</Level>
        <Task>0</Task>
        <Opcode>0</Opcode>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2015-04-24T17:53:43.000000000Z" />
        <EventRecordID>3403</EventRecordID>
        <Correlation />
        <Execution ProcessID="0" ThreadID="0" />
        <Channel>Application</Channel>
        <Computer>XXXXXX</Computer>
        <Security />
      </System>
      <UserData>
        <EventXML xmlns="Perflib">
          <param1>Backup Exec</param1>
          <binaryDataSize>4</binaryDataSize>
          <binaryData>7E000000</binaryData>
        </EventXML>
      </UserData>
    </Event>



    I need a solution

    Hi There,

    I have a number of servers that I want to back up with BE 2014, using the Deduplication Add-on. I will also be adding a stage to duplicate certain backups to an offsite, shared dedupe store.

    I have a query about using multiple jobs for the same backup server.

    Say I create job A (consisting of fulls and incrementals); this job backs up one server to the primary dedupe store and is duplicated to another dedupe store offsite.

    Then I create job B with different retention settings, also backing up the same server to the same dedupe store, and this too is duplicated to an offsite dedupe store.

    Will job B create a completely new batch of files (e.g. full backups) on the primary dedupe store, or will everything be fully deduplicated with no duplicate files created, because the store can 'see' all of the full backups previously created via job A?

    If new backup files are created, will the same thing happen in the offsite dedupe store when the duplicate jobs run?

    Basically, I will create one daily/weekly job, with the weekly duplicated offsite; a separate monthly job with suitable retention and its own duplicate stage; and separate quarterly and annual jobs with suitable retention as well.

    Is this a good approach, and will this all work together and deduplicate well?

    Thanks in advance for your advice
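    On whether job B re-stores what job A already wrote: a dedupe store is content-addressed, so identical blocks are stored once no matter which job sends them; what each job adds is references (with their own retention) to those blocks. The principle, sketched as a toy (BE's actual chunking and metadata are its own):

```python
import hashlib

class DedupeStore:
    """Toy content-addressed store: one copy per unique block, many references."""
    def __init__(self):
        self.blocks = {}   # sha256 digest -> block data
        self.refs = []     # (job, digest) backup-set references

    def backup(self, job, data, block_size=4):
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)   # stored only if unseen
            self.refs.append((job, digest))

store = DedupeStore()
server_image = b"ABCDABCDEFGHEFGH"                  # 4 blocks, 2 unique
store.backup("job A", server_image)
unique_after_a = len(store.blocks)
store.backup("job B", server_image)                 # same server, different job
print(unique_after_a, len(store.blocks))            # 2 2: job B stored no new blocks
```

    The same logic applies on the offsite store: the duplicate stage for job B should add references there, not a second copy of the data.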



    I need a solution

    Why do we set the robot control path to one of the drives in the tape library, and how can we check which drive that path is set on, from the master server command line on AIX? What happens if that drive becomes faulty, and what do we need to do? Do we need to move the path to another working drive while the faulty one is replaced, and do we need to configure the drive again, or is there something else we can do?
    The robot is under master server control on AIX.

    I am looking for the commands for the AIX platform rather than the GUI interface.



    I need a solution

    Hi

    I have a couple of quick questions about the dedupe store concurrent jobs setting (for a standard dedupe store running on the media server, with no external devices and no OSTs).

    What do people normally leave this setting at? It looks like I'm going to be running quite a few jobs at once, but I'm hesitant to run more than 3 jobs concurrently in case it slows all the jobs down too much.

    I have BE running as a VM with 2 dual-core processors and 32 GB of RAM. If I double this to 8 cores in total, will it help me run more concurrent jobs efficiently?

    Thanks in advance for your opinions
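    On the mechanics: the concurrent-jobs setting is just a cap on how many jobs the store services at once, and whether extra cores help depends on whether the jobs are CPU-bound (hashing, compression) or I/O-bound. The throttling itself behaves like a fixed-size worker pool; a toy sketch (not BE internals):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

peak = 0
running = 0
lock = threading.Lock()

def backup_job(name):
    """Stand-in for one backup job; tracks how many run simultaneously."""
    global peak, running
    with lock:
        running += 1
        peak = max(peak, running)
    time.sleep(0.05)               # the actual backup work would go here
    with lock:
        running -= 1
    return name

# "Concurrent jobs = 3": 8 queued jobs, at most 3 in flight at any moment.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(backup_job, [f"job-{i}" for i in range(8)]))

print(peak)                        # never exceeds 3
```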


  • 04/25/15--08:47: BE 14.1 Job Status Queued
  • I need a solution

    I'm using BE 14.1 SP2 with the latest hotfix applied, on Server 2012 R2 with local disk storage. Suddenly, jobs hang in the queued state and stay there forever until cancelled manually. The storage is online and active.

    The software was working fine and made around 12 good backups. I have already deleted all the jobs and created them again, deleted servers, and rebuilt the database.

    The only thing I cannot do is erase and re-create the storage; a 'Catastrophic failure' error appears when I try.

    Any suggestions?

    pantalla backup.PNG

    event log.PNG



    I need a solution

    Auto or manual, incremental or full backups of my system drive (C:, which has System Protection turned on) are creating and editing the system restore files in the C:\System Volume Information directory. No restore points show as available, and this activity is chewing up disk space. Why is this being done, and how do I turn it off?

    Image 001.jpg

    Image 002.jpg

    4/22/2015 12:51 PM = Creation of "1st point after deleting all" restore point

    4/22/2015 5:00:50 PM = Auto Incremental of C: drive

    4/23/2015 5:00:58 PM = Auto Incremental of C: drive

    4/24/2015 9:46 PM = Manual Incremental of C: drive

    So you can see where SSR is creating and editing the system restore files. It is chewing up unnecessary disk space, and I cannot see how this would be useful for any kind of SSR image restore. I would like to disable this "feature".

    Thanks for your help -- David


  • 04/25/15--13:48: Oracle RAC backup
  • I need a solution

    Hi All,

    Please clarify how to configure an Oracle RAC backup, and provide the steps to be followed.


  • 04/26/15--18:40: BE 2014
  • I need a solution

    Hi,

    Which is better for BE 2014, Win 2008 or Win 2012?

    Thanks! 


  • 04/26/15--20:32: NetBackup Account Read-Only
  • I need a solution

    Dear Symantec

    I created an account with a read-only role, but this account cannot view the Access Management tab (including Users and User Groups).

    Could you show me how to assign this account the right permissions to view the Access Management tab?

    Thanks in advance



    I need a solution

    Hi

    One of our customers is planning to add an expansion disk shelf to their production system.

    Can anyone give me the step-by-step procedure for doing this? I'm not comfortable performing it since the system is already in production.

    Thanks



    I need a solution

    Backup Exec 2012 R3 CASO environment

    Server 2008 R2 

    We were running inventories on all of our OpenStorage devices, so we put the schedules for all of the jobs on hold. Is there a way to remove the hold automatically? Any help would be great!

    Thanks

