
Backup Exec 2014 – GA Date Confirmed


We promised to keep you updated on the progress of Backup Exec 2014. Today, I am excited to confirm that Backup Exec 2014 will reach general availability (GA) on 2 June 2014!

As indicated in my last update, we closed the Beta program a little over a week ago. I wanted to share some additional customer feedback we received since that time:

  • “Love the redesign on the UI- specifically the Job Monitor.”
  • “Happy that I have so much flexibility to see what is happening, without having to navigate between screens.”
  • “Very pleased with it. Very smooth installation and it works very good.”

Thank you for your continued support.

 


TrueCrypt Migration to Symantec Encryption Desktop


With the recent announcement (http://truecrypt.sourceforge.net) that TrueCrypt is no longer supported and may contain security issues, we in the Symantec Encryption group wanted to reach out to the community and offer an alternative option for multi-platform drive encryption. On April 14, 2014, TrueCrypt completed a security audit (http://istruecryptauditedyet.com), and soon thereafter the project was shut down. While there has been great interest in the open source community in continuing its support, we believe our Symantec Drive Encryption product, powered by PGP technology, is the best commercial solution with enterprise-class support available today.

Two of the most popular ways of using TrueCrypt are creating an encrypted virtual disk shared in the cloud and protecting an external drive. We have provided a couple of articles below to demonstrate how you can migrate these common scenarios from TrueCrypt to Symantec Drive Encryption.

  • First, we show you how you can encrypt your disk using Encryption Desktop. See:
    Migrating from TrueCrypt to Symantec Drive Encryption: Encrypting Your Disks; (HOWTO99727)  http://www.symantec.com/docs/HOWTO99727
  • Second, we show how you can create a small virtual disk that can ultimately be shared in the cloud via Dropbox and used on either Windows or Mac OS X. See:
    Migrating from TrueCrypt to Symantec Drive Encryption: Creating Encrypted Portable Containers (PGP Virtual Disks); (HOWTO99728) http://www.symantec.com/docs/HOWTO99728

Both scenarios can be accomplished by purchasing Symantec Drive Encryption (powered by PGP technology) from the Symantec eStore at http://buy.symantec.com/estore/clp/productdetails/pk/drive-encryption. After adding the product to your cart, use discount code TRUECRYPT to receive a 40% discount (available for one month only!). This version provides Disk Encryption, PGP Virtual Disk, PGP Zip and PGP Shredder. Details and specific support for Windows, Mac OS X and Linux can be found at the bottom of the eStore page. With the suite of PGP-powered tools, you will be able to cover many TrueCrypt and other scenarios and configurations.

Download your free trial today, or take advantage of the 45-day money-back guarantee, and see for yourself if this is the right solution for you. Also, we are actively monitoring our support forum, so please post any TrueCrypt migration-related questions and we will be glad to assist: http://www.symantec.com/connect/security/forums/pgp-desktop-email-wde-and-netshare

 

 

Nuts and bolts in NetBackup for VMware: Avoiding CBT penalty with NetBackup Accelerator

Better Backup for a Virtual World is here!

How can you get 35x faster backups without incurring the CBT penalty in enterprise data centers? Let us take a technical deep dive into the NetBackup Accelerator feature in NetBackup 7.6.

Backup Exec 2014 BEMCLI Enhancements


Today's release of Backup Exec 2014 nearly doubles the size of BEMCLI from 222 to 395 cmdlets, with 72 of the original cmdlets enhanced with new parameters and functionality.

Here's a quick list of what's new:

- Native PowerShell v2 and v3+ support

- Full support (create/edit/rename/delete) for all backup job types (BackupDefinition, OneTimeBackupJob, and SyntheticBackupDefinition)

- All agent selection types supported for backup jobs (including virtual machines)

- Full support for multi-server selections per backup job

- Support for centralized and managed server configuration

- Push-install of Windows agents

- Support for all applications and server types

- Enhanced scheduling support

- Full support for notification recipients
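To give a flavour of the module in action, here is a minimal sketch that loads BEMCLI and inspects definitions and jobs. The cmdlet names are shipping BEMCLI cmdlets, but the property and status values used for filtering are illustrative and may vary between versions:

# A minimal BEMCLI sketch; run from PowerShell on the Backup Exec server.
Import-Module BEMCLI

# Enumerate backup definitions (BE 2014 adds full create/edit/rename/delete
# support for all backup job types).
Get-BEBackupDefinition

# List jobs and filter with standard PowerShell; the 'Active' status value
# here is illustrative.
Get-BEJob | Where-Object { $_.Status -eq 'Active' }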

 

 

Over the coming days and weeks, I'll be blogging about the new features in depth.  Stay tuned!

 

Getting the most from your Support Engagements

Thoughts from a former Support Backline Representative

From the inside of Support, I saw what worked and what did not, both from customers and from Support reps. This document is all about empowering YOU - the customer - to get what you need out of your support experience.

Gameover Zeus Cybercrime Ring Hit by Coordinated International Takedown


Large swathes of infrastructure owned by the attackers behind the financial fraud botnet and the Cryptolocker ransomware network have been seized by authorities.

Viewing New Enterprise Vault Storage Queue Performance Counters


In this blog I show how to access the new Enterprise Vault Storage Queue performance counters, both manually and programmatically.
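As a hedged sketch of the programmatic route, the standard PowerShell Get-Counter cmdlet can poll any published counter set. The counter path below is a placeholder, not the documented name; browse Performance Monitor on your Enterprise Vault 11 server for the exact Storage Queue counter paths:

# Sample the counters every 5 seconds, 3 times, and print the cooked values.
# NOTE: the counter path is a placeholder; check perfmon for the real names.
$path = '\Enterprise Vault Storage Queue(*)\Items Queued'

Get-Counter -Counter $path -SampleInterval 5 -MaxSamples 3 |
    ForEach-Object { $_.CounterSamples } |
    Select-Object Path, CookedValue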

WinPE 3.1 - Soon Available and Integrated


In the user group, we identified that having to deploy with WinPE 2.1, instead of 3.1, was not optimal. It means integrating "Vista" drivers for the preboot environment and Windows 7 drivers for the deployed OS - double the work for critical drivers...

The beta of DS 7.1 SP2 now appears to include WinPE 3.1. That is the good news, pending confirmation in the final release.

The bad news: no announcement of a possible SP6 with WinPE 3.1 for native integration into DS 6.9...

To be continued.


Growing my Commoditized Storage and HA Environment with an Extra Node


In the article Commoditizing High Availability and Storage using Flexible Storage Sharing I described my first attempt to create a two-node cluster based on Flexible Storage Sharing within Symantec Cluster File System, and the nice results that I got. My next step was to increase the node count, as I wanted to move to an architecture where a database runs on each of the nodes. The first step was to add a new node to the cluster.

I am often asked how easy or difficult it is to add a node to the cluster, so this was a good opportunity to document what I did. Our engineers in the Common Product Installer (CPI) group have done a great job over the years, and adding a node to the cluster can now be done in a few easy steps.

The first thing to do is deploy the packages on the new server. There are several ways to do this: you can use Yum as described in this article, you can use the Installer, or, as I did, you can use a new 6.1 feature called Deployment Server. This is a server (I used the first node in my cluster) where the packages are stored, acting as a central location from which to install and distribute the software across any supported UNIX or Linux operating system.

In my configuration I already have two nodes (down and up) and I want to add a new node named strange. From any node I just need to invoke the installer using the -addnode flag and the name of the new node:

down:/opt/VRTS/install> ./installsfcfsha61 -addnode strange

First, the installer checks that all prerequisites are met:

figure1.png

We enter the name of any node in the cluster (either down or up in our case). This is used to collect the configuration of the current cluster.

figure2.png

The installer will check that the communication with the new node is correct and, if needed, synchronize the system clocks.

figure3.png

Confirm that the node should be added to the cluster.

figure4.png

We enter the private networks that will be used (the installer should detect them automatically) and verify that they are correct:

figure5.png

The configuration script will detect that shared volumes are already mounted in the cluster and will allow the new node to mount them. This is a shared-nothing configuration using Flexible Storage Sharing, which means that although the new node does not have direct connectivity to that storage, it can mount it and use it as if it were local. From this point on, the new node has access to the global namespace provided by the cluster.

figure6.png

Once we answer yes, the file systems are mounted on the new node and the add-node operation completes successfully.

figure7.png

The next figure shows a more graphical representation of the new configuration. Prior to this change I had a two-node cluster sharing four mount points, backed by local storage from nodes down and up. Node strange has been added to the cluster and can now mount the four available mount points. Those mount points are available as a global namespace across all the cluster members.

figure8_1.png

In this configuration I can use that third node simply as a compute resource on which to run some analytics or backup operations. If that server has any internal storage (as is my case) I can use it to add a third mirror to my volumes for extra resiliency (a unique FSS capability compared to other vendors), or I can redistribute my workload to get an even distribution across all the nodes. That is what I will be describing in my next article, where a database instance will be deployed on each of those servers, keeping two copies of data across the cluster for resiliency.

Carlos Carrero.-

Meet the Engineers - Paul Honey


I can never remember precisely how long I have been working with Enterprise Vault, as it feels like forever, but if you really need to know, google 'Hurricane Wilma'. Why? Well, I just happened to be stranded in Florida on a trip for the christening of my god-daughter when Wilma hit, and I distinctly remember getting a call, whilst hunkering down in the suburbs of Miami, to tell me that 'you got that job at Symantec'.

I joined as Enterprise Vault was preparing to cross over into the dark side known as Lotus Domino journal archiving. I had a decade of experience building and supporting Domino environments for big financial institutions in London behind me, and it was now needed in this Exchange-focussed engineering team. My daily grind is to provide the customer face of Enterprise Vault Engineering. I am a diagnostic and development engineer who is equally happy digging deep into the code and writing fixes as I am navigating the complexities of customer environments to root cause their issues, whilst managing the relationship with the technical and the not-so-technical players involved.

Inevitably, I expect to blog about my first love, Enterprise Vault for Domino, but I am nowadays also known to regularly play the field a bit and stray into other areas of the product, so I imagine I can find some gossip to share about those other mistresses too.

Btw, if you didn't bother to google, I just did, as my god-daughter's birthday is coming up and I needed to know too ... we'll both be celebrating our 9-year anniversaries soon!

Meet the Engineers - Chris Harrison


I've been working on Enterprise Vault for around 10 years now. I started out at KVS in the Technical Support team, then took a whirlwind tour through Veritas before ending up at Symantec – rather a large jump from the small couple-of-hundred-employees company to the thousands that are currently working for Symantec!

I spent a couple of years in Technical Support, cutting my teeth on the likes of EV 4 CP7 (yes – back in the day we didn't have service packs!) and all the fun and frolics that MAPI troubleshooting used to entail, like running FixMAPI or deleting differing versions of mapisvc.inf. I then joined what was once Engineering Support – which most of you probably now know better as the Customer Focus Team (CFT) – as an Escalations Engineer, taking escalations from Technical Support and working with the core development teams on resolving them in the product. During this time I also spent a lot of time on our internal Enterprise Vault implementations, which gave me a new-found perspective on the product, as I was now a kind of customer myself!

Over the last few years I've spent most of my time creating and co-ordinating processes around the fixes we generate in hotfix, cumulative hotfix, and major releases. While I may not dig into root cause for specific issues any longer, I see pretty much every issue that knocks on our door, which still keeps me 'technically' in shape :-)

In future blogs you can expect me to talk about the releases that we put out into the community. If there are problems with them, I'll be on the lookout. And if there are trends that I'm spotting internally that are useful to know about externally – for example, an issue that keeps cropping up for which there is a solution or a workaround – then I'll do my best to give visibility of this.

Backup Exec Install Blog (Why BE Installs a 32-bit SQL 2008 Instance)


Backup Exec 2014 installs a 32-bit instance of Microsoft SQL Server 2008 to facilitate upgrades and rollbacks and to cover all supported platforms. We welcome your feedback and questions for future releases!

One NetBackup Master/Media server, one tape drive, and millions of files


How do you back up millions of files when you only have a NetBackup Standard client and a single tape drive?

EVDuplicateCleaner makes the big time...


Whilst IMAP, storage queue and the new Enterprise Vault Search are grabbing all the headlines in our latest release, Enterprise Vault 11, there is plenty of other less heralded work that you may not have noticed. For instance, if you had so much time on your hands that you could pay close attention to the contents of your ...\Enterprise Vault\... program directory, you may have spotted a new arrival called EVDuplicateCleaner.exe in Enterprise Vault 11. In addition, full details on using the tool can now be found in the Utilities guide.

This utility has been around for a few releases now but was only ever available via version-specific downloads from this technote - http://www.symantec.com/business/support/index?page=content&id=TECH193878.

It exists to target and resolve duplicates created when the same item, in the same folder of the same mailbox, is archived multiple times, which can sometimes occur due to varied unique data characteristics or corruption of target items. Such duplicated archiving scenarios are rare but, in the unfortunate event that they do occur and are reported to us, we have two primary goals: first, to root cause why the duplication is occurring and fix it; second, to assist the customer in cleaning up any duplicates that have been created as a result of this erroneous situation.

The utility has three modes in which it can be run:

Summary – this mode runs a SQL query that groups items archived from the same mailbox and the same folder to the same archive, with the exact same item date/time, to provide a high-level report of the estimated number of items that may have duplicates in the archive, and the estimated number of duplicates of each such item. It is a good mode for scanning a Vault Store for potential duplicates and at least flagging any archives that require further investigation.

Report – this mode dives deeper into individual archives, again running SQL queries for each estimated duplicate group to report the saveset details of the most recently archived item in each group and the number of estimated duplicates of that item. It is a good mode to run on a per-archive basis for affected archives, gathering per-saveset details for additional investigation into whether these estimated duplicates are real.

***You could not have missed how many times I used the word 'estimate(d)' for the first two modes. That is because these modes use SQL alone to perform analysis based on item metadata, in order to provide quick results on potential duplicates, but they are ultimately also capable of producing false positives – i.e., however unlikely and obscure, multiple items may have been archived from the same mailbox and the same folder, with the exact same item date/time, that are not in fact duplicates.

Execute – this mode works from the lists of duplicate groups produced by the SQL in Report mode and performs the de-duplication clean-up, deleting all duplicate savesets apart from the most recently archived one. Execute mode does not rely solely on the estimates of the previous two modes, however. Prior to any deletion, it compares the fingerprints (or, if a fingerprint is not available, extensive index properties) of the item to keep and the item to delete, to guarantee that they are in fact duplicates at a binary level. This validation is obviously resource and time intensive, which is why the original design of the utility reserves it for the one mode in which items can actually change.
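To make the workflow concrete, a typical engagement would run the three modes in order: scan, investigate, then clean up. The switch names below are hypothetical stand-ins, not the documented syntax; consult the Utilities guide (or TECH193878 for older versions) for the real command lines:

# Hypothetical invocation sketch - switch names are placeholders only.
Set-Location 'C:\Program Files (x86)\Enterprise Vault'  # install path may differ

.\EVDuplicateCleaner.exe /mode:summary /vaultstore:VS01      # flag archives with potential duplicates
.\EVDuplicateCleaner.exe /mode:report /archive:ArchiveId     # per-saveset detail for one archive
.\EVDuplicateCleaner.exe /mode:execute /archive:ArchiveId    # fingerprint-verified clean-up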

So, I hope some of that helps you understand the raison d'être and architecture of this utility a little better. In reality, there should be no need for you to rush to your servers and run the utility right now, but it is at least now easily accessible should its services be required.

Remember to terminate your Symantec Installation Manager


Nuts and bolts in NetBackup for VMware: Transport methods and TCP ports


It has been a while since my last blog on NetBackup for VMware. The next in line in that series was the restore process flow. However, since there have been many questions on transport methods and TCP ports, let us talk about those for now. I will get to the restore process flow soon!

This blog is thus just an addendum to Understanding V-Ray vision through backup process flow. As that post was already quite long, I did not spend much time there explaining the various transport types and TCP ports, but I have since received many questions about them. Sorry for the delay.

Let us examine the transport types and port usage from the viewpoint of the VMware backup host.

There are three distinct ways a backup host can stream data from a VM datastore: the SAN, hot-add, and NBD transports.

SAN Transport

  The VMware backup host must be a physical system in order to use SAN transport. The datastore LUNs are zoned to the backup host, which can then directly read VMDK objects from the datastore using vADP. The datastores can be Fibre Channel or iSCSI connected.

Pros:

  True offhost backups, zero resources tapped from ESX/ESXi hosts

Cons:

  Works only for SAN attached (Fibre Channel or iSCSI) data stores

Hot-add Transport

  You can think of hot-add transport as a special case of SAN transport where the backup host is also a virtual machine. Plus there is a bonus: it works for non-SAN datastores as well. The backup host VM accesses the VMDK objects of other VMs directly from the datastore (similar to SAN transport). It can protect all VMs on datastores to which it has access.

Pros:

  No need for a physical backup host

  Provides offhost backups for those VMs not co-located with the VM backup host

  The most efficient backup method for NFS based data stores (e.g. NetApp or FileStore providing VM storage)

Cons:

  It does consume resources on the ESX server where the VM backup host is deployed

  You cannot protect VMDK files larger than 1TB (a vADP limitation with hot-add)

NBD Transport

   NBD stands for network block device. In this method, the VMware Network File Copy (NFC) protocol is used so that the VMDK object looks to the backup host like a block device that can be read over the network. You need one NFC connection for each VMDK file being backed up; the backup is thus streamed from the ESX/ESXi system's VMkernel port to the VMware backup host. If your ESX/ESXi hosts have VMkernel ports (known as the management ports) with dedicated uplinks, your virtual machine traffic links are not affected.

Pros:

  Works for all kinds of data stores, even the ones directly attached (DAS) to ESX hosts

  The simplest to set up from an infrastructure perspective; works with both physical and virtual backup hosts

Cons:

  Not an offhost solution.  ESX resources are used for backups.

Which one is right for you? Well, it depends on your business needs. For a large enterprise data center where you have already invested heavily in Fibre Channel SAN for your vSphere infrastructure, SAN transport is ideal. Hot-add can also be used, especially if you would like to spread backup hosts across multiple multi-node ESX clusters. And with the increasing popularity of 10Gb Ethernet, NBD is not inferior either. The good news is that NetBackup has the ability to automatically try the various transports during backups.

TCP ports to vSphere infrastructure

  What ports are needed from NetBackup to the vSphere infrastructure? The answer is quite simple: you need access to TCP ports 443 and 902.

The ability to connect to TCP port 443 on the vCenter server is mandatory. This is the port through which NetBackup connects for everything it needs from the vCenter server: VM discovery requests, snapshot creation, snapshot deletion, and so on.

The backup host may also need the ability to connect to TCP port 902 on ESX/ESXi hosts. This is needed only for the specific use cases below; if neither applies to your environment, there is no need to open this port in the firewall. A quick way to verify connectivity is sketched after the list.

   1. You are using NBD/NBDSSL transport for backups and/or restores.

   2. You are doing restores through a Restore ESX server, bypassing vCenter.
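Here is that minimal connectivity sketch, using PowerShell's Test-NetConnection (available on Windows backup hosts with PowerShell 4.0 or later); the host names are placeholders for your own vCenter and ESX/ESXi systems:

# Port 443 to vCenter is mandatory (discovery, snapshot create/delete, etc.).
Test-NetConnection -ComputerName 'vcenter.example.com' -Port 443

# Port 902 to each ESX/ESXi host is needed only for NBD/NBDSSL transport or
# for restores that bypass vCenter via a Restore ESX server.
Test-NetConnection -ComputerName 'esx01.example.com' -Port 902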

SAN and NBD Transports using a physical VMware backup host

Hot-add and NBD Transports using a virtual VMware backup host

Back to Nuts and Bolts in NetBackup for VMware series

Enterprise Vault 9.0.2 Client Hotfix


After many weeks of work, there is a post-9.0.2 Client Hotfix which should address the issues we've seen with Instant Search (in Outlook 2007 and Outlook 2010, searches not returning any results). If you remember, many people were seeing that it would work, then stop, and then sometimes (days later) work again... then stop.

 

http://www.symantec.com/business/support/index?pag...

 

I've been using the fix for about a week now, on Outlook 2007 and Outlook 2010 (different computers, of course), and I've got it in a repro environment that was previously broken; it's all working nicely.

 

Once it is installed, you need to rebuild the WDS index on the affected workstation; once that was rebuilt (which in my case took about 5 hours), it was all good.

Channel Marketing Update: New Sales Playbooks and Solutions Showcase


We are off to a great start in our new fiscal year. As we’ve entered the transition period for our redesigned Partner Program, we’re working on developing new tools and assets to support our partner community. We want to empower you – our partners – with the resources you need to reach your goals.

New Sales Playbooks

We’ve recently introduced two new sales playbooks, covering Information Management (IM) and Information Security (IS) solutions sales. These new playbooks have been developed with input from senior Symantec sales leaders in every geography. They’re powerful tools designed to help you identify new opportunities, prepare your approach, cross-sell solutions and increase your deal size. Our playbooks are organized around plays that align to our go-to-market priorities and focus on the most common customer problems we address. Inside you’ll find a number of good resources you can use to take your value propositions to your customers.

Our new playbooks will help you:

  • Retire quota faster – by understanding how to sell the entire Symantec IM and IS portfolio and better leverage opportunities.
  • Increase revenue – by leveraging positive customer experiences with the Symantec portfolio and expanding the value of your solutions.
  • Broaden your account footprint – by helping you encourage your customer’s CISO and VP of IT to meet with you as Symantec becomes a more strategic part of enabling and protecting their business.
  • Win more often – with the strongest, market-leading Information Management and Security products and value propositions.

To learn more and download our new sales playbooks, please visit PartnerNet.

Symantec Solutions Showcase

Do you need fresh content for your company’s website? Since many customers want to research online before buying, we’ve built the Symantec Solutions Showcase as an easy way to incorporate the latest information from Symantec into your website.

Our comprehensive data on Symantec products, education and promotions will give your customers and prospects the information they need to research and make a purchase decision directly from you. You can even customize the content to the specific product lines that you sell, with automatic updates ensuring that the information is always current and accurate.

Want to get started? Check out the Showcase Launch Center. Also, to stay up to date on the latest from Symantec, please visit PartnerNet to opt in to receive Symantec newsletters.

Microsoft Patch Tuesday – June 2014


This month the vendor is releasing seven bulletins covering a total of 66 vulnerabilities. Fifty-five of this month's issues are rated 'Critical'.

Building Symantec’s Cloud Platform on OpenStack


I know what you’re thinking... why am I talking to you about Symantec... cloud... and OpenStack?

When people think of Symantec, their minds often go straight to antivirus.  In fact we do a lot more than that: we secure and manage information.  Drill in a bit, and you’ll find that this encompasses a lot of things: big data analytics, massive data storage, real-time traffic scanning, management of hundreds of millions of endpoints....  It’s hard to list everything we do, but the point is that what we do requires a huge amount of infrastructure.  And we’re currently in the process of building a new cloud platform, where these and all other Symantec products and services will live.

We’re building this new platform on OpenStack.  As we go through this process we occasionally stumble over issues and obstacles, sometimes frustrating but usually interesting and fraught with possibility.  You might be discovering the same ones, or maybe you’ve missed the ones we’ve found and discovered others instead.  The fact is that we’re all working through this together, and we want to share our thoughts and experiences with you.

This posting will be the first of many.  We’d love to hear your feedback, and if you’re interested in participating in the discussion please reach out.  Here are some questions we’re asking ourselves, that maybe you are also asking yourself, and that we’ll be talking about in this blog…

 

How Do You Deploy and Operate OpenStack at Large Scale?

You’re running thousands of physical nodes, many data centers running active-active, complex multi-tenancy.  How do you build it out and operate it?  How do you know you’re getting the most out of your infrastructure, tuned for the best performance?  For example, we presented our analysis on SDN architecture and performance at the OpenStack Summit in Atlanta.  We’ll continue to share our experiences in this and other areas of OpenStack.

 

How Do You Run OpenStack Securely?

Ok, we’re back to Symantec and security – naturally this is a critical criterion for us.  Security starts at the physical infrastructure, all the way through the services, and throughout operations.  It’s a very broad subject.  The OpenStack community is making the right moves towards ensuring the OpenStack software is secure, and we’ll get more involved in these efforts over time.  But a lot of your security will come from how you deploy, configure, and operate it.  I presented some of our thoughts on secure operation of Keystone at the Atlanta Summit.  I posed a lot of questions, and left some of them unanswered, TBD.  We’ll answer those questions in future postings, as well as discussing new questions around other aspects of OpenStack and security.

 

What Else Do You Need?

OpenStack provides a great base of IaaS services to solve our product teams’ infrastructure needs, and we’re going to use every bit of existing functionality that we can.  But there are features we need that don’t exist yet – we’ll be working with you all to build and contribute them.  In fact, there are some areas where we’ve found we need entirely new services and have started to build them from scratch – such as batch processing, stream processing, and operational monitoring – which our customers will use as OpenStack-compatible services.

MagnetoDB is another of these new services that Symantec is investing heavily in: a fully open source NoSQL Database as a Service for OpenStack.  The goal is to help our users move most of their data from vertically scaling databases like Oracle and SQL Server to a managed service that scales horizontally for high throughput, capacity, and availability.  Look for MagnetoDB to be the subject of my next post.

 

We’re looking forward to continuing the discussion.  Stay tuned for more…

 

Keith Newstadt

Symantec Cloud Platform Engineering

Follow: @knewstadt
