
Earn reward points by finishing the Information Governance survey today!


On behalf of Symantec, we would like to invite you to participate in a research survey on Information Governance. Your comments and input are very important in helping us understand your needs, so please try to make your answers complete. In return, we would like to offer ALL qualifying respondents who finish the survey 50 Symantec Connect Reward Points.

The survey should only take about 5-10 minutes to complete.

Please click the link below to begin the survey. Thank you.

https://www.surveymonkey.com/s/messagetesting

 

 



Microsoft Patch Tuesday – July 2014


This month the vendor is releasing six bulletins covering a total of 29 vulnerabilities. Twenty-four of this month's issues are rated Critical.

Creating Screencasts for Symantec Connect


Are you interested in publishing how-to videos and screencasts on Symantec Connect? Video demos and screencasts are great ways to share information with the community and earn reward points, and they're just plain fun to create!
 

Creating/Exporting Your Video File

Once you submit your video to the Symantec Connect team for publishing, we run it through a series of encoding steps so it can be served by the streaming Symantec video player. Depending on the codec you used to produce your final file, we can run into encoding issues (see Wikipedia for a great explanation of codecs). Here are some tips and suggestions from the Symantec video team:

Whenever possible we recommend 1280x720px as the best resolution to target, as this conforms to the 16:9 ratio of the vast majority of Brightcove and YouTube players (and laptop screens, etc.). This resolution also keeps text and icon quality in screenshots from breaking down, which can happen at higher resolutions.

 
When exporting your video, 1920x1080px at 10 Mbps or 1280x720px at 5 Mbps is ideal, though many programs (Camtasia, etc.) will output far less. For screen-capture videos the bitrate can be much lower and still look fine. It's usually best to prioritize resolution over bitrate.

Although H.264 MP4/MOV formats are recommended, we can accept many different formats/wrappers (WMV, AVI, MPEG, etc.).
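If your editing tool cannot export at those settings, a command-line encoder can transcode your final file. Here is a minimal sketch using Python's subprocess module to call ffmpeg (assuming ffmpeg is installed; the file names are placeholders and the exact flags should be checked against your ffmpeg version):

import subprocess

# Re-encode a capture to a 1280x720 H.264 MP4 at roughly 5 Mbps.
# "screencast_raw.avi" and "screencast_720p.mp4" are hypothetical file names.
cmd = [
    "ffmpeg",
    "-i", "screencast_raw.avi",   # source file from your capture tool
    "-c:v", "libx264",            # H.264 video codec
    "-b:v", "5M",                 # target video bitrate (~5 Mbps)
    "-vf", "scale=1280:720",      # scale to 1280x720 (16:9)
    "-c:a", "aac",                # AAC audio
    "screencast_720p.mp4",        # MP4 wrapper, as recommended above
]
subprocess.check_call(cmd)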

Providing Video to Symantec Connect

Once you have your video created, you'll need to create a video page on Connect, add a title and description that will make people want to watch it, attach your video to the page, and then submit.

Your video will need to be reviewed by the technical team, and then be encoded. Please allow 3-5 days.

 

Thanks for your contributions to the community!

 

Social Engineering: Attacking the Weakest Link in the Security Chain


It’s happened to major corporations, and even the U.S. Department of Defense--falling victim to data breaches that resulted from attackers exploiting employees or company vendors. Unfortunately, along with exposing millions of identities these attacks also reveal what is often the weakest link in enterprise data security – the human element.

SSL Ciphers - Beyond Private key and Certificate


Today SSL is an integral part of online businesses and any secured communication. It is however not an area that many system administrators or security experts are comfortable with. For most administrators the correct installation of the private key and its corresponding certificate is sufficient. As long as the green bar, the padlock, or https:// can be seen during the SSL/TLS negotiation, both the administrators and their clients trust that the connectivity is secure.

 

However, many security flaws and vulnerabilities have been discovered in recent years: on the server side, the infamous Heartbleed bug and CCS injection (CVE-2014-0224); side-channel attacks such as BEAST, Lucky 13, CRIME and BREACH; and others. It is not sufficient to just have a correct installation of the private key and certificate pair on the server. Besides patching server libraries and client applications, additional controls over SSL/TLS negotiations need to be applied. One of those control mechanisms is selecting the right cipher suites.

 

The strength of an SSL/TLS negotiation depends on more than the size of the private key or certificate. As of 2014 the recommended minimum key pair size is 2048 bits, but this alone does not guarantee a strongly encrypted session. During the SSL/TLS handshake, the agreed cipher suite determines whether the negotiation uses the SSL or TLS protocol, as well as the key exchange and encryption algorithms. If the encryption level agreed between the client and the server is low, the SSL/TLS session will still be vulnerable. For a system to be truly secure, strong cipher suites are required.

 

To address this issue, a project was initiated. The result, "SSL/TLS Cipher Suite Analysis and strong Cipher Enablement", is included in this blog.

The purpose of this research is to provide an implementation process for setting up a strongly secured SSL/TLS system by viewing the available cipher suites present in a system, recognizing the strengths and weaknesses of the different ciphers, and choosing the most applicable cipher suites.

Note: The configuration examples given in this document do not represent the complete or best set of strong ciphers to use. Depending on your security policies and business requirements, the examples given in the document may not apply.
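As a purely illustrative sketch of that kind of control (not the cipher set recommended by the paper), the Python standard library's ssl module can restrict which cipher suites a server will negotiate. The cipher string, port and file names below are placeholders:

import socket
import ssl

# Build a TLS context that only negotiates a restricted, illustrative cipher list.
# PROTOCOL_TLSv1_2 refuses SSLv3/TLS 1.0/1.1 clients; the cipher string is an
# example only, not a recommended production set.
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
context.set_ciphers("ECDHE+AESGCM:ECDHE+AES256:!aNULL:!eNULL:!MD5:!RC4")

# Hypothetical certificate and private key paths.
context.load_cert_chain(certfile="server.crt", keyfile="server.key")

# List the cipher suites this context will actually offer
# (SSLContext.get_ciphers() requires Python 3.6+).
for cipher in context.get_ciphers():
    print(cipher["name"], cipher["protocol"])

# Wrap a listening socket so every accepted connection is limited to that set.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 8443))
sock.listen(5)
tls_server = context.wrap_socket(sock, server_side=True)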

 

Beyond Hacking: Teaching Our Customers, Employees and the Next Generation of Cyber Professionals to Think Like the Savviest Hackers

According to Symantec’s 2013 Norton Report, the total global direct cost of cybercrime (US$113 billion; up from $110 billion) and the average cost per victim of cybercrime ($298; up from $197) increased in 2013. Additionally, a 2013 study by Hewlett-Packard states that the average cost of a cyberattack on a U.S. company is $591,780 -- and this is only increasing.  

The fact is, we are becoming more mobile, more open to cyberthreats and at the same time, cybercriminals are becoming more savvy, and harder to track. To combat this, businesses (both IT security companies such as Symantec, and our customers) are increasingly turning to Cyber War Games in the hopes that they can train their employees to stay one step ahead of today’s savviest cybercriminals.

Symantec’s Cyber War Games and Cyber Readiness Challenge

At Symantec, we are always looking for ways to build programs that ensure our employees are ready and prepared to address the ever changing, extremely complicated cyber security challenges that our business and customers face. Three years ago, we therefore developed our own Cyber War Games, an engaging, intensive program that would ask our employees to think like their cyber enemies, think like hackers.

To many, it may seem risky or strange to encourage our employees to learn to operate like hackers. However, we developed the program with the understanding that, for ourselves and our customers to protect against cyber perpetrators and to help our customers understand the potential threats they face, we must understand what weaknesses and entry points cyber criminals look for; we must see the online landscape from their perspective. One of the most important messages we try to convey through this exercise is that it doesn't always take a huge amount of experience or knowledge to find and exploit weaknesses in cyber security, so it is crucial for companies like Symantec, and our customers, to ensure they are completely aware of all possible vulnerabilities.

While the program was originally developed for our product development teams, we’ve now opened it up to everyone in Symantec – including development, IT, marketing and finance teams. Everyone brings a unique perspective to the event and this has helped us show our employees why every component of our product development process, everyone’s input and role is crucial to ensuring a safe online experience for our customers.

We have been running the Cyber War Games for three years and it has been extremely successful. Each year we choose a theme to base the games on. The first year challenged participants to hack into a government infrastructure, the second into a large oil and gas company, and our latest, this February, challenged participants to hack into a fictitious financial institution. This year's Cyber War Games included over 1,000 employees, who were narrowed down through a series of qualifying events to the finalists who competed in Mountain View.

It is great to see how dedicated and involved employees become in the Cyber War Games. Many are spending time outside of this competition to learn languages and frameworks that they aren’t necessarily using in their day to day jobs.

Bringing the Cyber War Games to the Market

Following the Cyber War Games, we realized the opportunity and benefit of offering this unique event to our customers and the wider market. We therefore developed the Cyber Readiness Challenge, an immersive, interactive capture the flag competition that models scenarios after the current threat landscape using realistic IT infrastructure. Similar to the Cyber War Games, it is designed for many levels of technical skill and experience, and it puts participants in the hacker's shoes to understand their targets, technology and thought processes so they can ultimately better protect their organization and themselves.

We offer the Cyber Readiness Challenge in 2, 4 and 8 hour increments, both in person and online. Since the launch of the Cyber Readiness Challenge in 2013, we have hosted 40 events at locations globally with nearly 4,000 participants. We've also expanded the Cyber Readiness Challenge and partnered with top universities, so that not only our customers but also our future IT professionals are equipped to defend effectively against the savviest cyber criminals. We are hoping in the future to roll out a collegiate competition centered around the Cyber Readiness Challenge.

Both the Cyber War Games and Cyber Readiness Challenge have been extremely successful. As customers learn more about how hackers view their IT infrastructure, we are simultaneously teaching them about the right processes and technologies to address these issues, so they stay a step ahead and don't fall prey to these attacks.

In fact last year, we were highlighted by Fast Company for this innovative and engaging program. Symantec Vice President of Product Management Samir Kapuria was quoted in the article discussing the importance of this initiative:

"In every other high-risk environment--be it race car drivers or doctors--people have a practice space to hone in on their skills and innovate," Symantec vice president of product management Samir Kapuria told Fast Company. "In our domain, where you have active adversaries trying to steal money or intellectual property, or hactivists, there's no place for us to learn and innovate in a safe environment. That was the inspiration for this."

Additionally, for our employees it has not only increased their skills and knowledge, but it has also helped them get excited about how passion and innovation in every role drives success at Symantec.

Our goal is to ensure a safe and secure world for our customers, as well as foster a challenging and constantly engaging workplace, where our employees can grow and expand their skills every day - this program is doing just that.  

cybergames1.jpg

Symantec’s Cyber Readiness Challenge ensures customers and future IT professionals are equipped to defend effectively against the savviest cyber criminals.

cybergames2.jpg

Challenged to “think like a hacker,” customers and students take part in Symantec’s Cyber Readiness Challenge and aim to replicate the work of the savviest hackers.

cybergames3.jpg

From hacking into a fictitious government or financial institution, Symantec’s Cyber War Games and Cyber Readiness Challenge replicate real-world situations to provide employees, customers and students the tools and know-how to identify and address the latest cyber threats.   

 

Anthony Barkley is Symantec's Director of Product Management, Information Security Services Group.

Connect Dev Notes: 09 July 2014


Updates deployed to the Connect production servers as a result of the code sprint that ended 08 Jul 2014.

User Facing: Desktop

  • Fixed an issue that was allowing unpublished content to show up on list pages.
  • Patched code that was sorting URL aliases incorrectly.
  • Updated cache flushing code to include URLs with the "device=desktop" suffix.
  • Added code that sets a limit on the length of feedback submitted to the translate.cloud system.
  • Updated email notifications to consistently use https:// when linking back to Connect.
  • Eliminated cases where content updates were programmatically submitting redundant cache-clear requests to Akamai.
  • Removed references to Badgeville badges from user profile pages.

Admin Facing

  • Enhanced the link checking tool to avoid reporting false-positives in broken link reports.

Performance Wins

  • Tuned code to serve up RSS from http:// so the oft-visited RSS feeds can be cached by Akamai.

Novacoast Webinar- Universal Imaging

Streamline your Imaging!

Join our webinar on July 29th @ 11AM EST to learn about our Universal Imaging Solution.

 

REGISTER HERE NOW!

         


Unable to Archive Exchange Mailbox


I was having trouble archiving content from an Exchange 2010 mailbox using EV9. To troubleshoot I used a "Run Now" job but nothing happened.

Watching the message queue I briefly saw an entry added to the A3 queue but it disappeared immediately. No events recorded in the event log.

I started a DTrace of the ArchiveTask and re-ran the "Run Now" job and the below entries were logged:

80    12:23:58.014     [3136]    (ArchiveTask)    <9648>    EV:L    {HrMAPIOpenMsgStoreKvs:#50} Opened msg store [0x8004011d]
81    12:23:58.014     [3136]    (ArchiveTask)    <9648>    EV:H    :CArchivingAgent::ProcessUser() |Attempt to open the users message store resulted in failure with error message = MAPI_E_FAILONEPROVIDER |This is because the mailbox does not exist. This often happens when processing utility accounts like site connectors, or when mailboxes have been deleted after a process mailbox message has been queued |
82    12:23:58.029     [3136]    (ArchiveTask)    <9648>    EV:L    :CArchivingAgent::ProcessUser() |Return the MAPI session to the session pool |
83    12:23:58.029     [3136]    (ArchiveTask)    <9648>    EV:H    :CArchivingAgent::ProcessUser() |Exiting routine |
84    12:23:58.029     [3136]    (ArchiveTask)    <9648>    EV:H    {CArchivingAgent::ProcessUserEx:#18985} It took [0.917972] seconds to process mailbox [LegacyExchangeDN].

Investigating the mailbox further, I noticed it had a failed mailbox move request. Once I removed the move request, the mailbox content started being archived.

I'm not sure why a failed mailbox move request causes archiving issues, since the mailbox was available throughout.

Any insight or further information is welcome.

All That Glitters Is No Longer Gold - Shylock Trojan Gang Hit by Takedown


The gang behind one of the world’s most advanced financial fraud Trojans has experienced a major setback after an international law enforcement operation seized a significant amount of its infrastructure.

Symantec Innovates Toward The Future of IoT


Having all of our useful belongings connected to the Internet could have great potential to make our lives easier; however, it also leaves us more vulnerable to security issues. Here's how Symantec is innovating to address the growing landscape of the Internet of Things.

Symantec Intelligence Report: June 2014


Welcome to the June edition of the Symantec Intelligence report. Symantec Intelligence aims to provide the latest analysis of cyber security threats, trends, and insights concerning malware, spam, and other potentially harmful business risks.

The largest data breach reported in June resulted in the exposure of 1.3 million identities. This seems like a small number when compared to the 145 million exposed in the largest breach of May. However, while reported in June, this breach actually took place during the month of May. This brings the total number of identities exposed in May to over 147 million, making it the second-worst month for data breaches in the last 12 months.

There was an average of 88 spear-phishing attacks per day in June. This appears to be a return to the spear-phishing levels seen in March and April, after the daily average dropped in May.

A relatively new OSX threat by the name of OSX.Stealbit.B topped our list of OSX malware, responsible for 25.7 percent of OSX threats found on OSX systems. This threat looks for specific bitcoin-related software on OSX computers and attempts to modify the programs in order to steal bitcoins.

The number of Android variants per family reached the lowest levels seen in the last twelve months. While there was not a significant change in the number of families discovered in June, this may indicate that attackers have had more success with their current set of threats, reducing their need to create multiple variants.

June was a quiet month for vulnerabilities, where (only) 438 were reported—tying the lowest number reported in the last 12 months. There were no zero day vulnerabilities disclosed during the month.

We hope you enjoy the June Symantec Intelligence Report. You can download your copy here.

Google Kubernetes - Analytical Evaluation


What is it?

Kubernetes is Google's newly released Container-as-a-Service solution. It is orchestration middleware built on top of Docker, another popular technology known for creating and managing lightweight Linux containers out of which applications can be built and run.

 

Building blocks of Kubernetes:

While Docker itself works with individual containers, Kubernetes provides higher-level organizational constructs to support common cluster-level usage patterns, currently focused on service applications. Some of the important constructs/concepts are described below.

 - POD: A relatively tightly coupled group of containers that are scheduled onto the same physical node. Containers in a POD all use the same network namespace/IP (and port space) and define a set of shared storage volumes. PODs serve as the unit of scheduling, deployment and replication.

 - LABELS: Loosely coupled, cooperating PODs are organized using key/value labels. Each POD can have a set of key/value labels set on it.

 - KUBERNETES NODE: A physical node that runs the services necessary to host Docker containers and to be managed from the master systems:

        i. Docker: Takes care of the details of downloading images and running containers.

        ii. Kubelet: Takes a set of container manifests (YAML that describes a POD) and ensures that the containers described in them are started and continue running (see the manifest sketch after these lists).

        iii. Kubernetes Proxy: A simple network proxy that reflects services as defined in the Kubernetes API on each node and can do simple TCP stream forwarding.

- KUBERNETES MASTER: It is split into a number of components that work together to provide a unified view of the cluster:

        i. etcd: All persistent master state is stored in an instance of etcd. Configuration data can be stored here reliably.

        ii. Kubernetes API Server: Serves the main Kubernetes APIs, and validates and configures data for every POD, SERVICE (the configuration unit for the proxies that run on every worker node) and replicationCONTROLLER (which ensures a specified number of replicas of each POD template).

        iii. Controller Manager Server: Watches etcd for changes to replicationCONTROLLER objects and uses the public Kubernetes API to implement the replication algorithm.
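As a rough illustration of the container manifest mentioned in the Kubelet bullet above, here is a minimal single-container POD description built as a Python dictionary and printed as JSON. The field names follow the early v1beta1 API and are illustrative only; check them against the current Kubernetes documentation before relying on them:

import json

# Illustrative manifest for a single-container POD (field names based on the
# early v1beta1 API; they may differ in later Kubernetes releases).
pod_manifest = {
    "version": "v1beta1",
    "id": "hello-web",                 # hypothetical POD name
    "containers": [
        {
            "name": "hello-web",
            "image": "nginx",          # any Docker image
            "ports": [{"containerPort": 80, "hostPort": 8080}],
        }
    ],
}

# The Kubelet consumes YAML/JSON manifests like this and ensures the described
# containers are started and kept running on its node.
print(json.dumps(pod_manifest, indent=2))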

 

Quick Look Architecture:                                            

Kubernetes1.png

 

Where is Kubernetes useful?

Developing and deploying applications out of Linux container environments has its own advantages. Especially with Docker, containers can be used to provide a standardized environment for development, QA and production teams. Packaging and deployment of these apps becomes easy and seamless. As a result, dev-ops teams can ship faster and run the same app, unchanged, on laptops, data center VMs and cloud platforms.

Kubernetes builds an orchestration layer on top of this framework, deciding how PODs (collections of closely tied containers) are deployed onto physical machines. With such an approach, Kubernetes can be viewed as a good application deployment tool for the PaaS (Platform-as-a-Service) layer.

Can Kubernetes be used on top of Openstack?

One way of leveraging Kubernetes would be to use it in deploying applications on top of VMs provided by Openstack (at the IaaS layer). 

Once VMs (along with the required networking and storage setup) have been allocated for their respective projects, application deployment engineers could view these VMs as actual physical machines allocated to their Business Unit and choose to run their container friendly applications on top. They would even be able to manipulate POD sizes and leverage the distributed VM footprint provided by an IaaS scheduling solution (Openstack - nova scheduling) keeping high availability in mind.

Is Kubernetes trying to solve both IaaS + PaaS?

As described in the architecture above, a POD is the smallest-granularity construct that can be deployed on a single physical bare-metal host. Since a POD is a collection of lightweight containers running applications that ideally belong to a single project owner, a single physical host is effectively locked down to serve only one project at a time. This contrasts with the Openstack approach, which can control the placement of every single VM on any physical machine purely on the basis of its CPU, disk and memory usage, resulting in a truly shared cloud.

As an IaaS solution, the approach taken by Kubernetes can end up limiting high availability across data centers, and it leaves IaaS providers with the task of identifying and categorizing which applications are ready for such scheduling scenarios.

 

Kubernetes as compared with Openstack

Is it a substitute?

Kubernetes could potentially be seen as a substitute for Openstack in providing Docker-based container scheduling and deployment. The basic difference between their offerings is lightweight Linux containers versus virtualized guest environments.

Docker with Openstack:

Kubernetes is heavily integrated with Docker. That said, the Openstack open-source community is currently working with the Docker team to provide a driver that can be loaded seamlessly into Openstack, delivering a container-as-a-service solution orchestrated by Openstack nova itself.

Once a full-featured driver is available, Openstack could potentially become a one-stop solution for managing virtual machines as well as lightweight Linux containers at the IaaS level.

Developer community strength: 

Kubernetes is fairly new and will need time to gather a strong open-source community around it. Inevitably, the immediate direction of the product will be decided by the Google developer community. Kubernetes is written in Go, a language Google released about three years ago, whereas Openstack is pure Python, a widely adopted language for coding and product development.

Missing other required Openstack-like services:

With its current capabilities, Kubernetes can be viewed as an orchestration layer for managing lightweight containers deployed by Docker. Other key cloud (IaaS) components, such as Identity-as-a-Service, Image-as-a-Service and Networking-as-a-Service (along with SDN solutions), are still missing from the picture.

It also does not yet have the concept of projects/tenants, which, in a typical private cloud setup, relates directly to company-wide organizational structures of groups and projects.

Initial support focused on Google Compute Engine:

Initial development for Kubernetes was done on GCE and hence many of the instructions and scripts are built around that. Development cycles will require a Google Cloud Platform account with billing enabled.

An Evolved Form of Neverquest Targets New Victims


Trojan.Snifula continues to evolve, adding new features designed to steal even more confidential information related to online banking.


Security Advisories and Bulletins - July 2014


The following security bulletins were released in July 2014:

 

Microsoft

Microsoft Security Bulletin Summary for July 2014

https://technet.microsoft.com/library/security/ms14-jul

Symantec product detections for Microsoft monthly Security Advisories - July 2014

http://www.symantec.com/docs/TECH146537

 

  • MS14-037 - Cumulative Security Update for Internet Explorer (2975687) - Critical - Remote Code Execution
  • MS14-038 - Vulnerability in Windows Journal Could Allow Remote Code Execution (2975689) - Critical - Remote Code Execution
  • MS14-039 - Vulnerability in On-Screen Keyboard Could Allow Elevation of Privilege (2975685) - Important - Elevation of Privilege
  • MS14-040 - Vulnerability in Ancillary Function Driver (AFD) Could Allow Elevation of Privilege (2975684) - Important - Elevation of Privilege
  • MS14-041 - Vulnerability in DirectShow Could Allow Elevation of Privilege (2975681) - Important - Elevation of Privilege
  • MS14-042 - Vulnerability in Microsoft Service Bus Could Allow Denial of Service (2972621) - Moderate - Denial of Service

 

Adobe

Security updates available for Adobe Flash Player (APSB14-17)

http://helpx.adobe.com/security/products/flash-player/apsb14-17.html

CVE number: CVE-2014-0537, CVE-2014-0539, CVE-2014-4671

Affected software versions

  • Adobe Flash Player 14.0.0.125 and earlier versions for Windows and Macintosh
  • Adobe Flash Player 11.2.202.378 and earlier versions for Linux
  • Adobe AIR 14.0.0.110 SDK and earlier versions
  • Adobe AIR 14.0.0.110 SDK & Compiler and earlier versions
  • Adobe AIR 14.0.0.110 and earlier versions for Android

 

Oracle

Oracle Critical Patch Update Advisory - July 2014

http://www.oracle.com/technetwork/topics/security/cpujul2014-1972956.html

 

Email Dating ... Domino


As I mentioned in my last blog about how we determine which date to use on Exchange items, the mail system itself is the first differentiator. In Domino we use a similar algorithm to the one in Exchange, but, as all mail systems were certainly not created equal, there are subtle differences to match the Domino architecture. The Domino algorithm basically iterates through the list of known potential date fields/containers on an item in the priority order below until we get a hit:

  • If Form = Calendar Then EndDateTime
  • If Form = Task Then DueDateTime
  • DeliveredDate
  • PostedDate
  • $Created
  • Creation Date extracted from item UNID

In the extreme case that none of the above properties are available, LastModifiedDate is used, as the sketch below illustrates.
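Expressed as pseudocode, the selection looks roughly like the Python sketch below. The get_field() and creation_date_from_unid() helpers are hypothetical stand-ins for whatever API you use to read Domino note items; the field names mirror the list above:

# Rough sketch of the priority-ordered date selection described above.
# get_field(item, name) returns a note item's value, or None if the field is absent;
# creation_date_from_unid(item) derives the creation date from the item's UNID.
# Both helpers are hypothetical.

def choose_item_date(item, get_field, creation_date_from_unid):
    form = get_field(item, "Form")

    # Form-specific dates come first.
    if form == "Calendar" and get_field(item, "EndDateTime"):
        return get_field(item, "EndDateTime")
    if form == "Task" and get_field(item, "DueDateTime"):
        return get_field(item, "DueDateTime")

    # Then the generic date fields, in priority order.
    for field in ("DeliveredDate", "PostedDate", "$Created"):
        value = get_field(item, field)
        if value:
            return value

    # Then the creation date extracted from the item's UNID.
    unid_date = creation_date_from_unid(item)
    if unid_date:
        return unid_date

    # Extreme fallback: the last modified date.
    return get_field(item, "LastModifiedDate")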

To see which of the above dates are available on any Domino item, choose your weapon of choice for viewing Domino note properties. The Notes client itself gives you the 'Document Properties' window:

notesprops.jpg

Or, for a more granular and developer-focused investigation, NotesPeek is a great tool:

notespeek.jpg

Backup Exec 2012 SP4 Woes


I wanted to share my problems and solution regarding Backup Exec 2012 and the Service Pack 4 update in hopes it might help others.  I’ve seen several threads on this topic, but the official response from Symantec has always been, “there are no known bengine SP4 issues”, which I’m not convinced of.  My maintenance is not current for reasons I won’t get into, but because of that, I have not filed a tech support request on this topic, but rather figured it out on my own.  I’m evaluating my needs vs BE2014, which I’d reinstate my maintenance for.

My setup is fairly small and simple.  I have 6 servers being backed up, including the BE server.  One uses the Agent for Applications and Databases and the rest use the Agent for Windows.  Each server has its own job that is a backup to disk (external USB 3.0).  There are 4 disks, 3 have unique jobs and 1 of them is shared amongst 4 jobs.  No jobs overlap and there is no deduplication used.  The BE server is Windows Server 2008 R2 and the rest are various other server class operating systems.  I’ve noticed other threads have some advanced features in them that often get blamed for the issue, so I think that’s where my situation is more unique and telling.

I’ll try to highlight the troubleshooting steps, rather than get into great detail.  If anyone has any questions about more detail, I’d be happy to provide what I can remember.

Everything was running fat, dumb and happy for months with SP3 and the latest hotfixes…until I installed the SP4 update and pushed it to the Agents.  After that, the Backup Exec Job Engine service (bengine.exe) began hanging sporadically and frequently, sometimes once a day, sometimes multiple times a day.  When the bengine was hung, jobs would begin to run and just sit there, jobs would pile up and the queue became a complete mess with no jobs completing.  Restarting the service, while bengine was hung, would usually fail unless I used Task Manager to kill bengine.exe first.  Once restarted, the queued jobs would start to kick in and it was a struggle to get back to a good state.

  1. Next, I thought that I could work around the issue by scheduling a Windows task to use the CLI to restart the services daily.  This didn’t help, because if bengine was hung, it would not restart successfully.
  2. Next, I tried a repair of BE2012 and thought it helped, but after a day or so, the issues began again.
  3. After struggling with this for long enough, a hotfix came out that didn’t specifically address this issue, but I tried it anyway.  No help.
  4. I then backed up the Catalogs and Data folder, completely uninstalled BE2012, reinstalled and updated to SP4 plus hotfixes and reapplied the Catalogs and Data folders.  No help.
  5. I tried a complete uninstall and reinstall, but only to SP3 plus hotfixes.  I read forums that gave tips on how to repair the SP4 Catalogs and Data to work with SP3, but I wasn’t able to be successful with this.  I was getting tons of database errors and the jobs wouldn’t run.
  6. My Solution:  I uninstalled BE2012 and all Agents.  I reinstalled to SP3 plus Hotfix 213186, the only hotfix for SP3.  I did not restore the Catalogs and Data folders and rebuilt my jobs from scratch, along with pushing the Agents to the servers again.  In many cases, I rebooted the target servers after updates, even if BE didn’t tell me to do it.  This sounds like a disastrous solution for anyone with a larger, more complicated installation, but I decided to bite the bullet and spend a Sunday rebuilding things.

Since the last reinstall and some minor tweaking of selection options (which had been done once before on the database that I had to ditch), my jobs have been running perfectly for 3+ weeks.  The Backup Exec Job Engine has never hung and my jobs have never failed to run.  I am once again, a happy camper, but I refuse to install SP4.  It’s just not worth the potential hassle.  I am evaluating whether I want to jump to Backup Exec 2014, since it adds the Windows 2012 and Windows 8 Agent support, but I’ll be honest that it scares the hell outta me.  I don’t want to go through weeks’ worth of pain again.

I hope my saga might help someone.

--Mark

 

Agility, freedom to choose and even greater security: all wrapped up in our Information Management vision


Symantec’s unified vision and new array of solutions around its Information Management (IM) offering are propelling customers towards ever greater agility, freedom of choice and the highest levels of security in what is becoming an increasingly complex and disparate IT environment for organizations everywhere.

That was the good news we were able to deliver at a recent webinar for the APJ region where industry analysts gathered to hear how environments spanning both physical and virtual can be better managed using our IM solutions. What we were able to tell everyone was exactly how we are now able to reduce those complexity issues they face every day, supporting our customers as they perform crucial functions around archiving, data protection and backup, and data recovery.

In a world where the move into the cloud is accelerating – public, private and hybrid – customers want to leverage all the advantages this may offer by having the flexibility of running any operating system they choose across their physical and virtual environments. I’m delighted to say that our new technologies highlighted during the webinar will take them on exactly that journey.

Our IM strategy delivers solutions in three key areas: Availability, Protection and Insight. I’d like to take you through the main points in each of these that were key to the webinar – and how they are helping to drive new levels of customer execution and achievement.

Availability – Keeping their businesses ‘Always On’ is a big concern for Symantec customers. The new IM solutions will take data recovery into the public cloud, slashing costs and the complexity of building a physical data recovery site.

Protection – With double digit data growth and shrinking backup windows, the launch of our Backup Exec 2014 is giving customers just what they told us they were after: really powerful backups, more flexibility and even greater ease of use. But what do people think of the new release? During the intensive Beta program – and 11 weeks of data testing – we had this feedback: “This is drastically quicker than previous versions” and “Not one single crash…. Great job!” And that really is very gratifying to us at Symantec.

Insight – A clear view into the data that exists in their environment: data that is no longer ‘dark’, but instead intelligent, adding value to their business. This is where Enterprise Vault 11 comes in, bringing all the benefits of much more rapid searches (up to 18x faster browsing), more platforms (extending end user archiving into virtually any email platform, on premise or cloud) and higher productivity by reducing the time spent performing daily check activities by up to 45%.

Finally, another part of the solution – discussed in the webinar and hinted at here earlier on – is our DR Orchestrator, targeted at organizations with data centers and seeking a solution in the cloud. What this delivers over and above previous data recovery solutions, I would argue, is threefold: cost effectiveness (massive savings, as you only pay for what you use); reliable and secure: 99.95% uptime, safe transactions (with Microsoft Azure site-to-site VPN); and easier to manage, with fully automated single-click DR.

Our IM portfolio of solutions has a wealth of other great features as well, of course, but space – and your valuable time – won’t allow me to cover them all here. For a comprehensive view of how they are bringing new levels of performance, productivity, reliability and security to organizations, in and out of the cloud, please take a look at the links within the text. Thank you! If you would like to listen to the recording of the webinar, please contact Sancharini Mazumdar, Asia Pacific & Japan Industry Analyst Relations.

Enterprise Vault Items Pending Indexing


Items Pending Indexing

From time to time you may get into a situation where indexing in Enterprise Vault is lagging behind. It happened to me just the other day when ingesting a large amount of data into some test archives. I'm not sure of the exact trigger, but the net result was that the indexing service stopped. This didn't immediately cause me an issue, because the data was still added to the archive. But searching for the data was a bit tricky!

This can happen in a production environment too. Perhaps it's something you've encountered, or had escalated to you from the IT Helpdesk? 

A couple of handy SQL queries can help with figuring out if this is something that is affecting your environment:

select COUNT(*) from JournalArchive where IndexCommited = 0

The output in my environment looked like this:

Image_11.png
The second query is used to figure out which archives are affected. Sometimes that's handy to know. That query is here:

select Records, ArchiveName
FROM
(select ArchivePoint.ArchivePointId, count(*) Records
from JournalArchive
inner join ArchivePoint on ArchivePoint.ArchivePointIdentity = JournalArchive.ArchivePointIdentity
where IndexCommited = 0
group by JournalArchive.ArchivePointIdentity, ArchivePoint.ArchivePointId) SQ
INNER JOIN EnterpriseVaultDirectory.dbo.ArchiveView ON ArchiveView.VaultEntryId = SQ.ArchivePointId
order by Records desc

And the output looks like this:

Image2_3.png

Of course in my environment it was a simple matter of starting the indexing service and waiting for it to catch up.  It did take quite some time though.

Reference technote: http://www.symantec.com/business/support/index?page=content&id=HOWTO59167
