
Enterprise Vault Version Numbers


In the past, version numbers within Enterprise Vault were pretty straightforward: you had the base release and service packs. Sometimes people would have hotfixes, but those usually affected only a handful of files. It all got a bit more complicated with the introduction of Cumulative Hotfixes. Worse still, it’s no longer easy to find out what specifically is installed. Let’s take a few examples:

On a regular Enterprise Vault 10.0.4 environment you might see that the binaries are like this:

Screen Shot 2014-07-28 at 12.06.42.png

And if you do Help -> About Enterprise Vault in the VAC you’ll see:

Screen Shot 2014-07-28 at 12.06.53.png

That’s good, but how do I know whether I have installed a 10.0.4 Cumulative Hotfix? Well, there isn’t an easy way, save for checking the version information on the files that the hotfix affects.
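One rough workaround is to script that file check yourself. The following is only a sketch, and the install path is an assumption that may differ on your server:

# List the file versions of the Enterprise Vault binaries (install path assumed; adjust to suit)
Get-ChildItem 'C:\Program Files (x86)\Enterprise Vault' -Filter *.dll |
    Select-Object Name, @{Name='FileVersion';Expression={$_.VersionInfo.FileVersion}} |
    Sort-Object FileVersion -Descending | Format-Table -AutoSize

Any files updated by a Cumulative Hotfix should stand out with a higher build number than the rest.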

I thought this would get a whole lot easier in Enterprise Vault 11. At least with that version you can see in Add/Remove Programs that a fix has been installed. You just can’t see what it is:

2014-07-28_12h10_09.png

And to top it all, if you look at the list of Servers in the Vault Admin Console, you might see something like this:

Screen Shot 2014-07-28 at 12.07.48.png

And that’s on a 10.0.4 system!

A bit confusing, as I’m sure everyone would agree. Hopefully Symantec can do something about this in the future to make it simpler for people to manage their environments.


What Cumulative Hotfixes are installed?


I noticed a blog post today from Rob Wilcox, an Enterprise Vault trusted advisor, about the confusion and difficulty he was experiencing in determining whether any cumulative hotfixes are installed on an EV server and, if so, exactly which ones - https://www-secure.symantec.com/connect/blogs/enterprise-vault-version-numbers

This information is in fact readily available, if you know where to look, so I thought I'd pen this quick response for the benefit of Rob and anyone else experiencing the same problem.

The initial place to look for version-related information on an Enterprise Vault server is in the registry (as explained in technote http://www.symantec.com/docs/TECH70211) at the following location - HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\KVS\Enterprise Vault\Install. Here you can see the main 'Version' installed and any CHF information in the relevant subkeys.

regkey_versions.jpg
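If you prefer to script the check, a quick PowerShell sketch along these lines reads the same information (value and subkey names may vary between releases):

# Read the main Enterprise Vault version from the registry
Get-ItemProperty 'HKLM:\SOFTWARE\Wow6432Node\KVS\Enterprise Vault\Install' | Select-Object Version

# List any Cumulative Hotfix subkeys beneath the Install key
Get-ChildItem 'HKLM:\SOFTWARE\Wow6432Node\KVS\Enterprise Vault\Install'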

In an Enterprise Vault version number, the first component represents the major release, the third component represents the service pack release, and the fourth component represents the specific build. For example:

10.0.4.1189 represents Enterprise Vault 10, Service Pack 4, Build 1189

11.0.0.1351 represents Enterprise Vault 11, Base, Build 1351

Additionally, for further information on installed CHFs, you can look at the file ...\Enterprise Vault\Installed Hotfixes\x.x.x\AppliedHotfixes.txt to get a complete history of when cumulative hotfixes were installed (and possibly uninstalled) for that particular x.x.x release on that server.

chf_versions.jpg

And if you really want to get deep into it, you can then drill down to the file ...\Enterprise Vault\Installed Hotfixes\x.x.x\x.x.x.x\Hotfix Info.txt to get precise details on the CHF: when it was installed, who installed it, and exactly which files were changed by the install.

chf_info.jpg
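Purely as a convenience, here is a hedged PowerShell sketch to dump all of those text files in one go (the install path is an assumption; substitute your own):

# Print every AppliedHotfixes.txt and Hotfix Info.txt under the Installed Hotfixes folder
Get-ChildItem 'C:\Program Files (x86)\Enterprise Vault\Installed Hotfixes' -Recurse -Include 'AppliedHotfixes.txt','Hotfix Info.txt' |
    ForEach-Object { "==== $($_.FullName) ===="; Get-Content $_.FullName }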

So, in summary, we hope we have made everything you could possibly want to know about a Cumulative Hotfix install available on the EV server itself, and that the above will help you find what you need.

How safe is your quantified self? Tracking, monitoring, and wearable technology


Self-tracking enthusiasts are generating a torrent of personal information through apps and devices. Is this data and information safe from prying eyes?

How safe is your quantified self? Tracking, monitoring, and wearable tech


Self-tracking enthusiasts are generating a torrent of personal information through apps and devices. Is this data safe from prying eyes?

NetBackup 7.6.0.3 (NetBackup 7.6 Maintenance Release 3) is now available!


I’m extremely happy to announce that NetBackup 7.6.0.3 is now Generally Available!

NetBackup 7.6.0.3 is our latest maintenance release for the NetBackup 7.6 line.  This release contains fixes for 336 issues (bringing our total to over 900!), including resolutions for the most commonly downloaded EEBs, customer escalations, and critical internally found defects. 7.6.0.3 also provides several critical security fixes and many high-demand proliferations to our customers. The significant content of this release is:

Over 200 customer-related defect fixes, including three 7.6.0.2 issues highlighted in the 7.6 Late Breaking News

Proliferations:

  • SharePoint/Exchange 2013 GRT
  • MSDP/Client Dedupe for Windows 2012 R2
  • SQL 2014
  • VDDK 5.5 U1

Features:

  • NBU Support Utility Updates
  • JRE update (7u51)

To download 7.6.0.3, please visit the following page:

NetBackup 7.6.0.3 Download Links
 http://symantec.com/docs/TECH217819

This is a MAINTENANCE Release for NetBackup (as opposed to a Release Update) - it can be applied on top of NetBackup 7.6 GA or 7.6.0.2.  (If you are currently running 7.0, 7.0.1, 7.1, 7.1.0.x, 7.5 OR 7.5.0.x, you will need to upgrade to 7.6 GA before you can apply 7.6.0.3.)

To check whether your particular Etrack is resolved in NetBackup 7.6.0.3, please refer to these Release Notes and our updated EEB guide:

NetBackup 7.6.0.3 Release Notes
 http://symantec.com/docs/DOC7221

Symantec NetBackup 7.6 Emergency Engineering Binary Guide
 http://symantec.com/docs/DOC6085

The NetBackup 7.6 Late Breaking News has also been updated to reflect newly released fixes in 7.6.0.3 for some of our highest-visibility issues:

NetBackup 7.6 Late Breaking News
 http://symantec.com/docs/TECH199999

Bookmark the NetBackup Product Landing Page to have these links and many more useful links handy:

 http://go.symantec.com/nb

Note: The next NetBackup Appliances release (2.6.0.3) should be available next Monday, 4 August.

 


bit.ly/76LBN | APPLBN | 75LBN

Perfect Forward Secrecy - Protecting the gateway to your world


Remember the movie "The Truman Show", in which Jim Carrey played a man who was initially unaware that his life was a constructed reality television show, broadcast around the clock to billions of people around the globe? Now imagine that your organisation is chronicled the same way: every online transaction, secured or not.

That's what Heartbleed can do.  Fortunately, most systems using OpenSSL libraries have (hopefully) been patched to counter this. But what if there were another way to achieve the same result? What if it could be happening right now, on a daily basis, and what if it were not a vulnerability at all, but simply how most clients have connected to organisations during SSL/TLS negotiations for the past decade?

First, have a look at how the SSL/TLS handshake works.

 

Consider this scenario:

A script kiddie downloads Wireshark and uses it to record network activity within your organisation. Entire transactions are captured, including SSL sessions.  Several years later, having gained much more experience, he obtains access to the servers and to the expired private key pairs that were once used to protect these sessions. The sessions were encrypted with the RSA key exchange. He emails the CSO: "I know what you did last summer".

 

OK, a bit too dramatic and over the top, but perfectly possible. This is the flaw (not a vulnerability) in using the RSA key exchange in SSL/TLS negotiations without proper key management. Because each session's keys can be recovered with the server's RSA private key, recorded sessions can be decrypted later if that key is ever obtained.

An alternative to the RSA key exchange is the Diffie-Hellman key exchange, which creates session keys that are not derived from the server's private key. Even if the session traffic is recorded, there is no easy way to reverse the computation. With a proper (ephemeral) Diffie-Hellman implementation, captured traffic cannot be deciphered in the future. This is called Forward Secrecy.
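As a quick, hedged check (assuming the OpenSSL command-line tool is available, and using example.org as a placeholder host), you can offer a server only ECDHE suites and see whether the handshake completes:

# If a cipher is negotiated, the server supports an ephemeral (forward-secret) key exchange
openssl s_client -connect example.org:443 -cipher ECDHE < /dev/null 2>/dev/null | grep -i "cipher"

If the output shows "Cipher is (NONE)" or the handshake fails, the server is falling back to a non-forward-secret key exchange for those clients.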

 

To see how Perfect Forward Secrecy can be achieved, ready your coffee, get your thinking cap on and start reading the document attached.

IT Analytics Needing credentials reset after Upgrade from 7.5 to 7.5SP1


After upgrading from SMP 7.5 to 7.5 SP1 on our production setup (which has SQL Server off-box), one niggle was that IT Analytics stopped working. All console users were presented with this error:

An error has occurred during report processing. (rsProcessingAborted)
Cannot impersonate user for data source 'ITAnalytics'. (rsErrorImpersonatingUser)
Log on failed. Ensure the user name and password are correct. (rsLogonFailed)

After rechecking our settings in the console, we couldn't understand the root cause. Logging in directly to the SQL server and trying to access the reports through the report url, http://localhost/reports, resulted in the same error.

After playing a bit with permissions and getting nowhere, I found this tech article, "TECH213502: Error accessing IT Analytics reports after upgrade to 7.5" which seems to cover the ground required to fix this.

What I did to resolve was,

  1. In the SMP console, re-confirm the authentication settings under "IT Analytics Settings" -> Configuration.
    We don't have Kerberos configured here and use the "Stored Credential" authentication type, so I just clicked the pencil to reset the service user name and password, and then clicked "Save Security Settings" to finish.
     
  2. On the SQL Server, open up the report server url (http://localhost/reports)  in Internet Explorer. Navigate to the "IT Analytics" folder and re-enter the stored credentials in the "IT Analytics" and "CMDB" database objects.

Instantly, the reporting kicked back into life. If anyone can shed some light on why this was required, please do!

 

Russian ransomware author takes the easy route


How to check the drivers installed on a computer


InstalledDriversList is a portable NirSoft program that lets you view a detailed list of the drivers installed on your system.

As shown in the following image, for each driver it displays: driver name, display name, description, startup type, driver type, group, file name, size, creation/modification date, and file version information.

InstalledDriversList.jpg

The program is available in both 32-bit and 64-bit versions. InstalledDriversList is also available in several languages (note: at the moment an Italian translation is not yet available).

Operating systems: Windows XP, Windows Vista, Windows 7, Windows 8, Windows 8.1 (x86 and x64)

License: Freeware

Link: InstalledDriversList
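If you prefer a scriptable alternative to a GUI tool, a rough PowerShell equivalent (only a sketch, and not a replacement for the detail InstalledDriversList provides) might be:

# List installed system drivers with their startup type, state and file path
Get-WmiObject Win32_SystemDriver |
    Select-Object Name, DisplayName, Description, StartMode, State, PathName |
    Sort-Object Name | Format-Table -AutoSize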

Fortune, Boston Globe, and More Highlight Symantec Corporate Responsibility Efforts


From Fortune to The Boston Globe to top sustainability media site Triple Pundit, over the past month, Symantec has been highlighted in various prominent media outlets and reports for our corporate responsibility programs and efforts. Below we bring you highlights of a few of these, which cover a range of issues core to our CR strategy including gender diversity, cyber security skills training, and corporate volunteering.

Fortune features launch of the Symantec Cyber Career Connection (SC3)

Fortune.JPG

On June 24, 2014, Symantec officially launched its new signature initiative, the Symantec Cyber Career Connection (SC3), at the Clinton Global Initiative America meeting in Denver, CO. SC3 aims to address the global workforce gap in cybersecurity by training and certifying young adults in cybersecurity and assisting them in landing meaningful internships and jobs.

Fortune featured the launch in a June 24th article, "Symantec tries to fill shortage of cybersecurity workers with $2 million donation," highlighting the shortage of cybersecurity professionals and how the SC3 program will help fill this gap by training underserved youth in crucial cyber security skills.

“We are affected by the skills shortage as much as any company,” says Aled Miles, a senior vice president at Symantec. “We’re constantly on the lookout for early talent. It affects us and it affects our customers.”

Boston Globe Highlights Symantec Service Corps

Boston Globe.JPG

In a feature in The Boston Globe on corporate volunteerism, the Symantec Service Corps’ Peru project was named by Pyxera Global as one of a growing number of pro bono corporate service programs offering employees a chance to apply their skills abroad, and organizations a chance to gain valuable support they otherwise may not be able to afford.

In all, 39 companies worldwide have started some form of corporate service program, according to Pyxera Global, a Washington, D.C., nonprofit that connects companies with international pro bono projects that fit their corporate objectives. Accenture workers helped connect retailers selling food and personal hygiene products with saleswomen in rural Bangladesh; Symantec Corp. employees created a marketing strategy for a Peruvian nonprofit that works with victims of domestic violence; in August, employees from IBM and Dow Chemical Co. will team up to improve water quality and sanitation in Ethiopia.

By the end of the year, more than 9,000 employees will have participated in one of these projects in 88 countries, according to Pyxera data from 26 of the 39 companies — up from just a few hundred employees in five countries eight years ago.

Symantec Featured in UK Digital Skills Taskforce Launch Video and Report

The UK Digital Skills Taskforce is an independent taskforce of leading experts developed to highlight practical solutions that enable UK business to meet persistent skills gaps by identifying, developing and using home-grown talent. The taskforce conducted research with hundreds of organizations to understand how best to develop local talent to meet future occupational needs in the UK. The results were published in the taskforce’s report Digital Skills for Tomorrow's World.

Symantec’s Reading office hosted an event for the report, contributed expertise, and was one of few organizations highlighted in the report and the report’s launch video. In the video and report, Director of Security Strategy at Symantec, Sian Johnson, emphasizes that “people just don’t realize how great and varied a career there is in technology or in digital, how much you need digital skills to work in any industry now.” Additionally, she discusses that people often have a preconceived image of a technology job, but don’t realize how varied technology career paths can really be.

UK Digital Skills TaskForce.JPG

Triple Pundit Series: Symantec Corporate Responsibility and Internet Security

Triple Pundit.JPG

Top sustainability media site Triple Pundit published an article on cloud security, “Storing Data in the Cloud: How Safe is It?,” as the latest installment of the Triple Pundit Symantec series on CR and Internet security. The article reviews the security risks associated with the growth in cloud adoption and smart connected devices, and the need for cybersecurity from vendors such as Symantec. A quote from Symantec VP of Corporate Responsibility Cecily Joseph highlights information protection as central to corporate responsibility.

Data security and protection, as well as assuring personal privacy, is inherently an issue of corporate social responsibility (CSR). As Cecily Joseph, Symantec vice president of Corporate Responsibility, told 3p: “At Symantec, we enable people and businesses to enjoy the connected world by protecting their most important assets – their memories and data.

Symantec considers the protection of information – whether it’s in the cloud, on your mobile or desktop – central to the responsibility of corporations in this digital age. Our customers trust us with the data they capture, share and save online. Trust is at the heart of the relationships we cultivate, and the responsibility we have to our customers, partners and communities.”

Blog for Female Professionals Calls Out Symantec for Diversity Leadership

VitaminW, a blog for delivering thoughtful news to professional women, recently ran an article outlining the steps technology companies should take to address gender diversity. Symantec is cited as an example of a company that makes diversity a priority among its leadership, and Symantec’s Cecily Joseph is quoted on how the company pursued its goal for bringing more women on Symantec’s Board of Directors:

Make it a Top Priority

Research shows companies serious about gender equity make it a company priority with the message coming from the CEO.  A recent example involves Symantec’s decision to set a 30 percent goal for the number of women on their board of directors, acknowledging that a diverse workforce is essential to exceptional products and performance.

Cecily Joseph, VP of Corporate Responsibility stated, “[Symantec's Board of Directors] directed a search firm to include women candidates and expanded criteria beyond CEO, Board experience and our networks to focus on candidates with international, governmental and financial expertise and who could bring both age and cultural diversity to the team.” In a relatively short time two candidates were identified and Symantec met its goal. 

We're always excited to share some of the work we're doing here at Symantec, and always love to hear from you! Comment here, or reach us at CR@Symantec.com.

 

Lora Phillips is Senior Manager, Global Corporate Responsibility, at Symantec.

Making the Case for an Agile Data Center

If You Don’t Have Agility in Your Data Center, Will Your Business exist in 10 Years?

According to Forbes, 70% of organizations that were in the Fortune 1,000 ten years ago have vanished.  The natural question is why did this happen?  The simple answer is business agility.  These organizations were not able to adjust to changes in the business environment and were eventually crushed by their competitors or by changing trends in the market.  

Look at Blockbuster, for example.  In 1982, Blockbuster was the leader in video rental with over 2,800 stores worldwide, and by its peak in the early 2000s it had grown to over 10,000 stores across the globe.  Then the business environment changed.  In 1997, a startup called Netflix launched with a completely different business model.  Instead of opening brick-and-mortar stores that people could rent videos from, Netflix mailed DVDs to its customers and didn't bother to charge late fees.  Netflix customers could keep the videos as long as they wanted, while Blockbuster continued to charge fees.  And as content delivery shifted to online and streaming video, Netflix was agile enough to change its business model, while Blockbuster resisted.  In 2010 Blockbuster filed for bankruptcy, and it has since announced plans to close all of its stores, while Netflix has become the biggest source of streaming web traffic.1

What lessons can be learned from Netflix and Blockbuster?  And how does this relate to the data center? 

If we think about the Blockbuster example, it is clear that organizations need to be agile and adjust to trends, competitors, regulation and business shifts in order to survive and thrive.  It is also clear that IT is becoming more and more of a requirement for the business, and arguably the key driver.  Therefore, it is easy to conclude that if business has to be agile, IT must be agile as well to meet the demands of the market. 

Organizations are already recognizing the need for agility.  According to CIO Magazine2, the top five technology initiatives for 2014 are:

  1. Improve the use of data and analytics to improve business decisions and outcomes
  2. Identify new ways IT can better support business/marketing objectives
  3. Improve IT project delivery performance
  4. Develop new skills to better support emerging technologies and business innovation
  5. Reorganize or retrain IT to better align with business outcomes and drive innovation

To help us define how the transformation to agility in the data center will take shape, we need to ask a few important questions:

First, the business:

  • Can we move fast enough to keep up? Can we move faster than the market so that we’re always a step ahead?
  • Are we prepared for disruptive events?
  • Can we contain costs within the context of shrinking IT budgets?

Next, the IT infrastructure:

  • What are the consequences of recovery? Can we recover within the right RTOs and RPOs? What about recovery requirements for different locations?
  • Are our data/applications/connections/servers secured against today’s complex threats?
  • Can we easily make changes to our IT systems and resources? Will these changes have any consequences to the way IT runs?

Finally, data:

  • Who is using what data? What are they using it for? And for how long?
  • Where is the data that the business needs? The data is normally there, but it’s as good as useless if we can’t find it.
  • Do we have the right controls in place? That’s a very important consideration in an era of increased oversight and regulation.

There are three major trends converging to create data center agility.  First, there is resource elasticity, which is often delivered in the form of virtualization. Next is the capability to deliver IT as a service to the business, allowing users to get what they need, when they need it, at the right price, whether it is on-prem or in the cloud.  The emergence of these two trends (virtualization and IT service delivery) is what many refer to as the Software-Defined Data Center.  However, to truly be agile a third component is required - IT intelligence.  IT intelligence allows organizations to make the right decisions about how to manage, secure and protect their data and applications in an effective, but flexible way.

Agility sits at the center of all of these trends.  It is important because the business that does not have agility will probably not survive.  And now, with the importance of IT in relation to the business, IT by default must be agile. 

So how can Symantec help you become more agile so your business thrives like Netflix and doesn’t disappear in the next 10 years?  Symantec provides offerings that will keep applications and data highly available and protected, allow you to scale your storage as your business requires, recover your data when necessary and keep everything secure.  With the right technology in place, you have the flexibility you need to become more competitive in your industry by taking the risk out of being agile.

There are many use cases that can be applied to agility.  One good example is how Symantec is providing features such as Flexible Storage Sharing (FSS) and SmartIO in the Storage Foundation offering to help IT organizations unlock the true value of adopting Solid State Drives (SSDs).  SmartIO allows organizations to use a portion of their SSDs as a cache to increase performance.  In our testing, we have seen 400% performance increases over traditional SANs, because SmartIO has the intelligence to understand how frequently a given data set is being accessed and then cache it in the SSD.  Flexible Storage Sharing provides a global namespace across up to 8 nodes.  This allows organizations to create a “shared nothing” architecture, meaning they can use commoditized DAS on the back end instead of expensive SAN infrastructure.  We have seen cost savings of up to 80% using FSS.  (The method and published results from our testing are available in the white paper, “Running Highly Available, High Performance Databases in a SAN-Free Environment.”)

But why is this use case important in the context of an agile data center?  It is important because we are enabling organizations to seamlessly adopt new technologies faster and at lower costs.  Since IT is the driver of business, it only makes sense that competitive business advantage, in the form of agility, will be initiated from the digital realm, such as the data center or the cloud.   

If you want to understand how Symantec’s offerings help organizations capture the agility to thrive, please check out the Agile Data Center page.  For other blogs discussing the Agile Data Center check out What is an Agile Data Center? and Learning on our dime: lessons from the largest software-defined data center in the world.

For detailed product information please go to the following product pages:

1. TechCrunch, A Look Back at How the Content Industry Almost Killed Blockbuster and Netflix (And the VCR)

2. ‘State of the CIO’ Survey, Exclusive Research from CIO Magazine, 2014

[PowerShell] MAPI Properties through EWS (Exchange Web Services)


For troubleshooting, we sometimes need to dive into MAPI properties.
To do this, we usually use Outlook Spy or MFCMAPI.

They both work great and let you debug MAPI properties in a GUI.
A third approach is to use EWS (Exchange Web Services) through PowerShell.

The benefit of using PowerShell to view MAPI properties is that we can work with the data easily.

These are the use cases from my mailbox.

  • Disk space saved by replacing emails with shortcuts.
PS>$return_object | %{if($_.OriginalSize -gt $_.Size){$_.OriginalSize -  $_.Size}} | Measure-Object -Sum -Maximum|ft Count,Sum,Maximum -AutoSize

Count     Sum Maximum
-----     --- -------
  122 5324215  551692

A total of 5324215 bytes was saved, and the largest single saving was 551692 bytes.

  • Average number of days before Enterprise Vault archived the emails.
PS> $return_object | %{($_.ArchivedDate - $_.DateTimeReceived).Days} | Measure-Object -Average|format-table Count,Average -autosize

Count          Average
-----          -------
  122 25.5081967213115
  • When did the archiving run?
PS > $return_object|Group-Object {($_.ArchivedDate).toShortDateString()}|Sort-Object name -Descending|ft Name,Count  -AutoSize

Name       Count
----       -----
2014/05/08    17
2014/05/07    70
2014/05/06    23
2014/05/05    11
2014/05/02     1
2014/05/01     7
2014/04/30    12
  • Which emails were archived on 2014-05-07?
PS > $return_object | where-object {(($_.ArchivedDate -gt (get-date 2014-05-07)) -and ($_.ArchivedDate -lt (get-date 2014-05-08)))} | format-table DateTimeReceived,ArchivedDate,Sender,Subject -autosize


DateTimeReceived    ArchivedDate        Sender            Subject
----------------    ------------        ------            -------
2014/04/09 21:55:25 2014/05/07 9:43:01  Christopher Loak  New Lab budget
2014/04/09 22:00:59 2014/05/07 9:43:01  Scott Mathews     Feature News
2014/04/09 22:12:18 2014/05/07 9:43:02  Ted Johnson       Re:New Lab budget
..

Once you have the data from EWS, you can sort or filter by any property you like.
Other possible use cases are:

  • Search for emails with a specific MAPI property set to a specific value.
  • Loop through a folder and watch whether any property has changed.

 

Let's go through how this is done.

First you need to install "Microsoft Exchange Web Services Managed API".
http://www.microsoft.com/en-us/download/confirmation.aspx?id=35371

Import the Microsoft.Exchange.WebServices.dll to your PowerShell session.

PS>Import-Module -Name "C:\Program Files\Microsoft\Exchange\Web Services\2.0\Microsoft.Exchange.WebServices.dll"

Then we connect to the Exchange service.
We have to specify the version of Exchange you are connecting to.
If you do not know which version, see this article, or you can try your luck (there are only five versions that support EWS).
http://office.microsoft.com/en-001/outlook-help/determine-the-version-of...

 Exchange2007_SP1
 Exchange2010
 Exchange2010_SP1
 Exchange2010_SP2
 Exchange2013

PS>$exchService = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService([Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP2, [System.TimeZoneInfo]::Local)

Use the default credentials to connect to the Exchange server and use your email address to auto-discover the EWS endpoint.

PS > $exchService.UseDefaultCredentials = $true 
PS > $exchService.AutodiscoverUrl("YOUR@EMAIL_ADDRESS")

This part creates the search filter criteria.
The example matches any items whose ItemClass is "IPM.Note.EnterpriseVault.Shortcut".

PS > $searchFilterCollection = New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+SearchFilterCollection([Microsoft.Exchange.WebServices.Data.LogicalOperator]::Or)
PS > $searchFilter1 = New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo([Microsoft.Exchange.WebServices.Data.ContactSchema]::ItemClass,"IPM.Note.EnterpriseVault.Shortcut")
PS > $searchFilterCollection.add($searchFilter1)

This part creates a property set containing the MAPI properties you are interested in.
Archived Date is a custom MAPI property created by Enterprise Vault, so we need to specify its GUID, name and type.

PS > $prArchiveDate = new-object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition([GUID]"D0F41A15-9E91-D111-84E6-0000F877D428","Archived Date",[Microsoft.Exchange.WebServices.Data.MapiPropertyType]::SystemTime)
PS > $prArchiveDatePropertySet = new-object Microsoft.Exchange.WebServices.Data.PropertySet($prArchiveDate )
PS > $customPropSet = new-object Microsoft.Exchange.WebServices.Data.PropertySet($prArchiveDatePropertySet)

Add any other properties of interest.

PS > $customPropSet.add([Microsoft.Exchange.WebServices.Data.ItemSchema]::Subject)
PS > $customPropSet.add([Microsoft.Exchange.WebServices.Data.ItemSchema]::DateTimeReceived)
PS > $customPropSet.add([Microsoft.Exchange.WebServices.Data.EmailMessageSchema]::Sender)

This part creates a view for the query results from EWS.
It specifies that only 50 items are returned per query, so that an unexpectedly large number of items is not returned at once.
It also sets the custom property set created earlier and sorts by DateTimeReceived.

PS > $itemView = new-object Microsoft.Exchange.WebServices.Data.ItemView(50,0,[Microsoft.Exchange.WebServices.Data.OffsetBasePoint]::Beginning)
PS > $itemView.Traversal = [Microsoft.Exchange.WebServices.Data.ItemTraversal]::Shallow
PS > $itemView.PropertySet=$customPropSet
PS > $itemView.OrderBy.add([Microsoft.Exchange.WebServices.Data.ItemSchema]::DateTimeReceived,[Microsoft.Exchange.WebServices.Data.SortDirection]::Ascending)

The last part does the actual search.
EWS returns only 50 items per query, so we loop through the pages until we have all the results.
Each result is put into a custom object, which makes sorting and filtering in PowerShell easier.

PS > $return_object =@()

PS > do
     {
      $FindItems = $exchService.FindItems([Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Inbox,$searchFilterCollection,$itemView)

      foreach ($eItems in $FindItems.Items){

            $props = @{ DateTimeReceived  = $eItems.DateTimeReceived
                        ArchivedDate      = $eItems.ExtendedProperties[0].Value;
                        Sender            = $eItems.Sender.Name;
                        Subject           = $eItems.Subject
                        }
      
            $return_object += New-Object -TypeName PSCustomObject -Property $props
      }

        $itemView.Offset= $itemView.Offset + $itemView.PageSize

}while ($FindItems.MoreAvailable)

At last, we have the results in $return_object.
With this data, we can filter, sort and count as we like.

PS > $return_object | where-object {(($_.ArchivedDate -gt (get-date 2014-05-07)) -and ($_.ArchivedDate -lt (get-date 2014-05-08)))} | format-table DateTimeReceived,ArchivedDate,Sender,Subject -autosize

Here is the complete script:

Import-Module -Name "C:\Program Files\Microsoft\Exchange\Web Services\2.0\Microsoft.Exchange.WebServices.dll"
$exchService = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService([Microsoft.Exchange.WebServices.Data.ExchangeVersion]::YOUR_EXCHANGE_VERSION, [System.TimeZoneInfo]::Local)

$exchService.UseDefaultCredentials = $true 
$exchService.AutodiscoverUrl("YOUR@EMAIL_ADDRESS")

$searchFilterCollection = New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+SearchFilterCollection([Microsoft.Exchange.WebServices.Data.LogicalOperator]::Or)
$searchFilter1 = New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo([Microsoft.Exchange.WebServices.Data.ContactSchema]::ItemClass,"IPM.Note.EnterpriseVault.Shortcut")

$searchFilterCollection.add($searchFilter1)

$prArchiveDate = new-object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition([GUID]"D0F41A15-9E91-D111-84E6-0000F877D428","Archived Date",[Microsoft.Exchange.WebServices.Data.MapiPropertyType]::SystemTime)
$prArchiveDatePropertySet = new-object Microsoft.Exchange.WebServices.Data.PropertySet($prArchiveDate )
$prOriginalSize = new-object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition([GUID]"D0F41A15-9E91-D111-84E6-0000F877D428","Original Size",[Microsoft.Exchange.WebServices.Data.MapiPropertyType]::Integer)
$prArchiveDatePropertySet.add($prOriginalSize)
$customPropSet = new-object Microsoft.Exchange.WebServices.Data.PropertySet($prArchiveDatePropertySet)

$customPropSet.add([Microsoft.Exchange.WebServices.Data.ItemSchema]::Subject)
$customPropSet.add([Microsoft.Exchange.WebServices.Data.ItemSchema]::DateTimeReceived)
$customPropSet.add([Microsoft.Exchange.WebServices.Data.EmailMessageSchema]::Sender)

$itemView = new-object Microsoft.Exchange.WebServices.Data.ItemView(50,0,[Microsoft.Exchange.WebServices.Data.OffsetBasePoint]::Beginning)
$itemView.Traversal = [Microsoft.Exchange.WebServices.Data.ItemTraversal]::Shallow
$itemView.PropertySet=$customPropSet
$itemView.OrderBy.add([Microsoft.Exchange.WebServices.Data.ItemSchema]::DateTimeReceived,[Microsoft.Exchange.WebServices.Data.SortDirection]::Ascending)

$return_object =@()

do
     {
      $FindItems = $exchService.FindItems([Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Inbox,$searchFilterCollection,$itemView)

      foreach ($eItems in $FindItems.Items){

            $props = @{ DateTimeReceived  = $eItems.DateTimeReceived
                        ArchivedDate      = $eItems.ExtendedProperties[0].Value;
                        Sender            = $eItems.Sender.Name;
                        Subject           = $eItems.Subject;
                        Size              = $eItems.Size;
                        OriginalSize      = $eItems.ExtendedProperties[1].Value;
                        }
      
            $return_object += New-Object -TypeName PSCustomObject -Property $props
      }

        $itemView.Offset= $itemView.Offset + $itemView.PageSize

}while ($FindItems.MoreAvailable)

$return_object | where-object {(($_.ArchivedDate -gt (get-date 2014-05-07)) -and ($_.ArchivedDate -lt (get-date 2014-05-08)))} | format-table DateTimeReceived,ArchivedDate,Sender,Subject -autosize

Phishers' fake gaming app nabs login information

Trojan.Backoff: Support Perspective


Security Response is aware of an alert from US-CERT regarding a threat they are calling Backoff. This threat family is reported to target Point of Sale (PoS) machines, logging keystrokes and scraping memory for data (such as credit card information) and then exfiltrating that data to the attacker.

Symantec Security Response is currently investigating this threat family and is working to obtain the samples that were mentioned in the IOC section of the CERT alert. All detections for threat files have been, or will be, mapped to: Trojan.Backoff

Detection information:
AV:      Trojan.Backoff – available in RR def 20140731.025 (156267)
IPS:     Under investigation
XPE:    W32/Trojan3.JRS

Information on US-CERT alert:
The impact of a compromised PoS system can affect both businesses and consumers by exposing customer data such as names, mailing addresses, credit/debit card numbers, phone numbers, and e-mail addresses to criminal elements. These breaches can impact a business’ brand and reputation, while consumers’ information can be used to make fraudulent purchases or put bank accounts at risk of compromise. It is critical to safeguard your corporate networks and web servers to prevent any unnecessary exposure to compromise or to mitigate any damage that could be occurring now.
 
SEP for XP Embedded:
SSEP 5.1 for XPE is reaching its End of Support Life on 10/15/2014 but is still in use on a significant number of PoS devices around the world.  This legacy product only updates its definitions once per week, and uses a definition set and naming convention different from SEP 12.1.
We recommend that customers still using SSEP 5.1 for XPE migrate to SEP 12.1 for more complete coverage.
Additional reading:
•    US-CERT Alert:
http://www.us-cert.gov/ncas/alerts/TA14-212A

•    Windows XP Embedded Support in Symantec Endpoint Protection (SEP) 11 vs. Sygate Symantec Endpoint Protection (SSEP) 5.1
http://www.symantec.com/docs/TECH91152

•    Symantec Endpoint Protection support for embedded operating systems
http://www.symantec.com/docs/TECH106027

How Many People Profit From Stolen Credit Cards?

The Underground Economy, Pt. 3

We all know that credit card numbers and verification credentials are among the most common kinds of information stolen in data breaches from individuals and organizations, but what happens to this information, and who actually benefits from this type of theft? Symantec takes a closer look at the profits gained from stolen credit cards.


NetBackup: The true scale-out backup solution for VMware vSphere workloads on Nutanix

Nuts and bolts in NetBackup for VMware vSphere

How to protect VMware vSphere environments hosted on Nutanix using NetBackup

Changing to SSL and Enterprise Vault FSA


I’m all for making systems more secure and robust, and one of the changes that is often implemented ‘after deployment’ is switching Enterprise Vault to use HTTPS rather than plain old HTTP.  The problem with this is that it can break existing archived content, and in this article I’ll talk about placeholders with FSA.

A placeholder in FSA contains a ‘link’ back to the original item in the archive.  A small utility called fsutil can be used to view the contents of the placeholder, as we see here:

2014-07-28_10h06_43.png

What this means is that if you change Enterprise Vault (and IIS) to use HTTPS, these placeholders won’t work, because they were built using HTTP.

The Cure?

The cure for this is to recreate the placeholders. Fortunately FSAUTILITY has an option to do that, the -c switch.
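As a hedged illustration only (the target-path switch below is an assumption, and the exact syntax varies by Enterprise Vault version, so check the FSAUTILITY documentation for your release before running it):

REM Recreate the archive placeholders under the given share so they are rebuilt against the new HTTPS URL
FSAUTILITY -c -s \\FILESERVER\SHARE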

Have you ever changed from HTTP to HTTPS and recreated the shortcuts? Let me know in the comments below.

Performing Incident Response using Yara


yara.png

Yara is a tool that Symantec uses on incident response engagements to help us respond quickly and triage hosts while our security team is preparing signature updates for our affected clients. Yara is a very popular tool among security researchers, as it is a flexible way to classify and discover malware through hunting and gathering techniques.

In a live response situation the malware we find is usually only running in memory, with little to no disk artifacts. Yara is perfect for deploying across an enterprise and scanning processes running in memory or files residing on disk. For an incident responder, time is of the essence: customers are worried about losing intellectual property, the customer’s security and IT teams are walking on eggshells, and the need to find evil fast is of the utmost importance.

The idea is to create a Yara rule based on prominent strings in the malicious code, and then start testing the rule to make sure there are positive matches. Below is a screenshot of some of the human-readable strings from a sample case. There are some very useful strings here, and I have highlighted the ones that might be good for a first-round attempt at finding the malicious code on a suspect endpoint.

image1.png

Here is a very simple sample rule, following the guidance in the Yara manual.

image2.png
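In case the screenshot is hard to read, a minimal rule of the same shape (with purely hypothetical strings standing in for the highlighted ones) would look like this:

rule Suspected_Backdoor_Strings
{
    strings:
        $s1 = "cmd.exe /c netstat -an" ascii   // hypothetical string from the sample
        $s2 = "POST /gate.php" ascii           // hypothetical string from the sample
        $s3 = "secretlog.dat" ascii            // hypothetical string from the sample
    condition:
        2 of ($s*)
}

Requiring two of the three strings, rather than all of them, keeps the rule tolerant of minor variants while still reducing false positives.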

Looking at the signature above, you can see that these are strings that might reside in other samples, but not all of them. Picking the wrong string combination can lead to false positives. There are a great many resources available from the “Yara Exchange Community”, including help with generating and testing signatures on shared malware repositories. Below is a sample scan with the above signature on two malicious DLLs that are from the same malware family.

imagereplacement.png

If you want to take a shortcut, there are Yara signature generators out there, and some of them do a pretty good job. If during an IR engagement I have a bunch of different samples, then I opt for the fastest way to generate signatures to get the containment strategy moving faster. If you are finding a high number of false positives with your signature, then there are other options, such as using function bytes or regular expressions. In the next installment of this blog series I will illustrate which functions in a particular piece of malware would be appropriate to use as a byte signature, and then discuss wildcarding.

Live Response vs. Traditional Forensics


liveresponse.png

The term live response is being heard more and more frequently, but what exactly is it, and how does it differ from traditional forensics?

Live response and traditional forensics have a lot in common in that they are both looking for similar artifacts on a system. The differentiator with live response is that the artifacts are being discovered on a live, running system against an active adversary. With traditional forensics, images are taken of volatile memory and disks before being analyzed.  Imaging alone can take hours, and then the images need to be processed and indexed to allow for keyword searches. Obtaining and processing the image can easily take a day or longer with large-capacity disks. With live response there is no imaging or processing that has to occur; everything is real time. This dramatically improves the response time in identifying and quantifying a threat, and the quicker the threat is identified, the quicker it can be contained and remediated.

A typical live response scenario is a response to an immediate and active threat.  Many times the details of the threat are unknown, so the first priority is identifying and quantifying it. Using live response memory analysis techniques, we can quickly pull a process listing showing what processes are running and begin identifying suspicious ones. Other common artifacts that we can look for in memory are suspicious mutexes; it is common for a malicious mutex to be a string of random characters, similar to Zeus domain names. Once we have identified the suspicious process, a sample of the code is pulled from running memory, and analysis of the malware and creation of IOCs can begin. Of course, an advantage to pulling the code from memory as opposed to from disk is that it is unencrypted and unpacked, so no special processing is required; all of this work can easily be completed before images would even be gathered using a traditional forensic approach.

It’s not just volatile memory: other information such as prefetch files, registry keys, open network connections and system accounts can also be gathered almost instantly using live response.

The key to using live response successfully is being very specific and focused on what to examine. When large files such as the registry or the $MFT are transferred, things slow down dramatically. For example, instead of pulling back the entire $MFT, focus on specific locations where malware is commonly found, such as %APPDATA% (C:\Users\<username>\AppData\Roaming), and instead of pulling back the complete registry, start by looking at the keys commonly used for persistence, such as HKLM\Software\Microsoft\Windows\CurrentVersion\Run.
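As a rough illustration of that focused approach (a sketch only, not a substitute for proper live response tooling, and assuming administrative rights on the suspect host):

# Quick process listing with image paths, for eyeballing anything unusual
Get-Process | Select-Object Name, Id, Path | Sort-Object Name | Format-Table -AutoSize

# Check the most common persistence location in the registry
Get-ItemProperty 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Run'

# Recently written executables under users' AppData\Roaming folders
Get-ChildItem "$env:SystemDrive\Users\*\AppData\Roaming" -Recurse -Include *.exe,*.dll -ErrorAction SilentlyContinue |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 20 FullName, LastWriteTime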

Traditional forensics will always be needed to provide in-depth analysis identifying how the malware got onto the system and what activities took place while it was active. Where live response excels is in quickly identifying and containing an active threat.  The quicker we can identify the threat, the quicker containment and remediation can take place.

There are many open source and commercial tools available for live response; for insight into one of them, check out my colleague Trent Healy’s post on Yara.

Enhancing Apache Logging For Improved Forensic Capability Part II: Implementing Enhanced Apache Logging


apache2.png

In the previous installment we examined default Apache logging. Now let's pump up the default Apache combined log format in order to supercharge forensic capability! We'll utilize the "LogFormat" directive in order to define the "enhanced" log format within the /etc/apache2/apache2.conf configuration file:

LogFormat "%{[%a %D @ %I:%M:%S.}t%{msec_frac}t %{%p %Z]}t [%h (%{X-Forwarded-For}i) > %v:%p] [%H %m \"%U\" \"%q\" %I > %>s %D %O %k %L] \"%{Referer}i\" \"%{User-Agent}i\" %{USER}C %{SESSIONTRACK}C" enhanced

A sample Apache enhanced log format entry looks a little something like this:

[Wed 07/30/14 @ 10:45:59.420 PM CDT] [10.1.1.101 (-) > 192.168.1.1:80] [HTTP/1.1 GET "/example.html" "?foo=bar" 666 > 200 295 999 0 -] "http://192.168.1.1/from.html" "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0" jdoe 09162009

Jeepers Creepers! That looks like even more gibberish than last time, so let's break down the enhanced log format step by step:

  • The %{[%a %D @ %I:%M:%S.}t%{msec_frac}t %{%p %Z]}t component logs the time in a more intuitive format using strftime conversion specifications and embedded Apache tokens. The day of the week is now included, the time is now specified in 12-hour format with millisecond precision, and the time zone is now specified by abbreviation. Note that all production servers should be synchronized with a Network Time Protocol (NTP) server in order to ensure consistent time settings across the enterprise. In the example log entry this value is "[Wed 07/30/14 @ 10:45:59.420 PM CDT]".

  • The [%h (%{X-Forwarded-For}i) > %v:%p] component logs the source and destination of the request. The %h directive specifies the client IP address, and the %{X-Forwarded-For}i directive specifies the underlying client IP address for proxied requests. Note that the value of the "X-Forwarded-For" header could be spoofed by the client. In the example log entry these values are "10.1.1.101" and "-", respectively. The %v directive specifies the server name or IP address, and the %p directive specifies the server port. The server port is useful to determine whether a request was transmitted over an HTTP or SSL network connection. In the example log entry these values are "192.168.1.1" and "80", respectively.

  • The [%H %m \"%U\" \"%q\" %I > %>s %D %O %k %L] component logs details regarding the request and response. The %r directive from the combined log format has been split into individual elements for easier sorting and searching functionality. The %H directive logs the request protocol. Uncommon protocols such as "HTTP/1.0" could indicate automated scanning tools or targeted attacks. In the example log entry this value is "HTTP/1.1". The %m directive logs the request method. Uncommon methods such as "PUT" could indicate automated scanning tools or targeted attacks. In the example log entry this value is "GET". The %U directive logs the requested URL path. Uncommon URL paths such as "/admin.html" could indicate automated scanning tools or targeted attacks. In the example log entry this value is "/example.html". The %q directive logs the query string, which can contain a wealth of useful forensic information. Common attacks such as Cross-Site Scripting (XSS) and SQL Injection can be identified by indicative attack strings within the URL such as ' or 1=1 or <script>. In the example log entry this value is "?foo=bar". The %I directive logs the total number of bytes received from the client, including headers and the request itself. An unusually high number of bytes received could indicate certain types of attacks such as buffer overflows. In the example log entry this value is "666". The %>s directive logs the status code of the request. Uncommon status codes such as "405" (i.e., "Method Not Allowed") could indicate automated scanning tools or targeted attacks. In the example log entry this value is "200". The %D directive logs the number of microseconds taken to serve the request. Unusually long times could indicate certain types of attacks such as successful time-based SQL injection. In the example log entry this value is "295". The %O directive logs the total number of bytes sent by the server, including headers and the response itself. An unusually high number of bytes could indicate certain types of attack such as successful SQL injection. In the example log entry this value is "999". The %k directive logs the number of keepalive requests processed on the connection. Automated scanners do not typically utilize keepalive requests. In the example log entry this value is "0". Finally, the %L directive logs the request log identifier from the error log. This identifier can be used to correlate the request with error log entries within the /var/log/apache2/error.log logfile. In the example log entry this value is "-".

  • The %{Referer}i directive logs the "Referer" header sent by the client. Note that the value of the "Referer" header could be spoofed by the client. In the example log entry this value is "http://192.168.1.1/from.html".

  • The %{User-Agent}i directive logs the "User-Agent" header sent by the client. Note that the value of the "User-Agent" header could be spoofed by the client. In the example log entry this value is "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0".

  • The %{USER}C directive logs the "USER" cookie. The application server would set this cookie to contain the username of the authenticated user after form-based authentication. Note that the value of the "USER" cookie could be spoofed by the client. However, the application server could verify the "USER" cookie value sent with each request. If the cookie value does not match the authenticated user correlated with the submitted session identifier value then a deterministic status code such as "403 Forbidden" could be returned to the client. Consequently a "200 OK" status code would confirm a valid "USER" cookie value. The identity of the authenticated user can be extremely useful during a forensic investigation. In the example log entry this value is "jdoe".

  • The %{SESSIONTRACK}C directive logs the "SESSIONTRACK" cookie. The application server would set this cookie to contain an eight-digit identifier in order to track requests throughout the duration of a session. Note that the value of the "SESSIONTRACK" cookie could be spoofed by the client. However, the application server could verify the "SESSIONTRACK" cookie value sent with each request. If the cookie value does not match the tracking identifier correlated with the submitted session identifier value then a deterministic status code such as "403 Forbidden" could be returned to the client. Consequently a "200 OK" status code would confirm a valid "SESSIONTRACK" cookie value. Tracking requests throughout the duration of a session can be extremely useful during a forensic investigation. However, it is important to note that the "SESSIONTRACK" tracking identifier is not the session identifier itself (e.g., "JSESSIONID") and cannot be utilized to resume a session. If the session identifier itself was logged then attackers could leverage a compromised logfile in order to hijack authenticated sessions. Therefore, session identifiers and other sensitive security tokens should never be logged. In the example log entry this value is "09162009".

As you can see, the enhanced log format captures a wealth of information that would be extremely useful during a forensic investigation. Obviously additional information means larger logfiles, but disk space is cheap and the benefits of extended logging certainly outweigh the nominal increase in resource consumption. All that's left now is to enable the enhanced log format within the /etc/apache2/sites-available/000-default.conf and /etc/apache2/sites-available/default-ssl.conf configuration files:

CustomLog ${APACHE_LOG_DIR}/access.log enhanced

And finally we'll restart the Apache daemon in order to load the configuration changes:

root@debian $ service apache2 restart

Done and Done! Apache will now begin logging each request in the enhanced log format, providing additional information that would be extremely useful during a forensic investigation. There's no doubt about it, our pumped up log format is a lean mean forensic machine!
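As a final, hedged illustration (the patterns are only examples; a field-aware awk script or a proper log-analysis tool would be more precise), even simple shell searches against the new logfile become useful forensic queries:

root@debian $ grep -i "union select" /var/log/apache2/access.log    # crude sweep for SQL injection attempts in query strings
root@debian $ grep -c " 405 " /var/log/apache2/access.log           # rough count of "Method Not Allowed" responses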
