Saturday, December 29, 2012

Dude, Where's My Data?


Harlan's tweet (pictured to the right) got me thinking, and I would like to share a case example that I feel drove this particular point home for me.

Many of the 'Swiss Army knife' forensics tools will parse data for you and automate various tasks. For example, X-Ways will parse link files and EnCase will parse (or mount, whichever term you prefer) PST files. Instead of exporting these files and working with them in separate programs, these Swiss Army knives will display the data in a more readable format within their GUI.

Cell phone forensic programs work in a similar fashion. They will first acquire the phone (if you're lucky that day), then parse typical data such as SMS and MMS messages, call logs and contact information. These programs can also generate a pretty report for you to turn over to your clients (whether it's a prosecutor, a defense attorney or your cousin Vinnie). As an examiner, this can be a great thing. No need to locate and export the database, run queries and convert timestamps.

Each of these programs has taken a repetitive task and automated it - in most cases, saving the examiner time. But where is the Swiss Army knife getting its data, how is it interpreting it, and is it getting all the data?
 
Now, to get to my case example.  I had an iPhone where I was tasked with getting the voicemails.  In order to do this I had three “Swiss Army knife” tools at my disposal:
  • Swiss Army Knife A – $$$$$
  • Swiss Army Knife B – $$$
  • Swiss Army Knife C – $
All three programs were able to acquire a file system image of the cell phone as well as parse the SMS, MMS and call logs, but what I needed were the voicemails. Only one of these tools (A) automatically parsed the voicemail.db file, which contained information regarding the voicemails. I did locate the voicemail.db file within the file system in the two other programs (B and C) - the programs just didn't parse this database automatically.

Now, if an examiner had only option B or C - tools that did not automatically parse the voicemails and point them out - would they have assumed there were no voicemails? OK, this may not be the best example, as voicemails are a pretty common thing, but what if it were a not-so-common artifact?

I decided to use A to conduct the remainder of the exam since it had already parsed the voicemails, saving me the time of exporting the database, running queries and converting timestamps. I could now get a pretty report. I went to generate the report and the program threw an error. No report, no exported voicemails. Dude, where's my data?
 
In my quest to find out why the $$$$$ Swiss Army knife threw an error, I went to view the contents of the voicemail.db file to see if there was some abnormal data causing issues. I opened the voicemail.db file with an SQLite viewer and noted several columns of data NOT displayed by the $$$$$ program.

Included were two columns I thought right off the bat could be important - a "flag" column and a "trashed" column. The flag column designates certain statuses of the voicemail, such as heard, unheard or deleted. The trashed column is the date the voicemail was placed in the deleted folder. What if the examiner needs to prove the suspect had listened to a voicemail? I know I don't always listen to my voicemails (sorry, Kim) and opt to just call the person back instead. (Now I know you can't prove they listened to it, per se. Maybe their speaker was busted, or their nephew had their phone, but this is just an example for illustrative purposes, so just roll with me.)

After some more testing, I determined that blank values in the database were causing the errors with the reporting. Was I going to wait for the next software update to get a report? No. Time to work with the data myself. I had three options:

Good → Export the data from the database into an Excel sheet, use formulas to convert the timestamps

Better → Write a script to parse the data, as I would probably need it again (see the sketch after this list)

Best → Have someone else write a script to parse the data
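To give a flavor of the "Better" option, here is a minimal sketch of the kind of script I mean. The table and column names (voicemail, date, duration, flags, trashed_date) are assumptions based on the voicemail.db files I have looked at - verify them against your own database, and note that some columns may use Mac absolute time (seconds since 2001-01-01) rather than Unix epoch.

    # voicemail_parse.py - a minimal sketch, not a polished tool.
    # Assumed schema: a 'voicemail' table with sender, date, duration,
    # flags (heard/unheard/deleted status) and trashed_date columns.
    import csv
    import sqlite3
    from datetime import datetime, timezone

    def to_utc(epoch):
        """Convert a Unix epoch value to a readable UTC string (None-safe)."""
        if epoch in (None, '', 0):
            return ''  # blank values - exactly what choked my $$$$$ tool
        return datetime.fromtimestamp(int(epoch), tz=timezone.utc).strftime('%Y-%m-%d %H:%M:%S UTC')

    conn = sqlite3.connect('voicemail.db')
    rows = conn.execute(
        'SELECT ROWID, sender, date, duration, flags, trashed_date FROM voicemail')

    with open('voicemails.csv', 'w', newline='') as out:
        writer = csv.writer(out)
        writer.writerow(['ROWID', 'Sender', 'Date (UTC)', 'Duration (s)', 'Flags', 'Trashed Date (raw)'])
        for rowid, sender, date, duration, flags, trashed in rows:
            # trashed_date is left raw - confirm its epoch before converting
            writer.writerow([rowid, sender, to_utc(date), duration, flags, trashed])
    conn.close()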

Now, arguably, Better and Best could be switched. It's always good to write your own tools so you gain a deeper understanding of the data. However, in my case I had someone (Cheeky4n6Monkey) reach out to me when I was working on iParser asking if I needed any help. I know he enjoys learning and is always looking for a forensic project to help on. So rather than write my own parser, I thought this would be a nice project to involve him in.

A few emails later I had a custom tool written by him that gave me the exact data I wanted and hopefully he got to learn something in the process too.

So in summary, I am going to quote Harlan’s tweet again:

How much do you know about what your tools do for you? May I make the following suggestions?
  • Look at the data with different tools to see what your tool may not be doing.
  • Look at the raw data, or look at the data in its native format, to see how your tool interprets the data and what it may be missing.
  • Read the forums associated with your tool to see what it may be capable of that you are missing out on, based upon how others use it.

Do they get the data you need? In my case, not always. Sometimes I need to roll up my sleeves and do the dirty work by myself (err, in this case I asked someone else to join in with me).

Do you know what you need? How do you know what data you need, if you don’t even know it exists? Keep researching, reading blogs, watching webcasts and asking questions.  Don’t assume that everything will be handed to you on a silver platter by your tools.

Umm, in case you don't get my movie reference, Google "Dude, Where's My Car"  :-)


Saturday, December 22, 2012

iParser Update: Batch Processing Added

I figured before the end of the year I should cross off at least one thing on the list of things I have been meaning to do. When I first released iParser, I had some feedback asking for a way to batch process plist files (thanks to a tester who has asked to remain anonymous). Due to some other projects, work and life in general, this went on the back burner for a few months.
  
What got me moving on making the improvements was a recent exam of an iPhone. I wanted a way to parse all the data in the plist files. Once I wrote the batch processing option into iParser, I simply exported the plist files from the phone into a folder, ran iParser and had a report to review.
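For anyone curious what a batch pass over a folder of plists looks like under the hood, here is a minimal Python sketch. It is not how iParser is implemented (iParser is a Windows program that leans on the iTunes plutil); the folder and report names are made up for illustration, and Python 3.4+'s plistlib reads binary and XML plists directly.

    # batch_plists.py - minimal sketch of batch plist processing.
    import plistlib
    from pathlib import Path

    plist_dir = Path('exported_plists')   # hypothetical folder of exported plists
    report = Path('plist_report.txt')

    with report.open('w') as out:
        for plist_file in sorted(plist_dir.glob('*.plist')):
            out.write(f'===== {plist_file.name} =====\n')
            try:
                with plist_file.open('rb') as f:
                    data = plistlib.load(f)   # handles binary and XML plists
                if isinstance(data, dict):
                    for key, value in data.items():
                        out.write(f'{key}: {value}\n')
                else:
                    out.write(f'{data!r}\n')  # some plists are arrays, not dicts
            except Exception as exc:
                out.write(f'[could not parse: {exc}]\n')
            out.write('\n')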

The above is just one example of how the batch processing can be utilized. As long as you have a plist file, you can use iParser. Keep in mind the power of the plug-ins, though. If you keep parsing the same files from an image, take a moment to add the plist as a plug-in (see my instructions here).

I hope this adds more flexibility to the program. Thanks to everyone who provided feedback. It may take me a little while to get around to it, but I do listen and hope to keep making improvements.

Download iParser v.1.0.0.20 with batch processing.









Friday, November 23, 2012

Google Analytics Cookie Parser

I recently watched an excellent webcast in the SANS website archive, "Not So Private Browsing". In this webcast, Google Analytics cookies are covered, and the wealth of information that can be found in them. I also located a great article on the DFI News website that covers these cookies as well.
 
I won't go into detail here, as both of the above-mentioned resources do a great job. But, briefly, Google Analytics cookies can contain information such as keywords, the number of visits, and the most recent and second most recent visit times. According to the SANS webcast, approximately 80% of websites use Google Analytics, so there is a good chance you may find some of these cookies in your exams.

There are three types of cookies that contain information of value: __utma, __utmb and __utmz. The four main browsers (debatable, I know) store them differently: Internet Explorer in text files, Firefox and Chrome in an SQLite database, and Safari in a plist file.

The values in the database look something like this:
  • __utma: 191645736.1125870631.1349411172.1349411172.1349411172.1
  • __utmb: 140029553.1.10.1349409002 
  • __utmz: 140029553.1349409002.1.1.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=php%20email%20throttling 
For example, the 1349411172 and 1349409002 values above are timestamps in Unix epoch, and the trailing "1" in the __utma value is the number of hits. As I mentioned before, both the SANS webcast and the DFI News article break down how to parse these out in detail.
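As a rough illustration of that parsing, here is a minimal Python sketch for __utma and __utmz values. The field positions follow the SANS/DFI write-ups; verify them against your own data before relying on the output.

    # ga_cookie_parse.py - minimal sketch for __utma / __utmz values.
    from datetime import datetime, timezone
    from urllib.parse import unquote

    def epoch(value):
        return datetime.fromtimestamp(int(value), tz=timezone.utc).strftime('%Y-%m-%d %H:%M:%S UTC')

    def parse_utma(value):
        # domain hash . visitor id . first visit . previous visit . most recent visit . visit count
        parts = value.split('.')
        return {'first_visit': epoch(parts[2]),
                'previous_visit': epoch(parts[3]),
                'most_recent_visit': epoch(parts[4]),
                'visit_count': parts[5]}

    def parse_utmz(value):
        # domain hash . timestamp . session count . campaign number . campaign data
        parts = value.split('.', 4)
        info = {'last_update': epoch(parts[1])}
        for pair in parts[4].split('|'):
            key, _, val = pair.partition('=')
            info[key] = unquote(val)  # utmcsr=source, utmccn=campaign, utmcmd=medium, utmctr=keywords
        return info

    print(parse_utma('191645736.1125870631.1349411172.1349411172.1349411172.1'))
    print(parse_utmz('140029553.1349409002.1.1.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=php%20email%20throttling'))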

I have written a single tool that will parse the Google Analytics cookies for all four browsers, GA Cookie Cruncher:

Internet Explorer - point the tool at the folder containing the cookies (either export the cookies folder, or mount the image). The tool will read each cookie file within the folder, determine whether it has these values, and parse them accordingly.

Chrome - point the tool at the Cookies SQLite database (either exported from your image, or mounted). The tool will query the database for all the Google Analytics values and parse them accordingly.
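The query behind a step like that can be quite small. Here is a sketch against Chrome's Cookies database; the table and column names (cookies, host_key, name, value) match the Chrome versions I have looked at, but verify them against yours.

    # chrome_ga_cookies.py - minimal sketch; Chrome keeps cookies in an
    # SQLite database named 'Cookies' in the profile folder.
    import sqlite3

    conn = sqlite3.connect('Cookies')
    rows = conn.execute(
        "SELECT host_key, name, value FROM cookies "
        "WHERE name IN ('__utma', '__utmb', '__utmz')")
    for host, name, value in rows:
        print(host, name, value)
    conn.close()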

Safari (Mac) - point the tool at the cookies.plist file. It will parse the plist file and the Google Analytics cookies within.

Firefox - the Firefox cookies are stored in an SQLite database. Unfortunately, the wrapper library I used cannot access this SQLite database. I also tried to open the Firefox cookies database with the free SQLite Browser, which could not read it either. So far, the only tool I have been able to access this database with is the SQLite Manager plugin for Firefox.

The workaround I implemented is to load the Firefox cookies database into the SQLite Manager plugin, then export the moz_cookies table to a CSV file. This CSV file can then be parsed by the program. I know, extra work, sigh - but it's still better than manually parsing through that data.
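If you would rather script that last step yourself, reading the exported CSV is only a few lines. The column names (name, host, value) are what I would expect SQLite Manager to export from moz_cookies - adjust them to match your export.

    # firefox_ga_from_csv.py - minimal sketch for the exported moz_cookies CSV.
    import csv

    with open('moz_cookies.csv', newline='') as f:
        for row in csv.DictReader(f):
            if row.get('name') in ('__utma', '__utmb', '__utmz'):
                print(row['host'], row['name'], row['value'])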


I have included a little hint in the "Browser Information" box to remind you where the default location of these cookies is for whichever browser you select. I can't even remember where I put my keys, so I thought this might be helpful.

The program creates three files in CSV format: %BrowserName%_UTMA, %BrowserName%_UTMB and %BrowserName%_UTMZ.

Here is some sample output from Internet Explorer for a __utmz cookie:



Now, I haven't tested this on every browser version out there, and I have seen some variations in the way the cookies are stored. Some initial tests indicate that IE 9 does not seem to track these values, but more research will need to be done to confirm (thanks to Cheeky4n6Monkey for the testing). If the tool does not read your cookie file, I'm happy to help - just shoot me an email.

Download the GA Cookie Cruncher here.

Enjoy!

Thursday, September 20, 2012

iParser: Automated Plist Parser Release


Let me preface this by saying: I.A.N.A.P.P. - I Am Not A Professional Programmer. I enjoy programming, and I hope others find this tool useful. If you find a bug, please let me know. If you have some suggestions or feature requests, please let me know. What may be intuitive to me may be totally off for others. I also want to thank Cheeky4n6Monkey for designing an icon for me, as I have zero graphic skills, and Scott Zuberbuehler for doing some testing and making some suggestions for improvements.


What does it do?
The concept behind iParser is to provide an automatic way to gather various plist files from a Mac image into one place, rather than look for them every time an exam is conducted. You simply mount the image, point iParser to the root directory, choose a user and let it run. It will gather system information, application preferences, network information and user information. It converts binary plist files into XML using the iTunes plutil, then parses the XML and generates a text report. Although you can use Notepad to view the report, I find that Notepad++ works better. If you are unfamiliar with plist files, please read here.
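The plutil conversion is the only non-obvious step, so here is a minimal sketch of that piece in Python. The paths are illustrative - on Windows, plutil.exe ships with iTunes under the Apple Application Support folder - and 'plutil -convert xml1' rewrites the plist as XML in place, so run it on a copy.

    # plutil_convert.py - minimal sketch of the binary-to-XML plist step.
    import subprocess
    import xml.etree.ElementTree as ET

    # Hypothetical paths - adjust to your system.
    PLUTIL = r'C:\Program Files\Common Files\Apple\Apple Application Support\plutil.exe'
    plist = r'exported\com.apple.loginwindow.plist'

    # Convert the binary plist to XML in place (work on a copy!).
    subprocess.run([PLUTIL, '-convert', 'xml1', plist], check=True)

    # The XML plist can then be walked like any other XML document.
    tree = ET.parse(plist)
    for node in tree.getroot().iter():
        if node.tag == 'key':
            print(node.text)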

Using RegRipper by Harlan Carvey as my inspiration, I decided to use plug-ins to define the plist files so that users can add in plist files as they see fit. I used the OS X 10.7 artifact list by Sean Cavanaugh from http://www.appleexaminer.com/ as a starting point for the plist files that will be parsed.

What does it not do?
It does not convert the data within the plist file.  For example, in the Safari History plist file, it will not convert the timestamp. It does not decode base64 data. It basically strips out the XML tags and builds a report.

Looking ahead
Yes, this is a Windows-based program (sorry). My hope is to dig my heels in, learn some Perl, and make it cross-platform compatible. I have a newfound respect for the work and ingenuity behind RegRipper, and realize how spoiled I have been by such a great tool...

Requirements

  • Windows
  • Mounted Mac image or access to a Mac partition from Boot Camp
  • iTunes
  • .NET Framework (quick install if you don't already have it)

Plugins
The plug-in files are in XML format. You can easily add a plist file that is not already included. I have detailed instructions on the format here, or just open and view some of the existing plug-ins to see the format. If you would like me to add any plug-ins to future releases, please email me: arizona4n6 at gmail.com - or email me if you can't figure out the plug-ins and would like me to add a plist.


Download and Documentation
Download iParser here
View the Documentation here




Wednesday, August 29, 2012

Automated Plist Parser


Plist files in the Mac world are the equivalent of - or as close as you are going to get to - registry files on Windows systems. They contain system settings, application preferences, deleted user accounts and much, much more. These files come in two formats: binary and XML.

Plist files, IMO, tend to be scattered in various places all over the file system. For example, plist files specific to the user may be under the /Users/*Username*/Library/Preferences folder, and plist files for the system will be under /System/Library.

During Mac exams, I feel like I am running around looking for all these crazy files (which is tough to do if you have heels on). Additionally, for each exam there is a standard set of plist files I need to gather, covering things such as OS version, time zone and deleted accounts. I may also spend a significant amount of time researching and locating plist files for specific applications, and I wanted a way to document and share this information.

Anytime something becomes repetitive, it’s a good chance to write a script or develop a tool to automate the process.  A perfect example of this is RegRipper.  It parses the registry for common (and even uncommon) keys, and gives the community an easy way to add  plugins for additional registry keys.  

So, using RegRipper as a source of inspiration, I set out to develop a tool that provides an automated way to parse plist files. I am almost done developing it and am in the testing phase. The tool runs on Windows with a GUI, and requires the Mac image to be mounted. Adding your own plist file to parse is relatively simple - an entry in an XML file that specifies the location of the plist file, such as /System/Library/CoreServices/SystemVersion.plist, and a description.
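To make that concrete, a plug-in entry could be as simple as the following. The tag names here are invented for illustration only - see the tool's documentation for the actual format.

    <!-- Hypothetical plug-in entry; tag names are illustrative only -->
    <plugin>
      <description>OS version information</description>
      <location>/System/Library/CoreServices/SystemVersion.plist</location>
    </plugin>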

I will be adding all the plist files listed under the OS X 10.7 artifacts on the appleexaminer.com website, which should be a good running start.

I am almost done. I figured once I blogged about it, it would commit me to putting on the finishing touches and wrapping it up. If you have a clever name for it, let me know. All I have managed to come up with is iParse (ha ha).

If you're interested, check back next week and it should be done. [Edit - the tool is now available; please see this post or download here.]


Monday, August 13, 2012

Windows Backup and Restore


A recent investigation led me to a Windows Backup file. Windows 7, as well as Windows Vista, includes a utility allowing the user to back up and restore folders, files and system information. This is not the same as Volume Shadow Copies (VSCs), another method by which Windows backs up files. For information on how to examine VSCs, check out Harlan Carvey's book, or other blog posts here and here. Depending on the version of Windows, the backup can be stored on an external device, such as a USB drive, or over the network (Windows 7 Pro/Ultimate). My research was done with Windows 7 Home Premium and Ultimate.

Windows creates a backup with the following naming convention:
ComputerName\Backup Set YYYY-MM-DD ######\Backup Files YYYY-MM-DD ######\Backup files ##.zip




Interestingly enough, if an end user looks at this backup through Windows, they will only see the top level folder:

 



Windows Backup creates multiple zip files containing the files/folders that were backed up. True, if you mount the zip files in your favorite all-in-one forensic tool you will have access to all these files in their glory. You can run keyword searches until you are giddy, and forensicate to your heart's content, BUT the dates in the zip files are the dates the backup was created, not the dates the files were originally created or modified. That being said, Windows Backup tracks these original dates, which may come in handy.

Windows Backup tracks the names of the folders and files, along with the original dates, in a file named GlobalCatalog.wbcat under ComputerName\Backup Set YYYY-MM-DD ######\Catalogs. If you do not have access to the backup media, a local GlobalCatalog.wbcat file is also created. I discuss this in more detail below.
 
Ideally, this file could be parsed for all of this information, with the results displayed in a nice format, CSV or otherwise. I have been looking at this file in hex trying to figure out a way to accomplish this. So far, I have located the file names, folders and dates, but have not figured out how the records are tied together within the file. Boooo... If you know of any existing program or script that can parse this data, or know the file format, please let me know. If you are interested in seeing a sample of what I have located so far, contact me (arizona4n6 at gmail dot com) and I can send it to you.

As such, viewing the backup file natively through Windows Backup is the only method I have discovered to see the original dates for the files and folders. Step-by-step directions follow:
  • Export the backup files from your image to an external device. If you prefer to mount the image, create a VHD by running Vhdtool against a DD image and attach the VHD through Disk Management (a sketch of the commands follows this list). Make sure it's a copy of your image, as Vhdtool will make changes to it. This should sound familiar if you have read Harlan's post on using Vhdtool to examine VSCs. I tried to mount the image using FTK Imager, and the backup file was not seen by Windows Backup.
  • Launch Windows Backup and Restore (Control Panel > System and Security > Backup Your Computer).
  • Go to Restore > "Select another backup to restore files from". It should auto-locate the Windows Backup.
  • Next, search for *.*, and all the files will be listed - or you can browse to a particular file if you please. By default, only the Date Modified is listed. If you right-click the title bar, you can select Date Created as well. If you use the Browse function instead of Search, you will also have the option to see the backup date.
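For the Vhdtool route, the commands look roughly like this. File names are illustrative; Vhdtool's /convert appends a VHD footer to the file in place (which is why you work on a copy), and diskpart wants a .vhd extension before it will attach it.

    REM Work on a copy - /convert modifies the file
    copy image.dd image_copy.dd
    VhdTool.exe /convert image_copy.dd
    ren image_copy.dd image_copy.vhd

    REM Attach the VHD read-only via diskpart
    diskpart
    DISKPART> select vdisk file="C:\cases\image_copy.vhd"
    DISKPART> attach vdisk readonly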



Now, instead of seeing the same date and time for every file contained within the zip files, you are presented with the original Date Created and Date Modified for the files. As I mentioned before, it would be soooooo nice to have this information parsed directly from the GlobalCatalog.wbcat file.


Windows Backup Registry Entries
When a Windows Backup is created, an entry is made or updated in the Software hive under the key \Microsoft\Windows\CurrentVersion\WindowsBackup\.

This key holds various subkeys with information regarding the backup, including USB device information. This USB information may come in handy if you are also conducting link analysis/USB analysis, and can be cross-referenced with other registry keys.

Some of the information available, with sample data:

Target Device

For a USB Device:

  PresentableName = E:\
  UniqueName = \\?\Volume{a2e6b4d4-e492-11e1-a39d-000c29448ee3}\
  Label = MYTHUMBDRIVE
  DeviceVendor  = SanDisk
  DeviceProduct  = Cruzer
  DeviceVersion  = 1.26
  DeviceSerial = 200605999207D70370EF         

 For a Network Share:


  PresentableName = \\COMPUTERNAME\Users\Public\Documents\backup\
  UniqueName = \\?\UNC\COMPUTERNAME\Users\Public\Documents\backup\


Status
  
  LastResultTime = Sun Aug 12 17:45:39 2012 (UTC)
  LastSuccess = Sun Aug 12 17:45:39 2012 (UTC)
  LastResultTarget = \\?\Volume{a2e6b4d4-e492-11e1-a39d-000c29448ee3}\
  LastResultTargetPresentableName  = E:\
  LastResultTargetLabel = MYTHUMBDRIVE


According to my testing, LastResultTime and LastSuccess will be the same if the backup completed. If the backup did not complete or was cancelled, these times will differ, and LastResultTime will contain the time of the attempted backup.
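If you want to pull these values from an exported Software hive yourself, here is a quick sketch using Willi Ballenthin's python-registry module. Values are dumped raw, since I have not verified how every timestamp is encoded.

    # windows_backup_reg.py - minimal sketch using python-registry
    # (pip install python-registry). Recursively dumps the WindowsBackup
    # key from an exported SOFTWARE hive.
    from Registry import Registry

    reg = Registry.Registry('SOFTWARE')  # exported Software hive
    root = reg.open('Microsoft\\Windows\\CurrentVersion\\WindowsBackup')

    def dump(key, indent=0):
        print(' ' * indent + key.name())
        for value in key.values():
            print(' ' * (indent + 2) + f'{value.name()} = {value.value()}')
        for subkey in key.subkeys():
            dump(subkey, indent + 2)

    dump(root)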

I have created a RegRipper plugin and passed it along. It should be included in the next distro.
 
Other Artifacts
A Volume Shadow Copy is created before the backup.

Event log entries in \Windows\System32\winevt\Logs\Microsoft-Windows-WindowsBackup%4ActionCenter.evtx

Local GlobalCatalog files created:

    \System Volume Information\Windows Backup\Catalogs\GlobalCatalogCopy.wbcat

    \System Volume Information\Windows Backup\Catalogs\GlobalCatalog.wbcat

This local GlobalCatalog.wbcat file seems to contain not only entries for the last backup, but also entries for previous backups, as well as previous media used. This could be helpful if you need to locate/subpoena the various devices that contain backups. Below are some results from running strings across this file:

COMPUTERNAME\Backup Set 2012-08-11 213315\Backup Files 2012-08-11 213315\Backup files 1.zip
\\?\Volume{177d1d16-e2fc-11e1-914b-ec9a745b406c}\
SanDisk
Cruzer
1.26
200605999207D70370EF
COMPUTERNAME\Backup Set 2012-08-11 213315\Backup Files 2012-08-11 213315\Backup files 2.zip
Backup Set 2012-08-12 194644
COMPUTERNAME\Backup Set 2012-08-12 194644\Backup Files 2012-08-12 194644\Backup files 1.zip
\\?\Volume{45f45fcd-e269-11e1-a36e-ec9a745b406c}\
Kingston
DataTraveler SE9
PMAP
COMPUTERNAME\Backup Set 2012-08-12 194644\Backup Files 2012-08-12 194644\Backup files 2.zip
COMPUTERNAME\Backup Set 2012-08-12 194644\Backup Files 2012-08-12 203800\Backup files 1.zip
COMPUTERNAME\Backup Set 2012-08-12 194644\Backup Files 2012-08-12 203800\Backup files 2.zip

As I mentioned before, I am trying to figure out the GlobalCatalog file format, so if you know the file format, or any tools that can parse it, please let me know :-)