Sunday, June 10, 2018

The Bigger They Are, The Harder They Upgrade

I read a news story a while ago about the woefully outdated technology underpinning the US military's nuclear arsenal. The system that coordinates the US nuclear forces, including missiles and bombers, runs on an IBM Series/1 computer and 8-inch floppy disks: both relics of the 1970s, according to the Government Accountability Office.
As shocking as this is, the US government is not alone. Many of the largest businesses run on similarly outdated technology with skyrocketing maintenance costs, and unlike the US military they don't have a good excuse not to upgrade.

The US military makes several good points about why it doesn't upgrade its systems.

  • First, the current system fulfills all the needs of the organization: indeed, the state of nuclear warfare hasn't changed much since the end of the Cold War.
  • Second, the military's systems are air-gapped from the Internet, so patching vulnerabilities is less of a worry.
  • Third, the military isn't at a competitive disadvantage because of its IT, simply because the US military doesn't have much competition.

Now let's look at private companies. Some of them run their backend processes on applications written in languages like COBOL, running on top of mainframes. Email is still a locally hosted database requiring intranet access and a heavy desktop client.


  • As their businesses have been transformed over the past 40 years, their IT has barely kept up. Frustration with legacy technology is a common complaint among office workers, so it can hardly be argued that such technology meets the needs of its users.
  • Unlike the nuclear launch systems of our military, virtually no company has an "air-gapped" network: internal systems can often be accessed by employees around the world via VPN, and third parties and suppliers are given access to certain systems to better perform their jobs. This allows companies to increase productivity at the cost of increased security risk.
  • Every private corporation is struggling to survive in a competitive market where the latest in IT gives firms an advantage in product development, operations, sales, and recruiting. Legacy technology can have a detrimental effect on business performance, particularly with a younger generation that has grown up with technology that "just works," such as Gmail, Dropbox, Mac OS X, and iPhones.


So where won't you see legacy technology? In my experience, startups tend to be the most successful at leveraging the latest innovations in IT, thanks to their smaller size, less compliance red tape, and an off-the-shelf strategy.

Smaller Size

It's much easier to implement new technology with a small number of employees. Larger organizations, however, can mimic the agility of smaller ones by allowing more IT autonomy within their divisions. Technology can be rolled out faster when it affects fewer people, and by letting each division make its own IT decisions, divisions can better adapt to their own needs. Of course, there is a risk that cross-division communication will suffer because of IT incompatibilities; it is the role of the central IT department to ensure that the authority given to divisions does not result in incompatibility.

Red-Tape

The fact that IT in smaller organizations is less restricted by internal compliance is not necessarily a good thing, as a lack of IT compliance can lead to serious legal consequences. Forever 21 was sued in 2015 for pirating Adobe Photoshop, an allegation that, if true, would almost certainly have involved the company's IT department.
However, not all compliance measures are good, and some need to be reconsidered in light of their effects on productivity. Again, the solution may be to give a company's internal divisions more IT autonomy to make upgrading faster. Companies should also beta-test new technology to expedite the process.

Off-the-shelf strategy

Larger corporations tend to use IT solutions that are customized to the company's needs. A custom solution, while it may fit all the needs of the organization now, will be expensive to upgrade and often becomes a bottleneck as the organization's needs change. A small company doesn't have the resources to custom-order a solution that fits all of its needs, so it makes do with a product that may fulfill 80% of the requirements and changes its processes to work around the limitations. In the long term, however, this gives the same organization the savings and flexibility to adopt up-to-date solutions as they mature and become commonplace. Companies that never had email moved straight to web-based email, while corporations that had email long ago are struggling to migrate their legacy on-premise email services to the cloud. Large corporations should consider a conservative IT strategy that deploys widely used, affordable, off-the-shelf solutions whenever possible and saves the IT budget for custom solutions only when absolutely necessary.

By learning from smaller organizations, large-company IT departments can move more nimbly to keep up with the latest IT trends, as well as improve employee satisfaction and productivity.


Saturday, September 26, 2015

Android to iPhone: Day 0

After 4 years of being a loyal Android user, I've decided to try iOS. The primary reasons were better security, privacy, applications, and support.

The release of the iPhone 6S couldn't have come at a better time. My OnePlus One had been crippled by a variety of hardware issues, including the commonly cited touchscreen grounding issue that essentially bricked the phone in humid environments, as well as problems with WiFi and USB connectivity. The massive 5.5-inch screen was great to look at but made one-handed use a chore and wasn't comfortable in my pockets.

Before I go into my initial experiences, I wanted to review the primary reasons I made the jump:

Security: 

iOS is more secure than Android. While a non-rooted Android phone is mostly safe from malicious applications, Google can't patch discovered vulnerabilities on most Android phones.
The insistence of major manufacturers and carriers on making their own "flavors" of Android also means those manufacturers are the only ones who can patch their phones. Why is this a big deal?
Imagine if a new Windows zero-day exploit were discovered but you couldn't download a patch from Microsoft: instead, you had to wait for HP, Dell, or Lenovo to release their own version of the patch. This is the reality for Android users, who remain vulnerable to discovered threats for an unacceptably long time.
For example, Stagefright was discovered in July 2015 and wasn't patched on the Galaxy S4 for Verizon until late August.
iOS, of course, benefits from Apple's total control over the ecosystem which results in much faster vulnerability patching.

Privacy: 

Security and privacy are related. To me, security is concerned with the compromise of personal information without user consent, while privacy relates to giving away information with the user's consent, usually through 20-page EULAs that no one reads.
Digital privacy often boils down to business models. Apple is in the business of (primarily) selling phones, and they are very good at doing that: estimates of per-unit profit margins on iPhones are around 70%.
Google, on the other hand, is in the business of selling ads: it gives Android away because the OS gives Google better access to and more information about users. Google has a business incentive to collect data on its users because it makes them more valuable to Google's actual customers: advertisers. 
While the personal information Google tracks certainly makes the Android experience better in some ways, it also hurts if you're against the notion of a company (and by extension, the government) knowing the most intimate details of your private life. 
Apple, for example, uses public key encryption in iMessage, making it impossible for anyone besides the end users to read the contents of messages. One of the main reasons similar encryption isn't enabled for Gmail is that Google needs to be able to read your emails to serve you targeted ads.
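
As a rough illustration of the idea (a generic sketch, not Apple's actual iMessage protocol), here's what public-key encryption looks like with Python's "cryptography" package: anyone can encrypt a message with the recipient's public key, but only the holder of the matching private key can decrypt it.

    # Generic public-key encryption sketch; NOT Apple's iMessage implementation.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()          # safe to hand out to anyone

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(b"see you at 7", oaep)   # the sender's side
    plaintext = private_key.decrypt(ciphertext, oaep)        # only the recipient can do this
    assert plaintext == b"see you at 7"

A service that holds only your public key can deliver messages but never read them; a service whose business depends on reading them simply can't offer this.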

Applications: 

iOS often gets new applications and updates before Android. This is ironic because Android has a bigger app store and it technically costs less to develop for Android: the SDK is free, while Apple's SDK costs 100 dollars and only runs on Macs.
My mobile developer friends, however, say most companies write for iOS first because of ease of development and profits. Since there are only a few models of iPhone, iOS developers don't have to worry about supporting dozens of different hardware configurations as Android developers must.
Furthermore, iPhone users are generally more willing to pay for apps, while Android users are accustomed to free, ad-supported apps. This also leads to some low-quality apps in the Android Market: for example, the flashlight app that needs network and location access to serve me targeted banner ads.

Support: 

With a tool as essential as a smartphone, fast customer service is a must, and Apple is well-known for speedy customer service. When my OnePlus One started having issues months ago, I was stuck: to fix it under warranty, I'd have to send it to China and wait at least 3 weeks for them to ship it back.
This was of course a huge problem: I couldn't be without my phone for 2 days, much less a month, and I didn't have a spare. With an iPhone, however, I can expect to walk into an Apple Store and get a warranty repair or exchange within a few hours. Android manufacturers don't have the luxury of 70% unit margins to provide such class-leading service.
As a college student, spending $800 on a phone (or paying even more over 2 years under a contract) was untenable. So when I finally got a full-time job, I preordered the 64GB iPhone 6S, and it came on launch day: I hadn't been this excited about a new piece of tech in a long time.
I'll update this blog tomorrow with my initial setup experience.

Thursday, June 18, 2015

Unique Challenges in SSD Forensics

Introduction

In today's computers, traditional hard disk drives (HDDs) are being rendered obsolete by solid state drives (SSDs) that are faster, smaller, and more reliable. (Domingo, 2015) SSDs accounted for 13.6% of total PC storage sold in 2013 but are predicted to account for over 33% in 2017. (Kingsley-Hughes, 2013) Popular computers like Apple's MacBook Pro and Air lines now use SSD storage exclusively. From the user's perspective, an SSD is a drop-in replacement for an HDD, but its underlying method of operation is fundamentally different and presents several unique challenges to forensic investigators.

SSD Method of Operation

Consumer SSDs consist of multiple NAND flash memory cells, where data is stored, and a microcontroller that interfaces between the memory cells and the computer. It is much faster to read NAND flash memory than to write to it, and manufacturers of SSDs have employed a variety of techniques such as TRIM, wear-leveling, hardware compression, and overprovisioning to overcome the slow write speeds of NAND flash. These technologies impact the ability of forensic investigators to make forensically sound copies of SSDs and recover deleted data.

TRIM

How It Works

Unlike magnetic storage such as HDDs, the NAND flash used in SSDs must be erased before it can be re-written. Data is written to NAND memory in "pages" of 4 or 8KB each, but can only be erased in "blocks" that contain hundreds of pages. Since erasing and re-writing hundreds of pages is a slow operation, SSDs write data to empty pages first rather than erase deleted blocks. If this behavior were left unchecked, however, the SSD would suffer severe performance degradation once empty space had been used up. The TRIM function was created to prevent this by telling SSD controllers to erase deleted blocks as part of a background process. When data is deleted or re-written with TRIM enabled, the SSD queues the block for a background process known as the "garbage collector," which erases the blocks during idle time. As a result, the performance impact of erasing deleted blocks is hidden from the user and fresh blocks remain available for writing. Practically all modern SSDs support TRIM. (Gubanov, 2012) (Belkasoft, 2014)
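
To make the page/block distinction concrete, here is a minimal, hypothetical Python model of TRIM and garbage collection. The class, sizes, and method names are illustrative only and do not represent any vendor's real firmware.

    # Toy model of TRIM + garbage collection (illustrative only, not real firmware).
    PAGES_PER_BLOCK = 4                 # real SSDs use hundreds of pages per block

    class ToySSD:
        def __init__(self, num_blocks):
            # A block is a list of pages; a page is None (erased) or holds data.
            self.blocks = [[None] * PAGES_PER_BLOCK for _ in range(num_blocks)]
            self.invalid = set()        # (block, page) addresses marked stale by TRIM
            self.gc_queue = set()       # blocks waiting for background erase

        def write(self, data):
            # NAND pages can only be written when erased, so new data goes to a fresh page.
            for b, block in enumerate(self.blocks):
                for p, page in enumerate(block):
                    if page is None and (b, p) not in self.invalid:
                        block[p] = data
                        return (b, p)
            raise RuntimeError("no erased pages left; wait for garbage collection")

        def trim(self, addr):
            # The OS tells the controller this page's contents are no longer needed.
            self.invalid.add(addr)
            self.gc_queue.add(addr[0])

        def garbage_collect(self):
            # Runs inside the controller during idle time; the host cannot stop it.
            for b in list(self.gc_queue):
                pages = self.blocks[b]
                # This toy only erases blocks whose written pages are all stale;
                # real controllers first copy any still-valid pages elsewhere.
                if all(d is None or (b, p) in self.invalid for p, d in enumerate(pages)):
                    self.blocks[b] = [None] * PAGES_PER_BLOCK
                    self.invalid -= {(b, p) for p in range(PAGES_PER_BLOCK)}
                    self.gc_queue.discard(b)

    ssd = ToySSD(num_blocks=2)
    addr = ssd.write(b"draft_contract.docx")    # hypothetical file contents
    ssd.trim(addr)                              # the OS deletes the file and sends TRIM
    ssd.garbage_collect()                       # the block is physically erased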

TRIM’s Impact on Forensics

Since TRIM commands are executed by the SSD microcontroller, they are impossible to stop once started. TRIM commands will finish even if the SSD is power cycled. Additionally, a re-format command will cause TRIM to clear the whole partition. This means that a forensic investigator will not be able to read deleted data from a TRIM-enabled SSD, and users can effectively erase whole partitions just seconds before acquisition.
There is a notable exception to this, however, involving files smaller than 2MB. Since these files will take up less than 1 block of NAND space, they will not be subject to TRIM if that same block also contains part of a non-deleted file. There are several other limitations: TRIM is disabled if the operating system doesn’t support it or if the physical interface doesn’t transmit TRIM commands. The USB interface, for example, doesn’t support TRIM and therefore deleted data may be recovered from external USB SSDs. (Belkasoft, 2014) Generally, pre-configured PCs with internal SSDs will have TRIM properly configured.

Wear-Leveling

How it Works

Wear-leveling is a feature in SSDs that increases speed and longevity by distributing data across the whole drive. NAND has a limited life compared to HDDs: each block on a NAND chip can only be erased 10 to 100 thousand times before becoming unusable. To ensure no blocks fail prematurely, SSD manufacturers build wear-leveling algorithms into SSD microcontrollers so that each memory block is written to equally. There are two types of wear-leveling: dynamic wear-leveling algorithms distribute new data across the blocks with the fewest previous writes, and static wear-leveling also cycles existing data out of less-used blocks so that all blocks can be written to equally. (Memon, 2009) Both types of wear-leveling hinder the abilities of forensic investigators.
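
As a rough sketch (a hypothetical algorithm, not any real controller's), dynamic wear-leveling simply sends each new write to the block with the fewest prior writes, which is why consecutive pieces of a file end up scattered across the physical chips.

    # Toy dynamic wear-leveling: place each write on the least-worn block.
    # Illustrative only; real controllers also move cold data (static wear-leveling).
    def choose_block(wear_counts):
        """Return the index of the block with the fewest writes so far."""
        return min(range(len(wear_counts)), key=lambda b: wear_counts[b])

    wear = [0] * 8          # erase/write count per physical block
    layout = {}             # logical page number -> physical block

    for logical_page in range(20):
        b = choose_block(wear)
        wear[b] += 1
        layout[logical_page] = b

    print(layout)   # consecutive logical pages land on many different physical blocks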

Wear-Leveling’s Impact on Forensics

Dynamic and static wear-leveling result in extreme fragmentation of data in the physical NAND chips, since data is not stored sequentially but rather in whatever blocks have the fewest previous writes. This fragmentation is not predictable. If the chips were removed from the SSD to be examined with a custom-built reader, a process known as chip-off, it is difficult and sometimes impossible to re-combine the resulting data into whole files. (Memon, 2009)
Static wear-leveling presents the additional challenge of invalidating cryptographic hashes. Forensic investigators generate a cryptographic hash of an acquired drive before and after imaging the drive to prove that the drive was not tampered with during the process. They also take a hash of the image and compare it to the hash of the drive to ensure that their image is a perfect copy of the original. If the drive is an SSD with static wear-leveling, however, the wear-leveling process can move blocks around in the background as soon as the drive is powered on, resulting in a different hash before and after imaging. The wear-leveling process, like TRIM, is executed by the SSD's internal microcontroller and therefore cannot be stopped unless the NAND chips are physically removed from the circuit board. (Wiebe, 2013)
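
The verification workflow itself is straightforward; the sketch below, using Python's hashlib and made-up device and file names, shows why a single background block move between the "before" and "after" reads breaks the proof.

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Stream a device or image file and return its SHA-256 hex digest."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    before = sha256_of("/dev/sdb")        # hash the source drive before imaging
    image  = sha256_of("evidence.dd")     # hash the acquired image
    after  = sha256_of("/dev/sdb")        # hash the source drive again

    # On an HDD all three digests match. On an SSD with static wear-leveling,
    # background block moves can make `after` differ from `before`, even though
    # no one touched the evidence.
    assert before == image == after, "hashes differ - cannot prove an exact copy"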

Compressing Controllers

How it Works

As explained earlier, the NAND flash chips used in SSDs have limited read-write lifespans. To prolong the life of NAND chips, some SSD manufacturers use microcontrollers (SandForce is a well-known brand) that compress data on the fly before writing it to NAND. By reducing the amount of data written to the NAND cells, compressing controllers can significantly improve the lifespan of SSDs. (Memon, 2009)
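
The effect is easy to see with a standard compression library; the real on-chip algorithm is proprietary, so zlib stands in here purely for illustration.

    import zlib

    page = b"AAAA" * 1024                  # a 4 KB page of highly repetitive data
    stored = zlib.compress(page)

    print(len(page), "bytes requested by the host")
    print(len(stored), "bytes actually written to NAND")
    # Writing fewer bytes per host write means fewer program/erase cycles and a
    # longer NAND lifespan - but an examiner reading the raw chips sees only the
    # compressed, controller-specific representation, not the original file data.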

Compressing Controllers’ Effect on Forensics

Since these compression algorithms are proprietary to the chipset manufacturer, there is currently no way to decompress data through off-chip analysis short of sending the drive to the manufacturer, an expensive and time-consuming process reserved for only the most critical investigations. If a forensic investigator acquires a drive equipped with a compressing controller, the only option is to acquire the image through the SSD's interface and risk forensic spoilage as a result of static wear-leveling.

Overprovisioning and Secure Erase

How it Works

Since NAND blocks have limited life expectancy, SSD manufacturers often incorporate extra NAND capacity in their devices to take the place of prematurely failing NAND. This practice is known as overprovisioning. Since this extra memory is not directly accessible to the consumer, concerns were raised by the US government about the ability to securely erase the contents of SSDs. The secure erase command addresses this concern by sending a TRIM command to every available block on the SSD, including these “backup” blocks. When properly implemented, secure erase completely destroys all data on the SSD. (Gubanov, 2012)
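
In terms of the hypothetical ToySSD sketch from the TRIM section, secure erase amounts to erasing every block, including the overprovisioned spares the host normally can't address; the helper below is purely illustrative.

    # Purely illustrative: secure erase hits every block, visible and spare alike,
    # and the controller finishes the job on its own once the command is issued.
    def secure_erase(ssd):
        for b in range(len(ssd.blocks)):
            ssd.blocks[b] = [None] * PAGES_PER_BLOCK
        ssd.invalid.clear()
        ssd.gc_queue.clear()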

Secure Erase’s Effect on Forensics

Using secure erase, an SSD user can destroy digital evidence much faster than with an HDD. Secure erase takes just minutes rather than hours as on HDDs, so it is feasible that a suspect could issue a secure erase command immediately before acquisition of the device, for example upon seeing investigators outside his or her window. As with individual file deletion, secure erase is ultimately processed by the SSD microcontroller and therefore can't be stopped once started unless the chips are physically removed.

Conclusion

SSDs have been engineered to overcome the limitations of NAND flash memory, and the resulting technologies pose real challenges to forensic investigators. As a general rule, it is much easier for users to securely delete data and much harder for forensic investigators to recover deleted data from SSDs. Background processes like static wear leveling make it harder for investigators to prove cryptographically that drives weren’t tampered with, and even processes like chip-off where the NAND chips are physically read without the interference of the controller will often fail due to fragmentation or compression. As SSDs increase in popularity, digital forensics will face greater challenges recovering evidence from computing devices unless significant innovations are made in the field.



Bibliography

Belkasoft. (2014, September 23). Recovering Evidence from SSD Drives in 2014: Understanding TRIM, Garbage Collection and Exclusions. Retrieved from Forensic Focus: http://articles.forensicfocus.com/2014/09/23/recovering-evidence-from-ssd-drives-in-2014-understanding-trim-garbage-collection-and-exclusions/
Domingo, J. S. (2015, February 17). SSD vs. HDD: What's the Difference? Retrieved from PCMag: http://www.pcmag.com/article2/0,2817,2404258,00.asp
Gubanov, Y. (2012, October). Why SSDs Destroy Court Evidence, and What Can Be Done About It. Retrieved from Belkasoft: https://belkasoft.com/en/why-ssd-destroy-court-evidence
Kingsley-Hughes, A. (2013, May 7). SSDs set to grab over one third of PC storage solutions market by 2017: IHS. Retrieved from ZDNet: http://www.zdnet.com/article/ssds-set-to-grab-over-one-third-of-pc-storage-solutions-market-by-2017-ihs/
Memon, N. (2009, December 14). Challenges of SSD Forensic Analysis. Retrieved from Digital Assembly: http://digital-assembly.com/technology/research/talks/challenges-of-ssd-forensic-analysis.pdf

Wiebe, J. (2013, May 28). Forensic Insight into Solid State Drives. Retrieved from Forensic Mag: http://www.forensicmag.com/articles/2013/05/forensic-insight-solid-state-drives

Saturday, December 13, 2014

IT Audit and Security Final Case Study: Target Breach

The prompt for this case study was to develop an updated information security policy for Target in light of its recent card breach. There was a 2 page limit to the response, so instead of outlining every topic in a comprehensive policy, I detailed several individual policies that I would change from existing practices.

Recommendations for Security and Privacy at Target

The 2013 breach of credit cards and customers' personally identifiable information (PII) revealed serious deficiencies in the security of the company's IT infrastructure and illustrated the inadequacy of having state-of-the-art security technology without the appropriate people and processes to respond to threats. The goal of this new security policy is to build on Target's existing security infrastructure while improving the people and processes needed to secure Target's IT against future attacks.

Background:

Target has been audited every year by Trustwave and found to be compliant with PCI DSS up to the time of the breach.[1] Several security experts claim that simply complying with PCI DSS would not have necessarily prevented the breach,[2] and that security standards take time to develop and do not reflect the newest threats.[3]
Regulatory compliance with PCI DSS was not enough for Target to prevent this attack, yet that doesn't mean there was nothing Target could have done. In fact, Target received multiple warnings of suspicious activity during the early stages of the attack, yet its internal security team chose not to investigate them. This shows that Target, as an organization, did not make a commitment to security. In this paper, we will focus on aspects of this policy that are significantly different from Target's current state. On the people and processes front, security culture, issue escalation, alert response, and third-party vendor management will be covered. On the technical front, we will cover network segmentation.

Security Oriented Culture

C-level executives set the "tone at the top" of whether or not an organization cares about security. The hiring of a CISO is a critical first step in achieving a healthy security culture, but studies show that security suffers when the CISO reports to the CIO.[4] This policy mandates that the CISO report to the Board of Directors, not to the CIO as in Target's current structure. This ensures that the voice of security is heard at the highest levels of the organization.

Issue Escalation

Employees of the security organization must know when to report security issues up the chain of command. On November 30th, FireEye's security experts sent a high-level alert to Target's security headquarters after detecting malware that was attempting to exfiltrate data from Target's network. Target had two days to act on this warning before the exfiltration began.[5] Target's security team, however, ignored the warnings, perhaps because the head of the department had quit a month earlier and no one else thought they had the authority to take action.[5] This policy mandates that if a link in the chain of command is missing, the threat be reported to the next level of command rather than simply ignored.

Alert Response

Target has world-class security applications installed on its networks. In addition to FireEye's alert, Target also received warnings from its Symantec Endpoint Protection software about malware on a network server.[5] Target's security team chose not to respond to either of these warnings, even though they were of high priority.[6] This policy mandates that all medium- to high-priority alerts and warnings be documented and investigated within 24 hours. Furthermore, a high-level alert must be resolved within 48 hours. Possible resolutions include removing malware, disabling a service, installing a patch, or even determining that the alert was not a threat. To resolve a high-level alert, the CISO must sign off on the resolution documentation.

Third Party Vendor Management

Third-party vendors significantly increase the attack surface of large organizations, as Target learned first-hand. To protect Target's internal IT infrastructure, the security posture of third-party vendors with access to Target's network must be carefully vetted, and third-party access to Target's systems must be limited. The level of scrutiny for a potential vendor should be proportional to the level of access granted to Target's internal network.

Network Segmentation

Network segmentation is a defensive measure that isolates sensitive IT resources from the rest of the company's network. Target did not separate Point of Sale (POS) machines from the rest of the company's network, even though isolation of the card environment is highly recommended by PCI.[7] This allowed the attackers to install RAM-scraping malware on every POS system after breaching the general network.[7] This policy mandates that all systems involved in storing or processing PII be isolated from Target's general network, ensuring "defense in depth" even in the event of a surface-level breach.
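
As a toy illustration of the principle (hypothetical zone names, not Target's actual architecture), a segmentation policy is essentially a default-deny rule set between network zones:

    # Toy default-deny segmentation policy between network zones (hypothetical).
    ALLOWED_FLOWS = {
        ("pos", "payment_processing"),   # POS terminals may reach the card environment
        ("corporate", "corporate"),      # general business traffic stays in its own zone
    }

    def is_allowed(src_zone, dst_zone):
        return (src_zone, dst_zone) in ALLOWED_FLOWS

    # A compromised vendor account on the corporate network should not be able to
    # reach the POS segment at all:
    assert not is_allowed("corporate", "pos")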

Conclusion

The proposed information security and privacy policy minimizes risks and maximizes returns on security investment by leveraging Target’s existing technology and augmenting the people and processes behind its security infrastructure. The Target breach showed that there are not miracle applications or compliance certifications that guarantee security. Security can only be achieved through organizational commitment, effective processes, and defense in depth.


[1] http://www.darkreading.com/risk/compliance/target-pci-auditor-trustwave-sued-by-banks/d/d-id/1127936
[2] http://blogs.gartner.com/avivah-litan/2014/01/20/how-pci-failed-target-and-u-s-consumers/
[3] http://www.technewsworld.com/story/80160.html
[4] http://www.csoonline.com/article/2365827/security-leadership/maybe-it-really-does-matter-who-the-ciso-reports-to.html
[5] Target Data Breach Case Study
[6] http://www.businessweek.com/articles/2014-03-13/target-missed-alarms-in-epic-hack-of-credit-card-data
[7] http://blogs.sophos.com/2014/04/02/sophos-at-bsides-austin-credit-card-security-and-pci-dss-compliance-post-target/

Thursday, May 8, 2014

Bitcoin, 3D Printing, and Drones: Three Technologies That Will Change the World

If I were to pick 3 technologies that will most likely change the world on the scale of the PC or the Internet, Bitcoin, 3D printing and drones would top my list.

There are still many people who believe Bitcoin is a fad that can "drop to zero" on the whim of a few websites or the US government. Anyone with a solid understanding of the significance of the technology behind Bitcoin will realize this is almost impossible.

Even if Bitcoin were to be overshadowed by a newer cryptocurrency, the fundamental technology behind it, the blockchain, will remain significant.

The Bitcoin blockchain is a digital record that contains every Bitcoin transaction in history. It solves the longstanding problem of digital ownership: no one can "spend bitcoins twice" without fooling more than half of the entire Bitcoin network, a task becoming increasingly hard with the exponential increase in Bitcoin mining power.
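
A minimal sketch of the idea in Python (greatly simplified: no mining, networking, or signatures) shows why history is so hard to rewrite: each block commits to the hash of the previous one, so changing any past transaction breaks every later link.

    # Minimal hash-chained ledger; a simplification for illustration, not Bitcoin itself.
    import hashlib, json

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    chain = [{"prev": "0" * 64, "txs": ["genesis"]}]

    def add_block(txs):
        chain.append({"prev": block_hash(chain[-1]), "txs": txs})

    def is_valid(chain):
        return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

    add_block(["alice pays bob 1 BTC"])
    add_block(["bob pays carol 1 BTC"])
    assert is_valid(chain)

    chain[1]["txs"] = ["alice pays mallory 1 BTC"]   # try to rewrite history
    assert not is_valid(chain)                       # every later link now breaks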


While digital money is the most obvious thing to transmit through this technology, it can be used to prove ownership and provenance of any kind of digital file: copyrighted music, articles of incorporation, etc. Like the Internet in the 90s, we have not even imagined the possibilities of Bitcoin and blockchain technology.
One of the best early predictors of successful technology is whether people are using it to break the law. This shows that the technology has overcome an economic hurdle that has prevented people from breaking such laws in the past.

A great example is peer-to-peer file sharing, the technology behind BitTorrent that lets people share very large files with many others without buying expensive bandwidth and dedicated servers.

Much P2P content is pirated, but unlike traditional content piracy, pirated content is distributed on P2P without financial incentives. P2P pirates are motivated to increase their online reputations, not profits, and this is only possible due to the extreme efficiency of P2P sharing technology.

Despite enormous pressure from the entertainment industry, P2P has thrived and is now used by organizations to legitimately distribute software.

The closest analogue to P2P file sharing is 3D printing.

Like P2P, 3D printing drastically decreases an economic cost, this time the cost of physical manufacturing. Like P2P, the cost decrease is so dramatic that people will begin to distribute 3D printable designs without financial incentive. 

We already see the potential for law breaking: anyone can now print their own 30-round magazines and knockoff toys. The potential for legitimate use is also great: 3D printing empowers artists and small-time designers with a whole new medium.

Looking at P2P's past, we know that any government or industry effort to ban or restrict 3D printing will be futile, since the triumph of efficient technologies is inevitable. It's best to embrace such technologies and encourage their legitimate use.

Lastly, aerial transport drones will transform the entire retail market, including online commerce and physical stores. Even with 3D printing, we will still need to buy goods that are not printable: fresh food and electronics for example.

Instead of scheduling a trip to the grocery store or waiting days for a package to arrive, imagine an Amazon or Walmart drone delivering items by parachute to your front door within hours of ordering.

The success of transport drones will not rely on convenience alone: they also represent a huge reduction in shipping costs. It costs much less for a small robot to deliver a package through the air than to pay for a driver and a vehicle to deliver packages on an assigned route.

While aerial transport drones have great potential, I put this technology last because it has the greatest risk of failure due to regulation. While it would be incredibly hard for a government entity to enforce a "ban" on Bitcoin, and to a lesser extent 3D printing, it would be relatively easy to enforce a ban on transport drones: they can simply be shot out of the air.


Unless the FAA clears the use of airspace for private transport drones, we may never see the success of this technology. I am hopeful, however, because well-heeled companies like Amazon are likely deploying lobbyists right now to make this technology a reality in the US.

Saturday, February 15, 2014

Universal Cell Phone Kill Switch: Why government-mandated solutions can make for bad security

Several bills have been proposed at the state and national levels to mandate a remote "kill switch" on all cell phones sold in the US. The purpose of these bills is to deter cell phone theft by allowing users to permanently disable stolen cell phones.
On the surface, this seems like a great idea: what could go wrong? A government-mandated solution has several issues:
Since the feature is mandated by the government, we will likely see some poor implementations of it by some manufacturers. A poor implementation could result in accidental bricking or even exploitation. This has happened in the past: a Gizmodo writer had his iPhone and MacBook remotely wiped by a hacker.
Given recent revelations about our government's actions in the technology security field, I also worry about the power of this legislation. What prevents the government from mandating the bricking of would-be protesters' phones, for example? That may sound ludicrous, but it's not. In 2011, San Francisco's Bay Area Rapid Transit shut off its subway's cell phone transmitters to prevent a protest, leaving all passengers without cell signal. Imagine what could be done with a cell phone kill switch.
Currently there are free, built-in solutions that offer functionality similar to the proposed "kill switch." Apple allows users to deactivate stolen devices in a way that persists even through a reset, and the free Android app TrustGo allows users to track and lock stolen devices.
If these bills do pass, language must be added to ensure that the kill switch has an opt-out that allows the user to completely disable the functionality. Why give a hacker, or the government, the chance to brick your phone?

Sunday, January 5, 2014

Connect a 2560x1440 QHD Monitor over HDMI 1.4 [Windows]

This post is a bit different from my other ones: it is a simple tutorial on connecting a 2560x1440 monitor to a laptop with an HDMI 1.4 port. While HDMI 1.4 is capable of outputting 2560x1440, many laptop graphics drivers artificially limit it to 1200p. I'm posting this because, while simple, the process of tricking the GPU into outputting 1440p over HDMI took me a few days of googling to figure out.

You will need:
  • 2560x1440 monitor with dual-link DVI input (they sell for around $300 on eBay)
  • Spare monitor (1080p or less) with DVI input
  • Laptop or desktop with an HDMI 1.4 port running Windows (most computers made in the last few years meet the 1.4 spec; mine is the HP DV6-6135DX)
  • Dual-link DVI to HDMI cord (it is important to get a dual-link DVI adapter; a single-link adapter will not have the bandwidth to output 1440p)
  • The Custom Resolution Utility (CRU), free software by ToastyX

First, connect the QHD monitor to the computer via the DVI-HDMI cord. You may notice that your graphics adapter refuses to output any resolution to the screen. Don't worry: the monitor's specification is now saved, and you can edit it to trick the graphics adapter into outputting 1440p.

Connect the spare monitor to the laptop via the same cable. This monitor should work at 1080p or whatever the native resolution is.

Open CRU and select the "Active" monitor from the dropdown. Choose "Add" to create a detailed resolution, then edit the entry to read 2560 for horizontal and 1440 for vertical pixels. Save, and restart your computer.


Now you should have the ability to output 2560x1440 on this 1080p monitor. Right-click the desktop and go to Screen Resolution, then select 2560x1440 and apply to test this.


Open CRU again and click "Copy" in the top right corner next to the active monitor. This copies the display settings of the current monitor. "Paste" those settings onto the next monitor in the dropdown; this should be the 1440p monitor we first connected.


Restart your computer and reconnect the 1440p monitor. You should now be able to see the screen output and select the native 2560x1440 resolution.

Note: Some people will need to decrease the screen refresh rate to display at 1440p. Usually 55Hz is okay.
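
If you're curious why lowering the refresh rate helps: the pixel clock the link must carry scales directly with refresh rate. Here's a back-of-the-envelope estimate using approximate CVT reduced-blanking timings (a total raster of roughly 2720x1481; exact numbers vary by monitor):

    # Rough pixel-clock estimate for 2560x1440 with approximate reduced-blanking timings.
    def pixel_clock_mhz(h_total, v_total, refresh_hz):
        return h_total * v_total * refresh_hz / 1e6

    for hz in (60, 55, 50):
        print(hz, "Hz ->", round(pixel_clock_mhz(2720, 1481, hz), 1), "MHz")
    # 60 Hz needs roughly 242 MHz; dropping to 55 Hz saves about 20 MHz, which can
    # be enough to fit under whatever limit the driver or adapter imposes.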

A 2560x1440 monitor is great for coding, video editing, and multitasking in general. Don't let silly driver restrictions stop you from enjoying life in QHD.