
Geek Speak

41 Posts authored by: LokiR

Have you ever wondered what the actual cost of a security breach is?

 

Target, due to its mid-December 2013 breach, certainly makes a case study. As of now, Target's profits are down 46% and its revenue is down 5.3%. Its stock is still down 5% as well.

 

While this tells me it's an excellent time to shop at Target, why aren't more people shopping there? And why did Target take such a hit in profits?

 

Part of this no doubt involves the timing of the breach. Rumors started to filter out in the middle of the holiday shopping season, followed by the official announcement.

 

Another part may be related to a loss of trust. Since rumors leaked out before Target made an official announcement, consumers may no longer consider Target a trustworthy source. Granted, the company may have wanted to wait until the holiday shopping season was over, but the leak made that impossible. Because the rumors came first, it looked like Target was trying to hide the damage instead.

 

Of course, after announcing the breach, Target later had to amend its reported magnitude. Not only were credit card numbers stolen, but personal information was also taken. Since privacy is a hot topic, people may continue to be wary about shopping at Target until more time has passed.

 

Here are some potential take-aways from the Target breach:

  • Hackers seem to like to try for a mid-December attack. The massive 2006 attack also happened in mid-December.
  • If someone leaks that you've been hacked, disclose immediately and with as much detail as feasible.
  • When you finally tell the public about a breach, also include what you're doing to circumvent future attacks.
  • It's probably a good idea to educate the public about what the breach means for them and their continued use of your product.

You've heard of P2P, but have you heard of V2V?

 

Vehicle-to-Vehicle (V2V) communications

 

Vehicle-to-Vehicle communications is, as the name implies, data exchanged between two vehicles. The NHTSA has announced that it is beginning to draft rules to "enable vehicle-to-vehicle (V2V) communication technology for light vehicles," and that eventually all new vehicles will be required to have V2V communication technology.

 

V2V communication wirelessly transmits bursts of "basic safety data" ten times a second. While there's no hard definition of "basic safety data" right now, it will include the speed and position of the vehicle.

 

V2V technology will be used to help prevent accidents. The current, tested implementation uses the safety data to warn drivers of oncoming traffic. For example, if you are changing lanes, you will be warned if a car is coming up in the other lane. According to DOT research, V2V technology can prevent a majority of accidents that involve two or more vehicles. Right now, there are no plans for your car to take over; it just warns you.
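There's no official message format yet, so purely as an illustration, here's a sketch (in Python, with entirely hypothetical field names) of what one of those ten-per-second "basic safety data" bursts might contain:

```python
import json
import time

# Ten bursts per second means one message every 0.1 seconds.
BROADCAST_INTERVAL = 1.0 / 10

def make_basic_safety_message(vehicle_id, speed_mps, lat, lon, heading_deg):
    """Build one hypothetical "basic safety data" burst.

    The NHTSA has not finalized the format; these fields are illustrative.
    """
    return json.dumps({
        "id": vehicle_id,        # anonymized vehicle identifier
        "speed": speed_mps,      # meters per second
        "position": [lat, lon],  # GPS coordinates
        "heading": heading_deg,  # direction of travel, in degrees
        "timestamp": time.time(),
    })
```

At minimum, the real standard will have to nail down the speed and position fields; everything else here is a guess.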

 

Downsides

 

There are a number of privacy issues that are going to come into play on these future regulations, and there are some significant security issues too.

The NHTSA has said that anonymized data will be available to the public. It's fairly easy to identify individuals from such data, so people who have safety concerns about being tracked (such as those with restraining orders against other people) will need to be extra careful.

 

Vehicle manufacturers will also be able to collect additional information so long as the correct basic safety data is transmitted to other vehicles. The extra data can be used by your insurance company to determine your rates, and so on.

 

It also sounds like your vehicle will be tracked, though the information may be somewhat difficult to access, given this line: "vehicles would be identifiable through defined procedures only if there is a need to fix a safety problem."

According to a recent Pew poll, Americans are more afraid of "cyber attacks" than of world-impacting threats like nuclear weapons. Granted, you are more likely to get your credit card number stolen than someone is to push the big, red nuclear button of doom, but that's like being more afraid of being pick-pocketed than of being beaten to a pulp and robbed when you go to a big city. Getting your wallet or credit card number stolen can be a big deal, especially the first time, but it's more of an annoyance-level threat than a loss-of-life-or-limb-level threat.

 

 

What is a "cyber attack" (according to popular opinion)?

 

 

Unshockingly enough, Hollywood and mass media have a lot more to do with this fear of hackers than reality does. People seem to think that rogue (or government sponsored) hackers can ruin their lives (sort of true), bring down power grids (not likely), and start WWIII by hacking missile launch or guidance systems (thanks, Hollywood).

 

The first thing most people think of when they hear they've been hacked is something to do with their bank accounts or credit cards. This is common enough that financial institutions already have guidelines in place to deal with unusual credit card or account activity. If you're afraid someone is going to drain your accounts, that's more difficult than you might think. Since banks don't like to lose customers or money, they usually put some kind of hold on large amounts of money being transferred around. There are also federal regulations on the movement of large amounts of money.

 

Financial ruin? Unlikely.

 

Now, there are other attacks that are more likely to harm individuals, though generally not physically. Facebook accounts and other social media accounts can be hacked and used to ruin people's reputations. Hackers can post someone's personal information and unleash the full might of Internet trolls and bullies (which generally includes death threats and other unsavory messages). They can also upload private or doctored pictures to sites and damage someone's reputation enough that they'll be unable to get a job in their chosen field. These kinds of attacks are less reported in the media and significantly more difficult to recover from.

 

Life ruining? Possibly. Do people think of this when "cyber attack" comes up? Probably not.

 

Getting into the more dramatic, high-profile, high-damage ideas of hackers, losing infrastructure or missiles to cyber attacks is not particularly likely. Squirrels, tree limbs, and Mother Nature are more likely to cause a blackout than hackers. As for death and destruction from missiles or nuclear weapons, well, I haven't heard of any confirmed (or unconfirmed) deaths from a cyber attack. Cyber attacks can certainly cause damage (e.g., Stuxnet), but they don't cause the massive loss of life or property damage that most people seem to fear.

 

Death via hacker? Nope. Get me my self-driving car, and then we'll talk.

 

 

Why are we afraid?

 

 

There's a lot of hype around "cyber" threats that stems from popular media and ignorance. Computers have become widespread enough that everyone can relate to "cyber" dangers in movies or television, but only people in our industry seem to realize how much these fictionalized attacks are either wildly exaggerated or simply wrong based on current technology. News stories also get a lot wrong when reporting security breaches. It doesn't help that there are companies designed to take advantage of these fears and spread computer security misinformation to drum up more business. Few people are taught basic information security, so they make poor security choices and unreasonably fear attacks.

 

For a real-world example of how non-IT folks interpret the news based on how they think cyber attacks work, my mother is very concerned about me purchasing things online due to the Target credit card breach. I can't seem to convince her that the Target breach has nothing to do with how I shop online and that she really doesn't need to worry about that. She sees "credit cards hacked" and then associates that with online shopping when it has nothing (or at least very little) to do with online shopping, especially online shopping at non-Target stores.

 

 

Simple preventative measures

 

 

If only we could get everyone to attend a short information security class... Really, the easiest and most effective way of preventing the feared "cyber attack" is basic information security. Using strong passwords or passphrases, cycling your passwords, not keeping lists of your passwords, and not clicking on strange links are the most effective steps you can take to prevent security breaches.

LokiR

Password 123456

Posted by LokiR Jan 23, 2014

For those of you who can ban certain passwords (such as "password" or "abc123"), you may be interested in SplashData's list of the worst passwords of 2013.

 

 

Here are the top ten jewels for your banning pleasure:

  1. 123456
  2. password
  3. 12345678
  4. qwerty
  5. abc123
  6. 123456789
  7. 111111
  8. 1234567
  9. iloveyou
  10. adobe123

 

 

For those of you who don't have this power, you can use this list to help educate your users on what constitutes a bad password and maybe guide them into using stronger passwords. Most security firms recommend passphrases, like the estimable "correct horse battery staple". Even suggestions like "use your favorite sports team and favorite player number" are better than the tried and broken "company/product and 123".
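If you can't enforce a ban in your tools, you can still sketch one yourself. Here's a minimal example (Python, with a deliberately crude entropy estimate) that rejects the list above and shows why a passphrase beats "abc123":

```python
import math

# The top ten worst passwords of 2013, per SplashData.
BANNED = {
    "123456", "password", "12345678", "qwerty", "abc123",
    "123456789", "111111", "1234567", "iloveyou", "adobe123",
}

def is_banned(password):
    """Reject any password on the banned list, ignoring case."""
    return password.lower() in BANNED

def naive_entropy_bits(password, alphabet_size=26):
    """Crude brute-force estimate: log2(alphabet_size ** length).

    This ignores dictionary attacks, but it illustrates the point:
    even with a small alphabet, a long passphrase dwarfs a short
    "complex" password on raw guessing difficulty.
    """
    return len(password) * math.log2(alphabet_size)
```

A real implementation would use a much larger banned list and a smarter strength estimator, but even this much catches the worst offenders.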

LokiR

Wafer-thin flash drives

Posted by LokiR Jan 13, 2014

There's a new design concept out there for flash drives the thickness of a sticky note. The company, called dataSTICKIES, uses a relatively new material called graphene and a proprietary wireless data transfer protocol to achieve this wafer-thin design.

 

 

Now, graphene is my favorite new material; I've been waiting close to 10 years for someone to come out with a viable commercial application, and this is a pretty cool proto-product. Graphene is a form of crystalline carbon (essentially atom-thick graphite) that is super strong and an excellent conductor. Graphene research spans fields from medicine to energy to quantum science.

 

 

The dataSTICKIES company is using graphene to store data. Because graphene is one atom thick, the drive becomes a flat sheet. Instead of using USB to transfer the data, the company developed an optical data transfer surface to take advantage of the super-thin material. This also makes transferring data easier, since you no longer have to deal with the USB superposition effect (i.e., it takes at least three tries to connect a USB cable) or move computers around to get to the USB ports.

 

 

Another cool thing about dataSTICKIES is that it looks like you can increase data capacity by stacking stickies. I'm not sure how that's supposed to work, though, since you can also stack stickies as discrete drives.

 

 

These would be pretty awesome anywhere, but especially for people on restricted networks. Need to install some more pollers or every SolarWinds product you bought? Just slap a sticky on the computer.

How would you like 100 Gb/s over wireless?

 

 

In an experiment funded by Germany's BMBF, researchers have transmitted 100 Gb of data per second (the equivalent of about 2.5 DVDs) over the grand distance of 20 meters. Before you are underwhelmed, a sister project transmitted 40 Gb/s to a station over 1 kilometer away in a field experiment earlier this year. The researchers are designing these experimental wireless systems to integrate seamlessly with existing fiber optic networks.
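The DVD comparison is easy to sanity-check, assuming single-layer DVDs at about 4.7 GB each:

```python
GIGABITS_PER_SECOND = 100
DVD_CAPACITY_GB = 4.7  # single-layer DVD capacity, in gigabytes

gigabytes_per_second = GIGABITS_PER_SECOND / 8  # 8 bits per byte
dvds_per_second = gigabytes_per_second / DVD_CAPACITY_GB

# 12.5 GB/s works out to roughly 2.7 single-layer DVDs every second,
# in the same ballpark as the researchers' "2.5 DVDs" figure.
```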

 

 

How are they getting these amazing speeds?

 

 

The earlier field test at 40 Gb/s used higher frequencies (200-280 GHz) and experimental transmitters and receivers. The higher frequencies allow the increased speed and larger data volume; the experimental transmitters and receivers are what make those higher frequencies usable.

 

To get the 100 Gb/s speeds, researchers used a photonic method - a photon mixer generates the radio signals for the transmitter - to produce the high frequency radio waves. By applying photonic methods, data from a fiber-optic system can be directly converted to a wireless signal. Of course, they must use the experimental transmitter/receivers to use the signal.

Take a look at their sadly paywalled article in Nature Photonics for more information.

 

 

This is amazing, and I hope someone makes this into a wireless standard pretty fast. I would definitely volunteer to be part of that beta.

When I hear "wetware," I think of futuristic, cybernetic implants that connect our brains to the Internet, version 10.0, but IBM is using the term to refer to a new form of liquid cooling and energy transportation.

 

The new technology emulates the brain's energy transportation (the quintessential wetware model). Capillaries in the brain cool and power our neurons. IBM researchers are attempting to copy that same architecture to reduce the estimated 60% of computer volume dedicated to electricity and heat exchange in modern computers. In the process, computers could become smaller, more powerful, and more energy efficient.

 

 

The cost of technology

 

An increasing concern in the tech sector - especially for businesses running server farms, supercomputers, and data centers - is the cost of running the computers. The cost of purchasing the computers may begin to factor less into the purchasing decision as energy, cooling, and location costs increase. With this new technology, IBM will be able to build smaller, more energy-efficient computers because chip components can be stacked in a kind of electronic blood that is both battery and coolant. Because the chip components can be stacked, there is less distance for signals to travel, further reducing the heat that the electronic blood must transport away. Without the need for airflow between components, noisy fans can be removed and the computer case can be much smaller. Reducing the size of the cases and the amount of heat produced will significantly reduce the cost of running the computers.

 

 

The first steps

 

The first iteration from IBM uses the traditional approach of water to cool the computer chips. However, IBM's experimental Aquasar installation in the Swiss Federal Institute of Technology Zurich (ETH) will use the heat from the cooling system in the heating system of the ETH building.

 

The second step of this technology is to get energy to the components using a liquid medium. IBM researchers are looking at vanadium as a potential key component in this step.

 

 

 

Hopefully IBM succeeds in making the beginnings of a cybernetic brain. While I do enjoy hanging out in the server room to warm up, I'm sure the money that's going into cooling that room could be better spent. Regardless of the future of computing coolant systems, we will still have to monitor the internal temperatures of our servers using tools like SolarWinds Server and Application Monitor.

LokiR

Voice Over I.V.

Posted by LokiR Oct 7, 2013

No, the title is not a typo. October's wacky, weird, so-new-it-hurts technology is a subcutaneous cellphone that runs off of your blood.

 

The phone looks rather like a prototype of the awesome phones from the Total Recall remake. Essentially, the phone is a small, thin silicon touchscreen that is inserted under the skin and only "lights up" when you make or receive a call. The other cool thing about the subcutaneous cellphone is that it would use a blood battery - a tiny biological battery that converts the glucose in your blood to electricity.

 

How can this technology affect you? Not only is it unavailable (with no plans to make it available), but who wants to implant cellphone technology that will be out of date in a year or so? On the other hand, the battery is going to be very useful in the health care field.

 

There may come a time in the near future when you'll have to track medical implants and determine battery life or functionality. It would be awesome if you could monitor these things remotely like you can your server health.

 

Actually, the blood battery can act as a human health monitor since it directly interfaces with your blood. When this is implemented, there is probably going to be a wireless or Bluetooth element to it so that health care providers can monitor you for blood disorders. There will also be an app for that.

 

And in that eventuality when cellphone technology has plateaued, you will be able to get your very own subcutaneous cellphone. I hope by that time we might have some holograms or something to make it even more awesome. 

After we have collectively congratulated ourselves for the D-Wave folks producing what may be the first quantum processor - capable of solving computing problems 10,000 times faster than conventional computers - we might want to put some serious brain power into thinking about what "solving computing problems 10,000 times faster" means for security, especially during the transition period between conventional computing and the widespread adoption of quantum computing.

 

If the initial offerings of the D-Wave processor can solve problems at 10,000 times the speed of conventional computers, and we're not even 100% sure it is a quantum processor (hello, Schrödinger), what impact are these new, quantum(-like) processors going to have on data security?

 

Even if quantum computing takes 10 or more years to be viable for businesses or governments, IT pros will still have to address security concerns around hackers with quantum computers. Even if you have no plans or desire to migrate to or leverage quantum computing in your organization, you will need to either be involved in quantum security measures or be aware enough of them to choose an appropriate third-party product. If you are part of an organization that collects private, financial, or medical data, you have sensitive information that you are obligated (usually by law) to protect. That covers pretty much every organization that can afford an IT department.

 

As soon as criminals have access to quantum computers, conventional security wisdom and policies will no longer be viable because criminals will be able to breach security faster by multiple orders of magnitude. To combat that, the rest of us will basically be forced to invest in quantum security measures.
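To put "multiple orders of magnitude" in perspective, some napkin math (the 10,000x figure is D-Wave's claimed speedup; the ten-year attack is purely illustrative):

```python
SPEEDUP = 10_000  # D-Wave's claimed advantage on some problems

def cracking_time_hours(conventional_years, speedup=SPEEDUP):
    """Hours an attack takes on a machine `speedup` times faster
    than the conventional hardware it was estimated against."""
    return conventional_years * 365 * 24 / speedup

# A brute-force attack that would occupy a conventional machine for
# ten years finishes in under nine hours at 10,000x speed.
```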

 

So, what will the future look like? Are security firms going to be the new IT rock stars, like Google, Apple, and Microsoft? Will new computers ship with biometric security devices? Will employers hand out smart cards?

 

While it is highly unlikely that we can know the future (going by the principles of quantum mechanics), the security field is definitely going to be impacted by this new technology.

Hats off to Canada! It seems that D-Wave has produced the first commercial quantum processor, or at least they've presented enough evidence that NASA, Google, and Lockheed Martin have purchased, or are in the process of purchasing, one of their processors.

 

When D-Wave first started out, there was, and continues to be, a great deal of skepticism regarding its claims of creating a quantum processor. Most research labs have only succeeded in building general-purpose quantum computers that use a few qubits (quantum bits). D-Wave claims to use hundreds of qubits in their processor.

 

D-Wave seems to have overcome the limitations seen in research labs by creating a processor that solves a specific type of problem - optimization problems - though it is comparable to a high-end classical processor for general problem solving. A recent study in Nature Communications supports D-Wave's claims about creating the first commercial quantum processor. At minimum, the study lends credence to D-Wave's claims by acknowledging that "quantum effects play a functional role" in the processor.

 

Regardless of its status as quantum processor, it has performed up to 10,000 times faster on some optimization problems during testing. If you are involved in machine learning or identifying exoplanets from satellite images, there might be a D-Wave computer in your future. Unfortunately, the high price tag means you have to have extraordinarily deep pockets to purchase one. Heck, you might even need to have deep pockets to look at one, considering the reported $10 million asking price.

 

So the general IT field probably won't have to worry about monitoring quantum devices for decades, but with this viable quantum entry into the field, we're going to have to start thinking about how to apply IT management principles to quantum computing, especially with regard to monitoring quantum equipment and new security protocols. And really, after quantum computing, we're going to have to worry about quantum networking to move this new wealth of information around more efficiently.

LokiR

What is PRI

Posted by LokiR Jun 10, 2013

A Primary Rate Interface (PRI) is not an esoteric banking term designed to take as much money as possible away from your account. It's a less obscure telecom term that connects your internal PBX to "the outside."

 

How is this important?

 

The PRI is the part of the ISDN that is responsible for carrying voice and data from point A to point B. This is essentially the trunk you rent from your telecom provider to connect your internal VoIP lines to the external PSTN. (I wrote a blog about trunks a while back, if you're interested.)

 

A PRI uses different carrier lines depending on which part of the world you live in. For example, if you live in North America, the PRI uses a T1 line. If you live in Europe, it uses an E1 line. These carriers establish how many external phone lines you can access and how much data you can transmit.

 

Using the T1 line as an example, there are 24 available channels, divided into B channels and D channels. The B channels carry your voice and data. The D channels carry control and signaling information. A single T1 PRI line has 23 B channels and 1 D channel. If you use more than one PRI line, you can often reduce the total number of D channels. If you lease two PRI lines that share a single D channel, you could have 47 B channels, giving you 47 outside telephone lines instead of 46.
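The channel arithmetic is simple enough to sketch. This assumes T1 PRIs (24 channels each) and that a group of trunks can share one D channel (an arrangement known as NFAS):

```python
T1_CHANNELS = 24  # total channels per T1 PRI line

def b_channels(num_pri_lines, shared_d_channel=False):
    """Count usable B (bearer) channels across T1 PRI lines.

    Without sharing, each PRI gives up one channel for signaling
    (23B + 1D). With a shared D channel (NFAS), the whole group
    needs only one D channel.
    """
    total = num_pri_lines * T1_CHANNELS
    d_channels = 1 if shared_d_channel else num_pri_lines
    return total - d_channels
```

So one PRI gives you 23 outside lines, two independent PRIs give you 46, and two PRIs sharing a D channel give you 47.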

LokiR

To Cryo or Not to Cryo?

Posted by LokiR Jun 3, 2013

That is a question.

 

This is probably not relevant for a great number of IT folks, but it is interesting for the implications on server room/server cooling.

 

This year, some enterprising researchers developed a micro cryocooler that can cool a device down to 30 Kelvin (-243 °C, -406 °F) in around an hour and is about the length of your pinkie finger. This new cryocooler is a multi-stage, mini version of the tried and true Joule-Thomson cryocooler (circa 1852). The Joule-Thomson cryocooler works by letting a high-pressure gas that is below its inversion temperature expand as it flows to a low-pressure region.

 

The first stage of the new device uses nitrogen to cool from room temperature to 100 K (-173 °C, -280 °F). The second stage uses hydrogen and cools the rest of the way to 30 K.
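As a quick sanity check on those temperatures, the conversions are straightforward:

```python
def kelvin_to_celsius(k):
    return k - 273.15

def kelvin_to_fahrenheit(k):
    return k * 9 / 5 - 459.67

stage1_f = kelvin_to_fahrenheit(100)  # nitrogen stage: about -280 F
stage2_f = kelvin_to_fahrenheit(30)   # hydrogen stage: about -406 F
```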

 

As it stands, this is a great innovation for medical technology and space technology (such as interplanetary telecommunication).

 

Now, if we can cool down to 30 K, we should be able to regulate temperatures to a happy medium between temperatures needed for superconducting devices and temperatures too hot for business-level computing. I, personally, would be very grateful for an inexpensive, consumer-level CPU cooler for my laptop.

 

In the meantime, you can use SolarWinds Server and Application Monitor to keep an eye on your server temperatures and wish you had a micro cooler for your CPUs, cryo or otherwise.

You might not know it, but SolarWinds has a product called Mobile Admin that lets you manage your IT infrastructure directly from your mobile device. Mobile Admin comes in two parts - a free mobile client and a purchasable server. You can download the mobile client on iOS, Android, and BlackBerry devices. The Mobile Admin server installs on a Windows box and integrates your IT management tools - such as Active Directory, SolarWinds Orion, or CA Service Desk - into a single UI that you can access from your mobile client.

 

Going back to the mobile client, it has three very nice, free features.

MA_free_stuff.png

 

SSH

You can open a secure shell connection using the mobile admin client. If you're connecting to a computer on the corporate network, don't forget to open a VPN connection first.

ssh.PNG

 

Telnet

Oh, telnet, you IT workhorse, you. Someday you might get to retire to greener fields.

telnet.PNG

 

RDP

Who really wants to open an RDP session on your phone? But if you have to do it, the Mobile Admin client provides the option. Obviously, this option works best on a tablet.

rdp.PNG

As a side note, you can't connect to a 2012 or Win8 box through RDP yet.

 

Pro-tip: There are more "keys" on those toolbars - just scroll to the right or left.

 

There you have it - three reasons to use the Mobile Admin client on your mobile device.

There are long, exhaustive lists of best practices for creating master images. In this post, I have compressed them down to three major areas. So sit back, fiddle with that fancy fob watch, and read about a few things to consider before creating master images.

 

Storage

 

While storage space is fairly cheap now, it's still a commodity. Do you really want to waste storage space that can be used elsewhere? Moreover, do you want to waste time searching through fifty images?

 

When you create the master images, it is easy to go overboard and make too many images. Try to create a basic master image and then a couple of images based on the number of users per software package, frequency of requests, or the difficulty level of an install.

 

For example, most users need some sort of office suite, but if a significant number of users need a specific piece of specialty software, then you should probably make a master image for them. If you support a department with a lot of turnover, you might want to create an image specifically for that department. If you support software engineers, you might want to create an image for them so you don't have to waste time installing and configuring the development environment, version control software, and so forth.

 

Image Optimization

 

Optimization goes hand in hand with storage restrictions. You want the image as trimmed as possible but have all of the appropriate patches, service releases, and software. Most vendors recommend that you uninstall any unnecessary programs and disable unnecessary services. On a VDI platform, you should probably disable automatic updates from the virtual desktop and manage updates yourself to prevent performance and network hits.

 

When building your images, don't forget to add trusted sites and intranet links, map printers or other shared network devices, install drivers and virus scanners, and apply all approved software updates.

 

Clean Up

 

This is the part that I usually forget about and then pat myself on the back when I remember it. After you have created your lovely new master images, remember to remove all the installers, empty the trash, and defrag the hard drive. You should probably check the hard drive for errors and run the virus scanner before making the image.

LokiR

Brainwave Biometrics

Posted by LokiR Apr 12, 2013

I have heard a lot about biometrics - granted, almost everything I know about biometrics comes from action-packed spy movies and T.V. series - but I've only been to one place that uses any form of biometric security. Thank you, Disney World, for using biometric finger scanners to get into your parks. You make my life complete in so many unintentional ways.

 

For all intents and purposes, biometrics is still in that scary, futuristic niche of rigorous governmental or corporate information control. Fingerprint scanners, retina scans, and voice recognition, while secure, suffer from problems such as expense and speed that prevent widespread use. Plus, using biometric information is creepy, invasive, and brings out privacy advocates faster than Google or Facebook. It's safe to say that this is a niche industry and most people won't encounter full-fledged biometrics unless they're working in a high-security area or law enforcement.

 

You would think that brainwave readers would be even more problematic. However, as you may have guessed, this may not be the case anymore. New innovations in consumer-grade biosensor technology might catapult brainwave "passthoughts" to the head of the biometric industry.

 

You may have seen the brainwave-controlled cat ears before or heard about the brainwave-controlled tail. Both of those products have made the blog rounds, and the cat ears are available on ThinkGeek. ThinkGeek also sells the base headset that can be used, among other things, to control robots.

 

The geniuses over at UC Berkeley's School of Information took this readily available headset and ran experiments to see if it could be used for computer authentication, which it could.

 

Using customized thought tasks, the researchers reduced error rates to below 1%. That is a better rate than I have typing out passwords. According to the research team, the best way to use a brainwave authentication system is to pick thought tasks that are relatively easy but not too boring, like mentally counting objects of a certain color or focusing on your breathing. If biosensor technology firms start making their sensors smaller and less cumbersome, these might even start replacing smart cards or bank PINs.
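The Berkeley team's exact matching method isn't described here, but the general shape of template-based authentication is easy to sketch. In this illustration, an enrolled "passthought" is a feature vector extracted from the EEG signal, and a login attempt is accepted if it is similar enough to the template (the vectors and the 0.95 threshold are entirely made up):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def authenticate(sample, template, threshold=0.95):
    """Accept the user if the fresh EEG feature vector is close
    enough to the enrolled template. Feature extraction and the
    threshold value are illustrative, not from the actual study."""
    return cosine_similarity(sample, template) >= threshold
```

Tuning that threshold is exactly the classic biometric trade-off: too strict and legitimate users get locked out, too loose and impostors slip through.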

 

One avenue of research that they haven't broached yet is how to authenticate when under stress, which I think will probably be a key area in high-security law enforcement or military use. Of course, they might just be concentrating on the areas where biometric security has not been a viable option.

 

In any case, computer security might be taking on a very different outlook in the next few years.
