
Geek Speak


A corollary of the maxim on historical memory--those who forget mistakes are doomed to repeat them--is that the best way to go forward is to look back--if not at missed alternatives, then at least at the path taken.


Will 2014 be a turning point in the future of the internet? Let's review a few points of reference to put the question in context for those who someday might look back.


First point: As a context for Facebook's recent acquisition of the WhatsApp messaging service for $19 billion, CEO Mark Zuckerberg tells us in a white paper that his outlook on adding another 5 billion users to Facebook assumes connectivity for global citizens should be "a human right". "The knowledge economy is the future," says Zuckerberg. "By bringing everyone online, we’ll not only improve billions of lives, but we’ll also improve our own as we benefit from the ideas and productivity they contribute to the world."


Delivering remarks to the Mobile World Congress in Barcelona this month, Zuckerberg describes his plan as providing "a dial tone for the internet"; guaranteed access to carrier infrastructure that enables meaningful connectivity in the knowledge economy over a phone. If you have a phone, then you'll have basic access to the internet in terms of "text-based services [(messaging, social networks, search)] and very simple apps like weather".


Second point: In a 2014 DC Court of Appeals ruling on Verizon's case against the Federal Communications Commission, Judge David S. Tatel wrote that:

Given that the commission has chosen to classify broadband providers in a manner that exempts them from treatment as common carriers, the Communications Act expressly prohibits the commission from nonetheless regulating them as such.


The FCC has three basic rules for guaranteeing an open internet: transparency, no blocking, and no unreasonable discrimination. These rules apply to all carriers. Since the FCC defines broadband providers as dealing in "information services," Tatel's majority opinion suspends the open internet rules for carriers that provide broadband services.


Of course the FCC need only reclassify broadband providers as common carriers and the open internet rules would again immediately apply. So far it has not done so. And in the meantime, carriers are aggressively moving to exploit the difference between "common carrier" and "broadband provider," establishing different classes of broadband traffic with different costs. The elephant in the room is the proposed merger of Time Warner Cable and Comcast; it would create a company that serves a full third (33 million) of broadband subscribers in the US, and aggressive commercializing of the rate at which data flows to and from such a large base of consumers would be almost impossible to prevent without very clear FCC regulations.


Since these robber-baron maneuvers by carriers have wide and fundamental implications for access to the knowledge economy, one would expect Zuckerberg and internet.org to oppose them. Yet in talking about the program to inclusively guarantee the human right of connectivity to current internet users and the billions of currently unwired people, Zuckerberg promotes the idea of up-selling data services to the minimally connected. A (cynical?) image of Zuckerberg's plan could be a large impoverished crowd of people given a little space before an enormous bakery window that pushes out intoxicating wafts to them through tiny holes.


Third point: Remember that scene in Minority Report (2002) in which a commercialized public pedestrian passage personally appeals to the main character as he passes through? While the scene's face recognition technology may imply a ubiquitous computing grid, we can't tell how that grid of the imagined world of 2054 combines public and private services. Does the consumer, John Anderton, receive personalized ads based on data services to which he subscribes? Or does he receive them because vendors pay the owner of the computing infrastructure for Anderton's attention, and Anderton himself doesn't pay to block them?


In either case, the fact that Anderton wears no computing device that could refuse the probing of face/iris recognition devices or otherwise negotiate the ads suggests an important implausibility in what Minority Report shows us. Though Google Glass may encounter resistance to its wearable computing technology, our increasingly mediated culture foretells that resistance to wearables (and, in the long run, implants too) will be futile.


Since envisioning is the first step in making new technology real, the blurry edges and omissions of popular depictions of the future have value for choices of innovation in the present. And it may not be just a darkly ironic open question as to the details of the computing grid and user recognition technology that might be in play for John Andertons in the actual 2054.

Let's return to an earlier topic: how face recognition and wearable computing technology may converge.


Google Glass is in a phase of development Google calls the Explorer Program; participants are would-be early adopters who complete an application to buy Google Glass and provide feedback as the product iterates and its distribution expands. Meanwhile, among the many companies already developing apps for Glass, facialnetwork.com offers a demo version of NameTag, which allows a user of Glass to get information about people in the wearer's field of vision.


A press release on NameTag evokes a dating and professional networking best-of-both-worlds. Glass wearers, preparing to break the ice, can check each other out in a bar, for example, by running images of each other through NameTag's face recognition database and reviewing information that includes dating site, LinkedIn, and Facebook profiles, and public records (home appraisals, drunk-driving and other convictions, perhaps).


NameTag has the attention of Senator Al Franken's Subcommittee on Privacy, Technology, and the Law. In response to the subcommittee's inquiries, the company declares its intention to "authenticate every user and every device on our system to ensure data security". And using NameTag requires that a Glass wearer create an NT profile, supply a number of photos from specified angles, log in to the service with "at least one legitimate profile" from another social media site, and agree to exchange information.


Glass-less people in a wearer's view are obviously at an information disadvantage. Our only option would be to visit the NameTag website and "opt-out" of having our data accessible in NameTag searches. Otherwise, by default, we remain oblivious that a begoggled NameTag user has us in view, gazing at us through the filter of our social media information, visual and textual, before making any move in our direction.


Yet NameTag would seem to be unambiguously excluded from the Google Glass platform: "Developers can develop these types of apps but they will not get distributed on the Glass platform period," says a representative from the team.


As with any consumer technology, however, first means more to come, and usually sooner rather than later; it's just a matter of time before another brand of glasses arrives with fewer restrictions to better suit all types of voyeur.


Edge Devices


What happens when a Google Glass unit enters your network space? Are you going to let it obtain an IP address? If the wearer is an employee, does your single-sign-on system admit the glass device as an authenticated endpoint through which its user can do all of his or her usual kinds of company business?


Monitoring the inevitable wave of glass will be a high priority as the security holes in each new innovation reveal themselves during real-world use.

I recently discussed the NSA's project to develop a quantum computer, in part for the purpose of cracking AES-encrypted data captured from internet backbones during the past 10 years and now stored in its enormous warehouses.


Yet, while true quantum computing is probably still years away, quantum key distribution systems already exist. These systems do not depend on the practical inability of a computer to factor very large numbers; instead they use the relationship of entangled particles to cipher the exchange of data. Making quantum key distribution technology more widely available would therefore be a crypto-activist strategy that even a (still hypothetical) quantum computer plausibly would be unable to defeat.


If you think of packet-switched data security as a chain, then the encryption algorithm provides the strongest link. Managing encryption keys is the weakest link, and stealing keys is what hackers (the NSA included) most often succeed in doing. In some cases, as with the RSA corporation, the creator of the technology that generates keys takes money to make key theft easy.




As I've discussed, quantum computing fundamentally depends on the engineering feat of manipulating particles into the state known as superposition. Quantum key distribution also uses superpositioned particles, but requires at least one pair of such particles that are entangled. With a pair of entangled particles, observing some aspect of state for one particle exactly predicts the state of the other. Prediction in this case amounts to instantaneous communication between particles.


Since this phenomenon violates the theory of special relativity, which precludes particles influencing each other at any speed faster than light, Einstein derided the entanglement hypothesis, describing it as "spooky action at a distance". And yet 80 years of experimental physics has overwhelmingly confirmed entanglement as a reproducible physical reality.


Quantum Key Distribution and the Flow of Money


Entangled particles secure qubit-based key exchange by relying on the fact that you can neither copy a quantum state (the no-cloning theorem) nor measure all aspects of entangled particles without corrupting the quantum system--in effect, collapsing particles in superposition into particles with a single definite set of values. As a result, parties using such a quantum system to secure their information exchange can detect the fact and extent of intrusion by a third party.


In 2004, a group of researchers based at Vienna University produced a set of entangled photons and used them as a key to cipher a transfer of funds from Bank Austria Creditanstalt:

At the transmitter station in the Bank Austria Creditanstalt branch office, a laser produces the two entangled photon pairs in a crystal. One of the two photons is sent via the glass fiber data channel to the City Hall, the other one remains at the bank. Both the receiver in the City Hall and the transmitter in the bank then measure the properties of their particles.


The measuring results are then converted into a string of 0s and 1s – the cryptographic key. The sequence of the numbers 0 and 1 is, due to the laws of quantum physics, completely random. Identical strings of random numbers, used as the key for encoding the information, are produced both in the bank and the City Hall. The information is encoded using the so-called “one time pad” procedures. Here, the key is as long as the message itself. The message is linked with the key bit by bit and then transferred via the glass fibre data channel.


Eavesdropping can be detected already during the production of the key – before the transfer of the encoded message has even started. Any intervention into the transfer of the photons changes the sequence of the number strings at the measuring stations. In case of eavesdropping, both partners receive an unequal sequence. By comparing part of the key, any eavesdropping effort can be discerned. Though the eavesdropper is able to prevent the transfer of the message, he is unable to gain any information contained in the message.
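The one-time pad step and the key-comparison check described in the quoted procedure can be sketched in a few lines of Python (the message, key source, and sample bits here are illustrative stand-ins, not the bank's actual protocol):

```python
import secrets

def xor_otp(data: bytes, key: bytes) -> bytes:
    """One-time pad: link message and key bit by bit (XOR)."""
    assert len(key) == len(data), "the key must be as long as the message"
    return bytes(m ^ k for m, k in zip(data, key))

message = b"transfer approved"
key = secrets.token_bytes(len(message))   # stand-in for the quantum-derived random key

ciphertext = xor_otp(message, key)        # what travels over the fibre data channel
recovered  = xor_otp(ciphertext, key)     # XOR with the same key restores the message

# Eavesdropping check: both stations compare a sample of key bits; a third
# party who measured the photons in transit leaves mismatches behind.
bank_sample      = [1, 0, 0, 1, 1, 0, 1, 0]
city_hall_sample = [1, 0, 1, 1, 0, 0, 1, 0]   # disturbed by an interception
eavesdropping_detected = bank_sample != city_hall_sample
```

Because the key is truly random, used once, and as long as the message, the one-time pad itself is information-theoretically secure; the quantum channel's contribution is making key interception detectable.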

Currently there are physical and financial limits on the availability of quantum key distribution: the network cannot extend beyond 120 miles, preventing open internet adoption; and every member of the network must have a pair of entangled photons generated for each encrypted session of data exchange, incurring significant entry costs in equipment. This is a good example of William Gibson's observation that, technologically speaking, the future is already here--it's just not evenly distributed.


Encrypting Data, Breaking Codes


Innovations in encrypting data and hacking into it tend to leap-frog each other in the history of cryptography. And because who controls each innovation affects everyone who creates and exchanges data over publicly accessible channels, keeping up with the latest innovations is in every individual's and organization's vested interest.


The lesson seems to be that the security of any software product comes down to how carefully the keys to the system are guarded. As with any kind of business and social interaction, establishing trust often comes down to reputation; and this is all the more the case when it comes to data security and choosing to purchase technology instead of creating it yourself. That a product for polling network devices carefully adheres to the standard in implementing SNMPv3, for example, is an important indication that its security features operate as expected.

Last time I explained how quantum computing relies on the phenomenon of 'superposition'. And though the NSA is spending $79.7 million this year on quantum computing research and development projects, the recent award-winning achievements in particle physics tell us that a quantum computing platform would most likely take years, if not decades, to engineer.


At stake in the effort, besides a new era of computing with mind-boggling power and scale, would be a breakthrough in code-breaking, enabling access to AES-encrypted data already being warehoused in Bluffdale, Utah. Since the value of that data decreases with the time it takes to break the cipher protecting it, a quantum computing platform that takes decades to complete would be of decreasing value with regard to data warehoused now. In short, assuming you use an AES cipher to protect the privacy of your data now, how much would you care if the NSA gained access to 2014 data sometime between 2034 and 2064?


Generating Encryption Keys


The National Institute of Standards and Technology (NIST) publishes a series of Federal Information Processing Standards (FIPS) documents related to information security. FIPS PUB 140-2 lays out criteria for accrediting cryptographic modules. If you adhere to FIPS 197 in implementing AES within a computer application, for example, then NIST's Cryptographic Module Validation Program (CMVP), using the criteria in FIPS 140-2, validates your application as FIPS-compliant.


If encryption software does not generate random keys and protect those keys from interception, then the software only guarantees that its ciphered data is secure from those who do not know how to exploit its key management flaws. You can imagine the trouble with CMVP's integrity were they to certify a non-secure key generation module--which, yes, they seem to have done with RSA Corporation's BSAFE cryptographic system. Since 2004 BSAFE has been generating keys that are accessible to the NSA via an engineered backdoor.
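The flaw is easy to illustrate with Python's two random-number modules (the seed value below is arbitrary; Dual_EC_DRBG's actual weakness involves elliptic-curve constants rather than a shared seed, but the consequence for key secrecy is the same):

```python
import random
import secrets

# A sound key generator draws from the operating system's CSPRNG,
# whose output an observer cannot reproduce.
aes_256_key = secrets.token_bytes(32)     # 32 bytes = 256 bits

# A generator whose internal state is known is fully reproducible --
# exactly the property a backdoored DRBG hands to its designer.
rng_victim   = random.Random(20040101)    # victim's generator state...
rng_attacker = random.Random(20040101)    # ...known to the attacker
victim_key   = bytes(rng_victim.randrange(256) for _ in range(32))
guessed_key  = bytes(rng_attacker.randrange(256) for _ in range(32))
# guessed_key matches victim_key byte for byte; the cipher never had to be broken.
```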


Worse than CMVP's implied incompetence in validating BSAFE is its possible collusion with the NSA in getting BSAFE's Dual_EC_DRBG key generation backdoor into circulation as  part of a trusted cipher system. And in any case, the verifiability of any cryptographic system is a sorely open issue. If we can't trust NIST, then who can we trust to verify the cryptography we use but do not create ourselves?


The Cost of RSA's Profitability


Security experts have been aware of the flaws in BSAFE's key generation since 2007, two years after the BSAFE specification was published. Only with a recent Snowden-sourced story did we learn that the NSA paid RSA $10 million to make the rigged Dual_EC_DRBG component the default random number generator for BSAFE.


Among other things, we have another confirmation that verifiably AES-based key generation and data encryption are the only truly secure cryptography options in our contemporary context. Trusting the source of your software for monitoring network devices is more important than you may have thought.

Since I revisited the topic of AES encryption and NSA surveillance, the Washington Post has published information sourced through Edward Snowden showing that the NSA is spending $79.7 million to pursue a quantum computer capable of running Shor's algorithm. If and when the NSA succeeds, all the currently unreadable AES-encrypted data it routinely captures en masse from internet backbones and stores in the Bluffdale, Utah computing center would become readable.

To give some sense of the agency's ambition we need to talk about Schrödinger's cat.


Quantum Smearing


In Erwin Schrödinger's famous thought experiment a cat sits inside a Faraday cage--a steel box from which electromagnetic energy cannot escape. Also in the box and inaccessible to the cat is a machine that contains: 1) some material whose probability of releasing radiation in an hour is exactly 50%; 2) a Geiger counter aimed at the material and rigged to release a hammer upon detecting any release of radiation; 3) a flask of poison positioned under the hammer. If the radioactive material releases radiation, the hammer smashes the flask, killing the cat.


In this box, however, as a quantum system, it is always equally probable that radiation has and has not been released. According to the Copenhagen interpretation of quantum systems, with its idea of superposition, the cat in the box exists as a smear of its possible states, simultaneously both alive and dead--an idea Schrödinger, along with Einstein, ridiculed as absurdly at odds with everyday life. Nobody has ever seen the material smear that is a Schrödinger cat*.


Qubits, or Herding Schrödinger Cats


David Wineland and his team received their Nobel Prize in Physics in part for creating a very small Schrödinger cat. "They created 'cat states' consisting of single trapped ions entangled with coherent states of motion and observed their decoherence," explains the Nobel Prize organization in making its 2012 award.


Wineland developed a process to trap a mercury ion, cause it to oscillate within the trap, and then use a laser to adjust its spin so that the ion's ground state aligns with one side of its oscillation and its excited state aligns with the other side. On each side of the oscillation the ion measures 10 nanometers; the ion's two resting points in the oscillation are separated by 80 nanometers. And in effect, the mercury ion is guided into a "cat state" of superposition.


In this state the ion occupies both its ground and excited states and so meets the physical requirement for serving as a quantum computing "qubit"; using the difference in spin, superposition allows the ion to be both 0 and 1 depending on where in its oscillation it is "read".


A quantum computer would be capable of breaking AES because a qubit register is an exponential, not a linear, quantity. In linear binary computing, using electrical-current transistors, 3 bits of data give you exactly one binary number--101, 001, 110, and so on; but in quantum computing, 3 qubits represent a quantity that is 2 to the 3rd--all eight such values at once.


So, to extrapolate how qubits scale, Wineland (43:00) offers this example: while 300 bits in linear computer memory may store a line of text, 300 qubits can store 2 to the 300th objects, holding a set that is much larger than the number of elementary particles in the known universe. And qubit memory gates would allow a parallel-processing quantum computer to operate on all of its 2 to the nth inputs simultaneously, making trivial the once-untouchable factoring problems upon which widely deployed public-key encryption schemes are based.
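The scaling Wineland describes can be checked with a few lines of Python:

```python
# A classical n-bit register stores exactly one of its 2**n possible values;
# an n-qubit register in superposition is described by 2**n amplitudes at once.
def possible_values(n_bits: int) -> list:
    return [format(i, f"0{n_bits}b") for i in range(2 ** n_bits)]

three_bit = possible_values(3)        # ['000', '001', '010', ..., '111']

# 300 qubits: more amplitudes than the ~10**80 elementary particles
# usually estimated for the observable universe.
amplitudes_300_qubits = 2 ** 300
```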


Big Black Boxes


The NSA project is underway in Faraday cages the size of rooms. Could that work proceed without the direct involvement of Wineland's award-winning NIST team? Even presuming that involvement, the technical challenges of going from an isolated and oscillating mercury ion to a fully developed quantum computing platform would seem to imply years not months of work.


This time next year we may know the answer to how long the project will take. In the meantime, we continue assuming that AES-encrypted data remains secure and that the SNMPv3-enabled tools for monitoring systems with secure data do not introduce breaches in the systems themselves.


* Schrödinger implicitly formalized his own feline paradox with a differential equation that calculates the state and behavior of matter within quantum systems as a wave function (Ψ) that brings together mutually exclusive possibilities.

NSA-related news in recent months warrants revisiting an earlier discussion of AES. In an article on the NSA's Bluffdale, Utah datacenter, I noted that despite the petaflop processing power of those systems, AES-192 and AES-256 are still currently unassailable encryption schemes: "For now, however, nobody and no system on Earth can decrypt AES if used with 192 or 256 bit key lengths. In fact, the US federal government requires AES with a 256 bit key length for encrypting digital documents classified as 'top secret.'"


Advanced Encryption Standard (AES) is the symmetric key algorithm certified by NIST. According to cryptographer Bruce Schneier, the NSA's Bluffdale systems are not aimed at breaking high-bit AES.


The only serious caveat in computer science is that Peter Shor's factorization algorithm could defeat AES were quantum computing a reality. And keep in mind that two physicists and their teams were awarded the Nobel Prize in 2012 for advances in capturing and manipulating ions that mark an important milestone toward building a quantum computer capable of running Shor's algorithm. How far away that is from happening is a physicist's dream worth discussing.



For now let me point out that besides AES's symmetric key generation many software applications use asymmetric or public key encryption; and the RSA corporation's implementation of public key encryption is the most widely used.


While the NSA's Bluffdale wolf may not be able to blow down the house of AES, he is huffing and puffing at a plenitude of RSA-1024 encrypted data held among the storage arrays. Bruce Schneier puts it both more cryptically (as it were) and plainly on his blog: "I think the main point of the new Utah facility is to crack the past, not the present. The NSA has been hoovering up encrypted comms for decades and it may be that the combination of a petaflop computer plus terabytes of data might be enough to crack crypto weaker than 128-bit (and especially 64-bit)".
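Schneier's "weaker than 128-bit (and especially 64-bit)" qualifier is easy to quantify, since each added key bit doubles the brute-force keyspace; a quick sketch:

```python
# Brute-force work scales with keyspace size: each extra key bit doubles it,
# so the gap between weak and strong ciphers is multiplicative, not additive.
def keyspace(bits: int) -> int:
    return 2 ** bits

# Moving from 64-bit to 128-bit keys multiplies the attacker's work
# by 2**64 -- roughly 1.8e19 times more trial decryptions.
work_ratio = keyspace(128) // keyspace(64)
```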


I'll say more in another article about the recent Snowden revelations surrounding RSA encryption. In conclusion here I'll reiterate this: use AES-based tools for handling any data that you really need to keep secure. That means for any systems that monitor your other systems that carry AES-encrypted data, SNMPv3 is your best option. And many of you already know that SolarWinds tools for monitoring nodes and configuring network devices support SNMPv3.
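For a sense of what enabling SNMPv3 involves on the device side, an IOS-style configuration pairing SHA authentication with AES privacy might look like the following (the group and user names are purely illustrative, and the placeholder passphrases must be replaced; consult your device's documentation for exact syntax):

```
! Illustrative SNMPv3 setup: authPriv security level (auth + encryption)
snmp-server group MONITOR-GROUP v3 priv
snmp-server user npm-poller MONITOR-GROUP v3 auth sha <auth-passphrase> priv aes 256 <priv-passphrase>
```

The monitoring tool then polls with the same user, authentication, and privacy settings, so management traffic itself travels encrypted rather than in SNMPv1/v2c cleartext.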

Four predominant IT use cases any network device configuration management tool must address are:


  • Configuration change management: scheduling device configuration backups, requiring change approval for configuration changes, scheduling execution of approved changes.
  • Compliance reporting: defining and enforcing configuration policies across all network devices through automated uploads and scheduled change reports.
  • Inventory reporting: tracking network device components, including serial numbers, interface names/specifications, port details, IP addresses, ARP tables, installed software manifests (with version levels).
  • Network device End of Sales and End of Life management: tracking sales/support status as integral part of strategic capacity and upgrade planning.
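The change-reporting half of the first two cases reduces to diffing stored configurations against what's running; a minimal stdlib Python sketch (the device name and config lines are invented for illustration):

```python
import difflib

def config_change_report(backup: str, running: str, device: str) -> str:
    """Unified diff between the last approved backup and the running config."""
    return "".join(difflib.unified_diff(
        backup.splitlines(keepends=True),
        running.splitlines(keepends=True),
        fromfile=f"{device} (backup)",
        tofile=f"{device} (running)",
    ))

backup  = "hostname core-sw1\nsnmp-server community public RO\n"
running = "hostname core-sw1\nsnmp-server community s3cr3t RO\n"
report  = config_change_report(backup, running, "core-sw1")
# report flags the changed snmp-server line for review or rollback
```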


How well a configuration management system and IT best practices satisfy these cases on a daily basis impacts your company's ability to organize and promote the strategic collaborations among employees and partners that meet the business goals set at the executive level.


What business cases are the most important to you and why? What's missing from the system you currently use?


Visualizing Your Network

Posted by docwhite Oct 28, 2013

Every IT team scales its organization and practices to fit an evolving set of interrelated business, network, user, and security requirements. A team of any size must have tools that facilitate the basic work of deploying network devices (switches, routers, physical and virtual servers, desktop and laptop computers, IP phones and smartphones) and managing all of those devices as well as the users who depend on the network for the work they do for their business.


Monitoring nodes, users, applications, and the different kinds of network traffic, and addressing issues that impact performance and security are twin daily challenges. And a good alerting system is the key to efficiently relating monitoring to management.


The Power of Visualized Information

Trends in graphs and percentages in charts effectively tell the story of what’s happening with the different aspects of your network at the interface, node, and traffic level. Send the results of your measurements to reports and your team gets a snapshot of daily, weekly, and monthly behavior. Policy adjustments, configuration changes, and capacity planning all depend on the metrics against which your monitoring system generates its graphical information. Without these statistics, an IT team’s anticipation and planning would be trapped on the edge of impending crisis; planning would happen only through the guesses occasioned by crises.


Seeing the Whole through the Particular

The most powerful view of your network is the one that shows how particular nodes are connected to each other. A set of alerts tells you what the team needs to triage; those same alerts, distributed as signals on a topology map, show how the pattern of alerts indicates—for example—that a particular switch sits in the path of all the nodes currently sending alerts. Triage becomes much more finely focused when you can see how impacted nodes are interconnected.
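That "common switch" reasoning is mechanical once the topology exists as a data structure; a toy Python sketch (all device names invented for illustration):

```python
from collections import Counter

# Toy topology: each device maps to its upstream neighbor (toward the core).
uplink = {
    "web-01": "sw-access-3", "web-02": "sw-access-3", "db-01": "sw-access-3",
    "sw-access-3": "sw-core-1", "mail-01": "sw-access-7", "sw-access-7": "sw-core-1",
}

def upstream_path(node: str) -> list:
    """Devices between a node and the network core, nearest first."""
    path = []
    while node in uplink:
        node = uplink[node]
        path.append(node)
    return path

def shared_upstream(alerting: list) -> str:
    """Nearest device sitting on the path of every alerting node -- a triage hint."""
    counts = Counter(hop for n in alerting for hop in set(upstream_path(n)))
    on_all_paths = [d for d, c in counts.items() if c == len(alerting)]
    return min(on_all_paths, key=upstream_path(alerting[0]).index)

# web-01, web-02, and db-01 all alerting points at their shared access switch.
suspect = shared_upstream(["web-01", "web-02", "db-01"])
```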


Among the requirements for a mapping tool that integrates with the other pieces of your monitoring system should be these:


  • Provides accurate, deep, and maintainable network discovery, using multiple discovery methods (SNMP, ICMP, WMI, CDP, VMware) to map all types of devices (switches, routers, servers, VMs, unmanaged nodes, desktop computers, peripheral devices) and their interconnections; and using scheduled rediscovery to regularly reconfirm topology details.
  • Enhances node management (by integrating with the primary node monitoring system), creating a visual analogue for all nodes being monitored; showing node details (including load stats) with rollover graphics down to the interface level; and capable of generating reports on switch ports, VLANs, subnets, and device inventory.
  • Facilitates IT monitoring, planning, and troubleshooting workflows by being able to export maps to multiple formats (for example, Visio, PNG, PDF, and NTM Map format).


Check out SolarWinds Network Topology Mapper as a mapping tool that satisfies all of these requirements.

Workflow within an IT team often involves both separation and coordination between management-related tasks and those performed by other team members. Within a config change approval system, for example, a manager might set the policy and strategy based on which team members make changes to their assigned areas of the network. Conversely, when a team member sets up a device config change that targets a specific set of nodes, a manager often provides approval for the change and may schedule the day and time for the change to take place.


One division of labor that often goes unintegrated in the daily processes of network maintenance is tracking the support status of the different devices running on the network. It’s common for a manager to track support and maintenance of equipment as an activity isolated from other, more integrated network monitoring processes.


The disadvantage of isolating support, planning, and procurement to management oversight is that it creates a single point of failure. While a manager checks the activity of team members, nobody tends to check the manager’s awareness of which devices on the network are nearing end of life or end of vendor support.


Tracking Device Support Status

The significant advantage of delegating consistent tracking of device support status to a team member responsible for the relevant area of the network is that that team member keeps support status in mind when planning device configuration work. As a result, in planning changes to the network, different team members can appropriately engage managers to plan and procure device upgrades as an integral part of maintaining the integrity of the network. Also, a manager who receives reports on end of life and end of sales for network devices can trust the point person on the team to provide the most strategic information about what to do with the devices that appear in a report. Team members gain additional ownership over the devices they maintain, and managers gain an overview that helps them focus without being bogged down in unnecessary details.


SolarWinds Network Configuration Manager version 7.2 introduces an End of Life and End of Sales tracking feature that integrates this additional awareness into the IT team’s daily monitoring workflow.

This series relates technology trends and their implications (face recognition technology, Big Data storage and findability, wearable computing and cyber-citizenship, surveillance) to perennial IT concerns and challenges. The film Minority Report has served as a helpful point of reference in making the relevant connections.


It’s been a few weeks since the British news organization The Guardian began publishing stories about and sourced through a Booz Allen Hamilton contractor named Edward Snowden.


The story has many aspects and implications. In this case I want to simply point out Booz Allen Hamilton’s obviously inadequate IT policies and practices.


Booz Allen Hamilton is no novice at working within the US intelligence bureaucracy; it has a long history of securing very lucrative contracts since the National Security Agency decided to have private technology companies modernize its computing infrastructure.


Yet a relatively junior contract IT analyst, Snowden, was able to use his routine access to BAH computing resources to download a trove of classified documents onto a thumb drive. Apparently, the data related to the surveillance of all civilian US telecommunications traffic is so easily available to BAH employees that Snowden could take what he wanted without raising any flags.


In the sense that he apparently violated his employment contract, Edward Snowden might be called a rogue IT professional. But Booz Allen Hamilton made it very easy for him. The obvious conclusion is that BAH merely excels at getting US government contracts; a credible program for ensuring that its customers’ information remains secure was, until now, an afterthought.


No Booz(e)

There are many ways to manage the security of the data within your network. One simple but very effective way is to encrypt all data passing through your network and make access to data dependent on role-based use of your systems. And that includes access to data flowing through the tools that monitor and manage your network systems. Network monitoring and management products like SolarWinds Network Performance Monitor or SolarWinds Network Configuration Manager support data encryption at the level of AES 256 and impose a role hierarchy that determines which accounts within your IT systems can view and manipulate that data.

Users of SolarWinds network products know that a Microsoft SQL Server database is a required component. Though all SolarWinds network products still include a utility called Database Manager, its feature set is now limited to query operations.


So in this article, as a courtesy to those who have been relying on Database Manager for backing up and restoring a SolarWinds product database, I'm going to provide steps for performing these operations using Microsoft SQL Server Management Studio, in the context of moving an existing database from one SQL Server database server to another. The procedures apply to databases created in SQL Server 2005 and 2008.




  1. Using an administrator account, log on to the SQL Server database server where your SolarWinds product database currently resides.
  2. Click Start > All Programs > Microsoft SQL Server 200X > SQL Server Management Studio.
  3. Specify the server name of the current SolarWinds database server on the Connect to Server window.
  4. If you are using SQL Server Authentication, click SQL Server Authentication in the Authentication field, and then specify your credentials in the User name and Password fields.
  5. Click Connect.
  6. In the pane on the left, expand the name of the server hosting the SQL instance you are using for your SolarWinds product, and then expand Databases.
  7. Right-click the name of your SolarWinds database (for example, right-click "NCM_database"), and then click Tasks > Back Up.
  8. In the Source area, select Full as the Backup type.
  9. In the Backup set area, provide an appropriate Name and Description for your database backup.
  10. If there is not already an appropriate backup location listed in the Destination area, click Add, and then specify and remember the destination path and file name you provide. This is the location where your backup is stored. Note: Remember, if your database is on a remote server, as recommended, this backup file is also created on the remote database server; it is not created locally.
  11. Click Options in Select a page pane on the left.
  12. In the Reliability area, check Verify backup when finished.
  13. Click OK.
  14. Copy the .bak file from your current SolarWinds database server to your new database server.
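The GUI steps above amount to a single T-SQL BACKUP DATABASE statement followed by a verification pass. The following Python sketch composes that equivalent T-SQL; the database name and file path are examples only:

```python
def build_backup_sql(db_name, bak_path):
    """Compose T-SQL roughly equivalent to the GUI steps above: a FULL
    backup to disk, then RESTORE VERIFYONLY, which corresponds to the
    'Verify backup when finished' option."""
    return (
        f"BACKUP DATABASE [{db_name}] TO DISK = N'{bak_path}' "
        f"WITH NAME = N'{db_name}-Full';\n"
        f"RESTORE VERIFYONLY FROM DISK = N'{bak_path}';"
    )

# Example: build_backup_sql("NCM_database", r"D:\Backups\NCM_database.bak")
```

Run the composed script on the database server (for example, in an SSMS query window); as with the GUI procedure, the .bak file is created on the server hosting the database, not locally.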




The restore procedure differs depending on the version of SQL Server (2005 or 2008) you are running.


SQL Server 2005


To restore your database backup file:

  1. Log on to the new database server using an administrator account.
  2. Click Start > All Programs > Microsoft SQL Server 2005 > SQL Server Management Studio.
  3. Click File > Connect Object Explorer.
  4. Specify the name of the new SolarWinds database server on the Connect to Server window.
  5. If you are using SQL Server Authentication, click SQL Server Authentication in the Authentication field, and then specify your credentials in the User name and Password fields.
  6. Click Connect.
  7. Click the name of your server to view an expanded list of objects associated with your server, and then right‑click Databases.
  8. Click Restore Database.
  9. Leave To database blank.
  10. Click From device, and then browse (…) to the location of your .bak file.
  11. Click Add, and then navigate to the .bak file and click OK.
  12. Click OK on the Specify Backup window.
  13. Check Restore.
  14. Select the name of your database from the To database field. It will now be populated with the correct name. For example, select "NCM_database".
  15. Click Options in the left Select a page pane.
  16. Check Overwrite the existing database.
  17. For each Original File Name listed, complete the following steps to ensure a successful restoration:
    1. Click Browse (…).
    2. Select a directory that already exists.
    3. Provide a name for the Restore As file that matches the Original File Name, and then click OK.
  18. Select Leave the database ready to use by rolling back uncommitted transactions… (RESTORE WITH RECOVERY).
  19. Click OK.
  20. Open and run the appropriate SolarWinds Configuration Wizard to update your SolarWinds installation.
  21. Select Database and follow the prompts. Note: Due to the nature of security identifiers (SIDs) assigned to SQL Server 2005 database accounts, SolarWinds recommends that you create and use a new account for accessing your restored Orion database on the Database Account window of the Orion Configuration Wizard.


SQL Server 2008

To restore your database backup file on a server running SQL Server 2008:

  1. Log on to the new database server using an administrator account.
  2. Click Start > All Programs > Microsoft SQL Server 2008 > SQL Server Management Studio.
  3. Click File > Connect Object Explorer.
  4. Specify the name of the new SolarWinds database server on the Connect to Server window.
  5. If you are using SQL Server Authentication, click SQL Server Authentication in the Authentication field, and then specify your credentials in the User name and Password fields.
  6. Click Connect.
  7. Click the name of your server to view an expanded list of objects associated with your server, and then right‑click Databases.
  8. Click Restore Database.
  9. Leave To database blank.
  10. Select From device, and then click Browse (…).
  11. Confirm that File is selected as the Backup media.
  12. Click Add.
  13. Navigate to the .bak file, select it, and then click OK.
  14. Click OK on the Specify Backup window.
  15. In the Destination for restore area, select the name of your database from the To database field. Note: The To database is now populated with the correct name. For example, select "NCM_database".
  16. Check Restore next to the database backup you are restoring.
  17. Click Options in the left Select a page pane.
  18. Check Overwrite the existing database (WITH REPLACE).
  19. For each Original File Name listed, complete the following steps to ensure a successful restoration:
    1. Click Browse (…).
    2. Select a directory that already exists.
    3. Provide a name for the Restore As file that matches the Original File Name, and then click OK.
  20. Select Leave the database ready to use by rolling back uncommitted transactions… (RESTORE WITH RECOVERY), and then click OK.
  21. Open and run the appropriate SolarWinds Configuration Wizard to update your SolarWinds installation.
  22. Select Database and follow the prompts.  Note: Due to the nature of security identifiers (SIDs) assigned to SQL Server 2008 database accounts, SolarWinds recommends that you create and use a new account for accessing your restored Orion database on the Database Account window of the Orion Configuration Wizard.
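Whichever version you are running, the GUI restore above amounts to a single RESTORE DATABASE statement. This Python sketch composes the equivalent T-SQL; the logical and physical file names are illustrative, not defaults:

```python
def build_restore_sql(db_name, bak_path, moves):
    """Compose T-SQL roughly equivalent to the GUI restore: REPLACE mirrors
    'Overwrite the existing database', each MOVE clause mirrors a
    'Restore As' path, and RECOVERY mirrors 'Leave the database ready
    to use by rolling back uncommitted transactions'."""
    move_clauses = ", ".join(
        f"MOVE N'{logical}' TO N'{physical}'" for logical, physical in moves
    )
    return (
        f"RESTORE DATABASE [{db_name}] FROM DISK = N'{bak_path}' "
        f"WITH {move_clauses}, REPLACE, RECOVERY;"
    )
```

The MOVE clauses matter when, as in the scenario here, the new server's data directories differ from the original server's; each logical file from the backup must be mapped to a directory that already exists on the new server.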

An important part of monitoring is reducing the noise of unnecessary alerts.


When monitoring the connection ports on your network devices, you want to know which endpoints are connected through which ports, but those routine connections are usually best presented in an event log. In contrast, an alert is most useful when a rogue endpoint, either unsanctioned or explicitly prohibited, connects to a device port.


Identifying rogue endpoints on the network depends on first defining the set of devices that are explicitly allowed. For that you need a white list.


With a white list set up, and assuming your monitoring system supports this feature, you then need a way to generate alerts when a rogue endpoint connects. By having your network devices send a trap each time an endpoint connects, your monitoring application can compare the trap information against its white list, sending an alert only when the endpoint (identified by MAC address or hostname) in the trap data does not appear on the white list.
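The trap-versus-white-list comparison can be sketched as follows. This is illustrative Python, not UDT internals; the MAC normalization step is an assumption I'm adding because traps and white lists often record addresses in different formats:

```python
def normalize_mac(mac):
    """Canonicalize a MAC address so that '00:1A:2B:3C:4D:5E',
    '00-1a-2b-3c-4d-5e', and '001a.2b3c.4d5e' all compare equal."""
    return "".join(c for c in mac.lower() if c in "0123456789abcdef")

def is_rogue(trap_mac, white_list):
    """Return True when the endpoint MAC reported in the trap does not
    appear on the white list, i.e. when an alert should be sent."""
    allowed = {normalize_mac(m) for m in white_list}
    return normalize_mac(trap_mac) not in allowed
```

Without normalization, a legitimately white-listed endpoint whose trap reports a different separator style would fire a false rogue alert, which defeats the goal of reducing noise.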


As a result, your device port monitoring will alert you in real time when a rogue device connects to your network, allowing those alerts to stand out among normal events.


SolarWinds User Device Tracker (UDT) supports white listing, rogue endpoint detection, and alerting. Consider UDT's feature set a supplement that takes node monitoring visibility down to the device port level.

This article presents a use case for monitoring the users and endpoints that connect to device ports through the VOIP phones on a network.


Many IT teams manage bandwidth consumption for a network that includes a VOIP telephone system. VOIP-enabled network switches often allow each VOIP phone to connect another device--typically a desktop or laptop computer--to the network. Both the phone and the device connected through it request and receive DHCP leases, and both appear to the switch as directly connected.


Should an endpoint connected through a VOIP phone become compromised (by a Trojan, for example), the network switch directly connected to the compromised device becomes vulnerable. Though a firewall is often set up with rules that block the egress of packets a Trojan-infected endpoint attempts to send back to its homing station from within the network, the risk and possible repercussions of sensitive information getting out remain. Hacking and securing networks are an endless contest of evolving tactics.


Monitoring and the Importance of Response Time


Managing bandwidth on your VOIP-enabled network probably already involves watching and shaping traffic in response to call quality indications and alerts. In fact, SolarWinds Network Performance Monitor, Network Traffic Analyzer, and VOIP & Network Quality Manager interoperate to give you a very granular view of bandwidth allocation and consumption to support VOIP quality of service.


However, while you may be able to see that specific endpoints are hogging bandwidth through a particular application, you will not necessarily be able to see who, internally, is contributing to a spike. For that you also need to correlate MAC and IP addresses, and user activity, with the traffic problem. SolarWinds User Device Tracker provides resources for tracking user logins along with the MAC and IP of the device being used. Additionally, if monitoring tools reveal a breach in security, an IT team member can use UDT to remotely shut down any network device port that might be compromised.
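Correlating a traffic spike with a user comes down to joining the top-talker IPs against login records. Here is a minimal sketch; the record layout (IP, MAC, username tuples) is hypothetical, not UDT's actual schema:

```python
def correlate(top_talker_ips, login_records):
    """Map each high-traffic IP to the user most recently seen logged in
    at that IP. login_records is a list of (ip, mac, username) tuples
    in chronological order, oldest first."""
    last_login = {}
    for ip, mac, user in login_records:
        last_login[ip] = (user, mac)  # later records overwrite earlier ones
    return {ip: last_login.get(ip, ("unknown", None)) for ip in top_talker_ips}
```

An IP with no matching login record comes back as "unknown", which is itself useful: an unattributed top talker is exactly the kind of endpoint worth investigating.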

Waking in the hospital after surgery, having lost both his legs in one of the bomb blasts last week in Boston, Jeff Bauman asked for pen and paper and with great effort wrote a note to communicate that a short time before the two explosions a man with a hooded sweatshirt had looked directly at him while dropping a bag.


That note led the FBI to review security video footage from stores near the two points of detonation. Analysts edited together a minute and a half of video from store cameras at different places along the street near the site. And within twelve hours the suspect shown wearing a white hat in the video was identified last Friday on the MIT campus. By the end of that night, one of the two men in the security video was dead and the other was being pursued through a few fully cordoned blocks of Watertown, just outside Boston.


It took a bit less than four days from Jeff Bauman’s hospital room note to the release of still and video images of two suspects.


Had the bombing occurred in Manhattan it’s very likely police there would have had video images to circulate within hours thanks to NYPD's Domain Awareness System (DAS). One of the explicit purposes of that system is counter-terrorism; and to that end any alert within the system immediately makes available the last three minutes of surveillance video from any of the 3000 street-level cameras within 500 feet of the alert. Not only would images of the suspects have been seen within minutes of the explosions but, based on the cameras that captured the images, police would also have known the direction in which the suspects traveled after leaving the site; it’s even possible that some number of cameras would have provided images of the suspects along their entire escape route.


Though DAS would not necessarily have stopped the bombings in Boston from occurring, the system would excel in helping law enforcement contain and investigate such an incident. NYPD and Microsoft partnered to develop DAS as a pilot program in NYC and now offer the system as a product to other US police departments.


There is another aspect of the Domain Awareness System that I want to discuss next time. Here I just want to reiterate what everyone knows about IT systems: critical systems (especially those with life and death implications, for example, in hospitals or on airplanes) require available and reliable monitoring that regularly confirms those systems are working properly and quickly escalates alerts when they are not.

In the early part of this series I discussed face recognition technology, Big Data storage and findability, privacy protection and encryption with reference to the movie Minority Report.


Let’s return to the future in Minority Report as a way to understand some implications of NYPD's Domain Awareness System (DAS).


You may remember that in Minority Report Tom Cruise’s character heads a “PreCrime” pilot program that is about to be expanded nationally. The PreCrime system so accurately predicts crimes before they are committed that the future perpetrators are apprehended and prosecuted based on the system’s evidence. The story’s denouement involves a senior manager hacking the PreCrime system to frame Cruise’s character for a murder the manager intends to commit.


The movie highlights the risk of a powerful tool being used as a weapon; and suggests the risk is highest with those who best understand and control the most powerful and complex of tools. In the case of NYPD’s DAS, those who enforce the law in the biggest city in the United States can now in real-time, for example, “search for suspects using advanced technologies such as [over 3000 street level] smart cameras and license plate readers” and track unfolding events in overlay on detailed city maps.


Rogue IT Agents

I’ll talk about the “counterterrorism” emphasis of the DAS program in the next article. Here I want to point out that our safeguards against hackers within IT systems can also protect against rogue activity arising within the system. For example, with a good config change approval system, you can lock down or significantly limit direct access to your most critical switches and routers, reducing the potential for both accidental and deliberate damage.
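A change-approval gate can be as simple as refusing to push any config change that lacks a second sign-off. This is a generic two-person-rule sketch, not NCM's actual workflow; the field names are hypothetical:

```python
def can_apply_change(change):
    """Allow a config change only if it was approved by someone other
    than its author (the two-person rule). A missing or empty approver,
    or self-approval, blocks the change."""
    approver = change.get("approver")
    return bool(approver) and approver != change["author"]
```

The point of the rule is exactly the rogue-insider scenario above: no single person, however trusted, can both write and apply a change to a critical device unchecked.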
