Taking the eggs out of one basket
After a series of near disasters and repeated power failures, Monroe Horn explains how moving his firm's IT infrastructure down the street provided the perfect solution
October 06, 2009 at 02:13 AM
It may have been the 10 times the cooling went out in the server room; or the four times in nine months we lost power; or the morning we came in to find gallons of water running down the wall 15 feet from our server room. Any of these might have been the final straw, but all of them combined to make it abundantly clear that housing our IT infrastructure inside our offices did not provide the uptime or the data security needed by our lawyers.
Sunstein Kann Murphy & Timbers is a Boston-based intellectual property firm with 33 lawyers and 46 support staff. Our technology systems must be as resilient as those of our largest competitors, and available all the time, every day.
Expected benefits
From the very beginning, we saw a number of advantages to getting rid of our server room. One was power. With servers in our office, any power outage lasting more than 30 minutes forced us to shut everything down. A data centre offers both power protection and generator backup.
But why move to a data centre down the street? Why not move everything to somewhere far away from our downtown offices? In the first place, while we thought it might be economically feasible to move down the street, we knew there was no way we would be able to bear the costs of relocating all of our equipment to a data centre in the suburbs. The connectivity costs alone would be prohibitively high.
To move our entire infrastructure we needed a great deal of very high-speed bandwidth, which becomes sharply more expensive with distance.
Additionally, the longer the connection, the more expensive the hardware at either end has to be. Moving our equipment close by would keep connection costs relatively low and allow us to use moderately priced switching equipment and optics.
As important as cost, however, was that our business continuity analysis determined that by far the most likely disaster scenarios we might face were linked to environmental failures. Having our equipment in a data centre – even one close by – would protect us from all but the most extreme disasters.
Perhaps the only scenario where we would not be protected would be one in which there was extensive physical damage to the downtown area. To protect against this eventuality we decided to continue replicating our critical data from our primary data centre to a backup data centre about 30 miles from Boston.
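As a minimal illustration of the idea (not the specific product we use, which is not named here), replicating a critical data set to a remote site can be as simple as a scheduled one-way copy. The following Python sketch wraps rsync over SSH; the paths and hostname are hypothetical, and a Unix-like host with rsync installed is assumed.

```python
import subprocess

# Hypothetical source path and backup-site destination; the actual
# replication product used by the firm is not named in this article.
SOURCE = "/data/critical/"
DESTINATION = "backup-dc.example.com:/replica/critical/"

def replicate() -> None:
    """Mirror the critical data set to the backup data centre.

    -a        preserve permissions, times and ownership
    -z        compress data over the wire
    --delete  remove destination files that no longer exist at the source
    """
    subprocess.run(["rsync", "-az", "--delete", SOURCE, DESTINATION], check=True)

if __name__ == "__main__":
    replicate()
```

Run from cron (or an equivalent scheduler) at whatever interval your recovery-point objective allows.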
It is also important to note that distance does not always equal safety. Disasters can happen in the suburbs too. The real danger is having all your eggs in one basket, wherever that basket may be.
Getting rid of our server room would also save a great deal of money. In the past, with our systems vulnerable to all of the vicissitudes of a downtown office environment, we needed to maintain more than just redundant data.
We needed completely redundant systems in an offsite facility ready to become functional and accessible within a few hours. Maintaining a complete disaster recovery environment requires constant investment of time and money – not to mention the time and expense of returning to your normal systems once the crisis is over. With our production equipment already in a highly-available environment, we would no longer need to support the cost of fully redundant systems.
The project had four phases: connecting our offices to the new data centre a few blocks away at One Summer Street, connecting the new data centre to the internet, reconfiguring our network, and moving the equipment.
Phase 1
The first phase, getting connected, took longer than the other three phases combined. The Markley Group, which owns the data centre, provided connections from our cabinets to a "meet-me room" in its facility as part of our agreement.
Getting connected from our offices to that room, however, was our responsibility. Finding vendors who could provide the connectivity we needed, negotiating agreements, and then actually getting the connections installed took around six months.
For our primary connection, we chose to use dark fibre. Dark fibre is fibre optic cable in a city's infrastructure that is unused and available to lease. Utility and telecommunications companies have installed a great deal of this fibre over the years, and it currently sits idle in conduits under the streets.
Because dark fibre is part of a city's infrastructure, our choice of vendors was limited to those that had existing capacity around our building.
AboveNet ran additional cable from the street to our offices and spliced it into the street cable in a manhole next to our building. Once they had done the same at the data centre end, we had a dedicated fibre optic connection running directly from our offices to the data centre.
Because leasing dark fibre provides only the fibre optic circuit itself, we needed equipment to "light" the link so that we could pass data over it. To do this we purchased and installed layer three switches and optics.
Our servers, once on a separate virtual local area network (VLAN) in our office server room, were now on that same VLAN at the data centre. The fibre connection gave us 10 gigabits of bandwidth with essentially no latency.
It was as if we had run a very long patch cord down the street. The computers in the office – and our users – had no idea anything had changed.
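For a sense of what "essentially no latency" means in practice, one quick way to measure it is to time TCP connections to a server across the link. Here is a minimal Python sketch; the address and port are hypothetical.

```python
import socket
import time

# Hypothetical host on the extended server VLAN at the data centre,
# and a port that host is assumed to be listening on.
DC_HOST = "10.0.10.5"
DC_PORT = 445

def tcp_rtt_ms(host: str, port: int, samples: int = 5) -> float:
    """Average TCP connect time to host:port, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established; close immediately
        total += time.perf_counter() - start
    return total / samples * 1000

if __name__ == "__main__":
    print(f"average connect time: {tcp_rtt_ms(DC_HOST, DC_PORT):.2f} ms")
```

Over a metro fibre run of a few blocks, propagation delay is measured in microseconds, so almost everything this reports is endpoint overhead; readings comfortably under a millisecond are what "a very long patch cord" looks like.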
In addition to our primary fibre connection, we also leased a lower bandwidth backup circuit from Verizon Business.
Our choice of vendor in this area was determined largely by the fact that Verizon had existing capacity both in our building and at One Summer Street and could thus provide significant bandwidth at a reasonable cost.
Our backup circuit, which also terminates in the same switches, is a 150-megabit point-to-point ethernet circuit, configured so that traffic fails over to it automatically if the primary connection goes down.
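The failover itself is handled by the network equipment, but it is useful to know when it happens. As a rough sketch, a watchdog script can ping a next-hop address on each circuit and log when the active path changes; the addresses below are hypothetical, and Linux ping flags are assumed.

```python
import subprocess
import time

# Hypothetical next-hop addresses: one reachable only via the dark
# fibre, one reachable only via the backup ethernet circuit.
PRIMARY = "192.0.2.1"
BACKUP = "192.0.2.5"

def is_up(host: str) -> bool:
    """True if the host answers a single ping (Linux ping flags assumed)."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ).returncode == 0

def main() -> None:
    last = None
    while True:
        state = "primary" if is_up(PRIMARY) else (
            "backup" if is_up(BACKUP) else "down")
        if state != last:
            print(f"{time.ctime()}: link state is now {state}")
            last = state
        time.sleep(30)

if __name__ == "__main__":
    main()
```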
Phase 2
The second phase of the project was to switch our internet connections. Because so many vendors have a presence at One Summer Street, this aspect of the project ended up being relatively easy.
Before this project, our internet connections terminated at our office. With our entire infrastructure moving to One Summer Street, it became advantageous to have our internet connections there instead.
As an additional benefit, the intense competition among vendors at One Summer Street makes internet access much cheaper there. We were able to get much more internet bandwidth at One Summer Street for less money than we had been paying for the circuits in our office.
Phase 3
Reconfiguring our network – the third phase of the project – was the most troublesome. While we needed to purchase some new pieces of networking equipment for the project, we wanted to continue to use some equipment already in service on our network. We also wanted to be able to move equipment out gradually – moving a subset of our servers and testing before committing to relocating everything.
All of this meant that we had to make all of our networking changes on a live network and, even though we did much of the work early in the morning or at weekends, there were times when it affected our users. Mostly, though, adopting this strategy meant that things took longer than they might have if we had purchased all new equipment and built a new, parallel network.
Phase 4
By the time we were ready for the final phase, we had already moved a handful of production servers to One Summer Street. Our experiences moving these servers allowed us to put together a detailed plan for the rest of our equipment.
By the time we were ready to do our final move, we had about 30 pieces of equipment left to relocate. We started around 6pm on Friday, 27 March 2009, by bringing down the network and – once again – reconfiguring it. We then moved each piece of equipment in a predetermined order. By the time we finished around 4am the next morning, all systems were available to users. We came back on Saturday for a few hours to do some more testing, and then we were done.
Costs
The costs of doing a project like this are going to vary widely depending on individual circumstances. In terms of recurring costs, the most expensive element of the project is connectivity. Dark fibre leases can run anywhere from a few thousand to upwards of $10,000 per month.
The other significant recurring cost is the monthly fee paid to the co-location facility. This will vary from facility to facility and, of course, depends on the amount of space needed.
In retrospect
Now that the project is complete, we have reaped all the benefits we expected. Our IT team has the security of knowing that our technology is protected and that we will be able to support our attorneys even if our offices are not available to us.
Our attorneys and other legal professionals know that they will be able to continue to serve their clients even when they turn on the television and see a 30-foot-high geyser of water erupting from the street in front of our office building – as they did one morning earlier this year.
Monroe Horn is the chief technology officer of Boston-based Sunstein Kann Murphy & Timbers