IT Support and Hardware for Clinics
News, Information and Updates on Hardware and IT Tools to help improve your Medical practice
Internet used by 3.2 billion people in 2015 - BBC News


The International Telecommunication Union (ITU), a United Nations body, predicts that 3.2 billion people will be online by the end of 2015. The world's population currently stands at 7.2 billion.


About 2 billion of those will be in the developing world, the report added.


But just 89 million will be in countries such as Somalia and Nepal.

These are part of a group of nations described as "least developed countries" by the United Nations, with a combined population of 940 million.

Mobile

There will also be more than 7 billion mobile device subscriptions, the ITU said.


It found that 78 out of 100 people in the US and Europe already use mobile broadband, and 69% of the world has 3G coverage - but only 29% of rural areas are served.


Africa lags behind with just 17.4% mobile broadband penetration.

By the end of the year 80% of households in developed countries and 34% of those in developing countries will have internet access in some form, the report continued.


The study focused on the growth of the Information and Communication Technology (ICT) sector over the past 15 years.

In the year 2000 there were just 400 million internet users worldwide, it said - an eighth of the current figure.


"Over the past 15 years the ICT revolution has driven global development in an unprecedented way," said Brahima Sanou, director of the ITU telecommunication development bureau.


"ICTs will play an even more significant role in the post 2015 development agenda and in achieving future sustainable development goals as the world moves faster and faster towards a digital society."


AT&T finally brings its gigabit internet to Chicago's suburbs


Back in October of last year, we learned about AT&T's plans to launch its 1Gbps fiber network, GigaPower, in cities like Chicago. And today, more than six months after the original announcement, the company's finally flipping the switch in some of the Windy City's surrounding communities -- including Elgin, Oswego, Plainfield, Skokie and Yorkville. The U-Verse gigabit internet will be available as a standalone service or as a bundle with a cable or phone package, with prices ranging from $90 to $150 per month, depending on your selection. If you're not in any of the aforementioned coverage zones, fret not -- AT&T says it will expand the service across Chicago later this summer.


3 tips for easier migration to a new browser


Recently, I switched to Firefox after Chrome became unresponsive and buggy one too many times. Switching between browsers never used to be a big deal, but that's just not the case anymore. We customize these programs with extensions, sync open tabs to our mobile devices, and, if you're using Chrome, run apps like they're native to the desktop.

If you're thinking about moving between browsers, here are three things to consider as you plan your move.

Know your sync

A key feature for many users is being able to move seamlessly between open browser tabs on any device as well as access your complete browsing history. Google makes this really easy for Android users with Chrome, but other browsers also offer this feature.

Firefox has a feature called Sync that syncs all your browsing data between devices, and Opera has Link. Firefox is available on Android (an iOS browser is in the works), while Opera has mobile browsers on Android, iOS, and Windows Phone. If you're using Safari on an iPhone and a Windows PC, you're a little more limited in what you can sync, but iCloud does allow you to sync your Safari bookmarks with Internet Explorer.

Bookmarks

You can't go anywhere without your bookmarks, but luckily this is a problem that was solved a long time ago: moving your bookmarks between browsers is fairly easy. I won't go into detail on how to move your bookmarks here; instead, I'll point you in the right direction for all four major browsers on Windows.

Check out your new extensions/add-ons

Everybody's usually got a few tweaks they've added to their browser via extensions--also known as add-ons in Mozilla-speak. The one thing you'll want to make sure of is that any mission-critical services carry over to your new browser, such as a browser-based password manager or services like Evernote.

The reality, however, is that sometimes you may have to give things up or find an alternative. Firefox users won't find NoScript on Chrome, but there are a few alternatives that do something similar. Chrome users moving to Firefox will have to give up Hangouts as a stand-alone app unless they're willing to try an unofficial add-on or just run Chrome in the background to keep Hangouts going. Many of the mainstream services will offer extensions on all the major browsers, but niche stuff like Vimium may be harder to replace.

Those are the main points to think about when moving between browsers. There are other considerations such as the browser interface, how tabs behave on certain websites, and so on, but those are issues that come down to personal preference more than anything else.



Brave New World: The Future of Cyberspace & Cybersecurity


“Since this is a challenge that we can only meet together, I’m announcing that next month we’ll convene a White House summit on cybersecurity and consumer protection. It’s a White House summit where we’re not going to do it at the White House; we’re going to go to Stanford University. And it’s going to bring everybody together — industry, tech companies, law enforcement, consumer and privacy advocates, law professors who are specialists in the field, as well as students — to make sure that we work through these issues in a public, transparent fashion.” – President Barack Obama, Jan. 13, 2015.

The future of cyberspace and cybersecurity has been debated by many theorists and academicians, who have rendered opinions and studies on the topic. Cyberspace and cybersecurity issues have retaken center stage in national and homeland security discourse after taking a sideline to the natural reaction against al-Qaida's 9/11 attack on the homeland. Despite the renewed sense of purpose and the recognized need to mitigate the ills found in cyberspace, the issue of cybersecurity and the way ahead remain as unclear and obscure as they were when these same theorists and academicians were predicting an "electronic Pearl Harbor" in the 1990s, and during the run-up to the hype posed by the Y2K bug.

The Obama administration's renewed sense of purpose in dealing with cybersecurity issues by calling for the Summit on Cybersecurity and Consumer Protection at Stanford University promises to reinvigorate the discussion on a vital topic of national security. That said, this initiative also sounds oddly similar to initiatives from past administrations voicing similar concerns.

In Brave New World, Aldous Huxley portrayed a dystopian future where mankind was largely driven by the need for pleasure as a means to distract them from the weightier issues of their everyday lives. Huxley also stated one universal truism in that, “Most human beings have an almost infinite capacity for taking things for granted.”

In terms of cybersecurity, what have we taken for granted? The renewed focus on cyberspace and security issues, while laudable in the sense that it can promise a debate on issues that must be addressed, will ultimately fail if it does not fundamentally address the question: What are we taking for granted in terms of our understanding of cyberspace and cybersecurity? In other words, are we framing the current debate on flawed conceptions of the issue in general? Are our assumptions flawed? Without considering some of these questions, we risk missing the true and weightier questions that we need to address on an issue that is constantly changing in terms of its impact on humanity.

The question before us is a simple one, but harder to envision or define. As Angelo Codevilla and Paul Seabury clearly stated in their book War: Ends and Means: "Strategy is a fancy word for a road map for getting from here to there, from the situation at hand to the situation one wishes to attain." While this does not mean that we need to quickly create another national strategy on cybersecurity or cyberspace with glossy photos and sweeping language that promises a utopian future, it does mean that we need to address the more difficult question first: "What do we ultimately need to attain in terms of cybersecurity?"

In this sense, President Obama’s speech on the future of cyber issues is appropriately framed in that this really is a challenge that we can only meet together. Envisioning the future in a world that will become increasingly dominated by technology and the Digital Age also addresses the type of future that we want to create for subsequent generations. In short, what future are we giving our children and our grandchildren? While blatantly sophomoric, as a parent and grandparent, it also happens to be true.

By envisioning our future, we are forced to recognize where we are. The continued reports on data breaches, identity theft, insufficient cybersecurity protections for health care records, controversies over data retention by the U.S. government and private industry, terrorist recruitment via social media, and the implications of active targeting by foreign entities on U.S. intellectual property are just a few of the many concerns that define the cyberspace issue in the present age.

To date, we have embarked on a journey with no destination. We have not charted the course to take us to where we want to go. As such, while we must bring national security specialists, policy-makers, private industry, academicians and civil liberty advocates together, we also need to recognize that these issues are the result of failed initiatives and incremental approaches to the overall topic of cyberspace and cybersecurity in general. If this incremental approach to cybersecurity remains unchecked, our generation will be the first to face the brave new world of cyberspace defined by the nefarious drivers that are presently framing the topic. As the noted philosopher John Stuart Mill appropriately stated, "When we engage in a pursuit, a clear and precise conception of what we are pursuing would seem to be the first thing we need, instead of the last we are to look forward to."

While the answers to this basic truism can take on a highly technical tone in terms of the development of cybersecurity standards, technologies and processes, the true nature of the answer centers on the ideals and cultural norms that we wish to preserve while advancing into the future that will be defined by technology. How do we preserve privacy in the Digital Age? What type of culture do we wish to establish for ourselves—innocent until proven guilty or questionable until we can verify who you are? What is the role of the government in terms of ensuring security and where does the responsibility for the private sector begin in terms of its obligation to protect its intellectual property?

The answers to these questions represent but a fraction of the answers that are necessary to define our future. The answers to these questions, however, are the ones that begin to define the parameters for how we get from here to there. The sooner we engage in this dialogue, the better off we will be in defining that future for subsequent generations.




Via Paulo Félix

Why Cyber Security Is All About The Right Hires


The United Kingdom has estimated the global cyber security industry to be worth around US$200 billion per annum, and has created a strategy to place UK industry at the forefront of the global cyber security supply base, helping countries to combat cybercrime, cyber terrorism and state-sponsored espionage.

Likewise, the United States government is facilitating trade missions to emerging markets for companies that provide cyber security, critical infrastructure protection, and emergency management technology equipment and services with the goal of increasing US exports of these products and services.

Meanwhile, Australia is going through yet another iteration of a domestic cyber security review. Australia can’t afford to wait any longer to both enhance domestic capability and grasp international leadership.

The recent Australian debate about the government’s proposed data retention scheme has seen heavy focus on the security aspects of collecting, retaining and where authorised, distributing such data.

But much of this debate masks the broader issue facing the information security industry.

Failing to keep up

The online environment is constantly evolving, presenting cyber threats of ever-increasing volume, intensity and complexity.

While organisations of all shapes and sizes are considering spending more money on cyber security, the supply of information security professionals is not keeping up with current, let alone future, demand. High schools are not encouraging enough students (particularly girls) to take an interest in the traditional STEM (science, technology, engineering and maths) subjects. The higher education and vocational sectors are likewise not creating enough coursework and research options to appeal to aspiring students, who are faced with ever more study options.

One example of the types of programs needed to address the shortage is the Australian Government’s annual Cyber Security Challenge which is designed to attract talented people to become the next generation of information security professionals. The 2014 Challenge saw 55 teams from 22 Australian higher education institutions take part. At 200 students, this is but a drop in the ocean given what is required.

Even for those who graduate in this field, there is a lack of formal mentoring programs (again, particularly for girls), and those that are available are often fragmented and insufficiently resourced. The information security industry is wide and varied, catering for all interests and many skill sets. It is not just for technical experts but also for professionals from other disciplines, such as management, accounting and law, who could make mid-career moves, adding to the diversity of thinking within the industry.

More and more organisations are adopting technology to create productivity gains, improve service delivery and drive untapped market opportunities. Their success, or otherwise, will hinge on a large pool of talented information security professionals.

We need to attract more people into cyber security roles. Universities need to produce graduates who understand the relationship between the organisation they work for, its people, its IT assets and the kinds of adversaries and threats they are facing. The vocational education sector needs to train technically adept people in real-world situations where a hands-on approach will enable them to better combat cyber attacks in their future employment roles.

Industry associations should focus on their sector — analysing the emerging information security trends and issues, and the governance surrounding information security strategy — to determine their own unique skills gap.

The government should develop a code of best practice for women in information security in collaboration with industry leaders, promoting internal and external mentoring services.


Via Paulo Félix

Google has delayed its Android encryption plans because they're crippling people's phones


Google is delaying plans to encrypt all new Android phones by default, Ars Technica reports, because the technical demands of encryption are crippling people's devices.

Encryption slowed down some phones by 50% or more, speed tests show. 

In September 2014, Google — along with Apple — said that it planned to encrypt all new devices sold with its mobile OS by default. This means that unless a customer opted out, it would be impossible for anyone to gain access to their device without the passcode, including law enforcement (or Google itself).

This hardened stance on encryption from tech companies came after repeated revelations about the NSA, GCHQ and other government spy agencies snooping on ordinary citizens' data.

Default encryption has infuriated authorities. One US cop said that the iPhone would become "the phone of choice for the paedophile" because law enforcement wouldn't be able to access its contents. UK Prime Minister David Cameron has floated the idea of banning strong encryption altogether — though the proposal has been slammed by critics as technically unworkable.

Apple rolled out default-on encryption in iOS 8 back in September. Google's Android Lollipop system was first released in November — but because the phone manufacturers, rather than Google itself, are responsible for pushing out the update, it can take months for a new version of the OS to reach the majority of consumers.

But as Ars Technica reports, Lollipop smartphones are now finally coming to the market, and many do not have default-on encryption. So what's the reason? The devices couldn't actually handle it.

Speed tests show that even Google's flagship phone, the Google Nexus 6, suffers serious slowdown when encryption is turned on. A "random write" test measuring writing data to memory showed that the Nexus 6 performed more than twice as fast with encryption switched off -- 2.85MB per second as compared with 1.41MB per second with it on. The difference was even more striking in a "sequential read" test measuring memory reading speeds. An unencrypted device achieved 131.65MB/s; the encrypted version managed just 25.36MB/s. That's a third of even the Nexus 5, the previous model, which came in at 76.29MB/s.
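A quick back-of-the-envelope check of those figures (values as quoted in the speed tests above) confirms the slowdown factors:

```python
# Benchmark figures quoted in the article, all in MB/s (Nexus 6 unless noted).
random_write = {"encrypted": 1.41, "unencrypted": 2.85}
sequential_read = {"encrypted": 25.36, "unencrypted": 131.65}
nexus5_sequential_read = 76.29  # unencrypted Nexus 5, previous model

write_slowdown = random_write["unencrypted"] / random_write["encrypted"]
read_slowdown = sequential_read["unencrypted"] / sequential_read["encrypted"]
vs_nexus5 = sequential_read["encrypted"] / nexus5_sequential_read

print(f"random write slowdown: {write_slowdown:.1f}x")    # ~2.0x
print(f"sequential read slowdown: {read_slowdown:.1f}x")  # ~5.2x
print(f"encrypted Nexus 6 vs Nexus 5: {vs_nexus5:.0%}")   # ~33%, i.e. a third
```

So "more than twice as fast" on writes and "a third of even the Nexus 5" on reads both check out against the raw numbers.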

As such, Google is now rowing back on its encryption stance. Its guidelines now say that full-disk encryption is "very strongly recommended" on devices, rather than the hard requirement originally promised. Users can still encrypt their devices (even if it slows them down), but it won't happen by default.

Google says it still intends to force it in "future versions of Android".



First 64-bit Firefox build released, promising speed boost and beefier web gaming


Mozilla has joined the 64-bit browsing party with Firefox for Windows, but only in the Developer Edition for now.

The Developer Edition is a special version of Firefox with built-in tools for creating websites and web apps. While OS X and Linux already have a 64-bit version, Mozilla is just adding a Windows build with 64-bit support now.

The main advantage of 64-bit browsing is the ability to address more than 4 GB of RAM, allowing for beefier web apps. As an example, Mozilla points to games that run on Epic's Unreal Engine, noting that a 64-bit browser can store significantly more assets in memory. “For some of the largest of these apps, a 64-bit browser means the difference between whether or not a game will run,” Mozilla wrote in a blog post.



Congress Averts DHS Partial Shutdown


Congress, at the 11th hour, passed a bill to fund the Department of Homeland Security for the next seven days, averting for now a partial shutdown that would have curtailed some cybersecurity programs.

Funding of the department was to expire at midnight Feb. 27. Hours before the money was to run out, the Senate voted to fully fund DHS through September, the end of the fiscal year. The House, however, refused to take up that measure, and instead rejected a bill that would have funded DHS for three weeks. After the House failed to pass a funding bill, the Senate approved "a one-week patch," which the House enacted around 10 p.m. EST.


Without the temporary funding, a partial shutdown of DHS would have occurred. Critical IT security operations such as those that defend against cyber-attacks aimed at the government and the nation's critical infrastructure would have continued to function. But other cybersecurity initiatives, such as the rollout to agencies of the Einstein 3 intrusion prevention system and continuous diagnostic and mitigation systems to identify IT vulnerabilities, would have been placed on hold.

Still, Congress will have to pass a new appropriation if DHS is to continue fully operating beyond March 6.

DHS funding is caught in a political battle between Democrats and Republicans over immigration reform. The House last month approved a DHS funding bill without appropriating money for an executive action President Obama took on immigration, a move opposed by nearly all Republicans. The Senate, as a compromise, agreed to vote on two bills; one to fully fund DHS through September, which passed, and a second measure to strip the immigration provisions, which failed to muster the 60 votes needed to break a Democratic filibuster.

An estimated 80 percent of DHS employees would have worked during the partial shutdown, but without pay, with the remainder of the staff being told not to report to work. At the National Protection and Programs Directorate, the department unit responsible for cybersecurity and infrastructure protection, 57 percent of personnel would have remained on the job. In the 2013 federal government shutdown, all employees were paid once Congress funded operations.

Mark Weatherford, the former DHS deputy undersecretary for cybersecurity, said that even with the shutdown averted, at least temporarily, there is still a potential for losing skilled IT security staffers, a matter that "is a more important issue than the stopping of the Einstein 3 or the CDM funding programs."

Even the threat to fail to fund DHS could drive key IT security personnel from the department, Weatherford said, adding that he knows of private-sector recruiters waiting to "pluck these people" out of DHS because they feel disgruntled by being victims of a political skirmish over immigration.

"The impact on morale is tremendous," says Weatherford, a principal at the security advisory firm The Chertoff Group. "To be treated like you really have no value, like you're a pawn in this game, is just not right. These people have greater value than that. They have opportunities, and you don't treat people with opportunities like this."



Creating cybersecurity that thinks


Until recently, using the terms "data science" and "cybersecurity" in the same sentence would have seemed odd. Cybersecurity solutions have traditionally been based on signatures – relying on matches to patterns derived from previously identified malware to capture attacks in real time. In this context, the use of advanced analytical techniques, big data and all the traditional components that have become representative of "data science" have not been at the center of cybersecurity solutions focused on identification and prevention of cyber attacks.

This is not surprising. In a signature-based solution, any given malware or new flavor of it needs to be identified, sometimes reverse-engineered and have a matching signature deployed in an update of the product in order to be “detectable.” For this reason, signature-based solutions are not able to prevent zero-day attacks and provide very limited benefit compared to the predictive power offered by data science.

Among the many definitions of data science that have emerged in the last few years, “gaining knowledge from data using a scientific approach” best captures some of the different components that characterize it.

In this series of posts, we will investigate how data science can be used to extract knowledge that identifies malware and potential persistent cybersecurity threats.

The unprecedented number of companies that reported breaches in 2014 is evidence that existing cybersecurity solutions are not effective at identifying malware or detecting attackers inside an organization's network. The list of companies that have reported breaches and exfiltration of sensitive data grows at an alarming rate: from the large-volume data breaches at Target and Home Depot earlier in 2014, to the recent breaches at Sony Entertainment, JP Morgan and the most recent attack at Anthem in February, where personally identifiable information (PII) for 80 million Americans was stolen. Breaches involve big and small companies alike, showing that the time has come for a different approach to the identification and prevention of malware and malicious network activity.

Three technological advances enable data science to deliver new innovative cybersecurity solutions:

Storage – the ease of collecting and storing large amounts of data on which analytics techniques can be applied (distributed systems such as cluster deployments).
Computing – the prompt availability of large computing power allows easy use of sophisticated machine learning techniques to build models for malware identification.
Behavior – the fundamental transition from identifying malware with signatures to identifying the particular behaviors an infected computer will exhibit.

Let's discuss more in depth how each of the items above can be used for a rigorous application of data science techniques to solve today's cybersecurity problems.

Having a large amount of data is of paramount importance in building analytical models that identify cyber attacks. For either a heuristic or a refined model based on machine learning, large numbers of data samples need to be analyzed to identify the relevant set of characteristics and aspects that will be part of the model – this is usually referred to as "feature engineering." Then data needs to be used to cross-check and evaluate the performance of the model – this should be thought of as a process of training, cross-validation and testing of a given machine learning approach.
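As an illustration of that train-then-evaluate workflow, here is a minimal sketch with synthetic data and a deliberately trivial one-feature heuristic (not a real malware model; the feature names and thresholds are invented for the example):

```python
import random

# Toy samples: (features, label). Features are hypothetical per-host counts
# (failed_logins, outbound_connections); label 1 = infected, 0 = clean.
# All data here is synthetic, purely to show the holdout mechanics.
random.seed(42)
clean = [((random.randint(0, 3), random.randint(1, 20)), 0) for _ in range(100)]
infected = [((random.randint(5, 30), random.randint(50, 500)), 1) for _ in range(100)]
samples = clean + infected
random.shuffle(samples)

# Holdout split: fit on 70% of the data, evaluate on the unseen 30%.
cut = int(0.7 * len(samples))
train, test = samples[:cut], samples[cut:]

# "Feature engineering" reduced to its simplest form: derive a threshold on
# one feature from the training set only (a heuristic, not machine learning).
threshold = sum(f[1] for f, _ in train) / len(train)  # mean outbound connections

def predict(features):
    return 1 if features[1] > threshold else 0

# Evaluate on held-out data the model never saw during "training".
accuracy = sum(predict(f) == label for f, label in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The same split discipline applies unchanged when the threshold rule is replaced by a real learned model; the point is that the evaluation data must stay out of the fitting step.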

In a separate post, we will discuss in more detail how and why data collection is a crucial part in the data science approach to cybersecurity, and why it presents unique challenges.

One of the reasons for the recent increase in machine learning’s popularity is the prompt availability of large computing resources: Moore’s law holds that the processing power and storage capacity of computer chips double approximately every 24 months.
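Taking that 24-month doubling at face value, the compounding is dramatic; over a 15-year horizon, for example, it works out to roughly 180x:

```python
# Compounding a doubling every 24 months over 15 years (illustrative arithmetic).
months = 15 * 12
doublings = months / 24          # 7.5 doublings
growth = 2 ** doublings          # 2**7.5 ~= 181
print(f"{doublings:.1f} doublings -> ~{growth:.0f}x capacity")
```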

These advances have enabled the introduction of many off-the-shelf machine learning packages that allow training and testing of machine learning algorithms of increasing complexity on large data samples. These two factors make the use of machine learning practical for use in cybersecurity solutions.

There is a distinction between data science and machine learning, and we will discuss in a dedicated post how machine learning can be used in cybersecurity solutions, and how it fits into the more generic solution of applying data science in malware identification and attack detection.

The fundamental transition from signatures to behavior for malware identification is the most important enabler of applying data science to cybersecurity. Intrusion Prevention System (IPS) and Next-Generation Firewall (NGFW) perimeter security solutions inspect network traffic for matches with a signature that was created in response to analysis of specific malware samples. Minor changes to malware reduce IPS and NGFW efficacy. However, machines infected with malware can be identified through observation of their abnormal, post-infection behavior. Identifying abnormal behavior primarily requires the capability of first identifying what's normal, and then the use of rigorous analytical methods – data science – to identify anomalies.
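A minimal sketch of that idea, baselining "normal" and then flagging deviations, might look like the following. The metric (DNS queries per host per hour) and the data are hypothetical; this illustrates the approach, not any vendor's implementation:

```python
import statistics

# Baseline of "normal" behavior: hypothetical DNS queries per hour observed
# for one host over a quiet period.
baseline = [42, 38, 51, 45, 40, 47, 44, 39, 48, 43, 46, 41]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observation, z_threshold=3.0):
    """Flag an observation more than z_threshold standard deviations from normal."""
    z = abs(observation - mean) / stdev
    return z > z_threshold

print(is_anomalous(45))   # a typical hour: not flagged
print(is_anomalous(400))  # e.g. malware beaconing or DNS tunnelling: flagged
```

Real systems model many features at once and must handle baselines that drift over time, but the core step is the same: learn the normal distribution, then score new observations against it.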

We have identified several key aspects that innovative cybersecurity solutions need to have. These require analysis of large data samples and the application of advanced analytical methods in order to build data-driven solutions for malware identification and attack detection. A rigorous application of data science techniques is a natural solution to this problem, and represents a dramatic advancement in cybersecurity efficacy.

sudo_reboot's curator insight, April 11, 2015 10:02 AM

I always find it interesting when the promise of "big data", "cloud", and "on-demand compute resources" is touted as the solution. Where are the actual algorithms? Where is the perfect blend of dev and analyst that can actually make full use of the technology and also knows the adversaries' tradecraft?


Google Launches Cloud Security Scanner To Help Find Vulnerabilities In App Engine Sites


Google today launched the beta of a new security tool for developers on its App Engine platform-as-a-service offering. The Google Cloud Security Scanner allows developers to regularly scan their applications for cross-site scripting and mixed content vulnerabilities.

Google is obviously not the first company to offer a tool like this, but as it argues in today’s announcement, the existing tools aren’t always “well-suited for Google App Engine developers.” Google also notes that these tools are typically hard to set up and “built for security professionals, not developers.”

To run its checks, Google sets up a small botnet on Compute Engine that scans your site. Requests are throttled to about 15 requests per second, which App Engine should be able to handle without problems.

On its first run, the scanner quickly crawls your site and app to parse the basic HTML code. Then, as Google describes it, it makes a second pass that fully renders the site to look at the more complex parts of the app. Once all of this is done, Google will try to attack your site with a benign payload. To do so, it uses the built-in debugger from the Chrome DevTools, and the tool checks for any changes in the browser and DOM to see whether the injection was successful (and could be exploited).

By using the debugger, Google can avoid false positives, but the team also acknowledges that this means it may miss some bugs. Google, however, argues that this tradeoff is worth it because “most developers will appreciate a low effort, low noise experience when checking for security issues.”

Because the scanner actually tries to populate any field it finds and clicks every button and link, there is a chance it will activate some of the features on the site it is testing (so it may post a blog comment about how its roommate's aunt made $9,000 per week last month working from home). To avoid this, Google recommends you either run the scanner against a test site, block certain UI elements by adding custom CSS to them, or exclude certain URLs from the scan.

Using the scanner is free, but it will impact your quota limits and bandwidth charges.



How 3 smart devices can be dumb about the risks


Internet of Things security is no longer a foggy future issue, as more and more such devices enter the market, and our lives. From self-parking cars to home automation systems to wearable smart devices, analysts currently estimate that somewhere between 50 billion and 200 billion devices could be connected to the Internet by 2020. Google executive chairman Eric Schmidt told world leaders at the World Economic Forum in Davos, Switzerland, in January: "There will be so many sensors, so many devices, that you won't even sense it, it will be all around you. It will be part of your presence all the time."

That's hardly comforting when you consider how many of these smart devices still seem to be pretty dumb about security. A study by HP of ten popular IoT devices—including smart TVs, webcams and home automation devices—found an average of 25 security flaws per device. Seven of the ten devices had serious vulnerabilities.

Three smart devices with dumb security risks

Three of the hottest IoT categories offer examples of the risks. The Withings Activité Pop, shown off at CES, is an analog watch that records a user's daily habits, including sleep time, steps, swimming and other activities. Yet one Symantec study of sports bands and smart watches found the majority lacked privacy policies, nearly all connected to a cloud service, and 20 percent sent passwords without encrypting them.

Home automation faces the same issues. A 2013 report, for example, found significant vulnerabilities and poorly secured default configurations in home automation from vendors such as Belkin, Insteon, Linksys and Sonos. One intrepid reporter even used the information from the research to contact users and demonstrate that she could control their homes.

Some manufacturers are listening. Lock maker Kwikset, for example, has created a touchscreen deadbolt lock that uses a smudge-resistant touchpad, making it more difficult for an attacker to attempt to discern a homeowner's code from fingerprint residue left on the keypad.

Often, however, policymakers have to prod the industry to create better protections for consumers. Take BMW's self-parking car, demoed at CES and expected to arrive as a feature around 2020. A car that can drive itself could also be hijacked by attackers. In a presentation at the Black Hat security conference in Las Vegas last August, two researchers (Charlie Miller, a security engineer with Twitter, and Chris Valasek, director of vehicle security research with security services firm IOActive) studied 19 different models of cars and found vulnerabilities in every vehicle. The researchers also showed that they could take control of the cars with self-steering mechanisms.

Even Congress has weighed in on this particular danger. On February 9, Senator Edward J. Markey, D-Mass., released a report claiming the automotive industry was not even close to securing its vehicles from cyberattacks. Almost all cars currently being sold have wireless technologies, the foremost application of which is to connect tire-pressure monitoring devices to the brains of the car. Yet, few manufacturers had taken steps to prevent remote access, the report concluded.

Vendors pay spotty attention to IoT security

Candid Wueest, principal security engineer with Symantec and the author of the Symantec report, stressed that often there is little consumers can do about IoT security, except to urge vendors to take it more seriously.

"Vendors are saying that they could implement security, but no one is asking for it," he says. "So, if no one is going to pay for it, it is not on their list of priorities."

David Grier, an associate professor of International Science and Technology Policy at George Washington University and past president of the Institute of Electrical and Electronics Engineers (IEEE), echoed Wueest's expectation that the industry would be reactive rather than proactive. "Everything in security in the past has required an incident," he says. "That gets people focused on the issue. At some point, you have to not only demonstrate the nature of the problem, but how it hurts."


James Coombes's curator insight, March 9, 2015 12:14 AM

"Vendors are saying that they could implement security, but no one is asking for it,"


What a New $35 Million Agency Is Expected To Do for US Cyber Defense

The new Cyber Threat Intelligence Integration Center is intended to coordinate intelligence among government agencies to better respond to cyber attacks.

Via Paulo Félix

Who's Hijacking Internet Routes?


Information security experts warn that Internet routes are being hijacked to serve malware and spam, and there's little you can do about it, simply because many aspects of the Internet were never designed to be secure.


The Internet hijacking problem relates to Border Gateway Protocol, which is responsible for routing all Internet traffic. In the words of Dan Hubbard, CTO of OpenDNS Security Labs: "BGP distributes routing information and makes sure all routers on the Internet know how to get to a certain IP address."

BGP provides critical Internet infrastructure functionality, because the Internet isn't a single network, but rather a collection of many different networks. Accordingly, BGP routing tables give the different networks a way to hand off data and route it to its intended destination.
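Routers pick the most specific (longest) matching prefix, which is exactly the behavior a hijacker abuses by announcing a narrower prefix than the legitimate owner's. A toy illustration using Python's `ipaddress` module (the addresses are documentation ranges and `best_route` is a made-up helper, not a real BGP implementation):

```python
import ipaddress

# Toy routing table: prefix -> where traffic for that prefix gets sent.
routes = {
    ipaddress.ip_network("203.0.113.0/24"): "legitimate ISP",
    ipaddress.ip_network("203.0.113.0/25"): "hijacker",  # more specific wins
}

def best_route(dest):
    """Longest-prefix match: the most specific covering route is chosen."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]
```

Hosts in the half of the /24 covered by the bogus /25 are silently diverted, while the rest still reach the legitimate network.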

That assumes, of course, that no one tampers with BGP routing, in which case they could reroute traffic or disguise malicious activity. "The trouble is it ... all relies on trust between networks, so if someone hijacks an ISP router, you wouldn't know," Alan Woodward, a visiting professor at the department of computing at England's University of Surrey, and cybersecurity adviser to Europol, tells Information Security Media Group. "It's just another example of how people are forgetting that the Internet was never built to be a secure infrastructure, and we need to be mindful of that when relying upon it."

Spam, Malware, Bitcoins

Hijacking router tables could allow an attacker to spoof IP addresses and potentially intercept data being sent to a targeted IP address. Thankfully, Woodward says, that is "not a trivial task," and Internet service providers have some related defenses in place.

But some attacks get through. One four-month campaign, spotted by Dell Secureworks in 2014, involved redirecting traffic from major Internet service providers to fool bitcoin-mining pools into sharing their processing power - which is used to generate bitcoins - with the attacker. Dell estimates that the attacker netted about $84,000 in bitcoins, although it's not clear that such attacks are widespread.

What has been on the increase, however, are incidents in which malware and spam purveyors hijack an organization's autonomous system numbers, or ASNs, which indicate how traffic should move within and between multiple networks, says Doug Madory, director of Internet analysis at Dyn Research, which was formed after Dyn last year acquired global Internet monitoring firm Renesys.

In a blog post, Madory describes six recent examples of bogus routing announcement campaigns, some of which remain under way, and all of which have been launched from Europe or Russia. By using bogus routing, attackers with IP addresses that have been labeled as malicious - for example by the Zeus abuse tracker, which catalogs botnet command-and-control servers - can hijack legitimate IP address space and trick targeted autonomous systems on the Internet into thinking the attack traffic is legitimate.

"These are not isolated incidents," Madory says of the recent attacks that he has documented. "First, these bogus routes are being circulated at a near-constant rate, and many separate entities are engaged in this practice, although with subtle differences in approach. Second, these techniques aren't solely for the relatively benign purpose of sending spam. Some of this host address space is known to circulate malware."

One takeaway, Madory says, is that any information security analysts who review alert logs should know that the IP addresses attached to alerts may have often been spoofed via BGP hijacking. "For example, an attack that appeared to come from a Comcast IP located in New Jersey may have really been from a hijacker located in Eastern Europe, briefly commandeering Comcast IP space," he says.

The security flaws associated with BGP that allow such attacks to occur haven't gone unnoticed. In January, the EU cybersecurity agency ENISA urged all Internet infrastructure providers to configure Border Gateway Protocol to ensure that only legitimate traffic flows over their networks.

But ENISA's advice glosses over a hard truth: while BGP can be fixed, it can't be fixed quickly. "There are efforts to cryptographically sign IP address announcements," Madory says. "However, these techniques aren't foolproof and until they achieve a critical mass of adoption, they won't make much difference."

No Quick Fix

"Why Is It Taking So Long to Secure Internet Routing?" is the title of a recent research paper from Boston University computer science professor Sharon Goldberg, who notes that any fix will require not just a critical mass, but coordinating thousands of different groups. "BGP is a global protocol, running across organizational and national borders," the paper notes. "As such, it lacks a single centralized authority that can mandate the deployment of a security solution; instead, every organization can autonomously decide which routing security solutions it will deploy in its own network." That's one reason why BGP hasn't gotten a security makeover, despite weaknesses in the protocol having been well-known to network-savvy engineers for the past two decades.

Lately, however, BGP abuse has been rising. "It appears to be more systematized now," Dyn's Madory warns. Pending a full fix, he says that service providers might combat these attacks by banding together and temporarily blocking Internet traffic from organizations that repeatedly fail to secure their infrastructure, thereby allowing BGP attackers to subvert it.

In the meantime, keep an eye on security logs for signs of related attacks. "There's no easy defense, but it is kind of possible [to spot attacks] by monitoring and watching for unexpected changes in routing," Woodward says.



Over 4 billion people still have no Internet connection


The number of people using the Internet is growing at a steady rate, but 4.2 billion out of 7.4 billion will still be offline by the end of the year.

Overall, 35.3 percent of people in developing countries will use the Internet, compared to 82.2 percent in developed countries, according to data from the ITU (International Telecommunication Union). People who live in the so-called least developed countries will be the worst off by far: in those nations, only 9.5 percent will be connected by the end of December.


This digital divide has resulted in projects such as the Facebook-led Internet.org. Earlier this month, Facebook sought to address some of the criticism directed at the project, including charges that it is a so-called walled garden, putting a limit on the types of services that are available.


Mobile broadband is seen as the way to get a larger part of the world’s population connected. There are several reasons for this. It’s much easier to cover rural areas with mobile networks than it is with fixed broadband. Smartphones are also becoming more affordable.

But there are still barriers for getting more people online, especially in rural areas in poor countries.


The cost of maintaining and powering cell towers in remote, off-grid locations, combined with the lower revenue expected from thinly spread, low-income populations, is a key hurdle, according to the GSM Association. Other barriers include taxes, illiteracy and a lack of content in local languages, the organization says.


At the end of 2015, 29 percent of people living in rural areas around the world will be covered by 3G. Sixty-nine percent of the global population will be covered by a 3G network. That’s up from 45 percent four years ago.


The three countries with the fastest broadband speeds in the world are South Korea, France and Ireland, and at the bottom of the list are Senegal, Pakistan and Zambia, according to the ITU.


How DNS is Exploited


The Internet is a global engine of commerce today, but it was never designed with such grandiose applications in mind. In the underlying architecture of the Internet, hostility was never a design criterion, and this has been extensively exploited by criminals, who capitalize on the Domain Name System infrastructure - the map of the Internet - which is indispensable for the Internet as we know it to function.

"Right now the Internet is being used to transfer hundreds of billions of dollars per year from the productive part of the world's economy toward the unproductive part because it is such a gaping hole," says Internet pioneer and DNS thought leader Dr. Paul Vixie, CEO of Farsight Security, a provider of real-time passive DNS solutions that provide contextual intelligence to threat and reputation feeds.

The Internet was built without any thought of authentication, admission control or security, and so almost any application or website can be abused by a creative criminal, he says. But the DNS is proving essential to both the good guys and the bad guys - almost a unifying field theory.

"Everything you need to do on the Internet requires DNS - regardless of intent," says Vixie, who is also the principal author of version 8 of BIND, the most widely used DNS software on the Internet. "I think this makes DNS an interesting place to look for criminals and signs that criminals must leave," he says.

In part one of an exclusive two-part interview with Information Security Media Group (transcript below), Vixie talks about DNS and the impact it has on the Internet's security landscape. He shares insights on:

Part two of this interview will feature Vixie's views on the evolution of the Internet as an ecosystem that has evolved to make crime easier.

Vixie, CEO of Farsight Security, previously served as president, chairman and founder of the Internet Systems Consortium. He has served on the ARIN board of trustees since 2005, where he served as chairman in 2008 and 2009, and is a founding member of the ICANN Root Server System Advisory Committee and the ICANN Security and Stability Advisory Committee. He has been contributing to Internet protocols and UNIX systems as a protocol designer and software architect since 1980. He wrote Cron (for BSD and Linux), and is considered the primary author and technical architect of BIND 4.9 and BIND 8. He has authored or co-authored about a dozen Request for Comments, a publication of the principal technical development and standards-setting body for the Internet, the Internet Engineering Task Force - mostly on DNS and related topics. He was named to the Internet Hall of Fame in 2014.

Varun Haran: How are criminals exploiting DNS infrastructure to perpetrate crime today?

Dr. Paul Vixie: One main area where DNS is facilitating crime is denial-of-service attacks, whose purpose, whether economic or ideological, is to prevent the victim from being able to use the Internet. This is achieved by filling the victim's Internet connection with unsolicited traffic so that there is no room left for legitimate traffic.

Now, unfortunately, the Internet was designed by scientists and engineers to work in a completely friendly environment. Hostility was never one of the design criteria for the Internet. What that means is it is trivial to send packets forging someone else's address as the source. Which means that if you direct the packets forged with a victim's address towards a powerful server, a lot of response traffic will go to your victim. And because the victim did not solicit it, they cannot turn it off. This is a very popular attack, and anytime that you hear that Google or Spamhaus has been hit with a 400 Gbit/s DDoS attack, it is the exact same method being employed - IP source forgery.
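The economics of such reflection attacks are easy to see with back-of-the-envelope numbers (the figures below are illustrative, not from the interview): a small forged query can elicit a far larger response, so the attacker needs only a fraction of the victim's bandwidth.

```python
# Illustrative sizes: a ~64-byte DNS query versus a large (e.g. ANY) response.
query_bytes = 64
response_bytes = 3000
amplification = response_bytes / query_bytes   # ~47x

# To flood a victim at 400 Gbit/s, the attacker only needs to emit:
victim_gbps = 400
attacker_gbps = victim_gbps / amplification    # ~8.5 Gbit/s of forged queries
print(f"amplification ~{amplification:.0f}x, "
      f"attacker sends ~{attacker_gbps:.1f} Gbit/s")
```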

This is not only something the Internet was designed without, it is something that the current Internet economy is resisting fixing, because in order to fix this problem, an ISP has to turn on some new features in their Internet routing equipment. Those features need to be tested, there needs to be documentation, there has to be monitoring, so there is a small cost - there may even be a performance cost in the routing equipment if you turn on this feature.

The cost is trivial, but not zero. The benefit that the operator will see, in exchange for that investment will be measurably zero, because what they are doing is protecting the rest of the Internet against their customers. So if an ISP does this, it is only for the greater good and it is very difficult to get an ISP - who has investors, shareholders, board of directors, management chain etc. - to act for the greater good at their own expense. It simply does not make good business sense to fix this problem.
Internet Vulnerabilities

Haran: The Internet wasn't designed for all the purposes it's being put to today. What are some of the security issues that the current nature of the Internet, in terms of infrastructure and architecture, gives rise to?

Vixie: I gave you one example, which is the lack of source address validation. But there are other admission control problems also. For example, there are control packets that you can transmit that can potentially interrupt other people's conversations. Various TCP and ICMP packets can be transmitted toward parts of the network that will respond by denying other people the ability to communicate for a few seconds.

This comes from when the Internet was just a collection of universities and government contractors. Everybody on the Internet for the first 10 years had a contract with the U.S. government. None of them had any incentive to transmit damaging traffic. The nature of the Internet took that into account. It was a very fragile network, which was intended only for mature computer science professionals to interact.

So, if we turn our attention now to spam, the email system has no admission control. Anyone can send an email to anyone. That was, in fact, an important design criterion, meant to avoid central clearinghouses and make email an end-to-end activity. But what that means is that spammers are also endpoints and have the same right to transmit email to anyone. There is no differentiation, there is no privilege required.

Add to that the fact that, just like IP packets can have their sources forged, even email sources can be forged. And unless you are a technology expert or have a high-end email firewall appliance, you won't be able to tell the difference. This works at scale. Right now, the Internet is being used to transfer hundreds of billions of dollars per year from the productive part of the world's economy toward the unproductive part because it is such a gaping hole. The Internet is the backbone of global commerce today, and yet it was built without any thought of authentication, admission control or security, and so almost any application or website can be abused by a creative criminal.
The Internet's Map

Haran: You have said that DNS is like a unified field theory between the good guys and the bad guys. Can you elaborate? How indispensable is DNS to the structure of the Internet?

Vixie: If the Internet were a territory, the DNS would be its map. We who have grown up in a world that is completely mapped, completely discovered, find it impossible to conceptualize the idea of a territory without a map. Without DNS, the Internet would be a trackless wild, where things would exist but you wouldn't know how to get there or the cost of admission. So I mean it when I say that all Internet communication begins with a DNS transaction - at least in order for the initiator to discover the responder and to find out where to send the packets that will represent their conversation.

But there may be other things as well, such as looking up a key, so that they can build a secure conversation by sharing keying information, or looking up directory servers for authentication and authorization. Pretty much everything you need to do on the Internet is going to be a TCP/IP session. And every TCP/IP session is going to begin with one or more DNS transactions. This is true regardless of your intent. Your intent might be to create wealth, to innovate, to make the world a better place, or it could be that your intent is criminal and you want to lie, cheat, take, force, defraud, and you have purposes which would be seen as evil in the eyes of your fellow man. Your intent does not matter - you are not going to be able to do anything on the Internet without DNS. And that, I think, is what makes DNS such an interesting place to look for criminals and the signs that criminals must leave.
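The point that every session starts with a name lookup is easy to see in code. A minimal sketch using Python's standard library (the `resolve` and `connect` helpers are hypothetical names, not part of any API discussed here):

```python
import socket

def resolve(hostname, port):
    """Step 1: the DNS transaction that precedes every TCP/IP session."""
    return socket.getaddrinfo(hostname, port, type=socket.SOCK_STREAM)

def connect(hostname, port=443):
    """Step 2: only after resolution can the TCP session begin."""
    family, socktype, proto, _, sockaddr = resolve(hostname, port)[0]
    sock = socket.socket(family, socktype, proto)
    sock.connect(sockaddr)
    return sock
```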
DNS Response Rate Limiting

Haran: You are a strong advocate of DNS Response Rate Limiting, which is something that you have worked on yourself. What can you tell me about DNS RRL?

Vixie: In DNS, there are many different kinds of DNS agents. Some only ask questions and receive answers and some only provide answers. It is that second type that concerns rate limiting, because a server in the DNS - the so-called authority server, which is where DNS content comes from - must be very powerfully built, having a lot of capability. Otherwise, if someone sends you a DDoS, they will make your content unreachable because your network pipe would be full of attack traffic.

It is common to buy an extra-large connection to your authority servers and to buy not just one authority server, but maybe a dozen and put them behind load balancers, with redundant power and so forth, because you want to make sure that no matter what happens, you can address queries and your content is reachable.

The difficulty that this presents to the rest of us is that in DNS, a response is larger than a request and that means that you are a potential amplifier. And if you are hearing a question that was forged - the IP address used by the attacker is forged to become the IP address of their intended victim - then you as a very powerful content server would be willing to help that attacker DDoS that victim simply because you are a powerful content server, and you have to be powerful for reasons of your own.

So when we designed response rate limiting, it was to allow those servers to differentiate between attack flows and non-attack flows so that they would be not as usable as an amplifier of third-party attacks. The tricky part is that you have to be very careful not to drop legitimate queries. So there is a little bit of mathematical trickery involved in the DNS RRL system that helps to make sure that you can stop most DDoS attacks without causing collateral damage.
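The published RRL design is more involved (it also "slips" occasional truncated replies so legitimate clients can retry over TCP), but the core idea of counting identical answers per client netblock per time window can be sketched roughly as follows. The class and parameter names are invented for illustration:

```python
import ipaddress
from collections import defaultdict

class ResponseRateLimiter:
    """Toy sketch: cap identical answers per client /24 per time window."""
    def __init__(self, max_per_window=5):
        self.max = max_per_window
        self.counts = defaultdict(int)

    def allow(self, client_ip, qname, window):
        # Bucket by /24 so a spoofed victim prefix is throttled as a unit,
        # keyed on the answer (qname) and the current time window.
        prefix = ipaddress.ip_network(client_ip + "/24", strict=False)
        key = (prefix, qname, window)
        self.counts[key] += 1
        return self.counts[key] <= self.max

rrl = ResponseRateLimiter(max_per_window=5)
results = [rrl.allow("192.0.2.7", "example.com", window=0) for _ in range(8)]
# first 5 allowed, remaining dropped
```

Once the cap is hit, further identical responses toward that netblock are dropped for the rest of the window, starving the amplification flow without affecting other clients.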


5G faces technical, political hurdles on the way to offering multigigabit speeds


For 5G to be successful, the whole telecom industry has to re-evaluate how networks work and are developed. Multiple challenges, both political and technical, have to be overcome before the technology can become a reality.

“Availability of spectrum is obviously a big thing,” said Gerhard Fettweis, who heads a Vodafone-sponsored program at the Dresden University of Technology.

The amount of spectrum allocated to 5G will determine how fast networks based on the technology will eventually become. If they are to reach multiple gigabits per second, which proponents are already promising, operators are going to need a lot more bandwidth than they have today. A first step in securing that will hopefully be taken at the World Radiocommunication Conference in Geneva in November, according to Fettweis.

Network equipment makers and operators are hoping that the conference, organized by the International Telecommunication Union, will set aside at least 100MHz chunks of spectrum below 6GHz for 5G, Fettweis said.

That compares to the latest version of LTE, which offers download speeds of up to 450Mbps using 60MHz of spectrum. But the 100MHz chunks won't be enough, and researchers are therefore looking at so-called millimeter waves, which use spectrum even higher than 6GHz.
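The article's own figures show why 100MHz alone won't deliver multigigabit speeds. Holding LTE's spectral efficiency constant (a simplification, since 5G also targets gains from better modulation and antenna techniques), the arithmetic comes out well under a gigabit:

```python
# Spectral efficiency implied by the LTE figures quoted above.
lte_mbps, lte_mhz = 450, 60
efficiency = lte_mbps / lte_mhz        # 7.5 (Mbit/s per MHz, i.e. bit/s/Hz)

# The same efficiency applied to a 100MHz allocation:
speed_100mhz_mbps = efficiency * 100   # 750 Mbit/s, short of "multigigabit"
```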

The use of higher-frequency bands is something of a necessary evil for operators and equipment vendors. It's the only way to get the spectrum they need, but it also means the area each base station can cover becomes smaller.

Getting spectrum, and developing networks and devices that can take advantage of it, aren't the only potential stumbling blocks. For 5G to be a success, the specifications that define how the technology works have to be developed in a way that's more inclusive than how other protocols were established in the past, according to Eric Kuisch, technology director at Vodafone Germany.

LTE wasn’t developed to handle all the traffic types that networks carry today. For example, because of the growing popularity of connected wearables, smart meters and vehicles, the telecom industry has had to rethink LTE specifications to make them a better fit for related applications. The goal with 5G is to get more of that right from day one.

“We have to talk with industries, including the car industry and manufacturing, to really understand what their needs are. That’s new for us,” Kuisch said.

But what has Kuisch really worried is how 5G networks will be monitored and managed, which nobody is talking about at the moment. Getting this right will be extremely challenging, and it's an area where mobile operators haven't done a good enough job of steering the vendors, according to the Vodafone executive.

“You don’t want to be too late to understand that some part of the network is breaking down when all the cars in Germany are depending on it,” Kuisch said.



A lot of people are saying Microsoft is killing Internet Explorer, but that's not true


Microsoft has been working on a new web browser code-named "Project Spartan" specifically designed for Windows 10 for quite some time now, and that's left plenty of people wondering what will happen to Microsoft's Internet Explorer.

While some are claiming that Microsoft is "killing off" the Internet Explorer branding, that's not true.

Microsoft confirmed to us that Internet Explorer will still be included in Windows 10, but that it will play second fiddle to "Project Spartan," which is to be re-branded with a new name in the future.

"Project Spartan is Microsoft’s next generation browser, built just for Windows 10," a Microsoft spokesperson told Business Insider. "We will continue to make Internet Explorer available with Windows 10 for enterprises and other customers who require legacy browser support."

So while Internet Explorer will technically be included in Windows 10, Microsoft wants to shine a spotlight on Project Spartan as the main Microsoft browser, likely in an effort to shed the negative connotation many have with Internet Explorer (famously known for being the number one web browser for downloading other browsers) and to draw attention to the slew of new features Project Spartan will offer.

Project Spartan was first announced in January during Microsoft's Windows 10 unveiling, and it offers a new design and rendering engine, and includes integration with Cortana, Microsoft's virtual assistant.


Allowing Project Spartan to plug into Cortana will allow users to glean more information from the web without leaving their current tab, with Cortana being able to pull up directions, store hours, phone numbers, and addresses from a website. All you'll need to do is click on Cortana's tiny blue ring to see what she can find from the website you're visiting.

So what will Project Spartan eventually be called when Windows 10 launches?

Microsoft says it's still deciding on how to re-brand Project Spartan, according to The Verge, but don't be surprised if it includes the word "Microsoft" in the name — Microsoft's marketing chief Chris Capossela says their polling suggests Chrome users preferred including the company name in the re-branding. 



Online trust is at the breaking point


IT security professionals around the globe believe the system of trust established by cryptographic keys and digital certificates, as well as the security of trillions of dollars of the world's economy, is at the breaking point.

For the first time, half of the more than 2,300 IT security professionals surveyed by The Ponemon Institute now believe the technology behind the trust their business requires to operate is in jeopardy. Every organization surveyed had responded to multiple attacks on keys and certificates over the last two years.


Research reveals that over the next two years, the risk facing every Global 5000 enterprise from attacks on keys and certificates is at least $53 million USD, an increase of 51 percent from 2013. For four years running, 100 percent of the companies surveyed said they had responded to multiple attacks on keys and certificates, and vulnerabilities have taken their toll.

"The overwhelming theme in this year's report is that online trust is at the breaking point. And it's no surprise. Leading researchers from FireEye, Intel, Kaspersky, and Mandiant, and many others consistently identify the misuse of key and certificates as an important part of APT and cybercriminal operations," said Kevin Bocek, VP of Security Strategy and Threat Intelligence at Venafi. "Whether they realize it or not, every business relies upon cryptographic keys and digital certificates to operate. Without the trust established by keys and certificates, we'd be back to the Internet 'stone age' – not knowing if a website, device, or mobile application can be trusted."

As risk increases, so does the number of keys and certificates: Over the last two years, the number of keys and certificates deployed on infrastructure such as web servers, network appliances, and cloud services grew more than 34 percent to almost 24,000 per enterprise. The use of more keys and certificates makes them a better target for attack. Stolen certificates sell for almost $1000 on underground marketplaces, and doubled in price in just one year. Researchers from Intel believe hacker interest is growing quickly.

Organizations are more uncertain than ever about how and where they use keys and certificates: 54 percent of organizations now admit to not knowing where all of their keys and certificates are located or how they're being used. That raises an obvious question: how can any enterprise know what is trusted and what isn't?

Security pros worry about a Cryptoapocalypse-like event: A scenario in which the standard algorithms of trust, like RSA and SHA, are compromised and exploited overnight is reported as the most alarming threat. Instantly, transactions, payments, mobile applications, and a growing number of Internet of Things devices could not be trusted. Coined by researchers at Black Hat 2013, a Cryptoapocalypse would dwarf Heartbleed in scope, complexity, and time to remediate.

The misuse of enterprise mobile certificates is a lurking concern: The misuse of enterprise mobility certificates used for applications like WiFi, VPN, and MDM/EMM is a growing concern for security professionals. Misuse of enterprise mobility certificates was a close second to a Cryptoapocalypse-like event as the most alarming threat. Incidents involving enterprise mobility certificates were assessed to have the largest total impact, over $126 million, and the second largest risk. With a quickly expanding array of mobile devices and applications in enterprises, it's no wonder why security pros are so concerned.

"With the rising tide of attacks on keys and certificates, it's important that enterprises really understand the grave financial consequences. We couldn't run the world's digital economy without the system of trust they create," said Dr. Larry Ponemon, chairman and founder of the Ponemon Institute. "This research is incredibly timely for IT security professionals everywhere – they need a wake-up call like this to realize they can no longer place blind trust in keys and certificates that are increasingly being misused by cybercriminals."


Via Paulo Félix

How Google's New Wireless Service Will Change the Internet


Google says its new wireless service will operate on a much smaller scale than the Verizons and the AT&Ts of the world, providing a new way for relatively few people to make calls, trade texts, and access the good old internet via their smartphones. But the implications are still enormous.

Google revealed on Monday it will soon start “experimenting” with wireless services and the ways we use them—and that’s no small thing. Such Google experiments have a way of morphing into something far bigger, particularly when they involve tinkering with the infrastructure that drives the internet.

As time goes on, the company may expand the scope of its ambitions as a wireless carrier, much as it has done with its super-high-speed landline internet service, Google Fiber. But the larger point is that Google's experiments, if you can call them that, will help push the rest of the market in the same direction. The market is already moving this way thanks to other notable tech names, including mobile carrier T-Mobile, mobile chipmaker Qualcomm, and serial Silicon Valley inventor Steve Perlman, who recently unveiled a faster breed of wireless network known as pCell.

At the moment, Google says, it hopes to provide ways for phones to more easily move between cellular networks and WiFi connections, perhaps even juggling calls between the two. Others, such as T-Mobile and Qualcomm, are working on much the same. But with the leverage of its Android mobile operating system and general internet clout, Google can push things even further. Eventually, the company may even drive the market towards new kinds of wireless networks altogether, networks that provide connections when you don’t have cellular or WiFi—or that significantly boost the speed of your cellular connection, as Perlman hopes to do.

Richard Doherty, the director of a technology consulting firm called Envisioneering who is closely following the evolution of the world's mobile networks, points out that the carriers still have clout of their own, and that in many cases they will push to keep wireless networking as it is. But he also says the carriers won't stand by if it looks like Google will eclipse their services. "Do they really want all this happening on Google, when they're not getting a penny?" he asks.

‘In the Coming Months’

On Monday, at the massive Mobile World Congress in Barcelona, Spain, Google big-wig Sundar Pichai revealed that the company will transform itself into a wireless carrier in “the coming months,” confirming earlier reports that it would sell wireless plans directly to smartphone buyers. And true to Google form, Pichai was careful to say that the company isn’t trying to compete with major carriers.

“Carriers in the US are what powers most of our Android phones,” he said, referring to the world of smartphones that run Google’s Android operating system and all its associated Google apps. “That model works really well for us.”



Compromise on Info-Sharing Measure Grows


A willingness to compromise expressed at a Feb. 25 House hearing on President Obama's cyberthreat information sharing initiative offered a sign of hope that long-sought legislation to get businesses to share such data could pass Congress this year and be signed into law.

The tone of the discussion at the hearing was far different from that of the past two Congresses, when the White House threatened presidential vetoes of cyberthreat information sharing measures that passed the House of Representatives.


Congressional Republicans and the Democratic president and his supporters differed in the past over how an information sharing law should address liability protections and privacy safeguards. The White House maintained the liability protections in the Republican-sponsored legislation were too broad and that privacy safeguards were too weak. The GOP argued the liability provisions in their bills - which had some Democratic backers - were needed to get the private sector to participate in the voluntary information sharing program and that the privacy protections the White House sought would be too costly for some businesses to implement.

But those differences seemed to have narrowed at the Feb. 25 House Homeland Security Committee hearing, where a willingness to seek compromise surfaced from both sides.

Bone of Contention

"It is, sometimes, a bone of contention between both sides of the aisle," House Homeland Security Committee Chairman Mike McCaul, R-Texas, said, referring to differing views on liability protection. But McCaul congratulated administration representatives at the hearing for presenting the president's plan and saw merit in its proposals. "I talked to the private sector; they like the liability protections that are presented here," he said, especially in regards to sharing data with the government.

Still, McCaul said some business leaders had reservations about the liability protection in Obama's plan for businesses that want to share cyberthreat information with other business.

The president's proposal would provide liability protection for businesses that share cyberthreat data with DHS's National Cybersecurity and Communications Integration Center, known as NCCIC. Under Obama's plan, those protections aren't extended to businesses that share information with each other directly but would be covered if the data is shared through newly formed information sharing and analysis organizations, or ISAOs. "What the legislation provides is that the private sector can share among themselves through these appropriate organizations and enjoy the same liability protections for providing that information to those organizations," said Undersecretary Suzanne Spaulding, who runs the National Protection and Programs Directorate, the DHS entity charged with collaborating with business on cybersecurity.

Working Out Legislative Language

McCaul responded that the liability protections to share information with NCCIC could serve as the "construct" to share data among businesses, suggesting specific legislative language could be worked out between Congress and the administration. "We can discuss that more as this legislation unfolds," he said.

Rep. Curt Clawson, a Florida Republican who led several multinational corporations before his election to Congress in 2014, said getting buy-in to share cyberthreat information with the U.S. government from companies with global operations and stakeholders could prove to be "a tough sale."

"My world is all about multiple stakeholders," Clawson said, addressing Spaulding. "We're trying to protect our customers, our suppliers, the communities that we live in, and what I've read so far of what you proposed just doesn't feel like a compelling case that I can take to my multinational board of directors. ... Any private-sector CEO would be negligent to go along on the basis of trust" without the U.S. government providing a detailed plan on what information is being sought and how it would be used.

Spaulding said the government will build that trust and agreed with Clawson that the "devil is in the details" of a final legislative plan. She said information to be shared would be minimal and technical, such as explicit cyberthreat indicators, IP addresses and specific types of malware. The undersecretary said the government would be transparent about the types of information it seeks and receives and would develop policies and protocols to protect proprietary as well as personally identifiable information. "This isn't going to make every company open its doors," Spaulding said. "But it does address concerns that we've heard from the private sector, and there will be a fair amount of detail about precisely what we're talking about sharing here."

Though not totally persuaded, Clawson offered to work with DHS on the legislation, an offer Spaulding accepted.

Stripping PII from Shared Data

Another partisan difference is the Obama administration's insistence that companies strip personally identifiable information from data before it's shared, an act that some Republicans say puts a financial burden on businesses. Phyllis Schneck, DHS deputy undersecretary for cybersecurity, explained that under Obama's proposal, companies would need to make a "good-faith effort" to remove PII, conceding that it is a "policy puzzle" that needs to be solved by the private sector working with law enforcement and the intelligence community. "We're doing our best to get everybody to design that," Schneck said.
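As an illustration of what a "good-faith effort" to strip PII might look like mechanically, here is a minimal Python sketch. The patterns and the sample report are hypothetical, and a real policy would cover far more PII categories; the point is that the threat indicator survives while the personal data does not:

```python
# Sketch: scrub obvious PII from a threat report before sharing it,
# while leaving the technical indicators (e.g. attacker IPs) intact.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")          # email addresses
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")               # US Social Security numbers

def scrub(report: str) -> str:
    """Redact the PII patterns above; everything else passes through."""
    report = EMAIL.sub("[EMAIL REDACTED]", report)
    report = SSN.sub("[SSN REDACTED]", report)
    return report

raw = ("Phishing mail sent to jane.doe@clinic.example from 203.0.113.7; "
       "form captured SSN 123-45-6789.")
print(scrub(raw))
```

The cyberthreat indicator (the source IP 203.0.113.7) is exactly what the receiving agency needs, so the scrubber leaves it alone; the victim's email address and Social Security number never leave the company.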

Regardless of how the final language of a cyberthreat sharing bill reads, such legislation is only one part of a solution to mitigate cyberspace risks. "Information sharing is no silver bullet," said Eric Fischer, senior specialist for science and technology at the Congressional Research Service. "It's an important tool for protecting systems and their contents. As long as organizations are not implementing even basic cyber hygiene, there are going to be some significant difficulties."

Fischer cited a Hewlett-Packard study that shows 45 percent of companies lack basic cyber hygiene. "There have been cases where companies had the information, but nevertheless did not pay sufficient attention to it," he said. "They had information that could have prevented an attack. If a company is not prepared to implement threat assessments that they receive, then that's going to be a problem."



Sizing Up the Impact of Partial DHS Shutdown


The expansion of some major federal government cybersecurity initiatives would be suspended if Congress does not fund the Department of Homeland Security by week's end, triggering a partial shutdown.

Initiatives to expand the Einstein 3 intrusion prevention and continuous diagnostics and mitigation programs to a number of federal civilian agencies would be placed on hold if Congress fails to come up with the money by Feb. 27, when a temporary DHS appropriation ends.

"A shutdown would prevent us from bringing aboard those [programs] and essentially stop those agencies from receiving the protection that they need from the cyberthreats out there," says Andy Ozment, DHS assistant secretary for cybersecurity and communications.

About 43 percent of the staff at the National Protection and Programs Directorate - the DHS entity that oversees its cybersecurity programs - would be furloughed if Congress fails to enact funding legislation that President Obama would sign, according to an estimate by the Congressional Research Service. Ozment says that furlough figure includes 140 employees from the National Cybersecurity and Communications Integration Center, the DHS unit that coordinates cyberthreat information sharing with federal agencies; local, territorial, tribal and state governments; the private sector and international organizations.

Will Systems Be at Risk?

Although Ozment, in testimony earlier this month to a House panel, said the furloughs would have an adverse impact on the government's cybersecurity activities, he stopped short of saying federal IT systems would be placed at risk by a partial shutdown.

"Without these staff, the NCCIC's capacity to provide a timely response to agencies or critical infrastructure customers seeking assistance after a cybersecurity incident would be decreased, and we would be less able to conduct expedited technical analysis of cybersecurity threats," Ozment testified at a Feb. 12 hearing of the House Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection and Security Technologies.

Funding DHS's cybersecurity initiatives - which has widespread support among Democrats and Republicans in Congress - is caught up in a highly partisan political battle over President Obama's executive order to shield millions of illegal immigrants in the United States from deportation. The House in January passed a DHS appropriations bill that would fund most department programs, including those for cybersecurity, but withholds money from initiatives that would support Obama's executive action on immigration. With the threat of a Senate filibuster by Democratic members, as well as a presidential veto, the House bill has stalled in the upper chamber.

Lamentable But Not Perilous

Jason Healey, a cybersecurity expert at the think tank The Atlantic Council, says he doubts the failure to fund DHS cybersecurity initiatives would create significant risk to either government or critical private networks. "That seems like it's a lamentable thing that they can't continue [funding], but it doesn't worry me too much," he says, adding that other federal agencies, including the FBI, work to help safeguard government networks and critical IT systems in the private sector.

Besides the temporary suspension of the Einstein 3 and continuous diagnostics and mitigation programs, also known as continuous monitoring, Ozment said a partial shutdown would halt development of new programs to secure IT. "We would be unable to continue planning our next generation of information sharing capabilities that are necessary to make our information sharing real-time and automated in order to enable us to combat highly sophisticated cyberthreats," he said.



Secure Domains: The DNS Security Debate


The importance of improving the Internet infrastructure was a dominant theme throughout President Obama's White House Summit on cybersecurity and consumer protection last week.

Making that happen, however, isn't a straightforward proposition, information security experts warn, owing to the Web never having been designed to be secure in the first place - which may seem ironic, given its importance now as the backbone of e-commerce and the world's payments infrastructure.

Furthermore, any attempt to strengthen Internet hygiene requires the participation and buy-in of many different key players, including standards bodies, government agencies, DNS providers, Internet service providers and more. Given all of the different parties involved, disagreement often rages about the best way forward.

Take DNSSEC, short for Domain Name System Security Extensions. This evolving, open standard - or specification - is designed to authenticate the origin of DNS data used on Internet protocol networks by digitally signing it. At the White House cybersecurity summit, CloudFlare, which offers services that defend against DNS and distributed-denial-of-service attacks, announced that it will now enable DNSSEC for all of the 2 million websites it supports.

CloudFlare's Rationale

By making DNSSEC widely available, CloudFlare says it hopes to enhance the overall security of the Internet. "Our ultimate goal is that DNSSEC will be easy to deploy, and thus widely adopted, to make the Internet a better, more secure place," says Ryan Lackey, a principal in the firm's security practice. The move comes after CloudFlare in September began offering Universal SSL (Secure Sockets Layer) certification, free of charge, to all of its clients. SSL provides a secure connection for Internet browsers and websites to transmit data, and helps defend against many types of attacks.

"Both SSL and DNSSEC have a role to play in keeping users safe on the Internet, from phishing, from cybercriminals and from malicious nation states," Lackey says. "Having proven that Universal SSL is possible at our scale, we hope many other organizations will follow in turning SSL on for all their customers - and at no additional cost."

But while Lackey describes DNSSEC as being "an important, foundational security technology," in the past it has been "incredibly difficult to deploy," he acknowledges, although his firm has been trying to simplify that process. "We're working with DNS registrars and registries to simplify the process of turning DNSSEC on for a domain, and we will soon be providing simple, robust DNS for our customers, which fully supports DNSSEC."

Debate: DNSSEC Valuable?

But there's a debate over DNSSEC, and just what it might - or might not - do for Internet users. Some security experts see it as crucial technology for blocking DNS amplification DDoS attacks. But others starkly disagree, saying instead that DNSSEC can actually be abused by attackers to fuel amplification attacks.

One critic is Dan Holden, director of the security engineering and response team for online security firm Arbor Networks - which competes with CloudFlare. Holden says that Universal SSL and DNSSEC won't stop phishing, and don't address authentication concerns facing payments providers and banking institutions.

"Many people do not believe DNSSEC is a good solution at all," Holden says.

But other information security experts and government agencies have backed the standard. "Registrars should consider supporting DNSSEC," advises the EU cybersecurity agency ENISA in a recent threat report.

"The use of DNSSEC is definitely a step in the right direction," says Europol cybersecurity advisor Alan Woodward, who's a visiting professor at the department of computing at England's University of Surrey. "It does help with attacks, such as DNS poisoning. However, I think some people misunderstand what DNSSEC does for us. It basically provides authentication of the source, not encryption of the data passed. It doesn't prevent DDoS attacks per se, but it can help counter it, as it allows you to shut off untrusted DNS sources."
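Woodward's distinction can be illustrated with a toy example. Real DNSSEC publishes RRSIG signatures that validating resolvers check against DNSKEY records up a chain of trust to the root; the Python sketch below (using the third-party `cryptography` package, with hypothetical names) shows only the underlying sign-and-verify idea, not the actual DNS wire format:

```python
# Conceptual sketch of what DNSSEC adds: a verifiable signature over DNS data.
# Real DNSSEC uses RRSIG/DNSKEY records and a chain of trust; this only
# demonstrates the sign/verify idea with a generic RSA key pair.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

zone_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # stands in for the zone's DNSKEY
record = b"www.clinic.example. 300 IN A 192.0.2.10"

signature = zone_key.sign(record, padding.PKCS1v15(), hashes.SHA256())  # stands in for an RRSIG

# A validating resolver accepts the genuine record (verify raises if invalid)...
zone_key.public_key().verify(signature, record, padding.PKCS1v15(), hashes.SHA256())

# ...and rejects a poisoned answer pointing at an attacker's address.
forged = b"www.clinic.example. 300 IN A 198.51.100.66"
try:
    zone_key.public_key().verify(signature, forged, padding.PKCS1v15(), hashes.SHA256())
    forgery_detected = False
except InvalidSignature:
    forgery_detected = True
print("forgery detected:", forgery_detected)
```

Note that the record itself still travels in cleartext, which is exactly Woodward's point: DNSSEC authenticates the source of the answer; it does not encrypt it.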

But DNSSEC alone won't solve the world's DNS problems, Woodward says. "It would require much more widespread adoption to make a significant dent in the problem we see in DNS use across the Web. However, it might well help CloudFlare clients to see off DDoS attacks slightly more easily if they are mounted using DNS amplification."

Financial Sector Upsides

More widespread DNSSEC adoption could also benefit financial services firms, says Al Pascual, director of fraud and security at Javelin Strategy & Research - but only if the standard is implemented properly. "DNSSEC can help prevent malicious redirection, but it is a two-part equation, as infrastructure providers and site owners need to implement it in order for the solution to function correctly," Pascual says. "While DNSSEC isn't new, financial institutions that are not taking advantage could stand to benefit from DNSSEC's ability to reduce the risk of successful phishing attacks against accountholders."

DNSSEC can also protect domain records from spoofing and "poisoning," but it will not protect sites from DNS record tampering - such as registration hijacking and malware-infected sites that compromise visitors through drive-by downloads - says Greg Rosenberg, security engineer at digital forensics investigation firm Trustwave.

"Many attackers utilize hijacked DNS information to redirect unsuspecting users to malicious websites to capture sensitive data, like payment card information, log-in data and/or Social Security numbers," he says. "As hackers continue to target Web and e-commerce assets at a quickening pace, it will be critical to help protect against man-in-the-middle attacks and phishing for credentials."

But DNSSEC was never designed to stop man-in-the-middle attacks, Arbor's Holden says, adding that it also cannot solve the ongoing challenge of poor user behavior. "Phishing is preying on the person, not the machine, and that's why it's so difficult to solve from a technology standpoint," he says.

Parallel Moves

CloudFlare's willingness to offer its clients a hosted DNSSEC offering is a move into relatively uncharted territory, says Dave Jevans, co-founder of the Anti-Phishing Working Group and chief technology officer of mobile security firm Marble Security. "However, it won't stop most DNS attacks, as those are typically phishing the DNS credentials of a website's admin, and taking over the site," he says. "But Cloudflare should be applauded for taking a leadership position with DNSSEC."

But CloudFlare's move - and DNSSEC itself - is just part of what's required, Jevans says. For the financial services industry in particular, he says that strengthening the security of the e-mail network itself, through initiatives such as DMARC - Domain-based Message Authentication, Reporting & Conformance - and the use of top-level domain names, such as ".bank," have the potential to deliver great security payoffs.
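DMARC, the email-side initiative Jevans mentions, amounts to publishing an authentication policy in DNS that tells receiving mail servers what to do with messages failing SPF/DKIM checks. A minimal illustrative record (the domain and report address here are hypothetical):

```
; Hypothetical DMARC policy, published as a DNS TXT record.
; p= tells receivers what to do with mail that fails SPF/DKIM alignment;
; rua= is where aggregate failure reports are sent.
_dmarc.clinic.example.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@clinic.example"
```

A stricter policy of `p=reject` would tell receivers to drop failing mail outright, which is what makes spoofing a bank's domain much harder once the policy is widely honored.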



Yes, You Can Afford a Hacker

Want to break into your partner’s email? Got a few hundred bucks lying around? You can afford your very own hacker.

If you’re looking to break into someone’s email account or snag a few compromising photos stored in the cloud, where would you go? Craigslist, of course.

“I am looking for someone who can get into a database to retrieve a few photos. Someone who is a genius at computers,” read a recent post. And it doesn’t stop there.

You can post "How do I get the password for my ex-girlfriend's hotmail account?" or just "Need a computer hacker for a job!" on an online forum, says Tyler Reguly, manager of security research at Tripwire. Then you sit back, wait for the replies to roll in, and strike a deal.

It’s that easy to hire a hacker.

Cybercrime used to be limited to the shadowy corners of the Internet and secret black market forums, but now these transactions are taking place on websites that millions of people use every day. Googling “hacker for hire” returns more than 1.6 million results. And for the slightly more tech-savvy, new marketplaces such as hackerslist.com, hackerforhire.org, and neighborhoodhacker.com provide a safe meeting place for hackers and those seeking their services. You can even leave Yelp-style feedback on forums like hackerforhirereview.com.

“It’s frightening that people have no qualms asking” for hacking in the same way they would ask someone to shovel snow from their driveway, Reguly says.

Black market websites have long offered a wide array of services for would-be cybercriminals—customized malware, carder forums selling stolen payment card details and cloned credit cards, exploit kits and other toolkits to craft campaigns, denial-of-service attack tools, and botnet rentals—at fairly affordable prices. Most of the sites accept the cryptocurrency Bitcoin, to keep transactions anonymous. Some sites welcome new users and others have strict membership requirements, but in general, these forums and stores are public, transparent, and easy to find, says Daniel Ingevaldson, CTO of Easy Solutions, a fraud detection company.

“It’s really hard to get in trouble for doing this, so there is no reason to hide,” Ingevaldson says. “It will take you only a few minutes to find it, even if you don’t know what you are doing.”

Hacking used to be thought of as a financial crime, but today’s hackers-for-hire will take personal jobs. Instead of offering botnets with hundreds or thousands of compromised machines or stolen payment card information, these sites target a much broader market. Offerings include breaking into email and social media accounts or hacking into online databases and services, says Grayson Milbourne, the security intelligence director at Webroot. Some sites may offer escrow accounts, letting customers transfer funds in and paying the hacker only after the service is complete. Prices vary, but usually range between $100 and $3,000, making these services “within reach of most,” he says.

That Craigslist ad for retrieving some photos off the database offered $500 for the gig.

If you’re willing to tread these muddy waters, finding a hacker is easy and just a simple Google search away.

That society doesn’t seem to care about this kind of hacking is “disconcerting,” Reguly says, noting that many people don’t view stealing digital assets as a real crime. The disconnect between the physical and digital worlds remains very strong, even as people’s offline and online lives merge.

The same person who would be upset when thieves steal credit card numbers would not consider breaking into email or Facebook accounts as serious, he said.

And some customers feel they deserve what they’re paying for or that they’re righting some wrong. A PhD student angry that his research paper has been posted without his permission on other sites might hire someone to make sure people can't search or link to those pirated copies. A mother might want someone to break into her son’s Facebook account and install something on his phone that would let her intercept both incoming and outgoing phone calls, text messages, and pictures.

Even though it’s relatively affordable, hiring a hacker for personal use is a risky business, Milbourne says.

Is there honor among thieves? There is no way to make sure hackers will stop where you've told them to once they've done the job. That mom may receive her son's Facebook password, but she can never be sure the hacker won't use the information to steal her son's identity, or to trick him into downloading a banking Trojan on the family computer to steal her bank account information.

The legal issues surrounding these transactions are murky.

The activities being posted online are criminal, but who is supposed to prosecute them? Hacking is a global service—the providers can be based anywhere in the world and out of U.S. jurisdiction. The customer looking for the services doesn’t need to know, and probably doesn’t even care, where the service is coming from. And the sellers know the odds of law enforcement coming after them are very low.

“Getting arrested is out of their realm of experience for what can possibly happen,” Ingevaldson said. “None of their friends have been arrested.”

Hacker-for-hire sites may or may not be breaking the law—no one has tested those limits yet. And mainstream sites such as Craigslist act as just a marketplace connecting buyers and sellers and so far have claimed they are not responsible for any resulting illegal activities.

“It should be simple … hacking into someone’s email is a crime, so discussing that with someone and paying them to do it should, therefore, be conspiracy to commit a crime,” Reguly says.

The recent proposals from the White House to amend the Racketeer Influenced and Corrupt Organizations Act - originally designed to prosecute the Mafia and gangs - to include hacking may change things. If RICO can be applied to cybercrime, just being in the same chatroom or forum as a hacker may make a person an accomplice.

“At this point, our lives are digital, the bits and bytes traversing the wires are as much a part of us as the clothes we choose to wear and the cards we carry in our wallets,” Reguly says. This means people have to protect their digital assets just as they take care of themselves in the physical world. “To make a mockery of that with sites like this is a great example of the decay of society.”


Via Roger Smith, Paulo Félix

Prepare for faster, safer web browsing: The next-gen HTTP/2 protocol is done


The future of the web is almost ready for prime time.

Work on HTTP/2 by the Internet Engineering Task Force HTTP Working Group is finished, according to group chair Mark Nottingham, who made the announcement on his personal blog. HTTP/2 now has to go through the final editing process before it is published and becomes an official web standard.

The announcement comes a little more than a week after Google announced that it was discontinuing SPDY in favor of HTTP/2 inside Chrome. SPDY won’t fully disappear from Chrome until early 2016, while HTTP/2 support will roll out to Google’s browser in the coming weeks.

Why this matters: Since HTTP is part of the very foundation of the web, any changes that come to the protocol are a big deal. HTTP/2 promises to make response times faster for web clients (browsers) and reduce the load on servers. But it will take time for the new standard to roll out across the web and for all the kinks to get sorted out. As Nottingham explained in a blog post from 2014, “HTTP/2 isn’t magic Web performance pixie dust; you can’t drop it in and expect your page load times to decrease by 50%.”  Once server admins get the hang of HTTP/2, however, it should boost web performance.

HTTP/2 features

The biggest change with HTTP/2 is a new feature called multiplexing that, together with header compression, allows multiple server requests to be sent at the same time. HTTP/2 also uses fewer connections between server and client, and allows servers to push content straight to a browser.

That last bit is important since it can also improve load times. With “server push” a website could, for example, send a CSS stylesheet to the browser before it requests it—a logical move since the browser needs the CSS data to know how to lay out the page.

One thing that won’t be coming to HTTP/2, however, is mandatory SSL/TLS (HTTPS) encryption. That was the original plan back in late 2013, but it has since been scrapped. HTTP/2 will still make TLS encryption easier to implement, according to Nottingham, because the new protocol is designed to reduce the speed hits that sites usually take using HTTPS right now. But it won’t be a mandatory part of the new standard.

That said, TLS may still be sort of mandatory for sites that want to use HTTP/2. According to Nottingham, developers for Chrome and Firefox have said that the two popular browsers will only use HTTP/2 over TLS. That means site developers that don’t add TLS to an HTTP/2-enabled site won’t be able to use the new standard with two of the most popular browsers out there.
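In practice, the browser and server agree on HTTP/2 during the TLS handshake itself, via the ALPN extension. A minimal Python standard-library sketch of the client side (no network connection is made here; the negotiation comments describe what would happen against a real server):

```python
# Sketch: HTTP/2 over TLS is chosen during the handshake via ALPN.
# The client advertises "h2"; a server without HTTP/2 support falls
# back to "http/1.1". Pure stdlib; no connection is made in this sketch.
import ssl

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # preference order: HTTP/2 first

# Against a real server, you would wrap a socket with this context,
# complete the handshake, and then read the negotiated protocol with
# conn.selected_alpn_protocol(), which returns "h2" or "http/1.1".
print("ALPN supported by this build:", ssl.HAS_ALPN)
```

This is why the Chrome/Firefox stance effectively makes TLS a prerequisite: without a TLS handshake there is no ALPN exchange, so those browsers never get the chance to select HTTP/2 for a plaintext site.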

While the bulk of the work on HTTP/2 is done, the IETF HTTP Working Group isn't going anywhere. In fact, it's already looking ahead to the possibility of an HTTP/3, as well as improving current HTTP specs with other features, like HTTP message signing for improved server-to-browser authentication.




Muni broadband providers don’t want to face common carrier rules

It's no secret that the big Internet providers like Comcast, AT&T, and Verizon oppose a move toward heavier Internet regulation.

But many smaller providers don't want stricter rules, either. Today, 43 municipal broadband providers asked the Federal Communications Commission to avoid reclassifying them as common carriers, a move that would expose them to net neutrality rules and potentially other requirements under Title II of the Communications Act.

Municipal broadband providers have mixed feelings about the policies of President Obama and FCC Chairman Tom Wheeler. Obama and Wheeler are planning to eliminate state laws that restrict growth of municipal broadband networks, a move that is opposed by the big private ISPs but supported by the municipal broadband providers.

But at least a few dozen municipal broadband providers oppose Title II regulation, including Cedar Falls Utilities in Iowa, which recently hosted Obama when he was arguing against anti-municipal broadband laws. The 43 signers of the letter included Cedar Falls, though it did not include two municipal broadband providers in Tennessee and North Carolina that have asked the FCC to preempt state laws.

"The undersigned, municipal providers of broadband Internet access service, are strong supporters of net neutrality and an open Internet but are staunchly opposed, like other, small and medium-sized Internet service providers (ISPs) who are privately held, to the reclassification and regulation of this service as common carriage under Title II of the Communications Act," the 43 providers wrote to the FCC.

If the commission does reclassify broadband under Title II, it should exempt small and medium-sized providers "from any new and enhanced transparency obligations; and ensure smaller ISPs that utilize poles that are subject to the cable rate formula are not forced into paying higher fees based on the telecommunications rate," they wrote.

"As smaller ISPs, we do not have an incentive to harm the openness of the Internet," they continued. "All of the undersigned face competition from one or more wireline ISPs, and we compete hard to attract and serve customers who would depart to our competitors if we engage in any business practices that interfere with their Internet experience."

Although Wheeler says he does not intend to impose rate regulation, tariff requirements, or last-mile unbundling, the providers said this is "cold comfort."

"The Commission has in the past imposed structural separations, service unbundling and resale obligations under Sections 201 and 202, and this Commission cannot bind the actions of a future Commission should it wish to institute rate regulation, tariffing, unbundling or any other form of before-the-fact regulation, creating deep and lasting regulatory uncertainty," they wrote. "Moreover, even this Commission will be obligated to respond to complaints about rates or seeking open access to facilities by third-party providers."

In making the case for Title II, Wheeler claimed that small Internet providers "have all come in and said, 'we like Title II, we hope you’ll do Title II.’”

The American Cable Association (ACA), which represents more than 900 small and medium-sized providers, including 100 municipal providers, begs to differ.

"ACA applauds the 43 municipal broadband Internet providers that are also ACA members for speaking out about the harms of Title II reclassification for smaller ISPs," CEO Matthew Polka said in an announcement today. "ACA agrees with their clear message that the FCC Chairman should make changes to the order to accommodate these concerns before the scheduled vote on Feb. 26.”
