
September 30 2011

02mydafsoup-01

To all Occupy Wall Street participants, here is the key to your victory... (for serious) : politics | reddit.com 2011-09-30



Guys, listen. Here's the deal.

 

I love you guys with every shred of my hard-left leaning heart. But I think you might be doing something wrong. Here is one thing that can help you.

Tomorrow, wear a polo and khakis

Seriously. Polos and khakis. Every time you guys DO finally get some fucking press, it's a scrawny dude with dreads in a ratty t-shirt. You're going big here, so dress for it. Tomorrow: polo shirt and khakis.

Why? Because you need to get the right-leaning equivalent of me on your side. I'm 35 right now. I understand where the hippy thing comes from. I get it as well as a guy who's 35 can. My counterparts do not. They think you are scummy druggies on welfare, and when they see a bunch of people on TV who they think are SDs on W, they root for the cops to hit you again.

Speaking of the cops, who do you think they'll mace first? SDs on W, or a guy in khakis and a polo? Seriously, it's fucking cop camouflage. And if they DO come for you, when people at home see PEOPLE WHO LOOK LIKE THEM getting abused by police... that's when shit changes.

Seeing protesters get beat up means nothing because protesters get beat up all the time. Therefore, don't look like a protester! This connects you to the person watching and opens them to your side!

So for serious. Do it. You're about to tip this thing over. Polos and khakis. Cop camo + target audience = shave, shower, polos and khakis.

TL;DR: Polo shirt and khakis = dress code for NYC protests tomorrow.

Do it.

Edit: Going to bed soon, one more thing before I turn it over to all of you.

Please spread this as much as you can. Professionalism will help push this thing over the edge. You have labor pushing you. The national media is starting to wake up to you. You're almost there. Keep pushing.

Think: Business Casual Friday. Don't play it up like the Billionaires for healthcare. You're just a guy, going to work in this big office building here.

Even if you don't think that you SHOULD be judged on appearances (and I agree), you have to realize that you ARE. Fight the appearance fight another day. Polos and khakis.

Do it.

-------------------------

oAnth:

This entry is part of the OccupyWallStreet compilation 2011-09, here.

Reposted by datenwolf

August 01 2011

2011 The Public Voice Civil Society Meeting

The Public Voice will hold a privacy and consumer protection conference in conjunction with the 33rd Data Protection and Privacy Commissioners Conference on Monday, 31 October 2011 at the Hilton Mexico City Reforma. The OECD has also scheduled a symposium on November 1. The Data Protection Conference will take place 2-3 November 2011.

January 03 2011

2011 Watchlist: 6 themes to track

Now's the time of year for everyone to write about the trends they see in the coming year. I've resisted that in the past, but this year I'll make an exception. We'll see if it becomes a tradition. Here's my quick list of six themes to watch in 2011:

The Hadoop family

Big data is no secret, and it grew so big in 2010 it can hardly count as a "trend" for 2011. Hadoop grew up with big data, and big data grew up with Hadoop. But what I've seen recently is the flowering of the Hadoop platform. It's not just a single tool; it's an ecosystem of tools that interoperate -- and the total is more than the sum of its parts. Watch HBase, Pig, Hive, Mahout, Flume, ZooKeeper, and the rest of the elephantine family in the coming year.
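
To make the "ecosystem" point concrete, here is a minimal word-count job written for Hadoop Streaming (a sketch, not from the original post): the map and reduce steps are ordinary Python scripts that read stdin and emit tab-separated key/value pairs, the same shape of job that higher-level tools like Pig and Hive compile down to.

    # mapper.py -- emit (word, 1) for every word on stdin
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print("%s\t%d" % (word, 1))

    # reducer.py -- sum counts per word; Hadoop delivers input sorted by key
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print("%s\t%d" % (current_word, current_count))
            current_word, current_count = word, int(count)
    if current_word is not None:
        print("%s\t%d" % (current_word, current_count))

You would hand these two scripts to the hadoop-streaming jar as the -mapper and -reducer arguments; the exact invocation depends on your installation.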

Real-time data

Websites may not be "real time" in a rigorous sense, but they certainly aren't static, and they've gone beyond the decade-old notion of "dynamic," in which the same set of inputs produced the same outputs. Sites like Twitter and Facebook change with time; users want to find out what's happening now (or some reasonably relaxed version of now). Most of the tools we have for working with big datasets are batch-oriented, like Hadoop. One of the most exciting announcements of 2010 was the brief glimpse of Google's Percolator, which enables streaming computation on Google-sized datasets. While Percolator is a proprietary product and will probably remain so, I would be willing to bet that there will be an open source tool performing the same function within the next year. Watch for it.
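
As a toy illustration of the difference (my sketch, not Percolator's actual design): a batch system recomputes the whole answer from scratch on every run, while an incremental or streaming system folds each new record into a result it already holds.

    from collections import Counter

    documents = ["big data", "real time data", "big hadoop"]

    # Batch style (Hadoop-like): reprocess every document on each run.
    def batch_word_counts(docs):
        counts = Counter()
        for doc in docs:
            counts.update(doc.split())
        return counts

    # Incremental style: update the standing result as each document arrives.
    live_counts = Counter()
    def observe(doc):
        live_counts.update(doc.split())

    for doc in documents:
        observe(doc)            # cheap per-document work, answer always current

    assert live_counts == batch_word_counts(documents)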


Strata: Making Data Work, being held Feb. 1-3, 2011 in Santa Clara, Calif., will focus on the business and practice of data. The conference will provide three days of training, breakout sessions, and plenary discussions -- along with an Executive Summit, a Sponsor Pavilion, and other events showcasing the new data ecosystem.

Save 30% off registration with the code STR11RAD



The rise of the GPU


Our ability to create data is outstripping our ability to compute with it. For a number of years, a subculture of data scientists has been using high-performance graphics cards as computational tools, whether or not they need graphics. The computational capabilities that are used for rendering graphics are equally useful for general vector computing. That trend is quickly becoming mainstream, as more and more industries find that they need the ability to process large amounts of data in real time ("real" real time, not web time): finance, biotech, robotics, almost anything that requires real-time results from large amounts of data.

Amazon's decision to provide GPU-enabled EC2 instances ("Cluster GPU Instances") validates the GPU trend. You won't get the processing power you need at a price you want just by enabling traditional multicore CPUs. You need the dedicated computational units that GPUs provide.
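
The workloads that benefit are data-parallel: the same arithmetic applied independently to every element of a large array. The sketch below uses NumPy on the CPU purely to show the shape of such code; on a GPU, the same elementwise expressions are what get spread across thousands of cores. The numbers are invented.

    import numpy as np

    # Ten million simulated prices; all of the math below is elementwise,
    # with no Python-level loop -- the pattern GPUs are built to accelerate.
    prices = np.random.uniform(50.0, 150.0, 10000000)
    returns = np.log(prices[1:] / prices[:-1])
    volatility = returns.std() * np.sqrt(252)    # annualized volatility estimate
    print(volatility)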

The return of P2P

P2P has been rumbling in the background ever since Napster appeared. Recently, the rumblings have been getting louder. Many factors are coming together to drive a search for a new architectural model: the inability of our current provider paradigm to supply the kind of network we'll need in the next decade, frustration with Facebook's "Oops, we made a mistake" privacy policies, and even WikiLeaks. Whether we're talking about Bob Frankston's Ambient Connectivity, the architecture of Diaspora, Tor onion routing, or even rebuilding the Internet's client services from the ground up on a peer-to-peer basis, the themes are the same: centralized servers and network infrastructure are single points of control and single points of failure. And the solution is almost always some form of peer-to-peer architecture. The Internet routes around damage -- and in the coming years, we'll see the Internet repair itself. The time for P2P has finally come.

Everything is even more social

2010 was certainly the year of Facebook. But I think that's just the beginning of the social story, rather than the end. I don't think the Internet will ossify into a Facebook-dominated world. Rather, I think we'll see social features incorporated into everything: corporate sites, ecommerce sites, mobile apps, music, and books. Although Apple's Ping is lame, and social music sites (such as MOG) are hardly new, Ping points the way: the incorporation of social features into new kinds of products.

The meaning of privacy

Any number of events this year have made it clear that we need to think seriously about what privacy means. We can't agree with the people who say "There's no such thing as privacy, get over it." At the same time, insisting on privacy in stupidly rigid ways will paralyze the Internet and make it difficult, if not impossible, to explore new areas -- including healthcare, government, sharing, and community. As Tim O'Reilly has said, what's needed isn't legislation, but a social consensus on what should and should not be done with data: how much privacy is reasonably needed, and what forms of privacy we can do without. We're now in a position where solving those problems is not only possible, but necessary. I don't expect much progress toward a solution in the next year, but I do expect to see the meaning of "privacy" discussed seriously.

A few more things

What? No mobile? No HTML5? No JavaScript? Yes, they're certainly going to grow, but I see them as 2010's news. You don't get any points for predicting "Mobile is going to be big in 2011." Duh. I might hazard a guess that HTML5 will become an equal partner to native apps on mobile platforms -- there's a good chance of that, but I'm not convinced that will happen. I am convinced that JavaScript is the language to watch; in the last few years, it has ceased to be a glue language for HTML and come into its own. Node.js is just what was needed to catapult it from a bit player into a starring role.




December 31 2010

What lies ahead: Gov 2.0

Tim O'Reilly recently offered his thoughts and predictions for a variety of topics we cover regularly on Radar. I'll be posting highlights from our conversation throughout the week. -- Mac


Is open government moving from theory to practice?

Tim O'Reilly: The initial rush of interest in open government and transparency is wearing off and people are getting to work. Gov 2.0 startup founders are figuring out business models -- such as advertising and providing back-end services for cities -- and the first crop of startups is being funded. Early entrants, like SeeClickFix and CrimeReports, are growing. I think we'll see a number of new startups in this space. [Disclosure: O'Reilly AlphaTech Ventures is an investor in SeeClickFix.]

Open government's transition is also leading to practical applications. The Blue Button initiative from Veterans Affairs, which allows veterans to easily download their medical records, is an idea that's bound to spread. Blue Button is a direct outcome of the Open Government Initiative, but that connection probably won't be recognized. As is so often the case, the things that really make a difference get put into a different category than those that fail.

Along those lines, people might say that open government failed because many of the items on the punch list didn't happen the way they were originally envisioned. When we look back, we'll realize that open government is not just about transparency and participation in government decision making, but the many ways that open data can be put to practical use.

There are profound open government projects taking shape. For example, the Department of Health and Human Services (HHS) could transform our healthcare system through open data and medical records. HHS Connect and The Direct Project are all about creating the standards for interoperability between medical records. We'll eventually see and benefit from larger efforts like these.

Another open data project that I'm fond of that started very early in the open government process is GTFS, the General Transit Feed Specification. That's the data standard that lets transit districts feed their bus and train arrival times to applications like Google Transit, or any of the many smartphone apps that help you plan your trip on public transit. This standard started as a collaboration between Google and the city of Portland, but is now available from many cities. It's a great example of how governments can think like platform providers. They have to equip their buses and trains with GPS, and report out the data. They could report it just to their own bus stops and train stations, or they could make it available to third parties to deliver in a hundred ways. Which is better for citizens? It's pretty obvious.
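
GTFS itself is just a bundle of plain CSV tables (stops.txt, trips.txt, stop_times.txt, and so on), which is a big part of why third parties can build on it so easily. A small sketch of reading one of those tables follows; the file path and stop ID are made up.

    import csv

    def arrivals_for_stop(stop_times_path, stop_id):
        """Scheduled arrival times at one stop, from a GTFS stop_times.txt file."""
        arrivals = []
        with open(stop_times_path, newline="") as f:
            for row in csv.DictReader(f):
                if row["stop_id"] == stop_id:
                    arrivals.append((row["trip_id"], row["arrival_time"]))
        return sorted(arrivals, key=lambda pair: pair[1])

    # e.g. arrivals_for_stop("gtfs/stop_times.txt", "8334")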

And of course, this is government data following in the footsteps of great open data projects of the past, such as the satellite weather data released by NOAA to power the world's weather forecasters, or even the GPS signals that were originally designed only for military use but then released for civilian use.

Much of what you're describing sounds like the Web 1.0-to-2.0 trajectory. Do you see similarities?

Tim O'Reilly: At the end of the Web 1.0 era, some people claimed the web had failed because banner advertising, pop-overs, pop-unders and all the increasingly intrusive forms of advertising didn't work. Then Google came along with a better idea. I think something similar will happen with Gov 2.0. Many of the things people label as "Gov 2.0" are really the early signals and efforts of a "Gov 1.0" period. Important shifts will eventually occur, and they won't have anything to do with government agencies putting up wikis or using Twitter. We will see many unexpected outcomes over time.


A collection of posts that look ahead to 2011 can be found here.






December 30 2010

What lies ahead: DIY and Make

Tim O'Reilly recently offered his thoughts and predictions for a variety of topics we cover regularly on Radar. I'll be posting highlights from our conversation throughout the week. -- Mac


How is DIY connected to industry?

Tim O'Reilly: When you look at any enthusiast movement where people are playing with something just for fun, there are deep tech trends hidden inside that movement.

"DIY" itself is a general term for the early stage of a technology revolution. The Homebrew Computer Club was a DIY effort, and then it turned into an industry. The web was a DIY space, and then it turned into an industry. The same thing is happening with the Maker revolution.

We noticed six or seven years ago that groups were playing with hardware in many different ways, and there was new interest in robotics and programmable manufacturing. At one of our early peer-to-peer conferences, a conversation about swapping songs on Napster expanded into a discussion about how people would eventually trade object designs for 3D printers. The hackers were thinking about these things years ago, and now we are closer to that reality.

Another example: Jeff Han of Perceptive Pixel demoed a big-screen multitouch display at ETech in 2006. Multitouch on the iPhone arrived a year later. That technology moved from an engineering hacker community to a mainstream product.

We discussed sensors before, but they apply here as well. When O'Reilly published "Learning OpenCV" a few years ago, I was struck by how the algorithms discussed in that book are similar to those in our other title, "Programming Collective Intelligence." I realized sensors and machine vision are both performing predictive analytics. That insight led me to write the Web Squared paper, which explored the potential impact of cheap and ubiquitous sensors.

Applications like RunKeeper, Foursquare [disclosure: OATV is an investor in both], Instant Heart Rate, CabSense, and Shazam all rely on sensors. When you connect the dots you see the DIY movement is telling us about sensors, and sensors are telling us about data. The people who understand the sensor-data relationship are the ones building innovative businesses that are ahead of the curve.



How do you see the Maker movement changing in the near term?


Tim O'Reilly: The next phase of the Maker movement is going to be marked by the emergence of new kinds of businesses. Adafruit Industries, DIY Drones, MakerBot, Instructables, iFixit, Etsy and other companies in this space have all tapped into new business models. Some sell kits, tools, and parts. Others provide a discovery and sales platform.

What's interesting is that many of these types of companies don't need a lot of venture capital -- if any -- to get started. In the years ahead, however, I imagine we'll hear about this trend hitting the radar of VCs.


Next in this series: What lies ahead in Gov 2.0
(Coming Dec. 31)






December 29 2010

What lies ahead: Net Neutrality

Tim O'Reilly recently offered his thoughts and predictions for a variety of topics we cover regularly on Radar. I'll be posting highlights from our conversation throughout the week. -- Mac


Is mobile creating a new digital divide?

Tim O'Reilly: Many people are fretting that limited access to smartphones is creating a new digital divide. I think that's a misplaced worry because all phones will be smartphones before long. That problem will take care of itself.

If we assume that all phones are smartphones, what happens at that point? First off, we’ll have major capacity problems because a lot more data will go over the airwaves. That's why some of the FCC's efforts to free up spectrum are so critical. We can’t keep using spectrum inefficiently and hope to have enough.

There will be spectrum congestion and various problems related to that in the near term, but eventually it will get sorted out. The telecoms will need to make investments, and application developers will have to get smarter about how their apps use data. Apps that are bad network citizens are going to stand out.

What will happen with mobile and net neutrality?

Tim O'Reilly: I used to be in the religious net-neutrality camp, but the realities of capacity mean quality-of-service prioritization has to happen. To be clear, I'm still strongly against discrimination that targets a particular company or application.

I see the idea of "absolute" net neutrality going away at some point. Eric Schmidt made an important point at Web 2.0 Summit: there are two concepts of net neutrality. One is that you can't discriminate against any particular company or any particular application. The other is that you can still differentiate between classes of applications: you could prioritize video lower than voice, or a bulk download of data lower than something that requires real-time communication. Prioritization will be contentious, but capacity limitations will make it clear why it's necessary.
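
A toy model of what class-based prioritization means (my sketch, not any carrier's actual scheduler): traffic is tagged by class, and the scheduler always drains the higher-priority class first, without caring who sent it.

    import heapq

    PRIORITY = {"voice": 0, "video": 1, "bulk": 2}   # lower number = sent sooner

    queue, arrival = [], 0

    def enqueue(traffic_class, payload):
        global arrival
        arrival += 1                                 # FIFO tie-break within a class
        heapq.heappush(queue, (PRIORITY[traffic_class], arrival, payload))

    enqueue("bulk", "software update chunk")
    enqueue("voice", "VoIP frame")
    enqueue("video", "streaming segment")

    while queue:
        _, _, payload = heapq.heappop(queue)
        print("sending:", payload)   # voice first, then video, then bulk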

Note: A video of Eric Schmidt's remarks at Web 2.0 Summit accompanied the original post.



Next in this series: What lies ahead in DIY and Make
(Coming Dec. 30)





December 28 2010

What lies ahead: Publishing

Tim O'Reilly recently offered his thoughts and predictions for a variety of topics we cover regularly on Radar. I'll be posting highlights from our conversation throughout the week. -- Mac


How will ebooks change publishing?

Tim O'Reilly: Andrew Savikas, our VP of digital initiatives at O'Reilly, likes to make a distinction between "formats" and "forms." A hardback, a paperback, an audiobook, and in many cases an ebook are simply different formats of the same work. New forms, on the other hand, represent deeper changes in how authors develop content and readers consume it. The graphic novel is a recent form innovation in the West (albeit one with deep antecedents), as are the cell phone novels that have become popular in Japan.

People think of ebooks as simply another format, but ebooks actually represent an opportunity for a change in form. For example, you used to buy a printed atlas or a printed map, but now you have a dynamic, perpetually-updated, real-time map that shows you where you are. The old paper maps aren't very useful anymore. Applications from Yelp to Foursquare can be seen as elaborations of the potential of the map in its electronic form.

Or look at Wikipedia. As an encyclopedia, it's actually pretty close in form to what it replaced, but there are important layers of reinvention. A printed encyclopedia doesn't have articles on breaking news; it can't be a real-time encyclopedia in the way that Wikipedia now is. Notions about what an encyclopedia can do have changed.

Changes in form have significantly affected O'Reilly's publishing business by providing new kinds of competition. Our bestsellers are now tutorial books. The old reference-based books have been cannibalized by the web and search. This is why we try to define Safari Books Online as a library of content that people can search across. Reference material now carries an expectation that it will be searchable. And our tutorial books are increasingly challenged by other forms of tutorial, such as screencasts and online video.

O'Reilly may appear to be in the same category as HarperCollins -- we both put ink on paper and sell products through retailers -- but in other ways we're not even in the same business. HarperCollins publishes literary fiction, serious non-fiction, biographies, and other popular literature. We publish technical how-to and reference material. Their competitors include other forms of entertainment and erudition; ours include other forms of teaching and reference.

Does the definition of "publisher" need to expand?

Tim O'Reilly: Publishers think way too narrowly about what kind of business they are in, and as a result, are blind to how the competitive landscape is changing under their feet. If someone has roots in ink-on-paper, they are a publisher, but if they are web- or mobile-native, they are not. But this is wrong-headed! Put another way: Why would you think Zagat is a publisher but Yelp isn't? They both perform similar jobs. Competition should be defined by the jobs publishers do for users.

That being said, curation and aggregation are among the core jobs of publishing, and it's clear to me these jobs still need to be done. There is a real need for someone to winnow out the wheat from the chaff as more content becomes available online. (Of course, Google is also in the curation business, but they do it algorithmically.) Eventually, there will be new ways publishers get paid for doing these jobs, but there are also going to be new ways to do them.

TOC: 2011, being held Feb. 14-16, 2011 in New York City, will explore "publishing without boundaries" through a variety of workshops, keynotes and panel sessions.

Save 15% off registration with the code TOC11RAD

Does a focus on infrastructure block adaptation?

Tim O'Reilly: I gave a Publishing Point talk and someone in the audience asked how new publishing models could pay for "all this," gesturing around at the lovely room and, by extension, the building we were in, the headquarters of a storied publishing company. It was as if maintaining what they already own were the heart of the problem. That's like Digital Equipment Corporation asking, back when the PC era was just beginning, "Will the personal computer pay for all of this?"

HP and IBM figured out how to make the transition to the personal computer era. Digital didn't. Now, Microsoft is struggling with the transition from the PC era to the web era. Could you imagine somebody at a Microsoft conference asking, "But will the web pay for all of this?" You would think that was ridiculous. In technology, we understand the reality of competition and what Schumpeter called the "creative destruction" of capitalism. Why is it when somebody asks that same question in the context of publishing it's treated as a serious query?



How can publishers adapt to digital? What mindsets should they adopt?


Tim O'Reilly: Publishers, including O'Reilly, need to ask themselves: How can we make our content better online? How can we make it better through mobile?

In non-fiction, there are simple improvements to be made in the form of links -- after all, what is a link but a better version of the footnote? There are also ways to add more content, in much the way that DVD publishers add deleted scenes, director commentary, and other extras to the original movie. Other times, "better" will be defined by making something smaller -- at least from the user's point of view. For example, Google has more data than any print atlas, but the user sees less. Consumption is defined by the user's particular request: show me where I am now; show what's around me; show me how to get from where I am to somewhere else. There’s a huge opportunity for books to be reconceived as database-backed applications that show you just what you need to know. Former computer-book publisher Mitch Waite now publishes a fabulous birder’s guide for the iPhone, iBird Pro, demonstrating the power of this model.
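
A minimal sketch of the "book as database-backed application" idea: the reader asks a question and gets back only the relevant slice, rather than paging through everything. The table and coordinates below are invented for illustration, not taken from iBird Pro.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sightings (species TEXT, lat REAL, lon REAL)")
    db.executemany("INSERT INTO sightings VALUES (?, ?, ?)", [
        ("Western Tanager", 37.77, -122.45),
        ("Snowy Plover",    37.76, -122.51),
        ("Bald Eagle",      45.52, -122.68),
    ])

    def species_near(lat, lon, radius=0.1):
        """'Show me what's around me' as a simple bounding-box query."""
        rows = db.execute(
            "SELECT species FROM sightings"
            " WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?",
            (lat - radius, lat + radius, lon - radius, lon + radius))
        return [species for (species,) in rows]

    print(species_near(37.77, -122.47))   # only the nearby entries come back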

Books give people information, entertainment, and education. If publishers focus on how those three elements can be performed better online and through mobile, innovation and business models will follow. If we don't innovate to do those jobs better for our customers, it's only a matter of time before someone else steps in.


Next in this series: What lies ahead in net neutrality
(Coming Dec. 29)






December 27 2010

What lies ahead: Data

Tim O'Reilly recently offered his thoughts and predictions for a variety of topics we cover regularly on Radar. I'll be posting highlights from our conversation throughout the week. -- Mac


Are companies catching on to the importance of data?

Tim O'Reilly: For a long time, data was a secret hiding in plain sight. It became clear to me quite a while ago that it was the key to competitive advantage in the Internet era. That was one of the points in my Web 2.0 paper back in 2005. It's pretty clear that everybody knows about it now. There are "chief data scientists" at companies like LinkedIn and Bit.ly. Data and algorithms are at the heart of what so many companies are doing, and that's just going to accelerate.

There's more data every day, and we're going to see creative applications that use data in new ways. Take Square, for example. It's a payment company that's hoping to do some degree-of-risk mitigation via social network analysis. Will that really work? Who knows? Even Google is struggling with the limits of algorithmic curation.

There was a story in the New York Times recently about a guy who figured out that getting lots of negative comments led to a high PageRank. That raises new questions. Google doesn't like to do manual intervention, so they're saying to themselves, "How can we correct this algorithmically?" [Note: Google responded with an algorithmic solution.]

I'm not privy to what's happening inside Google's search quality team, but I think there are probably new sources of data that Google could mine to improve results. I've been urging Google to partner with people who have sources of data that aren't scrapeable.

Along those lines, I think more data cooperation is in the future. There are things multiple companies can accomplish working together that they couldn't do alone. It occurs to me that the era of Google was the era in which people didn't realize how valuable data was. A lot of data was there for the taking. That's not true anymore. There are sources of data that are now guarded.

Facebook, as an example, is not going to just let a company like Google take its data to improve Google's own results. Facebook has its own uses for that data. Meanwhile, Google, which was allowing Facebook users to extract their Gmail contacts to seed their Facebook friend lists, responded by setting their own limits. That's why I see more data-sharing agreements in the future, and more data licensing.

The contacts battle between Google and Facebook is an early example of the new calculus of data. I don't know why Google didn't stand up sooner and say, "If you're scraping our data to fill out your network, why can't we do the same in reverse?" That's a really good question. It's one thing if Facebook wants to keep their data private. But it's another thing if they take from the "data commons" and don't give anything back.

I also anticipate big open data movements. These will be different from the politically or religiously motivated open data movements of the past. The Google/Facebook conflict is an example of an open data battle that's not motivated by religion or principle. It's motivated by utility.

Strata: Making Data Work, being held Feb. 1-3, 2011 in Santa Clara, Calif., will focus on the business and practice of data. The conference will provide three days of training, breakout sessions, and plenary discussions -- along with an Executive Summit, a Sponsor Pavilion, and other events showcasing the new data ecosystem.

Save 30% off registration with the code STR11RAD

How will an influx of data change business analytics?

Tim O'Reilly: Jeff Hawkins says the brain is a prediction engine. The reason you stumble if a step isn't where you expect it to be is that your brain has made a prediction and you're acting on that prediction.

Online services are becoming intelligent in similar ways. For example, Google's original competitive advantage in advertising came from their ability to predict which ads were the most likely to be clicked on. People don't really grasp the significance of that. Google had a better prediction engine, which means they were smarter. Having a better prediction engine is literally, in some sense, the definition of being smarter. You have a better map of what's true than the next guy.

The old prediction engine was built on business intelligence: analytics and reports that people study. The new prediction engine is reflex. It's autonomic. The new engine is at work when Google is running a real-time auction, figuring out which ad is going to appear and which one is going to give them the most money. The engine is present when someone on Wall Street is building real-time bid/ask algorithms to identify who they're going to sell shares to. These examples are built on predictive analytics that are managed automatically by a machine, not by a person studying a report.
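
In miniature, that ad-selection reflex looks something like this (a sketch with invented numbers, not Google's actual auction): rank each candidate by expected revenue, bid times predicted click-through rate, and serve the winner. A better prediction engine is what lets the lower bid win when it earns more per impression.

    candidates = [
        {"ad": "A", "bid": 2.00, "predicted_ctr": 0.010},
        {"ad": "B", "bid": 0.50, "predicted_ctr": 0.080},  # lower bid, better ad
        {"ad": "C", "bid": 1.00, "predicted_ctr": 0.020},
    ]

    def expected_revenue(ad):
        return ad["bid"] * ad["predicted_ctr"]

    winner = max(candidates, key=expected_revenue)
    print(winner["ad"], expected_revenue(winner))   # B wins: 0.50 * 0.08 = 0.04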

Predictive analytics is an area that's worth thinking about in the years ahead: how it works, how you become proficient at it, how it can transform fields, and how it might conflict with existing business models.

Healthcare offers an example of the potential conflicts. There is no question in my mind that there's a huge opportunity for predictive analytics in healthcare and in the promise of personalized medicine. Certain therapies work better than others, and those conclusions are in the data, but we don't reimburse based on what works. Imagine if Medicare worked like Google. They would say, "You can use any medicine you want, but we're going to reimburse at the rate of the lowest one." Pretty soon, the doctors would be using the drugs that cost the least. That's opposed to what we have now, where doctors get drugs pushed on them by drug companies. Doctors end up recommending particular therapies that the data says don't work any better, but cost three times as much. The business models of pharmaceutical companies are all dependent on this market aberration. If we moved to a predictive analytics regime, we would actually cut a lot of cost and make the system work better. But how do you get there?
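
The arithmetic of that thought experiment, with invented drugs, prices, and outcomes: reimbursement is pegged to the cheapest therapy the data says works just as well, and anything above that line stops being subsidized.

    therapies = [
        {"name": "brand-name drug", "cost": 300.0, "effectiveness": 0.72},
        {"name": "generic drug",    "cost": 100.0, "effectiveness": 0.72},
    ]

    # Reference price: the cheapest therapy among those that work equally well.
    best = max(t["effectiveness"] for t in therapies)
    reference_price = min(t["cost"] for t in therapies if t["effectiveness"] == best)

    for t in therapies:
        print(t["name"], "reimbursed:", reference_price,
              "left over:", t["cost"] - reference_price)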

Predictive analytics will chip away at new areas and create opportunities. A great example is a company we had on stage at Gov 2.0 Summit, PASSUR Aerospace, which has been doing predictive analytics for airline on-time arrivals. For 10 years or so, they've been tracking every commercial airline flight in the U.S. and correlating it with data like weather and other events. They're better than the airlines or the FAA at predicting when the planes will actually arrive. A number of airlines have hired them to help manage expectations about flight arrivals.
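
A toy stand-in for that kind of model (PASSUR's methods are proprietary; everything below is invented): bucket historical arrivals by airport and conditions, and predict the average delay for the matching bucket.

    from collections import defaultdict

    history = [                      # (airport, weather, arrival delay in minutes)
        ("SFO", "fog",   38), ("SFO", "fog",   45), ("SFO", "clear",  6),
        ("ORD", "snow",  60), ("ORD", "clear", 12), ("ORD", "snow",  52),
    ]

    buckets = defaultdict(list)
    for airport, weather, delay in history:
        buckets[(airport, weather)].append(delay)

    def predicted_delay(airport, weather):
        observed = buckets.get((airport, weather))
        return sum(observed) / len(observed) if observed else None

    print(predicted_delay("SFO", "fog"))   # 41.5 minutes, from past foggy arrivals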

Mobile sensors have come up in many of your recent talks. Why are sensors important?

Tim O'Reilly: Recently I was talking with Bryce Roberts about how sensors connect a bunch of OATV's investments. Path Intelligence is using cell phone check-ins to count people in shopping malls and other locations. Foursquare uses sensors to check you in. RunKeeper, which tracks when you run, is another sensor-based application.

The idea that the smartphone is a mobile sensor platform is absolutely central to my thinking about the future. And it should be central to everyone's thinking, in my opinion, because the way that we learn to use the sensors in our phones and other devices is going to be one of the areas where breakthroughs will happen.
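
A small example of the sensor-platform idea in practice (my sketch, with invented coordinates): turn a trail of GPS samples from the phone into distance covered, the core of a RunKeeper-style app, using the haversine formula.

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(p, q):
        """Great-circle distance in km between two (lat, lon) points."""
        lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * asin(sqrt(a))     # Earth radius ~6371 km

    gps_trace = [(37.7749, -122.4194), (37.7790, -122.4180), (37.7831, -122.4160)]
    distance = sum(haversine_km(a, b) for a, b in zip(gps_trace, gps_trace[1:]))
    print("distance covered: %.2f km" % distance)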

(Note: Tim will share more thoughts on mobile sensors and the opportunities they create in a post coming later this week.)


Next in this series: What lies ahead in publishing
(Coming Dec. 28)






December 15 2010

My top 5 predictions for CIOs in 2011

We are living in amazing times. Technology is changing the way we work and play at a considerable pace and there is no letup in sight. Rather, the change we anticipate ahead will be greater and more profound than anything that has come before. If you, like me, are lucky enough to be part of implementing that change then you'll likely agree that we are extra fortunate.

To me, being a CIO in the early part of the 21st century couldn't be further from being in "just a job." If you're doing it right and having fun while you're doing it, you and your team can be inventors of the future. And that's really important and interesting work.

As we look to 2011, the to-do list and choices for CIOs are getting longer and more complex. The pace of change is adding a level of uncertainty that doesn't make any specific path clear. Knowing this, as most of us do, is not particularly helpful. But that's not the point to focus on: the enlightened CIO must help the organization go after the most valuable projects and be a trusted adviser to those who commit dollars to organizational goals.

It's in this context that I present my top 5 predictions for CIOs in 2011. I've pondered whether they should be characterized as predictions. Regardless of what we call them, these areas will be featured on most CIO agendas in the year ahead. Think of them as unavoidable big-ticket items that will consume considerable discussion and deserve a deliberate strategy.

1. Cloud computing enters the mainstream

Okay, so one doesn't need to be a soothsayer to know that cloud computing is at an inflection point. Emerging from a period of hype and niche investment, cloud computing is positioning itself as a transformative, central technology in the arsenal of value enablers.

Of particular note: with mobile increasingly at the center of our computing future, a mobile cloud strategy will be an essential subset of this space.

I've said it before: if the CIO is not driving the cloud agenda in 2011, plenty of others in the C-suite will. This is because cloud computing provides solutions for reducing cost, simplifying and optimizing infrastructure, and shifting the role of the CIO from back-office manager to enabler of business opportunity.

The risk is no longer the cloud. The risk is not having the cloud as a priority in your strategy.

2. Real business intelligence

I have a term for business intelligence that I prefer and I believe conveys a more urgent sense of its value: I call it unleashing data. Somewhere on some system in your organization lie answers and patterns in data that could be worth millions of dollars. In an era where we create more data every two days than was created from the start of recorded history to 2003 (apparently that's about five exabytes of data), to say data is underutilized is a gross understatement.

Now, more than ever, we have tools to mine organizational data -- whether structured or unstructured -- and unleash its enormous value. What strikes me about business intelligence is that the CIO doesn't have to create anything new; it's about using what already exists.
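
In that spirit, "unleashing data" is often no more exotic than a query over tables the organization already has. A trivial sketch with an invented orders table:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (customer TEXT, product_line TEXT, amount REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
        ("acme", "support", 1200), ("acme", "support", 1300),
        ("zenco", "hardware", 900), ("acme", "hardware", 400),
    ])

    # Which product line actually drives revenue? The answer was sitting there.
    rows = db.execute(
        "SELECT product_line, COUNT(*), SUM(amount) FROM orders"
        " GROUP BY product_line ORDER BY SUM(amount) DESC")
    for product_line, n_orders, revenue in rows:
        print(product_line, n_orders, revenue)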


Strata: Making Data Work, being held Feb. 1-3, 2011 in Santa Clara, Calif., will focus on the business and practice of data. The conference will provide three days of training, breakout sessions, and plenary discussions -- along with an Executive Summit, a Sponsor Pavilion, and other events showcasing the new data ecosystem.

Save 30% off registration with the code STR11RAD

3. The cost and value of technology

A notable manifestation of our recession recovery is the absence of vigorous business investment. Put another way, businesses have been shell-shocked into hoarding their profits at the cost of spending on necessary technology maintenance and new systems. Instead, the modus operandi is conservative spending and trying to get more technology value at less cost. CIOs are feeling it.

The year ahead will likely continue this trend as the economy remains unstable and uncertain. It's not the end of the world for CIOs, but it does mean more work must go into developing watertight business cases and into using technology more innovatively. For many CIOs, this trend will demand business skills they will find challenging. Break open that old college business textbook. You might need it.

4. Integrating social into the enterprise

While I don't think that integrating social computing deeply into existing systems will hit an inflection point in 2011, I believe this will be the year the subject gets increasing attention, both in the CIO discourse and in the emergence of new supporting technology.

The business advantages of social capabilities such as internal crowdsourcing, collaborative virtual spaces, video on the desktop, social network analysis, creating serendipity, and consensus building are gradually being proven out on an ad hoc basis. The future will demand a deliberate and rigorous plan. The time to begin strategizing on a path forward is now.

5. Temporary staffing

If you're an IT contractor, 2011 will likely continue to be a good year for you. Closely aligned with prediction No. 3, CIOs are increasingly reluctant to fill openings with full-time employees. Loath to risk further layoffs in the future, they continue to be highly conservative about growing the ranks. Market confidence will need to be restored before we see a sizeable shift to full-time employee hiring in the IT sector.

As a result, CIOs will be managing more hybrid-staffed organizations made up of full-time employees, contractors, and outsourced services. While not radically different from many IT organizations today, what makes 2011 different is the uncertainty around the extent and duration of the contractor requirements. Will the arrangement be permanent? What effects will it have on institutional knowledge, loyalty, and existing staff?


You may agree or disagree with my predictions and you may believe I left something big out. I'm confident that's true. So I'd like to hear from you. Add your comment below if you think there is another prediction that every CIO must be aware of for 2011.

As I did in 2010, I'll revisit these in late 2011 and assess how they fared as the top-ticket items for the CIO during the year.



