
I attended three sessions this afternoon:  

  • One on cloud computing with a panelist from JPMorgan Chase,
  • One on authentication presented by the CISO of Bradesco, a Brazilian bank, and
  • One that was a P2P discussion of identity facilitated by an SVP at Bank of America.

All of those are issues I wrestle with all day long in the industry in which I work, so it was fantastic. Perhaps it's the marketing class I'm currently taking that has attuned my ears to the voice of the customer (VoC), because I heard these customers loud and clear. This is my interpretation of what they said about those topics.

What the Financial Institutions Said, and My Interpretation

They said: "Cloud computing is a new name for things we've been doing for a long time."
I heard: Be careful and cautious about cloud computing. Scrutinize new cloud-based offerings using established practices and procedures, and do not get sucked into the hype.

They said: "Once data gets out the door, it's gone forever."
I heard: You only get one chance. Cloud computing is still too new and unproven; mistakes are bound to happen, and we can't afford for them to be ours.

They said: "Everything is about risk management."
I heard: Be cautious and slow to adopt cloud computing. Let the early adopters go out of business trying to figure it out. Once they have worked out the technical, social, political, and legal kinks, consider it pursuant to our established practices, policies, and procedures.

They said: "The biggest risk is loss of reputation; the brand name must be upheld. You can't outsource your reputation."
I heard: Losing the competitive advantage that a distinguishable and trustworthy brand offers is not worth the potential cost savings of cloud computing, especially considering that we have already invested in the computing infrastructure that IaaS and cloud computing offer.

They said: "Online banking will never be done in the cloud."
I heard: Public clouds such as Amazon's are not appropriate places to host online banking solutions. Host them on private or hybrid clouds instead.

They said: "Positively identifying legitimate users has been a long, hard battle that has forced us to invest tons of money and effort; it has even forced us to do things we didn't want to do (e.g., biometrics)."
I heard: We are in an arms race. If you can help us fight it more cheaply and cost-effectively, we're all ears.

They said: "Technology is not enough."
I heard: We need technological help in this war, but we will be especially interested if you can also help us with the people- and process-related problems.

They said: "Banks, governments/police, and customers must work together."
I heard: Your offerings need to be interoperable, UX tested, and compliant with government regulations.

They said: "We will constantly be confronted with new security challenges."
I heard: We need vendors whom we can trust and who will continually provide products that are one step ahead of the fraudsters.

They said: "Users adopted biometrics much more quickly and with less pushback than we expected."
I heard: We value solution providers that are willing to think outside the box; we know from experience that it pays off.

They said: "Our customers love mobile devices."
I heard: We expect a whole host of new attacks and problems, so help, advice, and guidance are welcome.

They said: "Facebook can't be blown off."
I heard: Social networking sites represent a real opportunity given their mass adoption, but we're unsure how to capitalize on them.

If you disagree with my interpretations, are aware of other needs that these organizations have, or would like to ask me a question about other things they said about cloud computing, authentication, and digital identity, leave a comment here or let me know. Also, keep an eye on my Twitter stream for more frequent updates from the RSA Conference.

Part of the MBA program that I'm enrolled in involves taking a number of economics, accounting, and finance classes. I'm starting to use the knowledge I've gained from them to purchase stocks. One thing I'm finding about investing is the importance of having good information. I'm sure the pros have tons of tricks, and maybe I'll learn some of them over the years. One that occurred to me already is to look for companies that are about to go public. One way to find this information is to watch for companies filing an S-1 form with the SEC. The SEC makes these listings available in Atom format, so it's pretty easy to stay on top of them. When you subscribe to a feed of S-1 filings, however, the results include amendments and updates to previously submitted S-1 filings. I wanted to remove these false positives. To do this, I used YQL and Yahoo! Pipes.
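The filtering I was after can be sketched in plain Python using only the standard library. Everything here is my illustration, not the SEC's actual output: the miniature feed below just mimics EDGAR's Atom shape, where (to the best of my knowledge) amendments carry a category term like "S-1/A" rather than "S-1".

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# A miniature stand-in for the SEC's Atom feed (the real feed would be
# fetched over HTTP). The category terms mimic EDGAR's convention, where
# amendments are tagged "S-1/A" instead of "S-1".
sample_feed = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <title>S-1 - Acme Corp (0000000001) (Filer)</title>
    <category term="S-1"/>
  </entry>
  <entry>
    <title>S-1/A - Widget Inc (0000000002) (Filer)</title>
    <category term="S-1/A"/>
  </entry>
</feed>"""

def original_filings(feed_xml):
    """Return the titles of entries that are original S-1 filings,
    skipping amendments and updates (the false positives)."""
    root = ET.fromstring(feed_xml)
    titles = []
    for entry in root.findall(ATOM + "entry"):
        category = entry.find(ATOM + "category")
        if category is not None and category.get("term") == "S-1":
            titles.append(entry.find(ATOM + "title").text)
    return titles

print(original_filings(sample_feed))  # only the original Acme Corp filing survives
```

The check on the category's term attribute is the whole trick; the E4X version later in this post does exactly the same comparison.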

I've been using Yahoo! Pipes for a long time for trivial data munging; this time, however, I could not find an easy way to do what I needed with that service alone. I had heard of YQL but had never dug into it, so before turning to Perl or Python, I thought I'd see if YQL could help. In doing so, I found that YQL is simple yet powerful. Here's what I had to do to turn the SEC's feed into the one that I really wanted.

YQL stands for Yahoo! Query Language, and, as its name suggests, it's all about querying data. Yahoo! gives you lots of data sets to query, but YQL also supports a way of querying your own data (or the SEC's, as the case may be). These non-Yahoo! data sets are called Open Data Tables, and many people are extending YQL with them. An Open Data Table is just an XML document that contains a few elements, the most significant being the execute element. As others have described, this element contains some JavaScript, but not just any old JavaScript; it contains ECMAScript for XML (E4X). E4X is JavaScript with the ability to embed XML literals directly in the code, and it includes syntactic sugar that makes working with XML much easier (kind of like XML literals in VB.NET, minus LINQ). In YQL, you get standard E4X plus additional objects that Yahoo! has added. One of these, the y global object, includes helpers that let you call RESTful Web services with no effort whatsoever (relatively speaking). I used this to get the SEC's feed and begin munging it in my custom Open Data Table.
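For orientation, here's a skeletal Open Data Table. The overall shape follows YQL's table schema as I understand it; the author, description, and url values are placeholders, not the ones from my actual table:

        <?xml version="1.0" encoding="UTF-8"?>
        <table xmlns="http://query.yahooapis.com/v1/schema/table.xsd">
          <meta>
            <author>Example Author</author>
            <description>Filters a feed (placeholder description)</description>
          </meta>
          <bindings>
            <select itemPath="" produces="XML">
              <urls>
                <!-- The data to pull in; request.get() in the execute
                     block fetches this URL -->
                <url>http://example.com/feed.atom</url>
              </urls>
              <execute><![CDATA[
                // E4X script goes here; it reads the fetched feed via
                // request.get().response and sets response.object
              ]]></execute>
            </select>
          </bindings>
        </table>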

The actual Open Data Table XML document is pretty boring stuff except for the url element(s) and the E4X script in the execute element. The url element(s) contain the location(s) of the data you want to pull into your script; the execute element contains the code that processes it. Here's the only bit of code I had to write to remove the amendments and alter the SEC's feed more to my liking:


        default xml namespace = "";

        var xml = request.get().response; // Call the URL defined in the url element
        var entries = <entries/>;

        y.log("Called SEC Web site and about to iterate over results.");

        for each (var entry in xml.entry) {

            // Include only original S-1 filings, not amendments
            if (entry.category.@term.toString() === "S-1") {

                y.log("Adding S-1 filing: " + entry.title);

                // Derive the filing's plain text URL from the entry's link
                var link = entry.link.@href.toString().replace('-index.htm', '.txt');

                y.log("Link to filing's plain text version: " + link);

                var newEntry = <entry>
                    <link rel="alternate" type="text/plain" href={link}/>
                    <title>{entry.title.toString().replace(/ \(.*/, "")}</title>
                </entry>;

                y.log("Adding entry to collection of filings");
                entries.* += newEntry;
            }
        }

        response.object = entries;


This trivial little snippet does what every program does: get some input (by fetching the feed from the stipulated URL), process it, and output the results. The syntax and objects provided are so high-level that it could hardly be easier. The entire Open Data Table can be found here, and you can see the result in the YQL Console. One really important thing about writing and debugging these scripts is to tack "?debug=true" onto the URL of the YQL Console. Without this, YQL will cache results, making development almost impossible.

One really sucky part of YQL, IMO, is that it places your output in an "envelope" that can't be removed; the output of any YQL query is some XML wrapping whatever XML your script in the execute element generated. In my case, I started with Atom and wanted to end with Atom so that I could keep an eye out for new IPOs in my blog reader. Because of this limitation, I had to use Yahoo! Pipes in the end after all. The pipe is very simple; it contains a YQL module followed by a Sub-element module that picks out the entry element I created.
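For completeness, the statement the pipe's YQL module runs against a custom table looks roughly like the following. The table URL and alias here are placeholders for illustration, not the real ones:

        USE "http://example.com/sec-s1.xml" AS s1filings;
        SELECT * FROM s1filings;

The USE statement imports the Open Data Table by URL under an alias, and the SELECT triggers the execute script; the pipe then strips the YQL envelope from the result.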

As you can see, YQL helps you do some pretty cool things with relative ease. If you haven't checked it out, I recommend that you do. Start by visiting the YQL developer site, and let me know if you have questions, thoughts, or other YQL experiences by leaving a comment or by contacting me. Lastly, if you want to subscribe to the feed of S-1 filings, you can find it here (no promises about uptime or availability).

It's September again, so that means it's back to school, not for my little ones (they're too small) but for me.  This is the second school year of my MBA.  I'll be taking classes such as managerial finance and organizational systems.  Great stuff!