After some time planning, I’ve taken the plunge and started remodeling the upstairs of our home. I’m fully aware that even when starting small, a remodeling project often expands to include much more than initially intended. Knowing that, our plan is hopefully more realistic than the last time around and includes quite a lot from the start.

In broad strokes, this is what we’re planning to do:

  • Convert a master bedroom den to a home office
  • Build a walk-in closet
  • Remodel the master bathroom
  • Update the guest bathroom
  • Make necessary electrical updates due to the remodel
  • Install a whole house fan system
  • Rip out the carpets and install hardwood floors

Where are we now with this project? Half of the new walls are framed and the space for the walk-in closet is gutted. Over the last couple of days I also took advantage of the slightly cooler weather and installed the first of the 3 fans that go into the whole house fan system. It can get pretty hot in the attic on warm days, so it’s necessary to do that work on the cooler days.

We live in Pleasanton, California, where it can get pretty hot in the summer, with averages around 90F/32C and peaks well above 100F/38C.

We’ve had a more or less manual whole house fan system for the last 6 years or so. That’s how long it’s been since we decided not to use the old AC system anymore and “go green”. We’ve been using multiple fans to move cooler, fresh air into the house in the mornings and after sundown. It’s been working surprisingly well, with the exception of when we get multiple really hot days in a row. But it has been a chore, and sometimes you miss your window of opportunity, i.e. you sleep in and the day is already warm when you get up. So in our case, we already realized the energy savings by foregoing the AC several years ago; now it’s more about making things convenient.

The system I chose is a QuietCool QC-1500, which I selected because it promises to be very quiet and comes with a wireless remote control (using Zigbee for the networking, so maybe I can do more interesting things with that later?). The recommended system size for our house called for 3 fans installed in the attic. They seem to have since changed the sizing chart, and according to the new one maybe 4 fans would be more optimal. But I’m betting they changed it just to sell more fans. I can always install one more fan later if I’m wrong.

One QC-1500 whole house fan

I ordered the system online from A Trendy Home (they had a Father’s Day sale) and it showed up after just a few days. Yesterday I installed the first of the fans and temporarily connected it with an extension cord to test it. Later I’ll need an electrician to install a dedicated outlet for it.

I must say that so far it’s delivering as promised on most things. Even with just one fan operating, there’s a definite breeze through the house, and the remote control works in any part of the house. It’s not quite as quiet as I had expected, though. There’s a faint, low-frequency rumbling noise. It may just be that it’s a new sound, because it’s definitely quieter and less intrusive than the fans we were using before. I guess I had somewhat unrealistic expectations of how quiet it would be. Bottom line, and in anticipation of what the next 2 fans will do, I would still highly recommend it.


Going back to the HPC education issue I mentioned in my previous post, that actually touches on the other theme that was clear today at the sessions at the 23rd HPCC conference.

Almost no one seems to argue against the idea that continued performance increases in HPC will come from more multithreading, multicore, many-core, GPUs and other accelerators, often in a heterogeneous mix of thousands (or eventually millions) of each.

This is not a panacea, however; there are certainly problems to be solved in those areas as well, the infamous memory wall and energy consumption being two of them.

The biggest challenge in my mind, though, is on the software side. Our middleware, tools and applications are just not keeping up. We don’t have the software technology today that makes it easy to automatically take advantage of the inherent parallelism in the hardware infrastructure. We’re edging into the Petascale era while providing essentially assembly-level programming tools. That won’t work for the next level, closer to Exascale.

We need to invest in software that on one hand hides the underlying complexity and makes it easy to scale, and on the other hand makes it possible to state the problem to be solved in a form close to its natural representation. Much like Fortress allows mathematical notation to be used to represent equations more easily, we need to bridge the gap between the domain knowledge that can describe the problem and the low-level “magic touch” that is needed to get code to scale.

This is not news. Many people have pointed it out, but we don’t seem to make progress towards a solution. And it’s not that we as a “collective ostrich” are hiding our heads in the sand and hoping it will go away. It won’t.

The problem is that there’s no business case for a single software vendor to take on this huge challenge. This is an area that definitely requires government funding and industry wide attention.

One speaker suggested that HPC needs to be elevated to the same importance as a nationwide energy strategy. It’s that important. I tend to agree with him. We need to do whatever it takes to start to make progress in this area.

I also intend to continue twittering tomorrow under my Bearcrossings twitter id.

I just finished the first day of the 23rd HPCC conference in Newport, RI. It’s a well-organized conference with many good presentations, well worth the time attending. There was much good news and progress, and discussion of the future, but also some highlighting of problems and challenges we’re facing.

There were a few recurring themes throughout the day when it comes to problems and challenges. One was that we have a problem creating the pull for a new generation of students to get into HPC. Several speakers mentioned that university-level introductory courses in HPC or parallel programming are attracting small numbers, are not growing, or are actually being canceled. This seems to be especially true in the US, while interest seems to be greater elsewhere.

We as an industry have also failed to make the connection between HPC (which many still instinctively think of as only the classical, high-end HPC) and commercial use of HPC technology. One speaker mentioned that the Financial Services industry spent $30B on computing in 2007-8 in search of a competitive edge. There are sure to be interesting (and well-paid) jobs in that area that require a solid HPC foundation. Other options would be working in the bio-sciences to help find cures for some of our most dangerous illnesses, or in the energy field to solve our energy problems. There are lots of big challenges where you can feel proud of being part of a team addressing them, beyond the monetary rewards.

If we don’t fix this, we’ll eventually run into an age-gap problem; an abyss is opening up in front of us.

In my next post I’ll write about the second theme among today’s discussions about challenges.

I also intend to continue twittering tomorrow under my Bearcrossings twitter id.

I’m researching new uses of technologies and methods originating from High Performance Computing (HPC), some call it Technical Computing and others Supercomputing. I’m focusing on use cases in more unconventional areas such as in business or marketing processes.

There are many examples of technologies that have made this transition beyond HPC and reached a wider user base: grid computing, parallel computing, and the use of accelerators, among others. The Internet itself and web browsing are other famous examples. On the horizon we have Cloud Computing, which can also be said to have its roots in HPC.

What I’m looking for are new trends and new use cases where “HPC technologies” help solve largely non-technical problems. For example, are substantial compute resources being used to simulate the performance, or predict the future behavior, of business models or marketing programs before they are launched? If so, how common is this usage? With the rise of Cloud computing, are these technologies now within reach of almost anyone, whether small businesses or home users?

I’m exploring new usages and new trends and don’t want to limit the scope by asking overly specific questions at this time. Please leave a comment on this entry if you’re aware of anything that you think may fall into this rather open area, or if you have suggestions for where to look.

What’s in a name?

I guess my first blog entry here should state my intentions and explain why in the world I chose to call my blog “Bear Crossings”?
Let’s start with the last part first. It’s pretty simple really if you know me, and even simpler if you know some Swedish. My first name is Björn in Swedish, usually simplified to Bjorn if you can’t figure out how to get the umlaut (the two dots) in there. It’s an old name that can be traced back to the early Viking ages. As of late 2005 there were about 40,000 Swedes who shared this first name. Not exactly an unusual name. Here in California, however, I’m often the only one with it, which has its benefits.
But enough about that; the point here is that it means Bear. Now you probably see the link to this blog and how a name like “Bear Crossings” might make sense. It also gives you a hint of my intentions for the blog. I simply intend to write about semi-random observations on things that cross my path. It will likely range from thoughts about the mundane, to observations about what I do and see at work, to reflections on the absurd.
The look and feel of this blog will most likely change over time as I experiment with different styles and templates.