Q: You have talked much of component-based transaction systems. Do you think they will take the place of procedural transaction systems, or will they evolve alongside each other?
Bill Coleman: Well, I think in the longer term it will take the place of procedural development. But it won't happen nearly as fast as people think. The infrastructure has to get there. And we're dealing with the most conservative developers, in that they have to really assure themselves that it will work.
So I think they'll start with light applications. And then, as infrastructures like Enterprise JavaBeans become truly mission critical, and the tools [for] assembly [of components] mature, they'll use it more and more. It's going to be a long-term transition. It's going to be 10 years before it becomes the primary way for mission-critical applications to be developed. But I think it will start taking off in the next 12 to 18 months, as things like Iceberg and Enterprise JavaBeans come out.
Q: You are billing your Iceberg project as an Object Transaction Management system. Do you see the role of such a system as compensating for the performance hit that applications will take when they move to objects?
Bill Coleman: Well, I don't think that this type of programming will become mainstream for mission-critical applications if they have to take a performance hit. It is very important that these environments support scalability to high levels with high performance. And that's why we work so hard on this system. As a matter of fact, when we ship it we plan to run CORBA-based TPC tests, so that we can demonstrate that it is performant.
Q: How bad is the performance hit when using component technology for a transaction system?
Bill Coleman: I think in general for object technology there will be a heavy performance hit because of this connection-based [technology] and the use of TCP/IP across systems as the connection architecture. I think as people move more towards true Object Transaction Managers which can provide connectionless [operation], we will see less of a performance hit.
Even Gartner Group says they believe no one is within two to three years of this except us. And actually, one of the things I am worried about is a backlash against objects for the enterprise in the next year or two, as people try to build big systems on some of these environments such as Component Broker or NCA or Orbix, and find that they do well for the low end but they just can't scale yet.
But that's one of the reasons that we are implementing Iceberg so it allows you to upgrade either way [from procedural or from object technology towards a mixed environment], and so it allows interoperability. So it is up to the customer to decide how fast and how mission critical they begin their development with objects.
Q: So what do you think BEA's role is in the object market?
Bill Coleman: We believe our role in the object market is to provide the first truly mission critical scalable platform, and support server-side objects. Our role is not in client-side objects. Although we can support them, we believe that space will be highly commoditised very rapidly, with people building right on object brokers, or right on DCOM with ActiveX. We view our role in objects the same as we see our role today, and that is [to support] high end, scalable applications.
Q: Who do you think you will be competing with in that space?
Bill Coleman: Well, in the short term it will be difficult to distinguish who has a truly scalable infrastructure, because people won't be able to build large-scale applications. But as they do, I think at first we will have a leadership position. Over time, obviously, IBM and Microsoft will implement truly scalable solutions.
I think we will have a sustainable leadership position by then, but over the next couple of years they are the obvious ones. Now, some of the dark horse candidates are Oracle, Sybase and Iona. But they have a long, long way to go technologically, and they haven't demonstrated the technical capabilities to build these kind of systems before.
Q: How similar is it solving the performance problems in an object environment to the way Tuxedo handles them in procedural applications?
Bill Coleman: It's even more complex, because you have a much broader environment with many more entities. And there is an intermix between stateful and stateless. So it's a more complex environment. But fundamentally, architecturally, it has to do all the same things: provide the load balancing, the fault tolerance and all the administrative infrastructure for distributed services. It is basically the same job, it's just one extra level of complexity - and it's already tough.
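The stateful/stateless intermix Coleman describes can be illustrated with a toy request router: stateless calls can be freely load-balanced across replicas, while stateful ones must stay pinned to the server holding their state. This is a minimal sketch for illustration only; the class, server names and routing policy are invented here and do not reflect Iceberg's actual design.

```python
import itertools

class Dispatcher:
    """Toy router showing why mixing stateful and stateless requests
    complicates load balancing (illustrative; not BEA's design)."""

    def __init__(self, servers):
        self._round_robin = itertools.cycle(servers)
        self._sessions = {}  # session id -> pinned server

    def route(self, request_id, session_id=None):
        # Stateless requests can go to any replica: simple round-robin.
        if session_id is None:
            return next(self._round_robin)
        # Stateful requests must return to the server holding their state.
        if session_id not in self._sessions:
            self._sessions[session_id] = next(self._round_robin)
        return self._sessions[session_id]

d = Dispatcher(["srv-a", "srv-b", "srv-c"])
print([d.route(i) for i in range(3)])        # stateless: spread across replicas
print(d.route(9, session_id="s1"),
      d.route(10, session_id="s1"))          # stateful: same server both times
```

Even in this toy, failover for the stateless path is trivial (route elsewhere), while the stateful path requires migrating or reconstructing session state, which is the "extra level of complexity" Coleman refers to.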