
Web 3.0 and the Rise of Process Engines

Matt Ball on July 11, 2012 - in Interview

Paul Torrens, an Associate Professor and Director of the Geosimulation Laboratory at Arizona State University's School of Geographical Sciences and Urban Planning, has conducted many innovative research projects on topics such as city models, immersive modeling, space-time GIS and the geographic analysis of human behavior. His work garnered such attention that he was awarded the Presidential Early Career Award for Scientists and Engineers in 2008. Torrens recently wrote a piece for ArcNews titled "Process models and next-generation geographic information technology," which prompted V1 Editor Matt Ball to conduct this interview.

This interview originally appeared in V1 Magazine on 7/11/2009.

Ball: I’d like to start with your take on the status of the Web. I just re-read the article that you wrote a year ago for the Institute for the Future about the shortcomings of online mapping services (http://www.signtific.org/en/node/666?q=en/node/666). I’m curious if any of your ideas have changed in the year since you wrote that piece.

Torrens: The field reinvents itself every six months now, so things have changed significantly since I wrote that article. Web 2.0 is based on what Michael Goodchild would call citizen-volunteered information. We think of the GeoWeb as a component of Web 2.0; it's all about people volunteering information. And if we think about a shift from Web 2.0 to Web 3.0, the big phase transition is going to happen when people start volunteering processes into the same ecology of data.

Currently, that doesn’t really happen, and that’s part of the big difference between the ESRIs of the world and the Google or Yahoo Maps of the world. The difference is really about the functionality for spatial analysis and geoprocessing. Most of the geoprocessing on Google Maps or Yahoo Maps is pretty straightforward, related to things like geocoding and reverse geocoding. They haven’t delved into the sorts of spatial analysis and geoprocessing schemes that some of the commercial vendors have. And I can’t quite figure out why that is, but I imagine that push is going to come from the community of users at some stage.
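
To make the gap Torrens describes concrete, here is a minimal, self-contained Python sketch; the gazetteer and function names are invented for illustration and do not correspond to any real geocoding service. Forward and reverse geocoding are essentially lookups, while even a simple analytical measure such as an average nearest-neighbor distance starts to resemble the spatial analysis that the consumer mapping platforms have largely left out.

```python
import math

# Toy gazetteer standing in for a real geocoding service (illustrative only).
GAZETTEER = {
    "Tempe, AZ": (33.4255, -111.9400),
    "Phoenix, AZ": (33.4484, -112.0740),
    "Scottsdale, AZ": (33.4942, -111.9261),
}

def geocode(place):
    """Forward geocoding: place name -> (lat, lon)."""
    return GAZETTEER.get(place)

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def reverse_geocode(point):
    """Reverse geocoding: (lat, lon) -> nearest named place in the gazetteer."""
    return min(GAZETTEER, key=lambda name: haversine_km(point, GAZETTEER[name]))

def mean_nearest_neighbor_km(points):
    """A small piece of spatial analysis: average distance to each point's nearest neighbor."""
    total = 0.0
    for i, p in enumerate(points):
        total += min(haversine_km(p, q) for j, q in enumerate(points) if j != i)
    return total / len(points)

if __name__ == "__main__":
    print(geocode("Tempe, AZ"))
    print(reverse_geocode((33.45, -112.00)))
    print(mean_nearest_neighbor_km(list(GAZETTEER.values())))
```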

Ball: One of our primary areas of interest lies in geospatial analysis and serious science. Another parallel is the difference between NASA’s World Wind and Google Earth. It seems like more serious science is taking place on NASA World Wind, but it doesn’t have the profile of Google Earth.

Torrens: Honestly, the boundaries between serious science and citizen science have completely dissolved. Some people are still hanging onto the notion that there are two versions of GI science and technology, one for academic development and R&D and another for the users, but before anybody could figure out why, I think the boundary just dissolved between the two of them.

Ball: There's a great deal of development happening on the mobile front as well, with the new iPhone integrating a digital compass and video camera. The idea of augmented reality in that device will open up more context for content, or as you put it, "the internet of things."

Torrens: People are already building these things and have been building these things in prototype form. So this stuff is already upon us. This is what I mean when I say that the field completely reinvents itself every six months. Apple did something very simple by putting a compass into the iPhone, and we’re already seeing augmented reality apps being built. A lot of the time, they’re not actually called geographic information systems, although they clearly are geographic information systems.

Ball: I’m interested, too, in your push into the social sciences. Are you doing more of that as these platforms become more ubiquitous?

Torrens: Because I mostly work in basic research, my job is to try and make contributions both to geographic information science and to the substantive domains that science is applied to. I mostly work in urban geography and behavioral geography, and I'm responsible for training PhD students in the latest technology. I'm also trying to get them to think about how to apply that technology to geographical systems. So we've really been working in computational social science.

There was a real shift in the way that social scientists do their work when they began to use sophisticated statistical tools to extract meaning from the data they were collecting through questionnaires, surveys, and so on. Now, we have automatic generation of a lot of data. In some contexts, we even have automatic tagging of those data with contextual qualifiers that turn the data into information.

The field is moving quite quickly to adopt tools from artificial intelligence and from semantic analysis, machine-learning, and so on, to try and harness all of these data that are coming in, and to ally those data with the theory-building and hypothesis testing that they’re doing in the social sciences. This is really going on quite vigorously in geography, which is a social science that has always been very closely connected to computing and computer science, and even to things like physics and pretty sophisticated numerical analysis. So we’re riding that wave.

Ball: Do all these data inputs ultimately go into a GIS, or are they linked to statistical packages, with GIS as the visualization medium?

Torrens: They're all intertwined, and they're all very symbiotic. The irony is that we're using these tools to try and study many-coupled systems, systems that have fingers in the natural environment, in the social environment, in the built environment, maybe even in the technical environment. If you think about a transport system, although it's physical, it's actually a pretty sophisticated technical system, too. And so the tools that we're using to study these complex systems are becoming just as complex themselves. So there's kind of a singularity thing happening where everything is merging.

These disparate parts used to be quite distinct and separated. We would do spatial database work, and we would add all that into the geographic information system. The geographic information system would then launch a simulation model that would run with some perturbations. We’d then run the simulation model output through statistical analysis, social network analysis, and visualization, and go through this iterative, recursive process.

But now everything is tightly integrated. In my team, we realized it was more efficient for us to build a geographic information system – or rather to build the geographic information science that we needed from a geographic information system – directly into our model.

I work mostly with a type of modeling called agent-based modeling, which is really a spinoff of basic computer science. The idea is that you have a processing mechanism, and that processing mechanism searches for information, and crunches that information, and delivers an output in the same way that a CPU or GPU on a standard PC would. But we’ve been using that scheme to try and build behavioral geography into our models. So we take agents, and we map their computational functionality to the spatial cognition of people in the real world.

Once we began to do that, I soon realized that those agent-based people needed some basic geography functionality. I had originally connected them to a geographic information system to try and do that, but it wasn't working out very well. So we just built what we needed from the GIS directly into the agent's behavior.
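
As a rough illustration of that pattern, and not of Torrens' actual code, the sketch below builds one small piece of "GIS", a point-in-polygon test, directly into an agent's movement rule so that the agent avoids building footprints on its way to a goal. All class names, parameters, and geometries are invented for the example.

```python
import random

def point_in_polygon(x, y, poly):
    """Ray-casting point-in-polygon test: the small piece of 'GIS' the agents carry."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

class Agent:
    """A walker whose 'spatial cognition' is a goal plus a building-avoidance rule."""

    def __init__(self, position, goal, step=1.0):
        self.position = position
        self.goal = goal
        self.step = step

    def step_once(self, buildings):
        x, y = self.position
        gx, gy = self.goal
        # Candidate moves: toward the goal, plus a small random detour.
        candidates = [(x + self.step * (1 if gx > x else -1), y),
                      (x, y + self.step * (1 if gy > y else -1)),
                      (x + random.uniform(-1, 1), y + random.uniform(-1, 1))]
        # Keep only moves that do not land inside a building footprint.
        legal = [c for c in candidates
                 if not any(point_in_polygon(c[0], c[1], b) for b in buildings)]
        if legal:
            # Prefer the legal move that gets closest to the goal.
            self.position = min(legal, key=lambda c: (c[0] - gx) ** 2 + (c[1] - gy) ** 2)

if __name__ == "__main__":
    buildings = [[(4, 4), (6, 4), (6, 6), (4, 6)]]   # one square footprint
    walker = Agent(position=(0.0, 0.0), goal=(10.0, 10.0))
    for _ in range(30):
        walker.step_once(buildings)
    print(walker.position)
```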

At the same time, I took a serious science interest in computer gaming, originally looking at the artificial intelligence that underpins most computer games. And then from that, I got interested in the graphics side of things, and how these things were rendered, and how the rendering was distributed over many cores on a network of machines, and how to do rendering in real time, and run it on gaming consoles.

That work began to filter back into our agent-based modeling: instead of working with dots moving around a screen and animating them in a separate program, we built some of our rendering directly into the agent-based model itself. So the agent-based model has actually been the focal point for us to integrate all of these diverse things together.

Around about the same time, I was doing some work on Wi-Fi mapping and mobile Internet communications technology. We began to map some of that work into the agent-based modeling as well, so that we have agents that are interacting with their environments physically and are also interacting with the social infrastructure around them, but we’re also giving them cell phones.

We’re modeling what happens to the network of cell phones as an ad hoc swarm network, and how that relates to the built environment and the positioning of broadcast towers, and access points, and so on in the built environment. What we’d like to do is to begin to model the idea of there being an emerging or burgeoning code space. As people are moving through an urban environment, for example, physically or socially, their data shadow is also moving through some technical representation of that infrastructure.
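
A toy version of that coupling, assuming nothing about Torrens' implementation, might look like the following: agents move through plan space, the phone each agent carries associates with the nearest in-range access point at every tick, and the resulting log of associations is a crude "data shadow" of the physical track. The access-point layout and coverage radius are invented.

```python
import math
import random

ACCESS_POINTS = {"AP_north": (2.0, 8.0), "AP_center": (5.0, 5.0), "AP_south": (8.0, 1.0)}
RANGE = 4.0  # nominal coverage radius, in the same arbitrary units as the coordinates

def nearest_ap(position):
    """Return the in-range access point a phone would associate with, if any."""
    name, ap = min(ACCESS_POINTS.items(), key=lambda kv: math.dist(position, kv[1]))
    return name if math.dist(position, ap) <= RANGE else None

def random_walk(start, steps, step_size=0.5):
    """A stand-in for an agent's physical movement through the built environment."""
    x, y = start
    track = [(x, y)]
    for _ in range(steps):
        x += random.uniform(-step_size, step_size)
        y += random.uniform(-step_size, step_size)
        track.append((x, y))
    return track

if __name__ == "__main__":
    physical_track = random_walk(start=(5.0, 5.0), steps=20)
    # The "data shadow": the sequence of associations the network sees,
    # a coarse technical representation of the physical track.
    data_shadow = [nearest_ap(p) for p in physical_track]
    for position, ap in zip(physical_track, data_shadow):
        print(f"({position[0]:5.2f}, {position[1]:5.2f}) -> {ap}")
```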

Ball: Are you focused on the application of this sophisticated model, as well as building the modeling platform?

Torrens: We’re very much focused on the applications, and to a certain extent applications always drive what I do and what my group does. We’re fundamentally social scientists, so we’re applying it to human geography problems. We have models of people just going about their everyday activities, so we look at the activity geography of people’s movement through built spaces, through shopping malls, and along streetscapes, and so on. And that has a lot of relevance.

It relates to the general activity levels of populations and, for example, to building downtown environments that foster and promote walking and social interaction. Although we don't deal with commercial applications, some of my students are working on models of people's movement in shopping malls because they have an interest in that type of environment: the positioning of shops in a mall, the positioning of anchor stores, and so on, to maximize the flow of pedestrians, consumers, and customers past window displays.

Another part of that work has involved us building synthetic downtown environments that have realistic populations doing realistic things in the right places at the right times, and once we have a reasonably good model built, we can subject it to lots of “what-if” scenarios.

One of the things we've been looking at is what would happen if this environment needed to be cleared because an emergency broke out. What happens if people are unsure about which way to egress from the built environment? What happens if a traffic jam forms in the crowd? What happens if there's uncertainty? What happens if people get injured? What happens if people panic? And so on.

Another thing we’re doing is to look at the transmission of everyday bugs — the common cold and flu. We essentially give a couple of people in this synthetic cityscape the flu and watch how it transmits through the population. This is something that epidemiologists have been doing for decades, but they tend to look at the population in very coarse spatial and temporal terms. With our model, we’re able to look at a real microcosm at the level of an individual person, and what they touch, and who they interact with on the order of seconds.
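
A minimal sketch of that kind of individual-level transmission step, not the group's actual model, is below: a couple of seeded carriers can pass the bug to anyone within a small contact radius at each fine-grained time step. The population size, contact radius, and transmission probability are arbitrary illustrative values.

```python
import math
import random

CONTACT_RADIUS = 1.0      # how close two people must be for a contact (arbitrary units)
TRANSMISSION_PROB = 0.05  # chance of transmission per contact per time step

def seed_population(n, size=20.0, initially_infected=2):
    """People with random positions; a couple of them start out with the flu."""
    people = [{"pos": (random.uniform(0, size), random.uniform(0, size)),
               "infected": False} for _ in range(n)]
    for person in random.sample(people, initially_infected):
        person["infected"] = True
    return people

def transmission_step(people):
    """One fine-grained time step: infected people may infect nearby contacts."""
    newly_infected = []
    for carrier in (p for p in people if p["infected"]):
        for other in people:
            if not other["infected"] and \
               math.dist(carrier["pos"], other["pos"]) <= CONTACT_RADIUS and \
               random.random() < TRANSMISSION_PROB:
                newly_infected.append(other)
    for person in newly_infected:
        person["infected"] = True

if __name__ == "__main__":
    population = seed_population(200)
    for t in range(100):
        # In a fuller model, people would also move between steps.
        transmission_step(population)
    print(sum(p["infected"] for p in population), "of", len(population), "infected")
```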

These things are always huge challenges for the model, so with every application that we build, we end up writing a lot of new things into the simulation platform. We had been doing that in a fairly ad hoc manner previously. So pretty much for every application, we had a brand new model essentially built from scratch. So I made a decision a couple of years ago to try and standardize that.

We’re also trying to build a standard toolkit, a set of libraries that will encapsulate some of these behaviors to the extent that they are common across these diverse systems, and also to look at some of the unique behaviors. We’re trying to wrap these up as libraries of code that we can reuse.

We hope sometime in the future to be able to open them up to the community at large. It’s one thing to build software for yourself and for your research team. It’s an entirely other thing to build software for other people. You have to document it, and make sure it works!

Ball: Would the idea be to open source that or commercialize?

Torrens: It would be to open source it. All of the funding that I’ve had has come largely from the National Science Foundation, so from taxpayers. We’ve had little bits of funding from commercial groups, but they tend to be targeted for very specific things, and we’ve done that separately from the basic science that we’re doing.

Ball: Another area that I'm really interested in is the interface between the natural environment and the built environment. Is that part of your research?

Torrens: The coupling between human systems and natural systems is really significant, especially where I live in Phoenix: It’s really quite obvious that the two are very intricately connected. There is a lot of work being done in that area, and there’s another branch of agent-based modeling that deals with land use and land cover change.

For example, in Rondônia in Brazil, in the Amazonian rainforest, researchers are looking at the decisions people in those areas make to clear trees and plant canopies of different types, what influence that decision making has upon the boundary-layer climatology above the canopy, and what the feedback mechanisms between the two are. They're also looking at what activities dampen deforestation and what activities accelerate it.

This is an area that I'm peripherally aware of, just by virtue of the tools that they use. They use agent-based models in the same way that I do. But the sorts of things that I look at operate at spatial and temporal scales at which that kind of coupling doesn't really matter.

The only connection that we really have to environmental science is that in Phoenix it’s very hot most of the year, and particularly during the summer. With very hot temperatures, the materials that are used for the built environment can retain or reflect heat. We’ve looked in a very preliminary way at having the morphology of shade that is cast by the built environment play into the movement of our agents through that built environment. We have agents that will hug walls because the walls provide shade, and it’s pretty straightforward to put a solar model over a 3-D CAD environment to look at how buildings will cast different patterns of shade and different shade geometries at different times of the day.
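
A simplified, plan-view version of that shade logic might look like the sketch below, which assumes buildings can be treated as rectangles with heights: a point is in shadow if marching from it toward the sun hits a building tall enough to block the sun ray, and candidate moves are ranked so shaded positions come first. The building layout and sun angles are invented.

```python
import math

# Buildings as (xmin, ymin, xmax, ymax, height); a deliberately simple stand-in
# for a 3-D CAD model of the built environment.
BUILDINGS = [(10.0, 0.0, 14.0, 30.0, 12.0), (20.0, 10.0, 26.0, 16.0, 25.0)]

def in_shadow(point, sun_azimuth_deg, sun_elevation_deg, buildings=BUILDINGS):
    """March from the point toward the sun in plan view; the point is shaded if a
    building blocks the sun ray, i.e. its height exceeds distance * tan(elevation)."""
    az = math.radians(sun_azimuth_deg)
    tan_elev = math.tan(math.radians(sun_elevation_deg))
    dx, dy = math.sin(az), math.cos(az)   # unit step toward the sun (azimuth from north)
    x, y = point
    d, step = 0.0, 0.25
    while d < 100.0:                      # ignore shadows cast from farther than 100 units
        d += step
        px, py = x + dx * d, y + dy * d
        for xmin, ymin, xmax, ymax, height in buildings:
            if xmin <= px <= xmax and ymin <= py <= ymax and height > d * tan_elev:
                return True
    return False

def shade_preference(candidates, sun_azimuth_deg, sun_elevation_deg):
    """Rank candidate moves so that shaded positions come first (the 'wall-hugging' rule)."""
    return sorted(candidates,
                  key=lambda c: not in_shadow(c, sun_azimuth_deg, sun_elevation_deg))

if __name__ == "__main__":
    # Late-afternoon sun from the west (azimuth 270 degrees), fairly low in the sky.
    moves = [(9.0, 15.0), (16.0, 15.0), (30.0, 15.0)]
    print(shade_preference(moves, sun_azimuth_deg=270.0, sun_elevation_deg=25.0))
```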

It’s a topic that’s of real interest to the architecture community here in Arizona, and particularly in Phoenix where they’re doing a lot of construction. They’ve been trying to build shade structures into buildings, and they’ve been building some large public art projects that actually double as shade structures. We have some of them on the campus of Arizona State University.

Ball: With your city models, you’re obviously working in 3D. Do you play between CAD and GIS for some of these projects?

Torrens: Absolutely. This echoes my comments earlier about how we interface diverse software toolkits: the same thing happens with our forays into 3-D and CAD. Originally, I developed these models in CAD, and then developed some glue that would connect the CAD to the geographic information system.

Of course, this is much easier now because there are all sorts of products that connect ESRI products and SketchUp and let you do this relatively easily. At the time, it was quite difficult, and we just got sick of doing it, so we started building the CAD directly into the models. I discovered the joys of coding in OpenGL quite some time ago, and that has really helped.

Ball: Are you following CityGML as a means to build models?

Torrens: I’ve looked at it, but we haven’t used it. Because our tools are just built for us, we don’t necessarily have the same sorts of needs for a formal ontology of entities in our model. But obviously, moving from tools that are built for us to tools that are built to be shared, that would be something that we would have to put a lot of work into.

Ball: I notice that you’ve done quite a bit of study on sprawl and urban planning. Is that focused on smart growth and sustainability issues?

Torrens: It is. I tend to take an objective approach. Smart growth can be quite a loaded term, depending on whom you're talking to. When I developed an interest in sprawl, there was a huge debate in the planning community, a debate that particularly came from the United States. People were essentially arguing about what sprawl is, and depending on how you defined it, you could shape your arguments around the definition. I was interested in leveraging some of the work being done in spatial analysis and spatial statistics to try and diagnose sprawl in a robust way, in a way that was sensitive to context and to spatial scale. So I developed a toolkit for measuring sprawl and published material on that.
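
The published toolkit is more sophisticated than this, but a simple illustrative sprawl indicator, the normalized entropy of development shares across grid cells, shows both the basic idea and the scale sensitivity Torrens mentions, since the value depends on the cell size chosen. The patterns and parameters below are invented.

```python
import math

def development_entropy(developed_cells, cell_size, extent):
    """A toy sprawl indicator: normalized Shannon entropy of the share of development
    falling in each coarse grid cell. Values near 1 mean development is spread evenly
    (more sprawl-like); values near 0 mean it is concentrated. The result depends on
    cell_size, which is exactly the scale-sensitivity issue measurement work faces."""
    counts = {}
    for x, y in developed_cells:
        key = (int(x // cell_size), int(y // cell_size))
        counts[key] = counts.get(key, 0) + 1
    total = sum(counts.values())
    n_cells = max(1, int(extent // cell_size)) ** 2
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return entropy / math.log(n_cells) if n_cells > 1 else 0.0

if __name__ == "__main__":
    compact = [(x, y) for x in range(10) for y in range(10)]             # one dense block
    scattered = [(x * 9, y * 9) for x in range(10) for y in range(10)]   # spread out
    for name, pattern in [("compact", compact), ("scattered", scattered)]:
        print(name, round(development_entropy(pattern, cell_size=30.0, extent=90.0), 3))
```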

That then filtered into the simulation work that I was doing for my PhD. I built a model of suburban sprawl that was pretty open, where the user of the model would pull different policy levers. They could look at different outcomes of intervening in the trajectory of a sprawling city in different ways. I left it up to the user as to what they would take from that, whether they would take a smart growth or a compact city spin on that.

When I joined Arizona State University, I discovered that there’s a very substantial group that works on sustainability. We have one of the country’s leading centers in sustainability, the Global Institute of Sustainability. And, I got connected with a group of people that are also doing modeling work, but were trying to tie models to environmental outcomes.

I did some work with them tying my sprawl metrics to some of the work that they were doing in measuring air pollutants, and urban albedo, and heat island effects, and water use, and the relationship between residential landscapes and carbon footprints, and that sort of thing.

Ball: Are any of the policy directives from Washington — the money allocated for infrastructure spending, energy, and carbon cap and trade — flavoring your research direction?

Torrens: It’s something that I have an interest in as a citizen. As a scientist, I really don’t have any more than a very amateur understanding of how that stuff works. The science behind that is really advanced, and so I tend not to mesh it directly with the work that I do.

Ball: Are you working with sensors and real-time information in some of these models as well?

Torrens: Not real-time, but near-real-time readings. I've done some work on measuring the propagation of Wi-Fi signals through urban environments, and for that I needed to capture that information in a geographic information system; most of the analysis work that I did on it was spatial analysis and geostatistics. From that, I developed an interest in coupling the same information system architecture to some real-time sensor data.

I tried at various stages to build partnerships with different people that were providing that information. The partnerships didn’t work out for various reasons, but I’m working on this with my students. At the moment, we’re working on the dual problem of validating and calibrating agent-based models of movement, and also from tracks of movement, developing machine-learning algorithms that will extract rules from that movement. And for that, we’re looking at some near real time sensors.
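
As a sketch of that general idea rather than of the team's actual algorithms, the snippet below discretizes observed tracks into compass headings and learns an empirical turn table, a very simple set of movement "rules" that could then drive simulated agents. The toy tracks are invented.

```python
import math
from collections import Counter, defaultdict

def headings(track):
    """Convert a sequence of (x, y) fixes into discrete compass headings (E/N/W/S)."""
    out = []
    for (x1, y1), (x2, y2) in zip(track, track[1:]):
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 360
        out.append("ENWS"[int(((angle + 45) % 360) // 90)])
    return out

def learn_turn_rules(tracks):
    """Empirical 'rules of movement': for each current heading, the observed
    distribution of the next heading. A stand-in for fancier machine learning."""
    table = defaultdict(Counter)
    for track in tracks:
        hs = headings(track)
        for current, nxt in zip(hs, hs[1:]):
            table[current][nxt] += 1
    # Normalize counts into probabilities.
    return {h: {nxt: c / sum(counter.values()) for nxt, c in counter.items()}
            for h, counter in table.items()}

if __name__ == "__main__":
    # Two toy tracks: mostly eastward movement with an occasional northward jog.
    observed = [
        [(0, 0), (1, 0), (2, 0), (2, 1), (3, 1), (4, 1)],
        [(0, 0), (1, 0), (1, 1), (2, 1), (3, 1)],
    ]
    print(learn_turn_rules(observed))
```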

This is a project that was developed through a partnership I have with a sociologist on the ASU campus, and we have some funding from the National Science Foundation to study children’s play behavior in a daycare center on the university campus. My colleague in sociology is an expert on the social interactions of groups, and what I had promised to do for the project was to trace the movement of people within those groups. If you’ve ever seen kids move in a daycare center, they move very quickly, and so we had to come up with some novel schemes for tracking their movement.

Because my colleague also builds agent-based models, that was the inspiration for this machine learning work where we could collect data, but also turn those data into some context-specific information through the application of AI.

Ball: Is the idea to use that information to feed design work, so that the play structures and play spaces are optimized to help children move around?

Torrens: That's an integral part of it. A lot of the play is organized around objects, around a red truck or around a computer, for example. Those objects can be fixed in space and time, or they can be quite mobile in space and time. Those objects take on different meanings depending on the different dynamics that are going on around them and on the sociology of the group. For example, some of the kids might just slide down a slide, and other kids will play a game where they imagine that there's a monster under the slide that is chasing them.

We’re fundamentally trying to collect information about this because in the literature in social science, very few people have looked at the play structures of children at this age. The kids range from ages 2 to 6, and this is the first time that they’ve ever been social. So they’re going to school for the first time, and they’re having to form these relationships.

We’re studying the movement of the children all day every day for three years. And to a certain extent, the group of children has remained pretty constant, so not only are we seeing them interact for the first time and seeing these relationships appear between them, but we’re able to view the evolution of these relationships over time.

Dealing with kids in a daycare center is a very niche, well-bounded world. What we would like to do is generalize some of the findings from that work to sociology and urban environments. But the step in scale is quite large, and not a lot of this transfers easily, so we're trying to figure out how to do that.

Carrying out the same work in an urban environment would be near impossible. There’s a team of graduate students in sociology that sit in this daycare center all day every day, and watch these kids, and code their interactions every second using a very elaborate coding scheme. And then there’s another set of students that track their movement. Doing that even in a train station or a shopping mall would be prohibitively difficult.

What we have been looking into is using RFID tags to try and automate the collection of data. The experiments that we've tried were not as successful as we would have liked, partially because the resolution that we can get from the RFID tags is not sufficient to isolate physical interactions between people. The best you can get with some mathematical averaging is about 2 feet if you use what's called an ultra-wideband RFID tag, and even that is not sufficient. If we were to track them by GPS, we'd be at a scale of maybe 10 feet with differential correction, which is still not good enough. If you can imagine a group of people talking, trying to figure out who's talking to whom within a plus-or-minus 10-foot error bar is quite difficult.

Some of my colleagues are working with mobile phone data. The resolution there is even worse; it's about 30 meters, close to 100 feet. You can get broad-stroke correlations about people coalescing around certain parts of town, but getting at the one-to-one interactions of those people is very difficult using cell phone data.
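
A rough numerical illustration of that resolution argument, using approximately the figures quoted above (a conversation partner standing a few feet away, positioning error of roughly 2 feet for ultra-wideband RFID versus roughly 10 feet for differentially corrected GPS), is sketched below: it simply asks how often the nearest measured neighbor is actually the true conversation partner. The group layout is invented for the example.

```python
import math
import random

def nearest_partner_accuracy(error_ft, trials=10000):
    """How often does the nearest *measured* neighbor match the true conversation
    partner when every position is blurred by a given measurement error?"""
    hits = 0
    # Person 0 is talking to a partner 3 ft away; three bystanders stand 8-10 ft away.
    truth = [(0.0, 0.0), (3.0, 0.0), (8.0, 3.0), (-9.0, 1.0), (2.0, -10.0)]
    for _ in range(trials):
        measured = [(x + random.gauss(0, error_ft), y + random.gauss(0, error_ft))
                    for x, y in truth]
        nearest = min(range(1, len(truth)),
                      key=lambda i: math.dist(measured[0], measured[i]))
        hits += (nearest == 1)
    return hits / trials

if __name__ == "__main__":
    for label, error in [("UWB RFID (~2 ft)", 2.0), ("differential GPS (~10 ft)", 10.0)]:
        print(label, round(nearest_partner_accuracy(error), 2))
```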

Ball: Are you building an individual geodemographic profile, and maybe even in terms of the kids, a kind of a psychological profile, into your models?

Torrens: Not really, and this is the difference between academic science and citizen science. Some of my colleagues in Silicon Valley ask why we don't just videotape people and study them. We can't do that at the university because we're subject to review for the protection of human subjects.

We have to be very careful when we're doing social science work that involves people, or even personal information about people. We have to be very careful that we don't violate their privacy, that the people we're studying have complete buy-in to what we're doing, and that they understand what we're doing and what the implications of our work are for their privacy.

You can imagine when you’re working with children, that it becomes even more complex. And so getting into issues of personality and so on is somewhat of a no-no in this work.

Ball: The amount of work that's going into this project sounds impressive, and tying it back to social interactions sounds like it could reveal a great deal about human behavior.

Torrens: It’s actually a really fascinating project because the first year was pretty much a fishing expedition. Part of the reason why we’re doing this work is because theory is inadequate to explain some of these group interactions. We’re starting out in a theory vacuum, just collecting this information. At the very start, we didn’t really know what we should collect, and so we collected everything, which is not necessarily a good strategy.

But now, we’ve nearly finished with the three-year cycle of the project, and so we have a much richer understanding of the dynamics, but we have absolutely massive amounts of data. We’re having to develop tools just to manage the data, and just to extract signatures from the data.

Part of what we’re doing is going back to geographic information systems and spatial analysis, but we’re also working quite heavily with scientific visualization, just to figure out what’s going on in the data, what the trends are, so we can delve deeper into some of the relationships.

Ball: You recently wrote about your excitement regarding the evolution of GIS and what the future holds. I started in GIS in 1995, and there was a lot of buzz about a lot of different application areas. That excitement died down as the technology became more about collecting data than exploiting data, but my sense is that we're coming back around to where more analysis is going to take place, with more process-based models that will give us a much better understanding of the Earth around us. Do you think GI Science will really take off?

Torrens: Definitely, and some of this has already happened. I just presume at some stage in the very near future that people are going to start building process engines the same way that they’re building mapping apps, and it’s going to be very interesting to see what happens when that starts to take off. My sense is that again this is going to come from citizen geographers, rather than coming from academic geographers or GIS professionals.
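
As a toy sketch of what "volunteering a process" might look like, the snippet below publishes a small spatial process, a mean-center and dispersion summary, over HTTP in the way mapping APIs publish geocoding endpoints. It assumes the Flask web framework purely as an example; the route and payload format are invented.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/processes/mean-center", methods=["POST"])
def mean_center():
    """A process published on the web, not just data: POST a JSON list of
    [lon, lat] points and get back their mean center and a crude dispersion measure."""
    points = request.get_json(force=True).get("points", [])
    if not points:
        return jsonify({"error": "no points supplied"}), 400
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    dispersion = sum(((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 for p in points) / n
    return jsonify({"mean_center": [cx, cy], "mean_distance_to_center": dispersion})

if __name__ == "__main__":
    # Example call:
    # curl -X POST localhost:5000/processes/mean-center \
    #      -H "Content-Type: application/json" \
    #      -d '{"points": [[-112.07, 33.45], [-111.94, 33.42], [-111.93, 33.49]]}'
    app.run(debug=True)
```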

I know that in the retail industry, this work is already going on, and it’s tightly bound to the idea of geodemographics. When you use your customer loyalty card or when you use your credit card, certain pieces of data are skimmed off of that transaction, and then they put those data into the context of the transaction and their own inventory systems.

For a really long time now, they’ve also been using geography to put those things into context, and now they’re starting to use social networks to put them into a social context, too. All of this is tightly integrated and feeds back into their product designs, and pricing, and marketing strategies, and so on.


About Matt Ball

Matt Ball is founder and editorial director of V1 Media, publisher of Informed Infrastructure, Earth Imaging Journal, Sensors & Systems, Asian Surveying & Mapping and the video news site GeoSpatial Stream.
