Most recent crowd simulation algorithms equip agents with a synthetic vision component for steering. They offer promising perspectives through a more realistic simulation of the way humans navigate according to their perception of the surrounding environment. In this paper, we propose a new perception/motion loop to steer agents along collision-free trajectories that significantly improves the quality of vision-based crowd simulators. In contrast with solutions where agents avoid collisions in a purely reactive (binary) way, we suggest exploring the full range of possible adaptations and retaining the locally optimal one. To this end, we introduce a cost function, based on perceptual variables, which estimates an agent's situation considering both the risks of future collision and a desired destination. We then compute the partial derivatives of that function with respect to all possible motion adaptations. The agent then adapts its motion by following the gradient. This paper thus has two main contributions: the definition of a general-purpose control scheme for steering synthetic-vision-based agents; and the proposition of cost functions for evaluating the perceived danger of the current situation. We demonstrate improvements in several cases.
Julien Pettré
Gradient-based steering for vision-based crowd simulation algorithms. T. Dutra, R. Marques, J. B. Cavalcante-Neto, C. A. Vidal and J. Pettré. Computer Graphics Forum (Eurographics 2017), to appear.
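As an illustration of the control scheme described in this abstract, here is a minimal Python sketch of one perception/motion iteration. The cost shape, the constants, and the ttc_of callable (which returns a time-to-collision estimate for a candidate motion) are illustrative assumptions, not the paper's actual perceptual cost functions.

    import numpy as np

    def cost(speed, angle, goal_angle, ttc):
        """Toy cost blending goal deviation and collision risk.
        The paper builds its cost from perceptual variables; this version
        only illustrates the gradient-following control scheme."""
        goal_term = (angle - goal_angle) ** 2 + (speed - 1.4) ** 2
        risk_term = np.exp(-ttc)              # risk grows as time-to-collision shrinks
        return goal_term + risk_term

    def steer(speed, angle, goal_angle, ttc_of, eps=1e-3, gain=0.5):
        """One iteration: finite-difference partial derivatives with respect to
        the two motion adaptations (speed, orientation), then a gradient step."""
        c0 = cost(speed, angle, goal_angle, ttc_of(speed, angle))
        dc_dv = (cost(speed + eps, angle, goal_angle, ttc_of(speed + eps, angle)) - c0) / eps
        dc_da = (cost(speed, angle + eps, goal_angle, ttc_of(speed, angle + eps)) - c0) / eps
        return speed - gain * dc_dv, angle - gain * dc_da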
I am VirtUs!
I am a pluridisciplinary team of 7 researchers coming from different French institutes and universities. I also gather a few PhD students, post-docs and engineers!
I am located in Rennes, in the western part of France. It’s a great place to visit.
VirtUs means « the Virtual Us ». This is because our main scientific objective is to create immersive simulations of populated spaces, where one can interact with our virtual alter egos.
But, why are we doing this?
Our team has been working on crowd modelling and simulation for a long time. This activity requires crowd motion data, for example to calibrate or evaluate models, which we have been trying to capture in the lab... but also in the field. Yes, what you see are some PhD students wearing motion capture suits to record motion in crowds during a music festival. As you can see, capturing data is fun, but quite demanding.
The VirtUs team was born out of the idea that Virtual Reality is a good way to collect crowd motion data: it gives better experimental control, it simplifies the logistics of campaigns, and it makes data collection easier. For example, we have shown that virtual reality makes it possible to record both navigation and gaze tracking with high ecological validity, which helps to decipher how we as humans interact in crowds. We have also shown that you can record the movement data of a full human crowd by recording yourself multiple times, using a method called "one-man-crowd".
Again, I am VirtUs and I thank you for your attention.
Abstract: Crowd motion data is fundamental for understanding and simulating realistic crowd behaviours. Such data is usually collected through controlled experiments to ensure that both desired individual interactions and collective behaviours can be observed. It is however scarce, due to ethical concerns and logistical difficulties involved in its gathering, and only covers a few typical crowd scenarios. In this work, we propose and evaluate a novel Virtual Reality-based approach lifting the limitations of real-world experiments for the acquisition of crowd motion data. Our approach immerses a single user in virtual scenarios where he/she successively acts as each crowd member. By recording the past trajectories and body movements of the user, and displaying them on virtual characters, the user progressively builds the overall crowd behaviour by him/herself. We validate the feasibility of our approach by replicating three real experiments, and compare both the resulting emergent phenomena and the individual interactions to existing real datasets. Our results suggest that realistic collective behaviours can naturally emerge from virtual crowd data generated using our approach, even though the variety in behaviours is lower than in real situations. These results provide valuable insights into the building of virtual crowd experiences, and reveal key directions for further improvements.
Presented at Eurographics 2022, Reims, France
A. Colas, W. van Toll, K. Zibrek, L. Hoyet, A.-H. Olivier, J. Pettré
Univ Rennes, Inria, CNRS, IRISA, France
Breda University of Applied Sciences, The Netherlands
The real-time simulation of human crowds has many applications. In a typical crowd simulation, each person (`agent') in the crowd moves towards a goal while adhering to local constraints. Many algorithms exist for specific local `steering' tasks such as collision avoidance or group behavior. However, these do not easily extend to completely new types of behavior, such as circling around another agent or hiding behind an obstacle. They also tend to focus purely on an agent's velocity without explicitly controlling its orientation. This paper presents a novel sketch-based method for modelling and simulating many steering behaviors for agents in a crowd. Central to this is the concept of an interaction field (IF): a vector field that describes the velocities or orientations that agents should use around a given `source' agent or obstacle. An IF can also change dynamically according to parameters, such as the walking speed of the source agent.
IFs can be easily combined with other aspects of crowd simulation, such as collision avoidance. Using an implementation of IFs in a real-time crowd simulation framework, we demonstrate the capabilities of IFs in various scenarios. This includes game-like scenarios where the crowd responds to a user-controlled avatar. We also present an interactive tool that computes an IF based on input sketches. This IF editor lets users intuitively and quickly design new types of behavior, without the need for programming extra behavioral rules. We thoroughly evaluate the efficacy of the IF editor through a user study, which demonstrates that our method enables non-expert users to easily enrich any agent-based crowd simulation with new agent interactions.
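To make the interaction field (IF) concept concrete, here is a small Python sketch of how an IF could be stored and queried. The grid representation, nearest-cell lookup and blending weight are assumptions for illustration, not the paper's implementation (which also covers orientation fields and dynamic parameters).

    import numpy as np

    class InteractionField:
        """Hypothetical grid-based interaction field in the source's local frame.
        'grid' holds one 2D vector per cell; 'cell' is the cell size in metres."""
        def __init__(self, grid, cell, origin):
            self.grid = np.asarray(grid, dtype=float)      # shape (H, W, 2)
            self.cell = cell
            self.origin = np.asarray(origin, dtype=float)  # position of cell (0, 0)

        def sample(self, pos):
            """Nearest-cell lookup of the suggested velocity at a given position."""
            i, j = np.floor((np.asarray(pos, float) - self.origin) / self.cell).astype(int)
            h, w, _ = self.grid.shape
            if 0 <= i < h and 0 <= j < w:
                return self.grid[i, j]
            return np.zeros(2)                             # outside the field: no influence

    def blend(goal_vel, if_vel, weight=0.5):
        """Combine the IF suggestion with the agent's own goal velocity;
        collision avoidance would still run on the result."""
        return (1.0 - weight) * np.asarray(goal_vel, float) + weight * np.asarray(if_vel, float)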
W. van Toll, C. Braga, B. Solenthaler, J. Pettré
13th ACM SIGGRAPH Conference on Motion, Interaction and Games
In highly dense crowds of humans, collisions between people occur often. It is common to simulate such a crowd as one fluid-like entity (macroscopic), and not as a set of individuals (microscopic, agent-based). Agent-based simulations are preferred for lower densities because they preserve the properties of individual people. However, their collision handling is too simplistic for extreme-density crowds. Therefore, neither paradigm is ideal for all possible densities.
In this paper, we combine agent-based crowd simulation with the concept of Smoothed Particle Hydrodynamics (SPH), a particle-based method that is popular for fluid simulation. Our combination augments the usual agent-collision handling with fluid dynamics when the crowd density is sufficiently high. A novel component of our method is a dynamic rest density per agent, which intuitively controls the crowd density that an agent is willing to accept.
Experiments show that SPH improves agent-based simulation in several ways: better stability at high densities, more intuitive control over the crowd density, and easier replication of wave-propagation effects. Our implementation can simulate tens of thousands of agents in real-time. As such, this work successfully prepares the agent-based paradigm for crowd simulation at all densities.
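For reference, a minimal Python sketch of the two SPH ingredients mentioned above: a standard 2D density estimate and a pressure term that only activates above a per-agent rest density. The kernel choice, stiffness, and max-based activation are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def sph_density(positions, i, h=1.0, mass=1.0):
        """Standard SPH density estimate at agent i (2D poly6 kernel)."""
        positions = np.asarray(positions, float)
        norm = 4.0 / (np.pi * h**8)
        d = positions - positions[i]
        r2 = np.einsum('ij,ij->i', d, d)
        w = np.where(r2 < h * h, norm * (h * h - r2) ** 3, 0.0)
        return mass * w.sum()

    def pressure(density, rest_density, stiffness=10.0):
        """Fluid forces only kick in above the agent's own (dynamic) rest density,
        which models how much crowding the agent is willing to accept."""
        return stiffness * max(density - rest_density, 0.0)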
In agent-based crowd simulation, a local navigation algorithm computes how a single person (`agent') should move based on its surroundings. Many algorithms for this purpose have been proposed, each using different principles and implementation details that are difficult to compare.
This paper presents a novel framework that describes local agent navigation generically as optimizing a cost function in a velocity space. We show that many state-of-the-art algorithms can be translated to this framework by combining a particular cost function with a particular optimization method. As such, we can reproduce many types of local algorithms using a single general principle.
Our implementation of this framework, named UMANS, is freely available online. This software enables easy experimentation with different algorithms and parameters. We expect that our work will help understand the true differences between navigation methods, enable honest comparisons between them, simplify the development of new local algorithms, make techniques available to other communities, and stimulate further research on crowd simulation.
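The sketch below illustrates the general principle (a cost function optimized in velocity space) with one possible combination: a sampling-based optimizer and a cost mixing goal deviation with time-to-collision. The specific cost, weights and sampling scheme are assumptions for illustration; UMANS implements several such combinations.

    import numpy as np

    def best_velocity(pref_vel, neighbours, radius=0.3, v_max=1.8, n_samples=200,
                      horizon=5.0, w_goal=1.0, w_col=2.0, seed=0):
        """Sampling-based optimization in velocity space (one of several possible
        optimization methods; gradient-based variants exist as well).
        'neighbours' is a list of (relative_position, velocity) pairs."""
        rng = np.random.default_rng(seed)
        pref_vel = np.asarray(pref_vel, float)
        angles = rng.uniform(0, 2 * np.pi, n_samples)
        speeds = rng.uniform(0, v_max, n_samples)
        candidates = np.stack([speeds * np.cos(angles), speeds * np.sin(angles)], axis=1)

        def time_to_collision(v):
            ttc = np.inf
            for p_rel, v_other in neighbours:
                p_rel, v_rel = np.asarray(p_rel, float), v - np.asarray(v_other, float)
                a = v_rel @ v_rel
                if a < 1e-9:
                    continue
                b = p_rel @ v_rel
                c = p_rel @ p_rel - (2 * radius) ** 2
                disc = b * b - a * c
                if disc > 0:
                    t = (b - np.sqrt(disc)) / a       # first time the two discs touch
                    if t > 0:
                        ttc = min(ttc, t)
            return ttc

        costs = [w_goal * np.linalg.norm(v - pref_vel)
                 + w_col * max(0.0, horizon - time_to_collision(v)) / horizon
                 for v in candidates]
        return candidates[int(np.argmin(costs))]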
authors: Fabien Grzeskowiak, Marie Babel, Julien Pettré
conference: IEEE VR 2020 Conference Papers
abstract: This paper explores the use of Virtual Reality (VR) to study human-robot interactions during navigation tasks by immersing both a user and a robot in a shared virtual space. VR combines the advantages of being safe (robots and humans interact by means of VR while physically remaining in remote places) and ecological (realistic environments are perceived by the robot and the human, and natural behaviors can be observed). Nevertheless, VR can introduce perceptual biases in the interaction and affect the observed behaviors in some ways, which can be problematic when it is used to acquire experimental data. In our case, not only human perception is concerned, but also that of the robot, whose sensors must be simulated to perceive the VR world. Thus, the contribution of this paper is twofold. It first provides a technical solution to perform human-robot interactions in navigation tasks through VR: we describe how we combine motion tracking, VR devices, and robot sensor simulation algorithms to immerse a human and a robot together in a shared virtual space. We then assess a simple interaction task that we replicate in real and in virtual conditions to perform a first estimation of the importance of the biases introduced by the use of VR on both the human and the robot. Our conclusions are in favor of using VR to study human-robot interactions, and we outline directions for future work.
by Wouter van Toll (Univ Rennes, Inria, CNRS, IRISA) and Julien Pettré (Univ Rennes, Inria, CNRS, IRISA)
We present a novel topology-driven method for improving the navigation of agents in virtual environments. In agent-based crowd simulations, the combination of global path planning and local collision avoidance can cause conflicts and undesired motion. These conflicts are related to the decisions to pass obstacles or agents on certain sides. In this paper, we define an agent’s navigation behavior as a topological strategy amidst obstacles and other agents. We show how to extract such a strategy from a global path and from a local velocity. Next, we propose a simulation framework that computes these strategies for path planning, path following, and collision avoidance. By detecting conflicts between strategies, we can decide reliably when and how an agent should re-plan an alternative path. As such, this work bridges a long-existing gap between global and local planning. Experiments show that our method can improve the behavior of agents while preserving real-time performance. It can be applied to many agent-based simulations, regardless of their specific navigation algorithms. The strategy concept is also suitable for explicitly sending agents in particular directions.
S Tonneau, P Fernbach, AD Prete, J Pettré, N Mansard
ACM Transactions on Graphics (TOG) 37 (5), 176
Synthesizing motions for legged characters in arbitrary environments is a long-standing problem that has recently received a lot of attention from the computer graphics community. We tackle this problem with a procedural approach that is generic, fully automatic, and independent from motion capture data. The main contribution of this article is a point-mass-model-based method to synthesize Center Of Mass trajectories. These trajectories are then used to generate the whole-body motion of the character.
The use of a point mass model results in physically inconsistent motions and joint limit violations when mapped back to a full-body motion. We mitigate these issues through the use of a novel formulation of the kinematic constraints that allows us to generate a quasi-static Center Of Mass trajectory in a way that is both user-friendly and computationally efficient. We also show that the quasi-static constraint can be relaxed to generate motions usable for computer animation at the cost of a moderate violation of the dynamic constraints.
Our method was integrated in our open-source contact planner and tested with different scenarios—some never addressed before—featuring legged characters performing non-gaited motions in cluttered environments. The computational efficiency of our trajectory generation algorithm (under one ms to compute one second of trajectory) enables us to synthesize motions in a few seconds, one order of magnitude faster than state-of-the-art methods. Although our method is empirically able to synthesize collision-free motions, the formal handling of environmental constraints is not part of the proposed method and left for future work.
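As a much-simplified illustration of the quasi-static constraint: in the special case of coplanar contacts on flat ground, quasi-static balance reduces to keeping the horizontal projection of the Center Of Mass inside the convex support polygon. The paper's formulation handles the general, non-coplanar case; the Python check below is only a toy example.

    import numpy as np

    def com_is_quasi_static(com_xy, support_polygon):
        """Toy balance test: with coplanar contacts on flat ground, a quasi-static
        CoM must project inside the convex support polygon.
        'support_polygon' is assumed to be a counter-clockwise list of 2D points."""
        pts = np.asarray(support_polygon, float)
        p = np.asarray(com_xy, float)
        for a, b in zip(pts, np.roll(pts, -1, axis=0)):
            edge, rel = b - a, p - a
            if edge[0] * rel[1] - edge[1] * rel[0] < 0:   # point lies right of an edge
                return False
        return True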
Steering and navigation are important components of character animation systems, as they enable characters to move autonomously in their environment.
In this work, we propose a synthetic vision model that uses visual features to steer agents through dynamic environments. Our agents perceive the optical flow resulting from their relative motion with the objects of the environment. The optical flow is then segmented and processed to extract visual features such as the focus of expansion and time-to-collision. Then, we establish the relations between these visual features and the agent's motion, and use them to design a set of control functions which allow characters to perform object-dependent tasks, such as following, avoiding and reaching.
Control functions are then combined to let characters perform more complex navigation tasks in dynamic environments, such as reaching a goal while avoiding multiple obstacles. The agents' motion is obtained by local minimization of these functions. We demonstrate the efficiency of our approach through a number of scenarios.
Our work sets the basis for building a character animation system which imitates human sensorimotor actions. It opens new perspectives to achieve realistic simulation of human characters taking into account perceptual factors, such as the lighting conditions of the environment.
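The following Python sketch gives closed-form stand-ins for the visual features named above (time-to-collision and focus of expansion), together with two possible control functions. The thresholds and functional forms are assumptions: the paper extracts these features from segmented optical flow rather than from known relative states.

    import numpy as np

    def visual_features(p_rel, v_rel):
        """Closed-form stand-ins for two features the paper extracts from optical
        flow: time-to-collision (tau) and the bearing of the relative motion
        (a proxy for the focus of expansion)."""
        p_rel, v_rel = np.asarray(p_rel, float), np.asarray(v_rel, float)
        dist = np.linalg.norm(p_rel)
        closing = -(p_rel @ v_rel) / max(dist, 1e-9)   # > 0 when the obstacle gets closer
        tau = dist / closing if closing > 1e-9 else np.inf
        foe_bearing = np.arctan2(v_rel[1], v_rel[0])
        return tau, foe_bearing

    def avoid_cost(tau, tau_safe=3.0):
        """One possible 'avoid' control function: grows as time-to-collision shrinks."""
        return max(0.0, (tau_safe - tau) / tau_safe) ** 2

    def reach_cost(heading, goal_bearing):
        """One possible 'reach' control function: squared wrapped bearing error."""
        err = np.arctan2(np.sin(heading - goal_bearing), np.cos(heading - goal_bearing))
        return err ** 2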
Authors: D. Wolinski, M. Lin, J. Pettré (Inria - UNC Chapel Hill)
Microscopic crowd simulators rely on models of local interaction (e.g. collision avoidance) to synthesize the individual motion of each virtual agent. The quality of the resulting motions heavily depends on this component, which has significantly improved in the past few years. Recent advances are in particular due to the introduction of a short-horizon motion prediction strategy that enables anticipated motion adaptation during local interactions among agents. However, the simplicity of the prediction techniques of existing models somewhat limits their domain of validity. In this paper, our key objective is to significantly improve the quality of simulations by expanding the applicable range of motion predictions. To this end, we present a novel local interaction algorithm with a new context-aware, probabilistic motion prediction model. By context-aware, we mean that this approach allows crowd simulators to account for many factors, such as the influence of environment layouts or in-progress interactions among agents, and has the ability to simultaneously maintain several possible alternate scenarios for future motions and to cope with uncertainties in sensing and in other agents' motions. Technically, this model introduces "collision probability fields" between agents, efficiently computed through the cumulative application of Warp Operators on a source Intrinsic Field. We demonstrate how this model significantly improves the quality of simulated motions in challenging scenarios, such as dense crowds and complex environments.
Although it is important to consider the presence of groups for the believability of a virtual crowd, most crowd simulations only take into account individual characters or a limited set of group behaviors. We introduce a unified solution that allows for simulations of crowds that have diverse group properties such as social groups, marches, tourists and guides, etc. We extend the Velocity Obstacle approach for agent-based crowd simulations by introducing the Velocity Connection: the set of velocities that keep agents moving together whilst avoiding collisions and achieving goals. We demonstrate our approach to be robust, controllable, and able to cover a large set of group behaviors.
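A rough Python sketch of the idea of intersecting collision-free velocities with a "connection" constraint is given below. The discrete collision test, the fixed deviation band around the group velocity, and the sampling scheme are simplifying assumptions, not the paper's Velocity Connection formulation.

    import numpy as np

    def admissible(v, neighbours, group_vel, radius=0.3, horizon=4.0, max_dev=0.5):
        """Hypothetical test combining two constraints:
        - collision-free over the horizon (coarse, time-sampled check),
        - 'connection': stay within max_dev m/s of the group's average velocity."""
        if np.linalg.norm(v - np.asarray(group_vel, float)) > max_dev:
            return False
        for p_rel, v_other in neighbours:
            p_rel, v_rel = np.asarray(p_rel, float), v - np.asarray(v_other, float)
            for t in np.linspace(0.1, horizon, 20):
                if np.linalg.norm(p_rel - v_rel * t) < 2 * radius:
                    return False
        return True

    def pick_velocity(pref_vel, neighbours, group_vel, v_max=1.8, n=300, seed=1):
        """Sample candidate velocities and keep the admissible one closest to the
        agent's preferred velocity; fall back to the group velocity otherwise."""
        rng = np.random.default_rng(seed)
        pref_vel, group_vel = np.asarray(pref_vel, float), np.asarray(group_vel, float)
        ang, spd = rng.uniform(0, 2 * np.pi, n), rng.uniform(0, v_max, n)
        cands = np.stack([spd * np.cos(ang), spd * np.sin(ang)], axis=1)
        ok = [v for v in cands if admissible(v, neighbours, group_vel)]
        return min(ok, key=lambda v: np.linalg.norm(v - pref_vel)) if ok else group_vel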
Weizi Li (1), David Wolinski (2), Julien Pettré (2), Ming C. Lin (1)
(1) University of North Carolina at Chapel Hill, USA
(2) Inria Rennes, France
Computer Graphics Forum (Eurographics 2015)
Representing the majority of living animals, insects are the most ubiquitous biological organisms on Earth. Being able to simulate insect swarms could enhance visual realism of various graphical applications. However, the very complex nature of insect behaviors makes its simulation a challenging computational problem. To address this, we present a general biologically-inspired framework for visual simulation of insect swarms. Our approach is inspired by the observation that insects exhibit emergent behaviors at various scales in nature. At the low level, our framework automatically selects and configures the most suitable steering algorithm for the local collision avoidance task. At the intermediate level, it processes insect trajectories into piecewise-linear segments and constructs probability distribution functions for sampling waypoints. These waypoints are then evaluated by the Metropolis-Hastings algorithm to preserve global structures of insect swarms at the high level. With this biologically inspired, data-driven approach, we are able to simulate insect behaviors at different scales and we evaluate our simulation using both qualitative and quantitative metrics. Furthermore, as insect data could be difficult to acquire, our framework can be adopted as a computer-assisted animation tool to interpret sketch-like input as user control and generate simulations of complex insect swarming phenomena.
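Since the abstract mentions evaluating sampled waypoints with the Metropolis-Hastings algorithm, here is a generic Python sketch of such a sampler with a Gaussian random-walk proposal. The example density in the comment is a toy stand-in for a distribution estimated from real insect trajectories.

    import numpy as np

    def metropolis_hastings(density, x0, n_steps=1000, step=0.5, seed=2):
        """Generic Metropolis-Hastings sampler with a symmetric (Gaussian)
        random-walk proposal. 'density' is an unnormalized probability density
        over 2D waypoints."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        px = density(x)
        samples = []
        for _ in range(n_steps):
            cand = x + rng.normal(scale=step, size=x.shape)
            p_cand = density(cand)
            if rng.uniform() < p_cand / max(px, 1e-12):   # accept with the density ratio
                x, px = cand, p_cand
            samples.append(x.copy())
        return np.array(samples)

    # Usage with a toy density (a hypothetical waypoint distribution):
    # swarm_pdf = lambda p: np.exp(-0.5 * (np.linalg.norm(p) / 2.0) ** 2)
    # waypoints = metropolis_hastings(swarm_pdf, x0=[0.0, 0.0])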
A common issue in three-dimensional animation is the creation of contacts between a virtual creature and the environment. Contacts allow force exertion, which produces motion. This paper addresses the problem of computing contact configurations that allow a virtual creature to perform motion tasks such as getting up from a sofa, pushing an object or climbing. We propose a two-step method to generate contact configurations suitable for such tasks. The first step is an offline sampling of the range of motion (ROM) of a virtual creature. The ROM of the human arms and legs is precisely determined experimentally. The second step is a run-time query that confronts the samples with the current environment. The best contact configurations are then selected according to a heuristic for task efficiency. The heuristic is inspired by the force transmission ratio.
Given a contact configuration, it measures the potential force that can be exerted in a given direction. The contact configurations are then used as inputs for an inverse kinematics solver that will compute the final animation. Our method is automatic and does not require examples or motion capture data. It is suitable for real time applications and applies to arbitrary creatures in arbitrary environments. Various scenarios (such as climbing, crawling, getting up, pushing or pulling objects) are used to demonstrate that our method enhances motion autonomy and interactivity in constrained environments.
When avoiding a group, a walker has two possibilities: either he goes through it or around it. Going through very dense groups or around huge ones would not seem natural and could break any sense of presence in a virtual environment. This paper aims to enable crowd simulators to handle such situations correctly. To this end, we need to understand how real humans decide to go through or around groups. As a first hypothesis, we apply the Principle of Minimum Energy - PME - to different group sizes and densities. According to this principle, a walker should go around small and dense groups, whereas he should go through large and sparse groups. This principle has already been used for crowd simulation; the novelty here is to apply it to decide on a global avoidance strategy instead of local adaptations only. Our study quantifies decision thresholds. However, PME leaves some inconclusive situations for which the two solution paths have similar energetic costs. In a second part, we propose an experiment to corroborate PME decision thresholds with real observations. As controlling the factors of an experiment with many people is extremely hard, we propose to use Virtual Reality as a new method to observe human behavior. This work represents the first crowd simulation algorithm component directly designed from a VR-based study. We also consider the role of secondary factors in inconclusive situations. We show the influence of the group appearance and direction of relative motion in the decision process. Finally, we draw some guidelines to integrate our conclusions into existing crowd simulators and show an example of such integration. We evaluate the achieved improvements.
Pages 119-127
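For intuition, the Principle of Minimum Energy can be illustrated with the classic walking-energy model E = m (e_s + e_w v^2) t. The Python sketch below compares a "through" path (shorter, but walked at a reduced speed) with an "around" path (longer, at comfort speed). The constants are commonly cited values, not the calibration used in this study.

    def path_energy(length, speed, mass=70.0, e_s=2.23, e_w=1.26):
        """Energy of walking a path of given length at constant speed, using the
        E = m * (e_s + e_w * v^2) * t model (constants are commonly used values)."""
        t = length / speed
        return mass * (e_s + e_w * speed ** 2) * t

    def prefer_through(len_through, v_through, len_around, v_around=1.4):
        """PME decision: go through the group if that path costs less energy.
        Walking through a dense group typically forces a lower speed v_through."""
        return path_energy(len_through, v_through) < path_energy(len_around, v_around)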
Abstract:
When navigating in crowds, humans are able to move efficiently between people. They look ahead to know which path would reduce the complexity of their interactions with others. Current navigation systems for virtual agents consider long-term planning to find a path in the static environment and short-term reactions to avoid collisions with close obstacles. Recently, some mid-term considerations have been added to avoid high-density areas. However, there is no mid-term planning among static and dynamic obstacles that would enable the agent to look ahead and avoid difficult paths or find easy ones as humans do. In this paper, we present a system for such mid-term planning. This system is added to the navigation process, between path finding and local avoidance, to improve the navigation of virtual agents. We show the capacities of such a system on several case studies. Finally, we use an energy criterion to compare trajectories computed with and without the mid-term planning.
In this paper, we present a new model to simulate following behavior. This model is based on a dynamic following distance that changes according to the follower’s speed and to the leader’s motion. The following distance is associated with a prediction of the leader’s future position to give an ideal following position. We show the resulting following trajectory and detail the importance of the distance variation in different situations. The model is evaluated using real data. We demonstrate the capacity of our model to reproduce macroscopic patterns and show that it is also able to synthesize trajectories similar to real ones. Finally, we compare our results with other following models and point out the improvements.
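A minimal Python sketch in the spirit of this model is given below: the desired gap grows with the follower's speed and is measured behind a short-horizon prediction of the leader's position. The linear gap law and the constants are illustrative assumptions, not the calibrated model.

    import numpy as np

    def following_target(leader_pos, leader_vel, follower_speed,
                         d0=0.6, k=0.8, horizon=0.8):
        """Hypothetical following rule:
        - the desired gap grows with the follower's speed (gap = d0 + k * v),
        - the leader's position is extrapolated over a short horizon,
        - the ideal position sits that gap behind the predicted leader position."""
        leader_pos = np.asarray(leader_pos, float)
        leader_vel = np.asarray(leader_vel, float)
        gap = d0 + k * follower_speed
        predicted = leader_pos + horizon * leader_vel
        speed = np.linalg.norm(leader_vel)
        direction = leader_vel / speed if speed > 1e-9 else np.array([1.0, 0.0])
        return predicted - gap * direction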
We present a framework for evaluating and comparing crowd simulation algorithms based on real-world observations of crowd movements. A key aspect of our approach is to enable fair comparisons by automatically estimating the parameters that enable the simulation algorithms to best fit the given data. We formulate parameter estimation as an optimization problem, and propose a general framework to solve this combinatorial optimization problem for all parameterized crowd simulation algorithms. Our framework supports a variety of metrics to compare reference data and simulation outputs. The reference data may correspond to recorded trajectories, macroscopic parameters, or artist-driven sketches. We demonstrate the benefits of our framework for example-based simulation, modeling of cultural variations, artist-driven crowd animation, and relative comparison of some widely-used multi-agent simulation algorithms.
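The Python sketch below shows the overall idea, parameter estimation as optimization of a data-fitting metric, using a simple random search and a point-wise trajectory metric. The actual framework supports several metrics and a more elaborate optimization strategy, so the names and choices here are assumptions for illustration.

    import numpy as np

    def trajectory_metric(sim, ref):
        """One possible metric: mean point-wise distance between simulated and
        reference trajectories of equal shape (arrays of shape (T, N, 2))."""
        return float(np.linalg.norm(np.asarray(sim) - np.asarray(ref), axis=-1).mean())

    def estimate_parameters(simulate, ref, bounds, n_trials=200, seed=3):
        """Algorithm-independent random search: 'simulate(params)' runs any
        parameterized crowd simulation algorithm and returns trajectories;
        'bounds' maps each parameter name to a (low, high) range."""
        rng = np.random.default_rng(seed)
        best, best_err = None, np.inf
        for _ in range(n_trials):
            params = {name: rng.uniform(*rng_bounds) for name, rng_bounds in bounds.items()}
            err = trajectory_metric(simulate(params), ref)
            if err < best_err:
                best, best_err = params, err
        return best, best_err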
In addition, we ensure a predictable level of realism for animations. We provide virtual population designers and animators with a powerful framework which (i) enables them to clone crowd motion examples while preserving the complexity and the aspect of group motion and (ii) is able to animate large-scale crowds in real-time. Our contribution is the formulation of the cloning problem as a double search problem. Firstly, we search for almost periodic portions of crowd motion data through the available examples. Secondly, we search for almost symmetries between the conditions at the limits of these portions in order to interconnect them. The result of our searches is a set of crowd patches that contain portions of example data that can be used to compose large and endless animations. Through several examples prepared from real crowd motion data, we demonstrate the advantageous properties of our approach as well as identify its potential for future developments.
The nature of these interactions is varied, and it has been observed that macroscopic phenomena emerge from the combination of these local interactions. Crowd models have hitherto considered collision avoidance as the unique type of interaction between individuals; few have considered walking in groups. By contrast, our paper focuses on interactions due to the following behaviors of pedestrians. Following is frequently observed when people walk in corridors or when they queue. Typical macroscopic stop-and-go waves emerge under such traffic conditions. Our contributions are, first, an experimental study on following behaviors, second, a numerical model for simulating such interactions, and third, its calibration, evaluation and applications. Through an experimental approach, we elaborate and calibrate a model from microscopic analysis of real kinematics data collected during experiments. We carefully evaluate our model both at the microscopic and the macroscopic levels. We also demonstrate our approach on applications where following interactions are prominent.
This paper presents the Joyman, a new interface for immersive locomotion in virtual environments. Whereas many previous interfaces preserve or stimulate the user's proprioception, the Joyman aims at preserving equilibrioception in order to improve the feeling of immersion during virtual locomotion tasks. The proposed interface is based on the metaphor of a human-scale joystick. The device has a simple mechanical design that allows a user to indicate his virtual navigation intentions by leaning accordingly. We also propose a control law inspired by the biomechanics of human locomotion to transform the measured leaning angle into a walking direction and speed - i.e., a virtual velocity vector.
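A plausible form of such a control law is sketched below in Python: the leaning azimuth gives the walking direction, and the leaning amplitude, after a dead zone, is mapped to a forward speed. The dead zone, saturation and constants are assumptions for illustration, not the actual biomechanics-inspired law.

    import math

    def joystick_control(lean_angle, lean_azimuth, dead_zone=0.05,
                         max_lean=0.35, v_max=2.0):
        """Map a measured lean (angle and azimuth, in radians) to a 2D virtual
        velocity vector: azimuth sets the direction, amplitude sets the speed."""
        amplitude = max(0.0, lean_angle - dead_zone) / (max_lean - dead_zone)
        speed = v_max * min(1.0, amplitude)
        return speed * math.cos(lean_azimuth), speed * math.sin(lean_azimuth)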
Populating virtual environments (VEs) with large crowds is a subject that has been tackled for several years. Solutions have been proposed to offer realistic trajectories as well as interactivity, but limitations remain on the dimensions of the environment with respect to the population density. In this paper, we extend the concept of motion patches to densely populate large environments. We build a population from a set of blocks containing a pre-computed local crowd simulation. Each block is called a crowd patch. We address the problem of computing patches, assembling them to create VEs, and controlling their content to answer designers' needs. Our major contribution is to provide a drastic lowering of computation needs for simulating a virtual crowd at run-time. We can thus handle dense populations in large-scale environments with a performance level never reached so far. Our results illustrate the real-time population of a potentially infinite city with realistic and varied crowds interacting with each other and their environment. We discuss the advantages and drawbacks of the proposed solution, and its possible improvements in the future.
SIGGRAPH 2010 Papers
In the everyday exercise of controlling their locomotion, humans rely on the optic flow of the perceived environment to achieve collision-free navigation. In crowds, in spite of the complexity of an environment made of numerous obstacles, humans demonstrate remarkable capacities in avoiding collisions. Cognitive science work on human locomotion states that relatively succinct information is extracted from the optic flow to achieve safe locomotion. In this paper, we explore a novel vision-based approach of collision avoidance between walkers that fits the requirements of interactive crowd simulation. In imitation of humans, and based on cognitive science results, we detect future collisions as well as their dangerousness from visual stimuli. The motor response is twofold: a reorientation strategy is used to avoid future collisions, whereas a deceleration strategy is used to avoid imminent collisions. Several examples of our simulation results show that the emergence of self-organized patterns of walkers is reinforced using our approach. Emergent phenomena are visually appealing. More importantly, they improve the overall efficiency of the walkers' traffic and avoid improbable locking situations.
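Schematically, the twofold motor response can be written as in the Python sketch below; the bearing-angle test, thresholds and adaptation magnitudes are illustrative assumptions rather than the paper's tuned visual-stimuli model.

    def motor_response(ttc, bearing_derivative, heading, speed,
                       ttc_imminent=1.0, ttc_far=8.0, alpha_dot_eps=0.1,
                       turn=0.2, brake=0.5):
        """Schematic twofold response (thresholds are illustrative): a roughly
        constant bearing angle with a finite time-to-collision signals a future
        collision, handled by reorienting; an imminent collision is handled by
        decelerating. Angles in radians, speed in m/s."""
        if ttc < ttc_imminent:
            return heading, max(0.0, speed - brake)           # decelerate
        if ttc < ttc_far and abs(bearing_derivative) < alpha_dot_eps:
            side = 1.0 if bearing_derivative >= 0 else -1.0
            return heading + side * turn, speed               # reorient
        return heading, speed                                 # no danger detected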
Barbara Yersin, Jean-Paul Laumond and Daniel Thalmann
CASA 2006
--
This paper introduces a framework for real-time simulation and rendering of crowds navigating in a virtual environment. The solution first consists of a specific environment preprocessing technique that gives rise to navigation graphs, which are then used by the navigation and simulation tasks. Second, navigation planning interactively provides various solutions to user queries, allowing a crowd to be spread by individualizing trajectories. A scalable simulation model enables the management of large crowds, while saving computation time for rendering tasks. Pedestrian graphical models are divided into three rendering fidelities ranging from billboards to dynamic meshes, allowing close-up views of detailed digital actors with a large variety of locomotion animations. Examples illustrate our method in several environments with crowds of up to 35'000 pedestrians with real-time performance.