
VISUAL CONTROL OF VELOCITY OF APPROACH BY PIGEONS WHEN LANDING

Whereas previous work on the visual control of birds’ landing manoeuvres has concentrated on the timing of a single discrete action – the extension of the feet – the present analysis is concerned primarily with a different aspect of landing, the control of braking. Braking appropriately is important: if the bird brakes too hard it will stop short, drop and miss the perch; if it does not brake hard enough it will be unable to check its momentum when its feet hit the perch and will tip forward. In either case injury could result. We will first outline a theory of how the bird might visually control its braking and then present evidence testing the theory from film analyses of pigeons landing.
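The abstract leaves the braking theory unstated; a well-known candidate in this literature is Lee's tau hypothesis, under which the bird holds the rate of change of time-to-contact (tau = distance / closing speed) at a constant value between -1 and 0. A minimal sketch of that strategy (the choice of theory and all numbers here are our assumption, not taken from the paper):

```python
# Sketch of tau-dot braking: hold d(tau)/dt at a constant k in (-1, 0).
# With tau = x / v (time-to-contact), tau_dot = -1 + a * x / v**2, so holding
# tau_dot = k demands the deceleration a = (1 + k) * v**2 / x.  For k = -0.5
# that deceleration is constant and x and v reach zero together.
def simulate_tau_braking(x=2.0, v=4.0, k=-0.5, dt=1e-5):
    """Integrate an approach controlled by constant tau-dot (x in m, v in m/s)."""
    decels = []
    while x > 1e-3 and v > 1e-3:
        a = (1.0 + k) * v * v / x      # deceleration demanded by tau_dot = k
        decels.append(a)
        x -= v * dt
        v -= a * dt
    return x, v, decels

x_end, v_end, decels = simulate_tau_braking()
```

Distance and speed vanish together, which is exactly the "brake so as to stop at the perch" behaviour the passage describes: braking too hard or too softly corresponds to tau-dot drifting outside (-1, 0).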

Biologically Inspired Visual Control of Flying Robots

Insects possess an incredible ability to navigate their environment at high speed, despite having small brains and limited visual acuity. Through selective pressure they have evolved computationally efficient means of simultaneously performing navigation tasks and instantaneous control responses. The insect's main source of information is visual, and through a hierarchy of processes this information is used for perception; at the lowest level are local neurons for detecting image motion and edges, and at higher levels are interneurons that spatially integrate the output of the previous stages. These higher-level processes can be considered models of the insect's environment, reducing the amount of information to only that which evolution has determined relevant. The scope of this thesis is experimenting with biologically inspired visual control of flying robots through information processing, models of the environment, and flight behaviour.
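The "local neurons for detecting image motion" mentioned above are classically modelled as Hassenstein-Reichardt correlators. A minimal one-dimensional sketch, with two photoreceptors and a first-order low-pass filter standing in for the neural delay (all parameters are illustrative):

```python
import math

def emd_response(direction=+1, omega=2 * math.pi * 2.0, delay_gap=0.05,
                 alpha=0.05, dt=0.001, steps=20000):
    """Mean output of a two-photoreceptor Hassenstein-Reichardt correlator.

    A sinusoidal luminance pattern drifts past receptors 1 and 2; receptor 2
    sees the same signal shifted by direction * delay_gap seconds.  Each
    half-detector multiplies the low-pass-filtered (delayed) signal of one
    receptor with the raw signal of the other; the opponent difference is
    positive for motion from 1 towards 2 and negative for the reverse.
    """
    d1 = d2 = 0.0
    total = 0.0
    for n in range(steps):
        t = n * dt
        s1 = math.sin(omega * t)
        s2 = math.sin(omega * (t - direction * delay_gap))
        d1 += alpha * (s1 - d1)          # first-order low-pass = internal delay
        d2 += alpha * (s2 - d2)
        total += d1 * s2 - d2 * s1       # opponent subtraction
    return total / steps

r_forward = emd_response(direction=+1)
r_backward = emd_response(direction=-1)
```

The sign of the time-averaged output encodes motion direction without ever computing velocity explicitly, which is why this circuit is a popular model of computationally cheap insect motion vision.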

Visual control of steering in the box jellyfish Tripedalia cystophora

Tripedalia cystophora is a remarkable cnidarian. It lives in the mangrove swamps of the Caribbean, which are rich in food but potentially dangerous for a fragile animal like a jellyfish. Tripedalia cystophora preys on small copepods of the species Dioithona oculata that swarm between the prop roots of the mangrove trees (Buskey, 2003). The copepods gather in the light shafts filtering through the overhead canopy. Tripedalia cystophora uses its visual system to detect the light shafts, but it cannot see the copepods themselves and will forage readily even in empty light shafts (Buskey, 2003). The visual system of all box jellyfish is distributed across four sensory clusters, called rhopalia (Fig. 1A), each carrying six eyes (Claus, 1878; Conant, 1898; Berger, 1900; Yamasu and Yoshida, 1976; Laska and Hündgen, 1982). Each rhopalium contains one lens eye looking upward (the upper lens eye), one lens eye looking obliquely downward (the lower lens eye), one pair of lens-less pit eyes looking upward (the pit eyes) and one pair of slit-shaped lens-less eyes looking obliquely downward (the slit eyes). Interestingly, the visual fields of the eyes that monitor the underwater world, the lower lens eye and the slit eyes, are normally directed inward towards the centre of the bell, with the result that the animal 'looks through' its own bell. This unique visual system enables the medusae to display visually guided behaviours that appear remarkable for a 'simple' cnidarian. They can (1) navigate towards, and maintain position within, the light shafts where their prey gathers (Buskey, 2003; Garm and Bielecki, 2008), (2) avoid obstacles in the water (Garm et al., 2007b) and (3) use visual cues seen through the water surface to find their way back to the mangrove trees when washed out (Garm et al., 2011).

Visual control of flight speed in Drosophila melanogaster

While our method allowed us to characterize an important free-flight reflex in a two-dimensional spatio-temporal parameter space, the one-parameter open-loop condition was experimentally induced, raising the question of whether our measurements were subject to artefacts resulting from the highly artificial experimental conditions. A first question relates to other sensory modalities, such as mechanosensory feedback from the halteres and antennae and olfactory cues, which remained in closed loop and might have provided conflicting cues. The experimentally induced disparity between the visual and other sensory inputs is in fact by no means unnatural, and closely mimics a control scenario faced by a fly flying upwind. Whether flying against a constant wind or in still air, a fly adjusts its air speed so as to maintain a constant 'preferred' retinal pattern slip speed [see fig. 3 in David (David, 1982)] (see also Introduction). This situation is similar to our pre-test condition, in which the fly was induced to hover near the middle of the wind tunnel, where the visual pattern motion matched the fly's preferred retinal slip speed. In a natural environment, a gust of wind from the front could easily cause the fly to momentarily slow down or even be carried backward, in which case the fly would perceive regressive (back-to-front) retinal slip. At this moment, the retinal slip depends largely on the strength of the wind gust and not on the fly's behavior; a situation closely corresponding to the open-loop condition we implemented using TrackFly.
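The closed-loop scenario described, a fly holding retinal slip at a preferred value and experiencing regressive slip during a frontal gust, can be caricatured with a simple proportional controller (the model structure and all gains are our illustrative assumptions, not TrackFly's):

```python
def simulate_slip_hold(preferred_slip=0.3, gain=4.0, dt=0.01, t_end=10.0):
    """Toy model: a fly adjusts airspeed so retinal slip stays at a set point.

    Retinal slip is taken as proportional to ground speed (airspeed + wind,
    headwind negative).  A gust between t = 3 s and 3.5 s briefly pushes the
    fly backwards, producing regressive (negative) slip that the proportional
    controller then corrects.
    """
    airspeed = 0.0
    history = []
    for i in range(int(t_end / dt)):
        t = i * dt
        wind = -0.8 if 3.0 <= t < 3.5 else -0.2   # steady headwind, gust at 3 s
        slip = airspeed + wind                    # slip ~ ground speed
        airspeed += gain * (preferred_slip - slip) * dt
        airspeed = max(airspeed, 0.0)             # a fly cannot fly backwards on purpose here
        history.append((t, slip))
    return history

history = simulate_slip_hold()
```

During the gust the modelled slip goes negative (regressive), exactly the open-loop-like moment the passage describes, and the controller restores the preferred slip once the gust passes.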

Circuit Breaker Screw Mounting Machine Based on Visual Control

A molded case circuit breaker screw mounting machine requires not only careful mechanism design but also accurate positioning in three-dimensional space. An industrial CCD camera images the screw and the screwdriver tip from the bottom and the front to capture the three-dimensional coordinates of the screw hole and the screw head, and sends them to the STM32 motion controller for processing. The STM32 motion controller transfers the processed information to the stepper motors driving the X, Y and Z axes to complete the adjustment and positioning of the screw and screwdriver. After positioning, the screwdriver is started and the feed stepper motor completes the installation of the screw. Based on the signals transmitted to the motion control card by the torque sensor and the CCD camera, the machine determines whether the tightening torque of the screw and any deviation in installation position and attitude meet the requirements. LEDs on the STM32 motion control board indicate whether the next screw should be installed.
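The adjustment-and-positioning step reduces to converting the measured offset between screw and hole into signed step counts for each axis. A hypothetical sketch (the 0.01 mm/step resolution and the example coordinates are assumptions for illustration, not figures from the paper):

```python
def steps_to_align(screw_xyz_mm, hole_xyz_mm, mm_per_step=0.01):
    """Convert the offset between the screw tip and the screw hole (as measured
    from the two CCD views) into signed step counts for the X, Y, Z steppers.
    The 0.01 mm/step resolution is an assumed figure, not from the paper."""
    return tuple(round((hole - screw) / mm_per_step)
                 for screw, hole in zip(screw_xyz_mm, hole_xyz_mm))

# e.g. screw tip at (10.00, 5.50, 2.00) mm, hole centre at (10.25, 5.40, 2.00) mm
dx, dy, dz = steps_to_align((10.00, 5.50, 2.00), (10.25, 5.40, 2.00))
```

In a real machine the controller would issue these counts to the axis drivers, re-image, and iterate until the residual offset is below tolerance before starting the screwdriver.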

Visual control of flight speed in honeybees

All of the experiments were conducted in a rectangular tunnel that had clear Perspex walls, which allowed bees flying through the tunnel to view a variety of stationary or moving visual patterns (see below). The tunnel was 320 cm long, 20 cm high and 22 cm wide. A clear Perspex ceiling permitted observation and filming of the bees as they flew in the tunnel (Fig. 1A). The floor of the tunnel was white and provided no visual texture. For each experiment, up to 20 bees were individually marked and trained to fly to a feeder containing sugar solution placed at the far end of the tunnel. Flights to the feeder were filmed in the central 1.45 m segment of the tunnel by a digital video camera (Sony DCR-TRV410E; Sony Corporation, Tokyo, Japan) positioned 2.5 m above the tunnel floor (Fig. 1A). The recorded flights were analysed by an automated tracking program developed in-house using Matlab software (v.6.5.0; The MathWorks Inc., Natick, MA, USA).

Visual Control of an Industrial Robot Manipulator: Accuracy Estimation

Lately, robot manufacturers have put considerable effort into incorporating visual and other sensors into industrial robots, making significant improvements in accuracy, flexibility and adaptability. Vision is still probably the most promising sensor [2] for real robotic 3D servoing problems [3]. It has been investigated extensively in laboratories over the last two decades, but only now has it found its way into industrial implementation [4], in contrast to machine vision, which has become a well-established industry in recent years [5]. The vision systems used in robots must satisfy a few constraints that make them different from machine vision measuring systems. Firstly, camera working distances are much larger, especially with larger robots whose reach can be a few meters. Measuring with high precision at such distances demands much higher resolution, which, consequently, increases the cost.

Seeing things

We went on to propose a state-space model of active vision, in which visual signals provide feedback, leading to the view of perception as state estimation. The problem of map-building was used to illustrate this idea and to show that even in this essentially perceptual task, active systems can have a considerable advantage, in terms of learning time. Moreover, it could be argued that many learning tasks can be modelled in this way - the ability to 'navigate around' some conceptual domain is an appealing metaphor for that which we commonly call knowledge. As a final example, these ideas were put together in a simple visual control task, which showed that the learning of a state estimator based on visual input is effective in motor control: the system learns and uses a symbolic representation of the world state to control movement in the presence of errors. In this connection, it is interesting to speculate on what vision has to tell us about the more general use of symbols. It is clear that concepts such as invariance and equivariance make sense in an essentially geometric domain, such as vision, but their usefulness in more general contexts seems questionable: could they help to explain what is going on in the Chinese Room? It may be that, just as we were obliged in the Active Vision framework to introduce a world model into our analysis, so as that world model becomes more complex, there will be symbols of a more abstract nature definable in the same terms as those we have discussed. We are in no position to make general prognostications on this subject, but perhaps it is appropriate to quote someone who gave some thought to such matters - Wittgenstein: "A name signifies only what is an element of reality. What cannot be destroyed; what remains the same in all changes." [78].
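The "perception as state estimation" view can be made concrete with the simplest possible case: a scalar Kalman filter fusing noisy visual measurements of a static world state. This is a toy illustration of the idea, not the map-building system discussed above:

```python
import random

def kalman_estimate(measurements, q=1e-4, r=1.0):
    """Scalar Kalman filter: estimate x with variance p, given measurement
    noise variance r and (small) process noise variance q."""
    x, p = measurements[0], 1.0
    for z in measurements[1:]:
        p += q                  # predict: the world state may drift slightly
        k = p / (p + r)         # Kalman gain: trust in the new visual sample
        x += k * (z - x)        # correct the estimate with the measurement
        p *= (1 - k)            # posterior variance shrinks
    return x

rng = random.Random(0)
true_state = 5.0
zs = [true_state + rng.gauss(0, 1.0) for _ in range(200)]
estimate = kalman_estimate(zs)
```

The estimator's error is far smaller than that of any single noisy observation, which is the quantitative sense in which feedback from many glances beats a single look.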

Sparse visual models for biologically inspired sensorimotor control

in the V1 layer. Then, the training set X = {x_1, ..., x_N} for the V2 layer was obtained by randomly extracting 24×24 patches (recall that the receptive field of a V2 cell is typically 2-3 times larger than that of a V1 cell) from 50 complex cell responses among the CCs. Using 20,000 such patches, we trained the weights in the V2 layer using the methods described in (Olshausen and Field 1997) and (Hoyer and Hyvarinen 2002). Combining sparse coding and a non-negativity constraint, after 40 iterations the learned 288 weights/receptive fields of the V2 cells are shown in Fig. 5. This process took 6 hours running MATLAB on the same computer mentioned above. Visually, the basis patterns differ in position, orientation, and length. Moreover, to characterize the learned V2 cell receptive fields, we approximated them in the parameter space as done in (Hoyer and Hyvarinen 2002). The main results are shown in Fig. 6, which shows a richer tuning of orientation and length than has been reported before. This kind of length tuning, or the property of end-stopped cells, is very interesting for visual feature representation. As pointed out in (Hoyer and Hyvarinen 2002), the necessity for basis patterns of different lengths comes from the fact that long basis patterns simply cannot code short (or curved) contours, and short basis patterns are inefficient at representing long, straight contours.
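The inference half of sparse coding with a non-negativity constraint can be sketched as projected ISTA. Random data stands in here for the complex-cell patches, and the 576 x 288 dictionary shape mirrors the 24x24 patches and 288 V2 cells mentioned above; the algorithm is a generic stand-in, not the exact method of Olshausen and Field or Hoyer and Hyvarinen:

```python
import numpy as np

def nn_sparse_code(x, W, lam=0.1, n_iter=200):
    """Infer a non-negative sparse code h minimising
    ||x - W h||^2 / 2 + lam * sum(h),  subject to h >= 0,
    by projected gradient descent (ISTA with a positivity projection)."""
    step = 1.0 / np.linalg.norm(W, 2) ** 2     # 1 / Lipschitz constant of the gradient
    h = np.zeros(W.shape[1])
    for _ in range(n_iter):
        grad = W.T @ (W @ h - x)
        h = np.maximum(0.0, h - step * (grad + lam))   # shrink, then project onto h >= 0
    return h

rng = np.random.default_rng(0)
W = np.abs(rng.normal(size=(576, 288)))        # 24x24 patches, 288 atoms
W /= np.linalg.norm(W, axis=0)                 # unit-norm basis vectors
x = W @ np.maximum(0.0, rng.normal(size=288) - 1.0)   # sparse non-negative ground truth
h = nn_sparse_code(x, W)
```

The positivity projection is what enforces the non-negative constraint discussed in the text, while the lam term drives most coefficients toward zero.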

Active gaze, visual look-ahead, and locomotor control

To examine the use of gaze during locomotor control we carried out a set of 4 behavioral experiments to compare our predictions against human performance. In Experiment 1 we undertook some observational analyses of where participants look when they are steering freely through a challenging slalom course to see whether they adopted a tight temporal gaze pattern when setting up their trajectories between gates. Experiment 2 examined gaze patterns experimentally to examine when participants preferred to move gaze from the immediate slalom gate to the next gate in the series. In Experiment 3 we explored what happened when gaze patterns were disrupted and participants were forced to move their gaze ahead earlier than they would prefer, or were prevented from looking ahead as early as they would like. We also addressed whether participants monitored their current target alignment, through peripheral vision or some form of visual-spatial buffer. In Experiment 4 we examined whether precise target fixation was important, or whether the steering task could be completed when gaze is directed towards the approximate zone of a gate but slightly offset. To probe the nature of the visual-spatial buffer Experiment 4 also examined how well observers were able to monitor their time to passage for steering targets while still attending to future waypoints.

Abnormal visual gain control in a Parkinson's disease model

Figure 1. Recording and analysing the visual response with SSVEP. (A) Flies are restrained in a Gilson pipette tip and illuminated by a blue light-emitting diode (LED) driven by a continuously flickering waveform. Electrodes on the eye and mouth record the response of the visual network. The signal is amplified and digitized. (B) The stimulus is the sum of 2 square waves (1F1 and 1F2). (C) A typical recording, showing 1 s of data from a single trial in one white-eyed fly. (D) The response is separated into its component parts by frequency. In this experiment, the stimulus had two input frequencies (12 and 15 Hz). Harmonics of the inputs are shown in the Fourier transform of the signal as green bars. Low-order intermodulation terms (e.g. 1F2-1F1, 2F1+2F2) are shown in brown. (E) Responses at any given frequency have a complex phase as well as an amplitude. This can be illustrated in a polar plot where amplitude is mapped along the radial direction and decreasing angle in the clockwise direction indicates increasing phase lag. Here, the response to a 60% contrast measured at 1F1 in a mutant phenotype is illustrated. The shaded circle indicates the complex standard error of the mean computed across individual flies. (F) Diagram of the structure of the fly visual system. This includes the photoreceptors and the second-order amacrine (A) and lamina neurons (L1, L2). It also shows two types of medulla neurons (C and T) that project to the lamina. The visual lobes also include dopaminergic cells (DA), some intrinsic to the medulla, others projecting from the CNS to the lamina. For each category of neuron, only one or two representative cells are shown. Diagram based on silver staining (23) and dopaminergic reporters (9).
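The reason intermodulation terms such as 1F2-1F1 are diagnostic of gain control is that they appear only when the two inputs pass through a shared nonlinearity; a linear system responds only at the stimulus frequencies and their own harmonics. A sketch with the figure's stimulus frequencies (the squaring nonlinearity is an arbitrary illustrative choice, not the fly's actual transfer function):

```python
import numpy as np

fs, T = 256.0, 4.0                       # sample rate (Hz) and duration (s)
t = np.arange(0, T, 1.0 / fs)
f1, f2 = 12.0, 15.0                      # the two stimulus frequencies from the figure
stim = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

def amplitude_spectrum(signal):
    """One-sided amplitude spectrum; bin spacing is 1/T = 0.25 Hz."""
    return np.abs(np.fft.rfft(signal)) / len(signal)

linear = amplitude_spectrum(stim)                        # linear system: input frequencies only
nonlinear = amplitude_spectrum(stim + 0.5 * stim ** 2)   # shared nonlinearity added

def amp(spec, freq):
    """Amplitude at a given frequency (bin index = freq * T)."""
    return spec[int(round(freq * T))]
```

The nonlinear response shows energy at 1F2-1F1 = 3 Hz and 1F1+1F2 = 27 Hz, while the linear response has none, which is exactly why these brown bars in the figure index the gain-control stage.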

Optimizing Stress with a Microsoft Visual Basic . Net Control

Human response to stress varies greatly in time and space. Sleep and the fear of God are vital stress reducers. A customized server, dvRTFCls, that enhances the usefulness of the Microsoft Visual Basic.NET RichTextBox control was developed using Microsoft Visual Studio 2010. dvRTFCls contains twenty-six functionalities that were exposed in a client application, StressSoft, to seamlessly display medical stresses, their types, symptoms, causes and remedies in various ways and colours to meet users' dynamic tastes and needs. One hundred well-annotated stress diagrams can be shown. dvRTFCls has unlimited applications for displaying text with differing attributes. It was shown that sleep and knowledge of God can effectively reduce stress. Godly fear, which removes distress and enthrones eustress, is discussed comprehensively; the effect of stress on productivity is also addressed. It was established that the Holy Bible is adequate to drastically reduce stress if the divine injunctions in it are adhered to. Software developers will find the customized control and the generic method for data input in this work beneficial. Researchers and medical personnel, as well as distressed individuals, will gain tremendously from using the StressSoft package.

Summation of visual and mechanosensory feedback in Drosophila flight control

It is important to note that although the haltere is the likely source of the signal that modifies visual input, it is not the only possibility. Hengstenberg (1991) presented evidence for as many as eight reflexes that can provide feedback to the neck motor system in Calliphora, and many of these could function similarly to detect mechanical oscillations and control wing motion in Drosophila. Although a previous ablation study indicated that the halteres are required for the major component of the wingbeat response to mechanical oscillation (Dickinson, 1999), interpretation of ablation experiments is somewhat ambiguous and we did not repeat such methods in this study. Aside from the compound eyes, other non-haltere sources of equilibrium feedback include the ocelli, prosternal hairs on the neck, and wing campaniform sensilla. These modalities could contribute to both the basic response to mechanical oscillation and the attenuation of the visual reflex during concurrent presentation. Given that the head was fixed to the thorax and the fly was rigidly fixed to the light display when oscillated, it is unlikely that the ocelli or neck receptors are involved in these effects. However, it is impossible to rule out the contribution of wing sensilla, which could respond to changes in loading during mechanical oscillation or Coriolis forces acting on the wing.

Visual Monitoring for Pouring Quality Control of Fresh Concrete

In 2004, Chen developed a monitoring-alarm device mounted on a vibrating machine [4]. The device used ultrasonic sensors to measure the distance between the device and the surface of the fresh concrete in order to judge whether the poker had been inserted into the mixture or not. Liu [5] applied GPRS technology to wireless data communication between the remote offices (including the database, server and monitoring PCs) and the field watering station when he developed an automatic control and real-time monitoring system for trunk watering on earth-rock dam construction. Burlingame [6] tried to apply infrared thermal imaging technology to determine the extent of vibration, because a working poker vibrator has a higher temperature than the surrounding concrete.

Steady as She Goes: Visual Autocorrelators and Antenna Mediated Airspeed Feedback in the Control of Flight Dynamics in Fruit Flies and Robotics

Previous work in bio-inspired feedback control has considered the role of vision, but the ramifications of using visual feedback with a significant delay, and of compensating with a second sense, have not previously been addressed. Neumann [82] created a simulated helicopter that maintained its attitude relative to the world using horizon detection and could traverse terrain using an omnidirectional visual sensor and correlators. In that work, there was no dynamic feedback for forward velocity regulation: forward velocity was regulated by simply applying a forward force and letting the modeled robot accelerate until the force was counteracted by an equal amount of drag. Thus, the matter of visual time delay in forward flight control is avoided by dispensing with velocity control altogether. Optimistic open-loop feedforward flight control of this form is unlikely to be used by the fly, because disturbances such as wind or wing damage cannot be compensated for. And flapping-wing flight is a fast, almost violent and complicated mechanical motion that can generate strong aerodynamic forces. It seems unlikely that such a complicated mechanism can produce an arbitrary desired thrust without some feedback.
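The cost of a significant visual delay is easy to demonstrate: a proportional-feedback speed loop that is well behaved with near-instantaneous sensing can go unstable once the measurement is delayed. A first-order toy model (all dynamics and parameters are illustrative, not from the thesis):

```python
from collections import deque

def settle_error(delay_s, gain=5.0, dt=0.001, t_end=20.0, v_ref=1.0):
    """First-order speed dynamics v' = -v + u with proportional visual
    feedback u = gain * (v_ref - v(t - delay)).  Returns the largest
    |v - v_ref| over the final 5 s: small if the loop settles, large if
    the delay has made it oscillate."""
    n_delay = int(round(delay_s / dt))
    buf = deque([0.0] * (n_delay + 1))   # delayed measurements of v
    v = 0.0
    worst = 0.0
    for i in range(int(t_end / dt)):
        v_meas = buf.popleft()           # what the 'eye' reports now
        buf.append(v)
        u = gain * (v_ref - v_meas)
        v += (-v + u) * dt               # Euler step of the speed dynamics
        if i * dt > t_end - 5.0:
            worst = max(worst, abs(v - v_ref))
    return worst

stable = settle_error(0.0)      # essentially undelayed feedback: settles
unstable = settle_error(0.5)    # half-second visual delay: oscillates
```

This is the motivation for pairing slow vision with a faster second sense (the antennae) rather than simply raising the visual feedback gain.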

The sensitive period for tactile remapping does not include early infancy

Visual input during development seems crucial for tactile spatial perception, given that late-blind, but not congenitally blind, people are impaired when skin-based and external tactile representations are in conflict (i.e. when the limbs are crossed). To test whether there is a sensitive period during which visual input is necessary, fourteen children (mean age 7.95 years) and a teenager (LM, age 17.38 years), deprived of early vision by cataracts and whose sight was restored during the first five months of life and at age seven, respectively, were tested. Tactile localization with arms crossed and uncrossed was measured. The children showed a crossing effect indistinguishable from that of a control group (N = 28, mean age 8.24 years), while LM showed no crossing effect (N controls = 14, mean age 20.78 years). This demonstrates a sensitive period which, critically, does not include early infancy.

The roles of relevance and expectation for the control of attention in visual search

Event-related potentials (ERPs) were recorded to measure N2pc components elicited by target-color matching and non-matching cue stimuli. The N2pc is an enhanced negativity triggered during visual search tasks over posterior scalp electrodes contralateral to the visual hemifield where a stimulus with target-matching attributes is presented; it typically emerges at around 180-200 ms post-stimulus onset and is interpreted as a marker of the allocation of attention to task-relevant stimuli (e.g., Eimer, 1996; Luck & Hillyard, 1994). By measuring N2pcs to target-matching and non-matching cues that appear prior to the presentation of a search display, these components can also be used to measure the activation states of attentional templates for target-defining features (e.g., Eimer & Kiss, 2008; Grubert & Eimer, 2018). During two-color search, only target-matching cues elicited N2pc components, which were similar in size to the N2pcs measured in a single-color task where all targets were defined by the same color (Grubert & Eimer, 2016). In contrast, no N2pc was triggered by non-matching cues. These observations provide on-line ERP evidence for the hypothesis that multiple color templates can be activated in parallel during the preparation for search, prior to the arrival of search displays.

Distributed Formation Control for Ground Vehicles with Visual Sensing Constraint

Formation control combined with different tasks enables a group of robots to reach a geographical location, avoid collisions, and simultaneously maintain the designed formation pattern. Connection and perception are critical for a multi-agent formation system, particularly when the robots use only vision as a communication method. However, most visual sensors have a limited field of view (FOV), which leaves blind zones. In this case, a gradient-based distributed control law can be designed to keep every robot in the visible zones of the other robots during the formation. This control strategy is designed to be processed independently on each vehicle with no network connection. This thesis assesses the feasibility of applying the gradient descent method to the problem of vision-constrained vehicle formation.
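A gradient-based law of the kind described can be sketched in one dimension: descend on a cost that penalises the neighbour's bearing error relative to our own heading, so that the neighbour is steered back toward the centre of the FOV. This is a toy single-robot illustration under assumed parameters, not the thesis's actual control law:

```python
import math

def heading_update(theta, neighbor_xy, fov_half=math.radians(45), eta=0.2):
    """One gradient-descent step on J(theta) = err^2, where err is the
    neighbour's bearing relative to our heading (wrapped to [-pi, pi]).
    Driving err to zero keeps the neighbour centred in the camera's FOV."""
    bearing = math.atan2(neighbor_xy[1], neighbor_xy[0])
    err = math.atan2(math.sin(bearing - theta), math.cos(bearing - theta))
    grad = -2.0 * err                 # dJ/dtheta for J = err^2
    theta = theta - eta * grad        # gradient descent on the visibility cost
    visible = abs(err) <= fov_half
    return theta, visible

theta = 0.0                           # initial heading; neighbour starts outside the FOV
for _ in range(20):
    theta, visible = heading_update(theta, (1.0, 2.0))
```

Because the update uses only the locally measured bearing, each vehicle can run it independently with no network connection, which is the property the passage emphasises.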

Dynamic Ball Tracking and Hitting Robot Using Vision System

The computer vision technique is cheaper than special-purpose image-processing hardware. The image analysis technique is effective at identifying a moving ball in 3D as well as 2D space. Here the robot works on the basis of the image sequence. Overall, it must have a fast response time. If this is achievable, the concept can be implemented in industrial applications where more precise control of movement is required [3].

Visual Servo Control Based on HSV

At present, visual servo control has become an important research direction in the field of servo control, and a current research hotspot. Jia Bingxi et al. expounded visual servo control from the three aspects of vision system, control strategy and implementation strategy, and surveyed the future of visual servo control [1]. Visual servo control of robots based on an RBF neural network can simplify the control model; simulation results show that the servo control time and the real-time performance of the control system are well optimized [2]. Aiming at the problem of poor real-time control in visual servo systems caused by the complexity of computing the Jacobian matrix, identification using a neural network can effectively improve control accuracy [3]. A double closed-loop structure has been used for dynamic visual servo control of a robot, with the performance of the system verified by simulation in V-rep. A visual servo operation system has been developed that combines autonomous control with teleoperation to control a robot accurately. So-Youn Park et al. proposed an image-based servo control method that does not require prior knowledge of image depth; the acquired image information is transformed into angle information by forward and backward kinematics to achieve effective control of the robot. A linear quadratic Gaussian (LQG) controller has been used to compensate for disturbance motion and realize image-based visual servo control. Aiming at the time-consuming acquisition of robot and camera models, a quadratic fuzzy hybrid controller based on expert knowledge was designed to improve control accuracy [8]. Owing to the complexity and difficulty of image Jacobian identification, Ren Xiaolin et al. proposed an adaptive Kalman filter to process the covariance information of uncertain noise; simulation results show that the algorithm performs well [9].
For monocular vision systems, the motion mapping, error representation and control-law design of servo control have been analyzed, and future trends in monocular visual servoing described. In this paper, HSV colour-space visual servo control is proposed, which not only provides a good man-machine interface but also simplifies the detection model of the system and achieves effective servo control of the robot.
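The reason HSV simplifies the detection model is that hue separates colour identity from brightness, so a target can be segmented with a simple hue band. A minimal sketch of HSV-based detection and the resulting servo command, using only the standard library's colorsys (the colour band, the synthetic image and the gain are illustrative assumptions, not values from the paper):

```python
import colorsys

def detect_target_hsv(image_rgb, hue_lo=0.55, hue_hi=0.72, sat_min=0.4):
    """Return the centroid (row, col) of pixels whose HSV hue falls in
    [hue_lo, hue_hi] (a blue-ish band here) with enough saturation."""
    rows = cols = n = 0
    for r, row in enumerate(image_rgb):
        for c, (R, G, B) in enumerate(row):
            h, s, v = colorsys.rgb_to_hsv(R / 255, G / 255, B / 255)
            if hue_lo <= h <= hue_hi and s >= sat_min:
                rows += r; cols += c; n += 1
    if n == 0:
        return None
    return rows / n, cols / n

# 5x5 grey image with one blue target pixel at (row 1, col 3)
img = [[(128, 128, 128)] * 5 for _ in range(5)]
img[1][3] = (20, 40, 230)
centroid = detect_target_hsv(img)

# image-based servo step: steer the target toward the image centre (gain assumed)
gain = 0.1
pan_cmd = gain * (centroid[1] - 2.0)    # positive -> pan right
tilt_cmd = gain * (centroid[0] - 2.0)   # positive -> tilt down
```

Grey pixels are rejected by the saturation test regardless of brightness, which is the robustness to illumination changes that motivates working in HSV rather than RGB.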
