In addition to the virtual animal, computer animators must create the surrounding environment, referred to as the scene. In the case of 2D animations, most commonly a single background color is used or stimuli are animated on a background image taken from the natural environment (e.g., coral reef in Levy et al. 2014). It is advisable to test the effect of the background color on the test animal, as some colors might produce behavioral changes (e.g., sailfin mollies Poecilia latipinna avoid screens with a black background; Witte K, personal communication). Whether the animation is produced in 2D or 3D on a computer screen, both animation styles are represented on a 2D surface. However, there are possibilities to make the 3-dimensional effect more obvious (Zeil 2000), such as creating an environment with reference objects for size and for depth perception (e.g., plants: Baldauf et al. 2009; pictures of artificial structures known by the test animals: Künzler and Bakker 1998; Zbinden et al. 2004; Mehlis et al. 2008). Depth and size cues in an animation might be provided by illusionary effects (e.g., occlusion of objects and texture gradients), since various animals have been shown to respond to visual illusions in a manner similar to humans (Nieder 2002). All standard animation software provides different options for light sources that can be placed to illuminate the virtual environment. Usually, there are options to change the number, angle, position, filtering, color, and intensity of the light source, so it might be possible to simulate illumination as found in natural environments (e.g., the flickering of underwater light, diffuse scattering of light, or reflection). Illuminating a scene is also a prerequisite for adding realistic shadows to improve the illusion of 3D space (see Gierszewski et al. 2017), a feature also implemented in anyFish 2.0 (Ingley et al. 2015; see Supplementary Box S1 and Table S2).
This section reviews the literature on topics related to tracking dynamic objects with cameras: camera modelling, stereo geometry, feature detection, measurement filtering, and data association strategies. The model used to describe the transformation of a 3D point to a 2D point in the image plane is derived. Once the camera model for an individual camera is derived, stereo vision geometry is expounded. This geometry is ultimately used to determine the position of a 3D point in the environment, given a matched pair of features from the two stereo cameras. Determining the location of this 3D point is required in order to track a moving point in the environment. Various feature detection algorithms are investigated. Once points are found, they are matched using feature descriptors. Each match can then be triangulated into the environment, yielding the 3D state-space location of the detected point. Finally, the issue of determining which measurements were caused by which moving objects must be addressed, and the most popular data association techniques are evaluated.
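For the rectified (parallel-axis) stereo case, the triangulation step described above reduces to similar triangles. The sketch below is a minimal illustration; the focal length, baseline, and pixel coordinates in the usage line are hypothetical example values, and a full pipeline would first undistort and rectify the images.

```python
def triangulate(xl, yl, xr, f, baseline):
    """Triangulate a 3D point from a matched feature pair in a
    rectified stereo rig (pinhole model, parallel optical axes).

    xl, yl   : feature coordinates in the left image (pixels, principal-point origin)
    xr       : x-coordinate of the matched feature in the right image
    f        : focal length in pixels
    baseline : distance between the two camera centres (metres)
    """
    disparity = xl - xr            # horizontal shift between the two views
    if disparity <= 0:
        raise ValueError("point at or beyond infinity")
    Z = f * baseline / disparity   # depth from similar triangles
    X = xl * Z / f                 # back-project through the left camera
    Y = yl * Z / f
    return X, Y, Z
```

For example, with a 500-pixel focal length, a 0.1 m baseline, and a 20-pixel disparity, `triangulate(100, 50, 80, 500, 0.1)` places the point 2.5 m in front of the rig.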
An efficient method for face recognition is Principal Component Analysis (PCA). PCA has been extensively employed in face recognition algorithms and is one of the most popular representation methods for a face image. It not only reduces the dimensionality of the image but also retains some of the variation in the image data. The system works by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "Eigenfaces" because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the Eigenface features, so to recognize a particular face it is necessary only to compare these weights to those of known individuals. The Eigen Object Recognizer class applies PCA to each image, the result of which is an array of eigenvalues that a neural network can be trained to recognize. PCA is a commonly used method for object recognition because its results, when used properly, can be fairly accurate and resilient to noise. The way PCA is applied can vary at different stages, so what is demonstrated here is one clear method for applying it; individuals are free to experiment to find the method that produces the most accurate results. To perform PCA, several steps are undertaken: assemble the training images into a data matrix, one flattened image per row; compute the mean face and subtract it from every image; compute the eigenvectors of the covariance of the centred data (the Eigenfaces); keep the eigenvectors with the largest eigenvalues; and project each face onto this basis to obtain its weight vector.
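The steps above can be sketched with NumPy. The function names `eigenfaces` and `recognise` are illustrative (not the Eigen Object Recognizer API), and the SVD of the centred data is used as a numerically convenient way to obtain the eigenvectors of the covariance matrix.

```python
import numpy as np

def eigenfaces(images, k):
    """Compute the top-k Eigenfaces from equal-sized face images.

    images : (n, h*w) array-like, one flattened face image per row
    Returns the mean face, the k Eigenfaces, and the weight vector
    of each training face in the Eigenface basis.
    """
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    A = X - mean                            # centre the data
    # Rows of Vt are the principal components of the centred data
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    basis = Vt[:k]                          # top-k Eigenfaces
    weights = A @ basis.T                   # project each face onto the basis
    return mean, basis, weights

def recognise(face, mean, basis, weights):
    """Return the index of the training face whose weights are closest."""
    w = (np.asarray(face, dtype=float) - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))
```

Recognition then amounts to a nearest-neighbour comparison of weight vectors, exactly as described above.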
After features have been detected (e.g. salient points or small blocks), they should be matched so that the locations of corresponding features from different images can be determined. The easiest approach is to compare every feature detected in one image against every feature detected in the other. However, such an approach must take into account the fact that physically corresponding features can become dissimilar due to different image acquisition conditions and/or different sensor spectral sensitivities, and that some features have no corresponding pair in the other image. To address this, a variety of more robust methods have been proposed. For example, the chamfer matching algorithm introduced in  matches detected line features from different images so that the generalised distances between them are minimised. The method proposed in  is based on relaxation: it iteratively recomputes the figures of merit for all feature pairs while considering the matching quality of these pairs, until an optimal result is reached. However, the time taken by these methods makes this kind of approach unsuitable for some practical applications. To devise a more efficient matching scheme, the idea of finding nearest neighbours in the feature space is frequently employed. For example, a technique called "slicing", developed in , discards the group of candidates that lie outside a hypercube around a test data sample in the feature space, and then employs a series of 1D binary searches. An algorithm called Best Bin First (BBF) was proposed in , which approximates the k-d tree algorithm  (i.e. a space-partitioning tree for organising data samples in a k-dimensional space) and returns the nearest neighbour with high probability. It searches bins (i.e. candidates) in the feature space in ascending order of their distance from the test location, by employing a modified search ordering for the k-d tree algorithm. Then, by cutting off the search after a specific number of nearest bins has been explored, an approximate result can be obtained at low computational cost. Unfortunately, since there are almost always outliers among the matched feature correspondences (i.e. matches that do not fit the underlying transformation model), it is better to use robust model-fitting techniques to find a good starting inlier set of feature correspondences (i.e. feature correspondences that fit the estimated transformation model).
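A minimal sketch of nearest-neighbour descriptor matching with a distance-ratio test follows. The linear scan stands in for the k-d tree/BBF search described above, and the 0.8 ratio threshold is an assumed value; the ratio test is a cheap first line of defence against the outliers mentioned above, before robust model fitting.

```python
def match_features(desc1, desc2, ratio=0.8):
    """Nearest-neighbour descriptor matching with a distance-ratio test.

    desc1, desc2 : sequences of equal-length descriptor vectors
    Returns (index_in_desc1, index_in_desc2) pairs for matches whose
    best distance is clearly smaller than the second-best, discarding
    ambiguous candidates. Linear scan for clarity; a k-d tree (or BBF
    in high dimensions) would replace the inner loop in practice.
    """
    matches = []
    for i, d in enumerate(desc1):
        # Euclidean distance from d to every candidate in the other image
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(d, c)) ** 0.5, j)
            for j, c in enumerate(desc2)
        )
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:   # keep only unambiguous matches
            matches.append((i, best[1]))
    return matches
```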
Recently, CNNs have also been used to learn a regression model mapping sketches to parameters for procedural modeling of man-made objects (Huang et al. 2016) or buildings (Nishida et al. 2016). Incorporating CNN-based parameter inference, Nishida et al. (2016) also developed a user interface for real-time urban modeling. Beyond the fact that they deal with different types of objects, major differences exist between our work and that of Huang et al. (2016). First, this paper presents a labor-efficient user interface that incorporates real-time sketching for interactive modeling and deep-learning-based gesture recognition for shape refinement, while the modeling process adopted in (Huang et al. 2016) is not interactive and their proposed algorithm simply generates a 3D model from a complete drawing. Second, our network architecture has crucial novel components with respect to the architecture used in (Huang et al. 2016). For example, our network has two independent branches of fully connected layers, and it also has a vertex loss layer that directly calculates vertex position errors. Both components effectively reduce model prediction errors. Third, this paper publicly releases a dataset, while Huang et al. (2016) did not. The dataset used in this paper has better connections with the real world, because part of the 3D face models were originally digitized from real people and part of the sketches are real drawings collected from artists, while the dataset used in (Huang et al. 2016) was entirely generated from 3D virtual models.
This article discusses the role of electrical resistivity in magnetic reconnection and presents recent 3D particle simulations of coalescing magnetized flux bundles. The anomalous resistivity of the lower-hybrid-drift (LHD) instability, and the collisionless effects of electron inertia and/or the off-diagonal terms of the electron pressure tensor, are thought to break the frozen-in state that prohibits magnetic reconnection. Studies show that, while the well-known stabilization of the LHD instability under high-beta plasma conditions makes anomalous resistivity less likely, the electron inertia and/or the off-diagonal electron pressure tensor terms make adequate contributions to breaking the frozen-in state, depending on the strength of the toroidal magnetic field. Large time- and space-scale particle simulations show that reconnection in magnetized plasmas proceeds by means of the electron inertia effect, and that electron acceleration results instead of the Joule heating of the MHD picture. Ion inertia contributes positively to reconnection, but the ion finite-Larmor-radius effect contributes negatively because of the charge separation between ions and magnetized electrons. The collisionless processes in the 2D and 3D simulations are similar in essence, and support the mediative role of electron inertia in magnetic reconnection of magnetized plasmas.
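The competing frozen-in-breaking terms mentioned above can be read off the standard generalized Ohm's law for an electron–ion plasma with number density $n$:

```latex
\mathbf{E} + \mathbf{v}\times\mathbf{B}
  = \eta\,\mathbf{J}
  + \frac{1}{en}\left(\mathbf{J}\times\mathbf{B} - \nabla\cdot\mathsf{P}_e\right)
  + \frac{m_e}{e^2 n}\,\frac{d\mathbf{J}}{dt}
```

Anomalous resistivity from LHD turbulence enters through $\eta$, the divergence of the electron pressure tensor $\mathsf{P}_e$ supplies the off-diagonal pressure contribution, and the final term is the electron inertia effect that the simulations identify as the mediator of reconnection.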
In the present paper I present an extension of  to general 3D systems with no twist assumption. My main motivation is to establish regions of a magnetic field through which no flux surfaces (invariant 2-tori) transverse to a given 1D foliation pass, without assuming shear, and then to do the same for guiding-centre motion. The method does not require Hamiltonian structure, and indeed  already applied the idea of  to a dissipative system, but the principal intended applications are to systems that can be viewed as 3D Hamiltonian vector fields (in the odd-dimensional approach of Cartan–Arnol'd).
The most common approach, however, is to use 2D plate elements with the surface Winkler foundation model (Figure 1), which introduces subsoil effects in the vertical direction only. Nevertheless, it is still used today even for the analysis of important structures; see  for its use in the assessment of part of a nuclear power plant, or the paper  for an example of an analysis of fluid-filled tanks on a Winkler foundation.
However, because both of these software packages are in use today, and more and more architects and related companies are adopting them, I recommend that future cadastral measurements be delivered as 3D models such as buildingSMART. For owners, admittedly, a 2D model on paper may be preferable because it is easy to use. A 3D model, however, especially a buildingSMART model, would be useful for electricians, plumbers and other building professionals, because everybody can find the required information in a buildingSMART model. For example, with a digital 3D buildingSMART model of a house, the owner can bring the model to an architect for consultation when planning changes. This saves time and money, because the architect does not have to visit the site.
The concept of stereoscopy has existed for a long time, but the breakthrough from conventional 2D broadcasting to real-time 3D broadcasting is still pending. In recent years, with the giant leap in image and video processing technologies, the introduction of three-dimensional televisions (3D TVs) into the commercial market is becoming a reality. Nowadays, many commercial companies, such as Samsung, Sony, Panasonic and LG, produce 3D TVs. 3D TVs can be more attractive to viewers because they produce stereo scenes, which create a sense of physical real space. 3D vision in humans arises from the fact that the projections of the same point in space onto the two eyes are located at different distances from the center of focus (center of the fovea). The difference between the distances of the two projected points, one on each eye, is called disparity. Disparity information is processed by higher levels of the human brain to produce a sense of the distance of objects in 3D space. A 3D television employs techniques of 3D presentation such as stereoscopic capture, 3D display, and 2D-plus-depth-map technologies. Due to the success of introducing 3D visual technologies, including 3D games and 3D TVs, to the commercial market, the demand for a wide variety of 3D content such as 3D images, 3D videos and 3D games is increasing significantly. To satisfy this demand, there is an increasing need to create new 3D video content as well as to convert existing 2D videos to 3D format. Converting 2D content into 3D depends on various 2D-to-3D conversion tools. Three-dimensional television (3D-TV) is anticipated to be the next step in the advancement of television. Stereoscopic images displayed on 3D displays can increase the visual impact and heighten the sense of presence for viewers.
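The 2D-plus-depth-map representation mentioned above can be illustrated with a toy depth-image-based rendering step over a single image row. The function and its `max_disparity` parameter are hypothetical, and a real converter would also fill the disocclusion holes (`None` entries) that the shifts leave behind.

```python
def synthesise_stereo_row(row, depth_row, max_disparity=4):
    """Render left/right views of one image row from a 2D-plus-depth
    representation by shifting each pixel proportionally to its depth.

    row       : list of pixel values
    depth_row : list of depths in [0, 1], with 1 = nearest (largest shift)
    """
    n = len(row)
    left = [None] * n      # None marks disocclusion holes to be filled later
    right = [None] * n
    for x, (pix, depth) in enumerate(zip(row, depth_row)):
        # Near pixels get a larger horizontal offset between the two views
        shift = int(round(depth * max_disparity / 2))
        if 0 <= x + shift < n:
            left[x + shift] = pix
        if 0 <= x - shift < n:
            right[x - shift] = pix
    return left, right
```

Background pixels (depth 0) stay put in both views, while foreground pixels are pushed apart, producing the disparity the viewer's brain fuses into depth.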
The successful adoption of 3D-TV by the general public will depend not only on technological advances in 3D displays and 3D-TV broadcasting systems, but also on the availability of a wide variety of program content in stereoscopic 3D (S3D) format for 3D-TV services. The supply of adequate S3D content will be especially critical in the early stages of the 3D-TV rollout, to ensure that the public is willing to spend money on 3D displays and 3D-TV services. However, a certain length of time will be required for content providers to capture and create enough S3D material with stereoscopic cameras. 2D-to-3D conversion techniques can therefore be profitable for content providers, who are always looking for new sources of revenue for their vast libraries of program material. This potential market is attracting many companies to invest manpower and money in developing 2D-to-3D conversion techniques.
Light detection and ranging (LiDAR) investigations from aerial platforms are frequently used for the investigation of forests, woodlands or individual trees. These remote sensing (RS) operations are used to capture various forest attributes, including data for forest inventory and mensuration, investigations into carbon accounting, the assessment of ecological habitats, and a range of environmental modelling purposes using geo-spatial analyses (Eysn et al., 2012; Jakubowksi et al., 2013; Wu et al., 2016). For these and other similar investigations, it is common practice to establish ground reference (GR) plots within the area scanned as part of the LiDAR flightline. The GR plots would typically be measured directly, in as high a level of detail as possible using manual methods, with the intention of enabling training or validation of the indirectly measured RS data to minimise errors (Chen et al., 2006). Typically, this process requires that the absolute position of each tree be recorded where the stem emerges from the ground, avoiding the errors involved in judging, from the ground viewpoint alone, where the aerial parts of the tree are located in space (Mills et al., 2010). When using LiDAR for individual tree crown (ITC) delineation, methods may vary according to the scale of the investigation and the intended use of the data. Frequent delineation approaches include manual delineation, watershed or inverse-watershed delineation, and local-maxima delineation methods. For coniferous trees, with their regular shape and form, ITC delineation is a readily repeatable exercise. However, broadleaved trees, whose uniquely shaped crowns differ in relation to their location and exposure to available sunlight, causing unpredictable changes to tree shape (Loehle, 1986), present a more complex ITC delineation problem: this is the Broccoli problem.
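As one concrete illustration of the local-maxima approach, treetop seeds can be found on a rasterised canopy height model (CHM) as cells taller than all of their neighbours. The fixed 3x3 window and `min_height` threshold here are simplifying assumptions; operational methods often tie the window size to crown allometry.

```python
def find_treetops(chm, min_height=2.0):
    """Local-maxima treetop detection on a canopy height model grid,
    the seed step for watershed-style individual tree crown delineation.

    chm : list of lists of canopy heights (metres)
    A cell is a treetop if it exceeds min_height and is strictly
    taller than its (up to) eight neighbours.
    """
    rows, cols = len(chm), len(chm[0])
    tops = []
    for r in range(rows):
        for c in range(cols):
            h = chm[r][c]
            if h < min_height:
                continue                      # too low to be a crown apex
            neighbours = [
                chm[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))
                if (rr, cc) != (r, c)
            ]
            if all(h > nb for nb in neighbours):
                tops.append((r, c))
    return tops
```

The strict local-maximum test works well for conical conifer crowns; the irregular, multi-apex crowns of broadleaved trees (the Broccoli problem above) are exactly the case where it over- or under-segments.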
Most studies using 3D models have been performed with cancer cell lines originating from earlier decades, and numerous passages of cells in vitro are known to cause genetic alterations and altered behaviour when exposed to various treatments. Although some 3D HNSCC cell culture models have been published, they mostly use high passages of commercially available cell lines such as the FaDu, SCC25 or CAL27 HNSCC cell lines [19, 20]. In our study, we use recently established cell lines from our own collection of tumor-derived cell lines (from tumors in the tongue, larynx and gingiva) that reflect the actual tumor behaviour better. When these cell lines were cultured in 3D, the expression of CDH1 and of the stem cell markers Nanog and Sox2 was increased compared to 2D (Table 2). It has recently been published that the morphology of HNSCC spheroids is related to E-cadherin and Ki67 expression . The upregulation of E-cadherin seems to be important for the formation of tumor spheroids, along with a decrease in DNA synthesis as visualized by Ki67 staining.
In this architecture, the 3D NoC is implemented on two tiers, where each tier has identical blocks as shown in Figure 2 and Figure 3. This is the straightforward extension of the 2D mesh NoC architecture, where we simply take a copy of a tile (a router and a NIU) and put it on top of another tile. Compared with the area of the 2D mesh NoC, this architecture has about 50% less footprint area. This 4x2x2 mesh NoC architecture is based on a 3D router architecture that has vertical links for inter-tier connections between routers. It provides a latency improvement by reducing the network diameter (reducing the number of hops through vertical links) from six to five hops. From an implementation perspective, this architecture has both 2D and 3D critical paths. The 3D critical paths belong to the NoC, running from the bottom layer to the top layer, while the 2D critical paths belong to the processor architecture, since it is placed completely separately on each layer.
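The quoted six-to-five-hop improvement follows directly from mesh network diameters, assuming (as a reading of the text) that the 2D baseline is a 4x4 mesh with the same sixteen tiles:

```python
def mesh_diameter(*dims):
    """Hop-count diameter of a mesh NoC under dimension-order routing:
    the longest minimal route crosses (d - 1) hops along each axis."""
    return sum(d - 1 for d in dims)
```

A 4x4 2D mesh has diameter 3 + 3 = 6 hops, while folding it into a 4x2x2 3D mesh gives 3 + 1 + 1 = 5 hops, matching the reduction claimed above.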
The proposed paradigm converts 2D broadcast transmissions to binocular 3D SBS (side-by-side) video sequences of the same length in real time, making use of the inherent depth perception of the binocular human visual system to generate visually acceptable 3D information. The paradigm is also able to handle degenerate cases where other simple 2D-to-3D conversion algorithms fail. Comparison results have shown that the proposed methodology is overall as good as native 3D stereo video, with the exception of degenerate cases, which are handled properly to minimize the unpleasant effects of these otherwise unsynchronized stereo frames.
In this study, two simplified 2D models and one detailed 3D FE model were developed to investigate the seismic performance of condensate storage tanks (CSTs) subjected to ground motions. The 2D CST models were developed in the commercial software SAP2000, using horizontal springs between the liquid mass and tank walls to represent the impulsive and convective load effects. ANSYS was selected for the 3D CST model due to its ability to effectively include shell and fluid elements, contact elements, and geometric nonlinearities. To evaluate the dynamic response of CSTs at NPPs, structural properties for a CST were retrieved from the literature and used to develop models for a realistic CST. Modal and time history analyses were completed for all CST models to evaluate and compare their dynamic characteristics and responses. Although the results of the 3D model, which considers fluid-solid interaction, show more detailed dynamic characteristics, the simplified 2D models capture most of the key dynamic properties of CSTs under seismic loading.
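The impulsive/convective spring-mass idealisation used in the 2D models can be sketched as two uncoupled oscillators attached to the tank wall; the masses and stiffnesses in the usage line are placeholder values, not properties of the CST studied here.

```python
import math

def tank_mode_frequencies(m_imp, m_con, k_imp, k_con):
    """Natural frequencies (Hz) of the spring-mass tank idealisation:
    impulsive mass m_imp on spring k_imp and convective (sloshing)
    mass m_con on spring k_con, each restrained against a rigid wall.
    Illustrative uncoupled sketch; the SAP2000 models couple the
    masses to the flexible tank shell.
    """
    w_imp = math.sqrt(k_imp / m_imp)   # impulsive circular frequency, rad/s
    w_con = math.sqrt(k_con / m_con)   # convective circular frequency, rad/s
    return w_imp / (2 * math.pi), w_con / (2 * math.pi)
```

For example, `tank_mode_frequencies(1.0, 1.0, 4 * math.pi**2, 16 * math.pi**2)` gives modes at 1 Hz and 2 Hz, the kind of well-separated impulsive/convective pair the modal analyses compare.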
Abstract. How to build a virtual terrain scene with high realism is a perennial topic in virtual reality, and many techniques have been developed to enhance realism. To this end, we present an improved solution based on appearance consistency. We first use GDAL and VPB to acquire a large terrain, and smooth the normals of local particles in every tile's boundary region using those of the neighboring particles. Second, we recalculate the illumination parameters and use them to modify the heights of the particles. Third, we remap the heights of the particles in the boundary region of adjacent tiles to achieve 2D-3D appearance consistency in large terrain scene construction. We demonstrate our results on several challenging terrains and present a qualitative evaluation of our method.
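The boundary normal-smoothing step can be sketched as a simple average-and-renormalise over the neighboring particles' normals. Uniform weighting is an assumption here, not necessarily the exact scheme used in the paper.

```python
def smooth_normal(normal, neighbour_normals):
    """Average a boundary particle's normal with those of particles in
    neighbouring tiles and renormalise to unit length, removing the
    shading seam that appears at tile borders.

    normal, neighbour_normals : 3-component vectors (tuples or lists)
    """
    acc = list(normal)
    for n in neighbour_normals:           # accumulate neighbour normals
        acc = [a + b for a, b in zip(acc, n)]
    length = sum(c * c for c in acc) ** 0.5
    return [c / length for c in acc]      # unit-length smoothed normal
```

Because lighting is then recomputed from the smoothed normals, adjacent tiles shade consistently across their shared boundary.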
Automatic landmark identification can save time and resources. However, automatic methods are still not accurate enough; in the case of Cho et al. (2010), the recognition results showed that some data outliers make the total recognition error much larger. Cho et al. (2010) recommended that the neck point be chosen manually to reduce the error. Han & Nam (2011) used the morphological characteristics of body surfaces, the cross-sections at each landmark, and the existing relations among the landmarks as their landmark identification algorithm, to automatically identify body landmarks across various body figures. Since the rear center crotch area is difficult to scan properly, there is a large error for the crotch back point, and because overweight women have softer tissue, there is often another large error in locating the armpit point. Ben et al. (2006) also provided an algorithm to locate anthropometric landmarks on 3D human scans automatically, and found the automatically located landmarks to be very close to the ones marked on the body before scanning (Figure 2.5); however, from the result in Figure 2.5, we can tell that their approach is still not precise enough.
A morphable model is constructed by performing some form of dimensionality reduction, typically Principal Component Analysis (PCA), on a training dataset of shape examples. This is feasible only if each shape is first re-parametrised into a consistent form in which the number of points and their anatomical locations are made consistent to some level of accuracy. Shapes satisfying these properties are said to be in dense correspondence with one another. Once built, the morphable model provides two functions. Firstly, it is a powerful prior on 2D profile shapes that can be leveraged in fitting algorithms to reconstruct accurate and complete 2D representations of profiles. Secondly, the proposed model provides a mechanism to encode any 2D profile in a low-dimensional feature space: a compact representation that makes tractable many 2D profile analysis problems in the medical domain.
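Given a PCA mean and an orthonormal basis obtained from registered training shapes, the encode/decode pair below sketches the compact representation described above; the function names and the toy basis in the test are illustrative, not part of the proposed model's API.

```python
import numpy as np

def encode(shape, mean, basis):
    """Project a densely corresponding 2D profile shape into the
    morphable model's low-dimensional feature space.

    shape : flattened point coordinates, same length as mean
    basis : (k, d) orthonormal PCA basis (rows are components)
    """
    return basis @ (np.asarray(shape, dtype=float) - mean)

def decode(code, mean, basis):
    """Reconstruct a profile from its compact code. For partial or
    noisy input, encode-then-decode acts as the model prior, snapping
    the shape onto the subspace spanned by the training examples."""
    return mean + basis.T @ code
```

Any component of a shape outside the training subspace is discarded by the round trip, which is precisely what makes the model useful as a prior during fitting.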
X-ray computed tomography and serial block-face scanning electron microscopy imaging techniques were used to produce 3D images with a resolution spanning three orders of magnitude, from ~7.7 μm to 7 nm, for one typical Bowland Shale sample from Northern England, identified as the largest potential shale gas reservoir in the UK. These images were used to quantitatively assess the size, geometry and connectivity of pores and organic matter. The data revealed four types of porosity: intra-organic pores, organic interface pores, and intra- and inter-mineral pores. Pore sizes are bimodal, with peaks at 0.2 μm and 0.04 μm corresponding to pores located at organic–mineral interfaces and within organic matter, respectively. These pore-size distributions were validated by nitrogen adsorption data. The multi-scale imaging of the four pore types shows that there is no connected visible porosity at these scales with an equivalent diameter of 20 nm or larger in this sample. However, organic matter and clay minerals are connected, and so the mesoporosity (<20 nm) within these phases provides possible diffusion transport pathways for gas. This work confirms multi-scale 3D imaging as a powerful quantification method for shale reservoir characterisation, allowing the representative volumes of pores, organic and mineral phases to be defined to model shale systems. The absence of connected porosity at scales greater than 20 nm indicates the likely importance of the organic matter network, and associated smaller-scale pores, in controlling hydrocarbon transport. The application of these techniques to shale gas plays more widely should lead to a greater understanding of properties in these low-permeability systems.
In light of the uptake study results, and their implications regarding the utility of the RSMN for nanosafety assessment, it is essential to discuss the ability of 3D reconstructed skin models to represent real human skin. Although current research is limited, a study of quantum dot nanoparticles (6 nm; ≤24 h exposure) showed similar, limited penetration in both human skin in vivo and EpiDerm™ when compared side-by-side . However, other dermal studies in vivo show that nanomaterials can penetrate the epidermis, enter circulation and induce organ toxicity, but the process typically requires >30 days of continuous exposure , or compromised skin-barrier integrity . Micron-scaled particles are also reported to penetrate the stratum corneum more readily at the site of hair follicles [5, 73]. Although EpiDerm™ does not contain these follicular structures, and exposures are limited to the 72 h during which its 3D morphology can be retained, these limitations are arguably typical caveats associated with any in vitro test system. Interestingly, wound healing has been studied using reconstructed skin models, suggesting that the impact of compromised barrier integrity on nanoparticle (geno)toxicity could be studied in vitro [74, 75]. Herein, the negative 3D (geno)toxicity results after topical exposures were found to be related to the 3D model's skin-barrier properties, which restricted interactions between the dividing cells and the test article. While further work is required to establish how well the barrier properties of EpiDerm™ reflect in vivo skin, it seems reasonable to assert that these results are more reflective of actual human hazard than traditional assays using 2D monocultured cells.