Performance of high-order implicit large eddy simulations


Konstantinos Ritos, Ioannis W. Kokkinakis, Dimitris Drikakis

University of Strathclyde, Glasgow, G1 1XJ, UK

Corresponding authors. E-mail addresses: konstantinos.ritos@strath.ac.uk (K. Ritos), dimitris.drikakis@strath.ac.uk (D. Drikakis).

Article info

Article history: Received 24 January 2017; Revised 30 October 2017; Accepted 23 January 2018; Available online 31 January 2018.

Keywords: iLES; High-order methods; Turbulent flows; Parallel computing.

Abstract

The performance of parallel implicit Large Eddy Simulations (iLES) is investigated in conjunction with high-order weighted essentially non-oscillatory schemes up to 11th-order of accuracy. Simulations were performed for the Taylor–Green Vortex and supersonic turbulent boundary layer flows on High Performance Computing (HPC) facilities. The present iLES are highly scalable, achieving parallel performance of approximately 93% and 68% on 1536 and 6144 cores, respectively, for simulations on a mesh of approximately 1.07 billion cells. The study also shows that high-order iLES attain accuracy similar to strict Direct Numerical Simulation (DNS) but at a reduced computational cost.

© 2018 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

1. Introduction

Implicit Large Eddy Simulations (iLES) originated from the observations made in [1] that the embedded dissipation of a certain class of numerical methods can be used in lieu of explicit Sub-Grid Scale (SGS) models. Modified Equation Analysis (MEA) was developed [2] aiming at determining the stability of a difference equation by examining the truncation errors. Such an analysis was performed for the truncation error of certain schemes, e.g., [3–9], leading to a better understanding of the implicit sub-grid dissipation. In iLES, the Navier–Stokes Equations (NSE) are discretised using high-resolution methods without involving a low-pass filtering operation, which would lead to SGS terms requiring additional modelling. Instead, only the (implicit) de facto filtering introduced through the finite volume integration of the NSE over the mesh cells is utilised, in conjunction with non-linear numerical schemes that adhere to a number of principles; see [10,11], and the reviews [9,12,13]. It has been shown [7] that iLES methods need to be carefully designed, optimised, and validated for the particular differential equation to be solved. Direct MEA of high-resolution schemes for the NSE is difficult to perform; thus, understanding of the numerical properties of these methods to date still relies on performing computational experiments.

The use of iLES in free and wall-bounded flows has been justified by several authors [14,15], while a validation of the approach through theoretical analysis has been presented by Margolin et al. [8]. In incompressible flows, it is possible to develop an optimised stencil with regard to numerical dissipation [16]; however, in the case of compressible flows the numerical method should be monotonic with respect to the thermodynamic quantities. Poggie et al. [17] and Ritos et al. [18] applied compressible iLES to study Turbulent Boundary Layer (TBL) flows and showed that iLES can achieve accuracy close to that of strict DNS (see Section 2 for the definition of strict DNS) on significantly coarser meshes. Despite iLES (and similarly classical LES) being computationally less demanding than DNS, it still requires significant computational resources for simulating near-wall turbulence at high Reynolds numbers.

To date, there has been no systematic attempt to investigate the parallel scalability of different high-order compressible iLES methods in free and wall-bounded flows. The aim of this study is to present results regarding the accuracy, efficiency and parallel scalability of high-order iLES with reference to the Taylor–Green Vortex (TGV) and supersonic TBL flows.

2. Numerical methods and flow cases

We have employed iLES in the framework of the CFD code CNS3D [12,15]. The Navier–Stokes equations are solved using a finite volume Godunov-type method for the convective terms, which comprises the HLLC approximate Riemann solver [13,19] and two high-resolution schemes: the Monotone Upstream-centered Scheme for Conservation Laws (MUSCL) with a second-order piecewise linear monotonised central limiter [20] (labelled M2), and the ninth-order Weighted Essentially Non-Oscillatory (WENO) scheme [21] (labelled W9). Furthermore, in order to examine the parallel scalability of high-order iLES, simulations were also performed using an eleventh-order WENO scheme (labelled W11).
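To make the reconstruction step concrete, the following is a minimal one-dimensional sketch of a second-order MUSCL reconstruction with the monotonised central (MC) limiter, in the spirit of the M2 scheme. It operates on a scalar array only and omits the characteristic decomposition and HLLC flux of the actual CNS3D implementation; the function names are ours.

```python
import numpy as np

def mc_limiter(r):
    """Monotonised central limiter: phi(r) = max(0, min(2r, (1+r)/2, 2))."""
    return np.maximum(0.0, np.minimum(np.minimum(2.0 * r, 0.5 * (1.0 + r)), 2.0))

def muscl_reconstruct(q):
    """Limited left/right states for the interior cells of a 1-D scalar field q."""
    dq_m = q[1:-1] - q[:-2]                            # backward difference
    dq_p = q[2:] - q[1:-1]                             # forward difference
    r = dq_m / np.where(np.abs(dq_p) > 1e-12, dq_p, 1e-12)  # slope ratio, regularised
    slope = mc_limiter(r) * dq_p                       # limited slope in each cell
    q_left = q[1:-1] + 0.5 * slope                     # state at the right face (i+1/2)
    q_right = q[1:-1] - 0.5 * slope                    # state at the left face (i-1/2)
    return q_left, q_right
```

The limiter reverts to a zero slope at local extrema; this monotonicity mechanism is the source of the implicit SGS dissipation discussed in Section 1.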



Table 1
Simulation parameters: u∞, T∞, M, P∞, ρ∞ and μ∞ are the freestream velocity, temperature, Mach number, pressure, density and viscosity, respectively; Tw is the wall temperature; I is the turbulence intensity at the inlet; and ReL is the Reynolds number based on the freestream properties of air and the plate length, L.

L        u∞       T∞     M     P∞
0.061 m  588 m/s  170 K  2.25  23.8 kPa

ρ∞           Tw/T∞  μ∞               I   ReL
0.488 kg/m³  1.9    1.167×10⁻⁵ Pa·s  3%  1.5×10⁶

The viscous terms are discretised using a second-order central scheme. The solution is advanced in time using a five-stage (fourth-order accurate) optimal strong-stability-preserving Runge–Kutta method [22]. Further numerical details are provided in [15] and references therein.
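The time integration can be sketched as follows; for brevity the example implements the classical three-stage, third-order SSP Runge–Kutta scheme, a simpler member of the same family as the five-stage, fourth-order method of [22]. Here `rhs` is a placeholder for any spatial discretisation.

```python
import numpy as np

def ssprk3_step(q, dt, rhs):
    """One step of the three-stage, third-order SSP Runge-Kutta scheme:
    each stage is a convex combination of forward-Euler updates."""
    q1 = q + dt * rhs(q)
    q2 = 0.75 * q + 0.25 * (q1 + dt * rhs(q1))
    return q / 3.0 + (2.0 / 3.0) * (q2 + dt * rhs(q2))

# Example: exponential decay, dq/dt = -q
q = np.ones(4)
for _ in range(10):
    q = ssprk3_step(q, 0.1, lambda f: -f)
```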

The first flow case considered here is the TGV in a triple-periodic cubic domain of length 2π (m). A series of meshes was used: 32³, 64³, 128³, 256³ and 512³ evenly-spaced computational cells. The flow is initialised by a solenoidal velocity profile,

u_0 = U_0 \sin(kx) \cos(ky) \cos(kz),
v_0 = -U_0 \cos(kx) \sin(ky) \cos(kz),
w_0 = 0,    (1)

and the pressure is obtained by solving the Poisson equation:

P_0 = P_\infty + \frac{1}{16} \rho_0 U_0^2 \, [2 + \cos(2kz)] \cdot [\cos(2kx) + \cos(2ky)],    (2)

where the wavenumber k = 1. An ideal gas equation of state is used and the Mach number, U_0 / \sqrt{\gamma P_0 / \rho_0}, is 0.08. The results are presented in terms of non-dimensional units: distance x* = kx and time t* = k U_0 t.
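As an illustration, Eqs. (1) and (2) can be evaluated on a uniform periodic mesh as follows. The values of U0 and rho0 below are placeholders; P_inf is chosen only so that the Mach number U_0/\sqrt{\gamma P_0/\rho_0} is approximately 0.08, as in the paper.

```python
import numpy as np

N, U0, rho0, gamma, Mach, k = 64, 1.0, 1.0, 1.4, 0.08, 1.0
P_inf = rho0 * U0**2 / (gamma * Mach**2)   # reference pressure setting M ~ 0.08

x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Solenoidal velocity field, Eq. (1)
u0 = U0 * np.sin(k * X) * np.cos(k * Y) * np.cos(k * Z)
v0 = -U0 * np.cos(k * X) * np.sin(k * Y) * np.cos(k * Z)
w0 = np.zeros_like(u0)

# Pressure field, Eq. (2)
P0 = P_inf + (rho0 * U0**2 / 16.0) * (2.0 + np.cos(2 * k * Z)) * (
    np.cos(2 * k * X) + np.cos(2 * k * Y))
```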

The second flow case considered here is a supersonic turbulent flow over a flat plate at Mach number M = 2.25 and Reynolds number of 1.5×10⁶ based on the freestream properties for air and the length of the plate, L; see also Table 1.

Periodic boundary conditions are used in the spanwise direction (z). In the wall-normal direction (y), a no-slip isothermal wall at temperature Tw = 323 K is imposed. Supersonic outflow conditions are applied at the outlet, while far-field conditions are applied on the upper boundary opposite to the wall.

A synthetic turbulent inflow boundary condition is used to produce a freestream flow with superimposed random turbulence. The synthetic turbulent inflow boundary condition is based on the digital filter (DF) method [18,23–25]. According to the DF approach, instead of using a white-noise random perturbation at the inlet, energy modes within the Kolmogorov inertial range, scaling with k^{-5/3} where k is the wavenumber, are introduced into the turbulent boundary layer. A cutoff at the maximum frequency of 50 MHz is applied, since the finest mesh would under-resolve higher frequencies. The turbulence intensity at the inlet (I) is set to ±3% of the freestream velocity. This perturbation has been found to be sufficient to trigger bypass transition and turbulence downstream (see Fig. 1).
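A one-dimensional sketch of the digital-filter idea is given below, under simplifying assumptions: white noise is convolved with Gaussian filter coefficients to impose a target correlation length, and only a single velocity component along the inlet is produced. The full DF method of [23–25] also filters in time and in the second inlet direction and imposes the target Reynolds stresses; function names and parameter values here are ours.

```python
import numpy as np

def digital_filter_inflow(ny, n_filt, intensity, u_mean, rng):
    """Spatially correlated inflow perturbations from filtered white noise."""
    k = np.arange(-2 * n_filt, 2 * n_filt + 1)
    b = np.exp(-np.pi * k**2 / (2.0 * n_filt**2))  # Gaussian filter coefficients
    b /= np.sqrt(np.sum(b**2))                     # keep the filtered signal at unit variance
    noise = rng.standard_normal(ny + 4 * n_filt)   # white noise, padded for the stencil
    corr = np.convolve(noise, b, mode="valid")     # correlation length ~ n_filt cells
    return u_mean * (1.0 + intensity * corr)       # superimpose the +/-3% fluctuations

rng = np.random.default_rng(0)
u_in = digital_filter_inflow(ny=200, n_filt=8, intensity=0.03, u_mean=588.0, rng=rng)
```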

iLES have been performed on fine meshes, but still coarser than DNS [17,26]. We employed four meshes, with the coarsest and finest meshes containing 4.5 million and 100 million cells, respectively. For the calculation of the mesh spacing Δy, the conventional inner variable scaling Δy⁺ = u_τ Δy / ν_w is used, where u_τ = \sqrt{τ_w / ρ_w} is the friction velocity; ν_w, τ_w and ρ_w are the wall viscosity, shear stress and density, respectively. Typical mesh resolution recommendations for LES lie in the range of 50 < Δx⁺ < 150 and 15 < Δz⁺ < 40, and for DNS in the range of 10 < Δx⁺ < 20 and 5 < Δz⁺ < 10 [17,27,28]. For wall-resolved LES and DNS the near-wall spacing should be Δy⁺ < 1. A strict definition for DNS mesh spacing requires Δx⁺ ≈ 1 and Δy⁺ ≈ 1. The mesh spacing used in this study is in the range of 9.06 < Δx⁺ < 27.14, 0.497 < Δy⁺ < 1.22 and 8.53 < Δz⁺ < 24.95, where the smallest values correspond to the finest mesh. Based on the above analysis, the present iLES on the finest mesh can be considered as an under-resolved DNS.

Table 2
Boundary layer properties, including previous DNS and experimental studies. The compressible form of the momentum thickness (θ) has been used in the definition of Reθ and Reδ2. Reτ is the Reynolds number based on the friction velocity and the boundary layer thickness δ. Reδ2 is based on θ and the near-wall viscosity μw. H = δ*/θ is the shape factor, where δ* is the displacement thickness of compressible flow.

                 Reθ     Reτ     Reδ2    H     M
W9               2170.0  414.0   1280.6  3.56  2.25
M2               1593.8  344.6   940.5   3.72  2.25
DNS [26]         2377.0  497.0   1516.0  2.98  2.0
strict DNS [17]  -       -       2000.0  -     2.25
Exp [29]         5100.0  1080.0  3100.0  2.00  2.28
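As a concrete illustration of the inner-variable scaling used in the mesh-spacing discussion above, the first-cell spacing in wall units reduces to a few lines of code; the input values in the example are hypothetical and not taken from the paper.

```python
import numpy as np

def delta_y_plus(dy_wall, tau_w, rho_w, mu_w):
    """First-cell wall-normal spacing in wall units: dy+ = u_tau * dy / nu_w."""
    u_tau = np.sqrt(tau_w / rho_w)   # friction velocity
    nu_w = mu_w / rho_w              # kinematic viscosity at the wall
    return u_tau * dy_wall / nu_w

# Hypothetical near-wall values, for illustration only
print(delta_y_plus(dy_wall=2.0e-6, tau_w=180.0, rho_w=0.9, mu_w=1.1e-5))
```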

The TBL properties are presented in Table 2. To enable the comparison of the present results with other publications, various definitions of the Reynolds number have been employed, based on the momentum thickness, the friction velocity, and the near-wall viscosity. The flow statistics are computed by averaging in time over three flow cycles and, spatially, in the spanwise direction. The statistical convergence of the simulations, based on the standard error of the mean, is less than 2%.

3. Results

3.1. TGV

Instantaneous visualisations of the TGV at t* = 15 (Fig. 2) show the dominance of disorganised vortices in the decaying worm-vortex flow regime. The results were obtained using the ninth-order WENO scheme on 64³, 128³, 256³ and 512³ meshes. The snapshots of the flow are based on the Q-criterion, which defines a vortex as a continuous fluid region with a positive second invariant of the velocity gradient [30], i.e. Q > 0. All renderings are performed at the same level (Q = 1) and are coloured with the velocity magnitude.
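The Q-criterion used for these visualisations can be computed from the velocity-gradient tensor. A minimal sketch for a uniform, triple-periodic mesh follows, using the incompressible form Q = ½(‖Ω‖² − ‖S‖²); the function names are ours.

```python
import numpy as np

def q_criterion(u, v, w, dx):
    """Q = 0.5 * (||Omega||^2 - ||S||^2) on a uniform triple-periodic grid."""
    def ddx(f, axis):
        # second-order central difference with periodic wrap-around
        return (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)) / (2.0 * dx)
    # g[i, j] = d u_i / d x_j, the velocity-gradient tensor
    g = np.stack([np.stack([ddx(f, j) for j in range(3)]) for f in (u, v, w)])
    gT = g.transpose(1, 0, 2, 3, 4)
    S = 0.5 * (g + gT)    # strain-rate tensor
    Om = 0.5 * (g - gT)   # rotation-rate tensor
    return 0.5 * (np.sum(Om**2, axis=(0, 1)) - np.sum(S**2, axis=(0, 1)))
```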

The results on 256³ and 512³ meshes are very similar with respect to the turbulent structures resolved. The kinetic energy rate of dissipation, ε₁, and the pressure dilatation-based dissipation rate, ε₂, are shown in Fig. 3. The kinetic energy rate of dissipation is calculated by ε₁ = −dE_k/dt, where

E_k = \frac{1}{\rho_0 V} \int_V \frac{1}{2} \rho \, \mathbf{u} \cdot \mathbf{u} \, dV    (3)

is the volumetric-averaged kinetic energy. The simulations are nearly grid converged with respect to ε₁ and agree with other published results [31,32] (not shown here). The pressure dilatation-based dissipation rate is defined by

\varepsilon_2 = -\frac{1}{\rho_0 V} \int_V p \, \nabla \cdot \mathbf{u} \, dV.    (4)

ε₂ measures the effect of compressibility on the dissipation of turbulent energy and takes small values for low Mach number flows.
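Equations (3) and (4) translate directly into discrete volume averages; a sketch for a uniform triple-periodic mesh follows, with ε₁ then obtained by differencing the E_k time series. The function names are ours.

```python
import numpy as np

def ddx(f, axis, dx):
    """Central difference with periodic wrap-around."""
    return (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)) / (2.0 * dx)

def tgv_diagnostics(rho, u, v, w, p, dx, rho0):
    """Volume-averaged kinetic energy, Eq. (3), and pressure-dilatation
    dissipation rate, Eq. (4)."""
    V = rho.size * dx**3                          # total domain volume
    Ek = np.sum(0.5 * rho * (u**2 + v**2 + w**2)) * dx**3 / (rho0 * V)
    div_u = ddx(u, 0, dx) + ddx(v, 1, dx) + ddx(w, 2, dx)
    eps2 = -np.sum(p * div_u) * dx**3 / (rho0 * V)
    return Ek, eps2  # eps1 = -dEk/dt follows from successive snapshots
```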

A widely used performance metric for assessing parallel computations is the speedup:

S_n = T_{ref} / T_n,    (5)

where T_n is the execution time on n cores and T_ref is the execution time on the reference number of cores, n_ref, which usually corresponds either to a single core or to the number of cores in a computational node of the HPC facility used. For the TGV simulations on meshes up to 512³ cells, 12 cores were used as reference; one HPC node has two Intel Xeon X5650 processors with 6 cores each. The ideal speedup of parallel computations would be equal to n/n_ref, but this efficiency is not attainable due to the communication overhead between the computational cores and the idle time of computational nodes associated with load balancing. Fig. 4a shows the parallel speedup for the TGV case using the ninth-order iLES, achieving 77% of the ideal speedup using 480 cores. Furthermore, for scalability purposes, the parallel performance of the eleventh-order WENO iLES on 6144 cores for the 1024³ simulation is shown; a Cray HPC facility comprising nodes with two Intel E5-2697 v2 processors (12 cores each) was used. The reference execution time was obtained on 192 cores. The parallel performance of the 1024³ simulation is approximately 93% and 68% on 1536 and 6144 cores, respectively. The parallel performance of the second-order iLES is not shown because it involves fewer calculations for the same mesh size and, as a consequence, its scalability will always be worse compared to higher-order iLES.

Fig. 1. Iso-surfaces of Q-criterion, coloured by Mach number, for (a) M2 and (b) W9 iLES simulations; the computational domain has been truncated.

Fig. 2. Iso-surfaces of Q-criterion (Q = 1) coloured by velocity magnitude at t* = 15, for W9 simulations on (a) 64³, (b) 128³, (c) 256³ and (d) 512³ meshes. The 32³ mesh is not shown, as no structure is visible at this level of Q.
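In code, Eq. (5) and the corresponding parallel efficiency reduce to one line each. The timings below are hypothetical and only illustrate the arithmetic behind the reported percentages.

```python
def speedup_and_efficiency(t_ref, n_ref, t_n, n):
    """Speedup S_n = T_ref / T_n, Eq. (5), and efficiency S_n / (n / n_ref)."""
    s = t_ref / t_n
    return s, s / (n / n_ref)

# Hypothetical timings: reference on 192 cores, run on 1536 cores
s, eff = speedup_and_efficiency(t_ref=800.0, n_ref=192, t_n=107.5, n=1536)
print(f"speedup = {s:.2f}, efficiency = {eff:.0%}")   # ~93% of ideal
```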

Fig. 3. TGV case: (a) kinetic energy dissipation rate (ε₁) and (b) pressure dilatation-based dissipation rate (ε₂). The results on the 512³ mesh are in close agreement with previously published results [31,32] (not shown here). The y-axis in (b) is stretched by a factor of 10 compared to (a).

Fig. 4. Parallel scaling of (a) 9th- and (b) 11th-order iLES for the TGV case on 128³ and 1024³ meshes, respectively.

3.2. TBL

Fig. 5. Comparison of iLES with DNS and experimental data: (a) van Driest velocity profile; (b) normal Reynolds stress. Strict DNS results [17] are included only in (b), because for the velocity profiles the results perfectly agree with the available DNS data of Pirozzoli et al. [26]. "LR" denotes a lower-resolution mesh containing approximately 1/3 of the size of the fine mesh.

Comparisons with DNS and/or experiments are presented for the van Driest velocity profile, u_VD, and the normal Reynolds stress, τ_uu (Fig. 5). The van Driest velocity profile is given by

u_{VD} = \int_0^{u^+} \sqrt{\rho / \rho_w} \, du^+,    (6)

where the superscript '+' denotes wall scaling, u⁺ = u/u_τ. Previous publications [26,33] have shown that for adiabatic walls a satisfactory agreement of the velocity data is expected in the near-wall region. Small variations are expected for different Reynolds numbers, and the present iLES are in agreement with the DNS of Pirozzoli et al. [26]. The ninth-order iLES is also in excellent agreement with the experimental data [29]. The second-order iLES, conducted on the same mesh resolution, shows significant deviation from the reference DNS and experiments. Performing the ninth-order iLES on 1/3 of the mesh resolution shows that mesh convergence is achieved; hence, the high-order iLES reliably attain high accuracy on a relatively coarse mesh.
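Equation (6) is evaluated in practice by quadrature along the wall-normal profile. A minimal sketch using the trapezoidal rule follows, assuming the profile arrays start at the wall (u⁺ = 0); the function name is ours.

```python
import numpy as np

def van_driest(u_plus, rho, rho_w):
    """Van Driest transform, Eq. (6): u_VD = int_0^{u+} sqrt(rho/rho_w) du+."""
    f = np.sqrt(rho / rho_w)       # density-weighting integrand
    du = np.diff(u_plus)
    # cumulative trapezoidal integration, starting from u_VD = 0 at the wall
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * du)))
```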

With respect to τ_uu, the second-order iLES again deviates significantly from the reference data (Fig. 5b and Table 3).


Fig. 6. iLES speed-up for the 9th-order WENO scheme for the supersonic TBL case.

Table 3
Performance of second and ninth order iLES vs DNS [17,26].

Method      Computational cost  Error1  Error2
Strict DNS  ~25 (years)         0.0%    0.0%
DNS         117 (days)          0.0%    6.5%
iLES W9     24 (days)           1.0%    6.3%
iLES M2     10 (days)           8.5%    23.7%
iLES W9-LR  7 (days)            3.1%    8.1%

For the TBL case the speedup is calculated with reference to 36 cores (3 computational nodes, each with two Intel Xeon X5650 processors). The 36-core reference was imposed by the size of the fine mesh (~100 million cells). The parallel speedup is shown in Fig. 6. The ninth-order iLES provides a computational efficiency of 86% of the ideal efficiency, utilising 720 computational cores.

Table 3 shows the performance of low and high order iLES with reference to strict DNS [17]. For the DNS performance we have used the results of Pirozzoli et al. [26], where a mesh approximately 27 times coarser than the strict definition of DNS was utilised. The reported errors are averaged values calculated in the near-wall region, y⁺ ≤ 30, where "Error1" and "Error2" refer to the relative difference from the reference van Driest velocity profile and normal wall stress, respectively. The computational cost has been calculated based on the assumption that simulations are performed on 240 cores. The computational costs for DNS are estimates based on the mesh size and number of cores found in the relevant publications. The results show that high-order iLES can attain high accuracy at a reduced computational cost; cf. iLES W9-LR with the rest of the results.
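The paper does not state the exact norm behind "Error1" and "Error2"; a plausible sketch of an averaged relative difference over the near-wall region y⁺ ≤ 30 is the following, with the function name ours.

```python
import numpy as np

def near_wall_error(y_plus, q, q_ref):
    """Average relative difference from a reference profile for y+ <= 30."""
    mask = y_plus <= 30.0
    return np.mean(np.abs(q[mask] - q_ref[mask]) / np.abs(q_ref[mask]))
```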

4. Conclusions

The accuracy, parallel scalability and efficiency of iLES were examined for different turbulent flow cases. A mesh convergence study was presented for the TGV case, achieving nearly mesh-independent results for the two finest meshes. The present high-order iLES exhibit high parallel efficiencies for simulations performed on up to 6144 cores on a Cray HPC facility and for meshes containing up to 1.07 billion cells.

The first and second order statistics obtained from high-order iLES of a supersonic TBL flow are in excellent agreement with previous numerical and experimental data. iLES can achieve high accuracy in the near-wall region that is directly comparable to the results of strict DNS, at a reduced computational cost. A combination of high-order iLES with relatively coarse meshes provides a more pragmatic approach than using a second-order method on a significantly finer mesh.

Acknowledgements

Results were obtained using the EPSRC-funded ARCHIE-WeSt High Performance Computer (www.archie-west.ac.uk) under EPSRC grant no. EP/K000586/1. The authors would also like to thank EPSRC for providing access to computational resources on the National HPC facility ARCHER through the UK Applied Aerodynamics Consortium Leadership Project "e529".

References

[1] Boris J, Grinstein FF, Oran E, Kolbe R. New insights into large eddy simulation. Fluid Dyn Res 1992;10(4–6):199–228.
[2] Hirt C. Heuristic stability theory for finite-difference equations. J Comput Phys 1968;2(4):339–55.
[3] Margolin LG, Rider WJ. A rationale for implicit turbulence modelling. Int J Numer Methods 2002;39(9):821–41.
[4] Rider WJ, Margolin LG. From numerical analysis to implicit subgrid turbulence modeling. In: 16th AIAA computational fluid dynamics conference; 2003. p. 1–11.
[5] Drikakis D, Rider WJ. High-resolution methods for incompressible and low-speed flows, 1. Springer-Verlag; 2004.
[6] Margolin LG, Rider WJ. The design and construction of implicit LES models. Int J Numer Methods 2005;47(10–11):1173–9.
[7] Domaradzki JA, Radhakrishnan S. Effective eddy viscosities in implicit modeling of decaying high Reynolds number turbulence with and without rotation. Fluid Dyn Res 2005;36(4–6):385–406.
[8] Margolin LG, Rider WJ, Grinstein FF. Modeling turbulent flow with implicit LES. J Turbul 2006;7(15).
[9] Grinstein FF, Margolin LG, Rider WJ. Implicit large eddy simulation: computing turbulent fluid dynamics. Cambridge University Press; 2007.
[10] Harten A. High resolution schemes for hyperbolic conservation laws. J Comput Phys 1983;49(3):357–93.
[11] Harten A. High resolution schemes for hyperbolic conservation laws. J Comput Phys 1997;135(2):260–78.
[12] Drikakis D. Advances in turbulent flow computations using high-resolution methods. Prog Aerosp Sci 2003;39(6–7):405–24.
[13] Toro EF. Riemann solvers and numerical methods for fluid dynamics. 3rd ed. Springer; 2009.
[14] Fureby C, Grinstein FF. Large eddy simulation of high-Reynolds-number free and wall-bounded flows. J Comput Phys 2002;181(1):68–97.
[15] Drikakis D, Hahn M, Mosedale A, Thornber B. Large eddy simulation using high resolution and high order methods. Philos Trans Royal Soc A 2009;367:2985–97.
[16] Hickel S, Adams NA, Domaradzki JA. An adaptive local deconvolution method for implicit LES. J Comput Phys 2006;213(1):413–36.
[17] Poggie J, Bisek NJ, Gosse R. Resolution effects in compressible, turbulent boundary layer simulations. Comput Fluids 2015;120:57–69.
[18] Ritos K, Kokkinakis IW, Drikakis D, Spottswood SM. Implicit large eddy simulation of acoustic loading in supersonic turbulent boundary layers. Phys Fluids 2017;29(4):1–11.
[19] Toro EF, Spruce M, Speares W. Restoration of the contact surface in the HLL-Riemann solver. Shock Waves 1994;4(1):25–34.
[20] van Leer B. Towards the ultimate conservative difference scheme III. Upstream-centered finite-difference schemes for ideal compressible flow. J Comput Phys 1977;23(3):263–75.
[21] Balsara DS, Shu CW. Monotonicity preserving weighted essentially non-oscillatory schemes with increasingly high order of accuracy. J Comput Phys 2000;160(2):405–52.
[22] Spiteri R, Ruuth SJ. A new class of optimal high-order strong-stability-preserving time discretization methods. SIAM J Numer Anal 2002;40(2):469–91.
[23] Lund TS, Wu X, Squires KD. Generation of turbulent inflow data for spatially-developing boundary layer simulations. J Comput Phys 1998;140(2):233–58.
[24] Klein M, Sadiki A, Janicka J. A digital filter based generation of inflow data for spatially developing direct numerical or large eddy simulations. J Comput Phys 2003;186(2):652–65.
[25] Touber E, Sandham ND. Large-eddy simulation of low-frequency unsteadiness in a turbulent shock-induced separation bubble. Theor Comput Fluid Dyn 2009;23(2):79–107.
[26] Pirozzoli S, Bernardini M. Turbulence in supersonic boundary layers at moderate Reynolds number. J Fluid Mech 2011;688:120–68.
[27] Georgiadis NJ, Rizzetta DP, Fureby C. Large-eddy simulation: current capabilities, recommended practices, and future research. AIAA J 2010;48(8):1772–84.
[28] Choi H, Moin P. Grid-point requirements for large eddy simulation: Chapman's estimates revisited. Phys Fluids 2012;24(1):011702.
[29] Piponniau S, Dussauge JP, Debiève JF, Dupont P. A simple model for low-frequency unsteadiness in shock-induced separation. J Fluid Mech 2009;629:87–108.
[31] DeBonis J. Solutions of the Taylor–Green vortex problem using high-resolution explicit finite difference methods. In: 51st AIAA Aerospace Sciences Meeting; 2013. p. 1–9.
[32] Bull JR, Jameson A. Simulation of the Taylor–Green vortex using high-order flux reconstruction schemes. AIAA J 2015;53(9):2750–61.
[33] Smits AJ, Dussauge JP. Turbulent shear layers in supersonic flow. 2nd ed. American Institute of Physics; 2006.
