Computer Science Question

This is a computer science writing question, and I need a sample draft to help me learn.

This paper should be on design alternatives/technologies, 4 pages long, using IEEE format.
Research Paper Description
The research paper should follow this outline.
1. Overview of DarknetZ Project
2. My Research Area for this Paper and the Project: Secure Federated Learning within ARM TrustZone (Give an overview of why you chose this research area. Include what you are researching and how it will contribute to the project) PPT attached
3. Detail of your Findings in the Research Area (Provide a detailed review of how the technology will be used in your CYSE 492 Senior Design Project).
4. Conclusion
5. Technology Paper Evaluation (Describe the contribution of this research paper to you and the project. How would you improve this project?)
Requirements: 4 pages, 1.5 spacing
DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments

Fan Mo (Imperial College London), Ali Shahin Shamsabadi (Queen Mary University of London), Kleomenis Katevas (Telefónica Research), Soteris Demetriou (Imperial College London), Ilias Leontiadis (Samsung AI), Andrea Cavallaro (Queen Mary University of London), Hamed Haddadi (Imperial College London)

ABSTRACT
We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against Deep Neural Networks (DNNs). Increasingly, edge devices (smartphones and consumer IoT devices) are equipped with pre-trained DNNs for a variety of applications. This trend comes with privacy risks, as models can leak information about their training data through effective membership inference attacks (MIAs). We evaluate the performance of DarkneTZ, including CPU execution time, memory usage, and accurate power consumption, using two small and six large image classification models. Due to the limited memory of the edge device's TEE, we partition model layers into more sensitive layers (to be executed inside the device TEE) and a set of layers to be executed in the untrusted part of the operating system. Our results show that even if a single layer is hidden, we can provide reliable model privacy and defend against state of the art MIAs, with only 3% performance overhead. When fully utilizing the TEE, DarkneTZ provides model protections with up to 10% overhead.

CCS CONCEPTS
• Security and privacy → Embedded systems security; • Computing methodologies → Machine learning.

ACM Reference Format:
Fan Mo, Ali Shahin Shamsabadi, Kleomenis Katevas, Soteris Demetriou, Ilias Leontiadis, Andrea Cavallaro, and Hamed Haddadi. 2020. DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments. In The 18th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '20), June 15–19, 2020, Toronto, ON, Canada. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3386901.3388946

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. MobiSys '20, June 15–19, 2020, Toronto, ON, Canada. © 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-1-4503-7954-0/20/06…$15.00. https://doi.org/10.1145/3386901.3388946

arXiv:2004.05703v1 [cs.LG] 12 Apr 2020

1 INTRODUCTION
Advances in memory and processing resources and the urge to reduce data transmission latency have led to a rapid rise in the deployment of various Deep Neural Networks (DNNs) on constrained edge devices (e.g., wearables, smartphones, and consumer Internet of Things (IoT) devices). Compared with centralized infrastructures (i.e., Cloud-based systems), hybrid and edge-based learning techniques enable methods for preserving users' privacy, as raw data can stay local [41]. Nonetheless, recent work demonstrated that local models still leak private information [21, 33, 34, 47, 57, 61–63]. This can be used by adversaries aiming to compromise the confidentiality of the model itself or that of the participants in training the model [48, 57]. The latter is part of a more general class of attacks, known as Membership Inference Attacks (referred to as MIAs henceforth). MIAs can have severe privacy consequences [33, 47], motivating a number of research works to focus on tackling them [1, 28, 35]. Predominantly, such mitigation approaches rely on differential privacy [14, 68], whose improvement in privacy preservation comes with an adverse effect on the model's prediction accuracy.

We observe that edge devices are now increasingly equipped with a set of software and hardware security mechanisms powered by processor (CPU) designs offering strong isolation guarantees. System designs such as Arm TrustZone can enforce memory isolation between an untrusted part of the system operating in a Rich Execution Environment (REE) and a smaller trusted component operating in a hardware-isolated Trusted Execution Environment (TEE), responsible for security-critical operations. If we could efficiently execute sensitive DNNs inside the trusted execution environments of mobile devices, this would allow us to limit the attack surface of models without impairing their classification performance. Previous work has demonstrated promising results in this space; recent advancements allow for high-performance execution of sensitive operations within a TEE [17, 19, 24, 50, 51]. These works have almost exclusively experimented with integrating DNNs in cloud-like devices equipped with Intel Software Guard eXtensions (SGX). However, this paradigm does not translate well to edge computing due to significant differences in the following three factors: attack surface, protection goals, and computational performance. The attack surface on servers is exploited to steal a user's private data, while the adversary on a user's edge device focuses on compromising a model's privacy. Consequently, the protection goal in most works combining deep learning with TEEs on the server (e.g., [17]
and [24]) is to preserve the privacy of a user's data during inference, while the protection on edge devices preserves both the model privacy and the privacy of the data used in training this model. Lastly, edge devices (such as IoT sensors and actuators) have limited computational resources compared to cloud computing devices; hence we cannot merely use performance results derived on an SGX-enabled system on the server to extrapolate measurements for TEE-enabled embedded systems. In particular, blindly integrating a DNN in an edge device's TEE might not be computationally practical or even possible. We need a systematic measurement of the effects of such designs on edge-like environments.

Since DNNs follow a layered architecture, this can be exploited to partition a DNN, having a sequence of layers executed in the untrusted part of the system while hiding the execution of sensitive layers in the trusted, secure environment. We utilize the TEE (i.e., Arm TrustZone) and perform a unique layer-wise analysis to illustrate the privacy repercussions of an adversary on relevant neural network models on edge devices, with the corresponding performance effects. To the best of our knowledge, we are the first to embark on examining to what extent this is feasible on resource-constrained mobile devices. Specifically, we lay out the following research question:

RQ1: Is it practical to store and execute a sequence of sensitive DNN layers inside the TEE of an edge device?

To answer this question we design a framework, namely DarkneTZ, which enables an exhaustive layer-by-layer resource consumption analysis during the execution of a DNN model. DarkneTZ partitions a model into a set of non-sensitive layers run within the system's REE and a set of sensitive layers executed within the trusted TEE. We use DarkneTZ to measure, for a given DNN (we evaluate two small and six large image classification models), the underlying system's CPU execution time, memory usage, and accurate power consumption for different layer partition choices. We demonstrate our prototype of DarkneTZ using the Open Portable TEE (OP-TEE, https://www.op-tee.org/) software stack running on a Hikey 960 board (https://www.96boards.org/product/hikey960/). OP-TEE is compatible with the mobile-popular Arm TrustZone-enabled hardware, while our choice of hardware closely resembles common edge devices' capabilities [42, 58]. Our results show that DarkneTZ only has 10% overhead when fully utilizing all available secure memory of the TEE for protecting a model's layers. These results illustrate that REE-TEE partitions of certain DNNs can be efficiently executed on resource-constrained devices. Given this, we next ask the following question:

RQ2: Are such partitions useful to both effectively and efficiently tackle realistic attacks against DNNs on mobile devices?

To answer this question, we develop a threat model considering state of the art MIAs against DNNs. We implement the respective attacks and use DarkneTZ to measure their effectiveness (adversary's success rate) for different model partition choices. We show that by hiding a single layer (the output layer) in the TEE of a resource-constrained edge device, the adversary's success rate degrades to random guessing, while (a) the resource consumption overhead on the device is negligible (3%) and (b) the accuracy of inference remains intact. We also demonstrate the overhead of fully utilizing TrustZone for protecting models, and show that DarkneTZ can be an effective first step towards achieving hardware-based model privacy on edge devices.

Paper Organisation. The rest of the paper is organized as follows: Section 2 discusses background and related work, and Section 3 presents the design and main components of DarkneTZ. Section 4 provides implementation details and describes our evaluation setup (our implementation is available online at https://github.com/mofanv/darknetz), while Section 5 presents our system performance and privacy evaluation results. Lastly, Section 6 discusses further performance and privacy implications that can be drawn from our systematic evaluation, and we conclude in Section 7.

2 BACKGROUND AND RELATED WORK

2.1 Privacy risks of Deep Neural Networks
Model privacy risks. With successful training (i.e., the model converging to an optimal solution), a DNN model "memorizes" features of the input training data [44, 57] (see [32, 64] for more details on deep learning), which it can then use to recognize unseen data exhibiting similar patterns. However, models have the tendency to include more specific information of the training dataset unrelated to the target patterns (i.e., the classes that the model aims to classify) [9, 57].

Moreover, each layer of the model memorizes different information about the input. Yosinski et al. [59] found that the first layers (closer to the input) are more transferable to new datasets than the last layers. That is, the first layers learn more general information (e.g., ambient colors in images), while the last layers learn information that is more specific to the classification task (e.g., face identity). The memorization difference per layer has been verified both in convolutional layers [60, 62] and in generative models [65]. Evidently, an untrusted party with access to the model can leverage the memorized information to infer potentially sensitive properties about the input data, which leads to severe privacy risks.

Membership inference attack (MIA). MIAs form a possible attack on devices which leverages memorized information in a model's layers to determine whether a given data record was part of the model's training dataset [48]. In a black-box MIA, the attacker leverages the model's outputs (e.g., confidence scores) and auxiliary information (e.g., public datasets or the public prediction accuracy of the model) to train shadow models or classifiers without accessing internal information of the model [48, 57]. However, in a white-box MIA, the attacker utilizes internal knowledge of the model (i.e., gradients and activations of layers) in addition to the model's outputs to increase the effectiveness of the attack [38]. It has been shown that the last layer (model output) carries the highest membership information about the training data [38]. We consider a white-box adversary in our threat model, as DNNs are fully accessible after being transferred from the server to edge devices [55]. In addition to this, a white-box MIA is a stronger adversary than a black-box MIA, as the information the adversary has access to in a black-box attack is a subset of that used in a white-box attack.
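To make the black-box setting above concrete, a deliberately simplified membership test can exploit the over-confidence of overfitted models on their training data. The sketch below is illustrative only (the threshold, scores, and function name are hypothetical); the attack considered in this paper is the much stronger white-box MIA of [38]:

```python
def confidence_mia(softmax_rows, threshold=0.9):
    """Toy black-box membership test: flag a record as a training-set
    Member when the model's top confidence score exceeds a threshold,
    since overfitted models tend to be over-confident on seen data."""
    return [max(row) > threshold for row in softmax_rows]

# Hypothetical softmax outputs for three records (each row sums to 1).
scores = [
    [0.97, 0.02, 0.01],  # very confident -> predicted Member
    [0.50, 0.30, 0.20],  # uncertain      -> predicted Non-member
    [0.92, 0.05, 0.03],
]
print(confidence_mia(scores))  # [True, False, True]
```

Real black-box MIAs [48] replace the fixed threshold with shadow models trained on auxiliary data, but the underlying signal (output confidence) is the same.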
2.2 Deep learning in the TEE
Trusted execution environment (TEE). A TEE is a trusted component which runs in parallel with the untrusted Rich operating system Execution Environment (REE) and is designed to provide safeguards for ensuring the confidentiality and integrity of its data and programs. This is achieved by establishing an isolated region on the main processor; both hardware and software approaches are utilized to isolate this region. The chip includes additional elements such as unchangeable private keys or secure bits set during manufacturing, which help ensure that untrusted parts of the platform (even privileged OS or hypervisor processes) cannot access TEE content [7, 10].

In addition to strong security guarantees, TEEs also provide better computational performance than existing software protections, making them suitable for computationally expensive deep learning tasks. For example, advanced techniques such as fully homomorphic encryption enable operators to process encrypted data and models without decryption during deep learning, but significantly increase the computation cost [3, 37]. Conversely, TEE protection only requires additional operations to build the trusted environment and the communication between trusted and untrusted parts, so its performance is comparable to normal executions in an untrusted environment (e.g., an OS).

Deep learning with TEEs. Previous work leveraged TEEs to protect deep learning models. Apart from the unique attack surface, and thus protection goals, we consider, these works also differ from our approach in one more aspect: they depend on an underlying computer architecture which is more suitable for cloud environments. Recent work has suggested executing a complete deep learning model in a TEE [10], where during training, users' private data is transferred to the trusted environment using trusted paths. This prevents the host Cloud from eavesdropping on the data [39]. Several other studies improved the efficiency of TEE-resident models using Graphics Processing Units (GPUs) [51], multiple memory blocks [24], and high-performance ML frameworks [25]. More similar to our approach, Gu et al. [17] partitioned DNN models and only enclosed the first layers in an SGX-powered TEE to mitigate input information disclosures of user images fed to the device in real time. In contrast, the membership inference attacks we consider become more effective by accessing information in the last layers. All these works use an underlying architecture based on Intel's SGX, which is not suitable for edge devices. Edge devices usually have chips designed using Reduced Instruction Set Computing (RISC), peripheral interfaces, and much lower computational resources (around 16 mebibytes (MiB) of memory for the TEE) [15]. Arm's TrustZone is the most widely used TEE implementation in edge devices. It involves a more comprehensive trusted environment, including the security extensions for the AXI system bus, processors, interrupt controller, TrustZone address space controller, etc. Camera or voice input connected to the APB peripheral bus can be controlled as a part of the trusted environment by the AXI-to-APB bridge. Utilizing TrustZone for on-device deep learning requires more development and investigation because of its different features compared to SGX.

2.3 Privacy-preserving methods
An effective method for reducing the memorization of private information of training data in a DNN model is to avoid overfitting via imposing constraints on the parameters and utilizing dropout [48]. Differential Privacy (DP) can also obfuscate the parameters (e.g., adding Gaussian noise to the gradients) during training to control each input's impact on them [1, 61]. However, DP may negatively affect the utility (i.e., the prediction accuracy) if the noise is not carefully designed [45]. In order to obfuscate private information only, one could apply methods such as generative neural networks [54] or adversarial examples [29] to craft noise for one particular data record (e.g., one image), but this requires additional computational resources, which are already limited on edge devices.

Server-Client model partition. General information processed in the first layers [59] during forward propagation of deep learning often includes more important indicators for the inputs than those in the last layers (the opposite of membership indicators), since reconstructing the updated gradients or activations of the first layers can directly reveal private information of the input [6, 13]. Based on this, hybrid training models have been proposed which run the first several layers at the client side for feature extraction and then upload these features to the server side for classification [40]. Such partition approaches delegate parts of the computation from the servers to the clients, and thus, in these scenarios, striking a balance between privacy and performance is of paramount importance. Gu et al. [17] follow a similar layer-wise method and leverage TEEs on the cloud to isolate the more private layers. Clients' private data are encrypted and then fed into the cloud TEE so that the data and first several layers are protected. This method expands the clients' trusted boundary to include the server's TEE and utilizes an REE-TEE model partition at the server, which does not significantly increase clients' computation cost compared to running the first layers on client devices. To further increase training speed, it is also possible to transfer all linear layers outside a cloud's TEE into an untrusted GPU [51]. All these partitioning approaches aim to prevent leakage of users' private information (to the server or others), yet do not prevent leakage from trained models when models are executed on the users' edge devices.

3 DARKNETZ
We now describe DarkneTZ, a framework for preserving DNN models' privacy on edge devices. We start with the threat model which we focus on in this paper.

3.1 Threat Model
We consider an adversary with full access to the REE of an edge device (e.g., the OS): this could be the actual user, malicious third-party software installed on the device, or a malicious or compromised OS. We only trust the TEE of an edge device to guarantee the integrity and confidentiality of the data and software in it. In particular, we assume that a DNN model is pre-trained using private data from the server or other participating nodes. We assume the model providers can fully guarantee the model's privacy during training on their servers by utilizing existing protection methods [39], or even by training the model offline, so the model can be secretly provisioned to the user devices without other privacy issues.

Figure 1: DarkneTZ uses an on-device TEE to protect a set of layers of a deep neural network for both inference and fine-tuning. (Note: The trusted compute base, or trust boundary, for the model owner on edge devices is the TEE of the device.)

3.2 Design Overview
DarkneTZ's design aims at mitigating attacks on on-device models by protecting layers and the output of the model at low cost by utilizing an on-device TEE. It should be compatible with edge devices. That is, it should integrate with TEEs which can run on hardware technologies found on commodity edge devices (e.g., Arm TrustZone), and use standard TEE system architectures and corresponding APIs.

We propose DarkneTZ, illustrated in Figure 1, a framework that enables DNN layers to be partitioned into two parts deployed respectively into the REE and TEE of edge devices. DarkneTZ allows users to do inference with, or fine-tuning of, a model seamlessly (the partition is transparent to the user) while at the same time considering the privacy concerns of the model's owner. A corresponding Client Application (CA) and Trusted Application (TA) perform the operations in the REE and TEE, respectively. Without loss of generality, DarkneTZ's CA runs layers 1 to l in the REE, while its TA runs layers l+1 to L in the TEE during fine-tuning or inference of a DNN. This DNN partitioning can help the server to mitigate several attacks such as MIAs [36, 38], as the last layers have a higher probability of leaking private information about training data (see Section 2). DarkneTZ expects sets of layers to be pre-provisioned in the TEE by the analyst (if the framework is used for offline measurements) or by the device OEM if a version of DarkneTZ is implemented on consumer devices. Note that in the latter case, secret provisioning of sensitive layers can also be performed over the air, which might be useful when the sensitive layer selection needs to be dynamically determined and provisioned to the edge device after supply. In this case, one could extend DarkneTZ to follow a variation of the SIGMA secure key exchange protocol [30], modified to include remote attestation, similar to [66]. SIGMA is provably secure and efficient. It guarantees perfect forward secrecy for the session key (to defend against replay attacks), while its use of message authentication codes ensures server and client identity protection. Integrating remote attestation guarantees that the server provisions the model to a non-compromised edge device.

3.3 Model Preparation
Once the model is provisioned, the CA requests the layers from the device's storage (e.g., a solid-state drive (SSD)) and invokes the TA. The CA first builds the DNN architecture and loads the parameters of the model into normal memory (i.e., non-secure memory) to process all calculations and manipulations of the non-sensitive layers in the REE. When encountering (secretly provisioned) encrypted layers that need to be executed in the TEE, which is determined by the model owner's setting, the CA passes them to the TA. The TA decrypts these layers using a key that is securely stored in the TEE (using secure storage), and then runs the more sensitive layers in the TEE's secure memory. The secure memory is indicated by one additional address bit introduced to all memory system transactions (e.g., cache tags, memory, and peripherals) to block non-secure access [7]. At this point, the model is ready for fine-tuning and inference.

3.4 DNN Partitioned Execution
The forward pass of both inference and fine-tuning passes the input $a_0$ to the DNN to produce the activations of the layers up to the last layer; i.e., layer $l$'s activation is calculated as $a_l = f(w_l a_{l-1} + b_l)$, where $w_l$ and $b_l$ are the weights and biases of this layer, $a_{l-1}$ is the activation of its previous layer, and $f$ is the non-linear activation function. Therefore, after the CA processes its layers from 1 to $l$, it invokes a command to transfer the outputs (i.e., activation) of layer $l$ (i.e., the last layer in the CA) to the secure memory through a buffer (in shared memory). The TA switches to the forward_net_TA function corresponding to the invoked command to receive the parameters (i.e., outputs/activation) of layer $l$ and processes the following forward pass of the network (from layer $l+1$ to layer $L$) in the TEE. In the end, the outputs of the last layer are first normalized as $\hat{a}_L$ to control the membership information leakage, and are returned via shared memory as the prediction results.

The backward pass of fine-tuning computes the gradients of the loss function $\mathcal{L}(a_L, y)$ with respect to each weight $w_l$ and bias $b_l$, and updates the parameters of all layers, $\{w_l\}_{l=1}^{L}$ and $\{b_l\}_{l=1}^{L}$, as $w_l = w_l - \eta \, \partial \mathcal{L}(a_L, y)/\partial w_l$ and $b_l = b_l - \eta \, \partial \mathcal{L}(a_L, y)/\partial b_l$, where $\eta$ is a constant called the learning rate and $y$ is the desired output (i.e., the label). The TA can compute the gradient of the loss function by receiving $y$ from the CA and backpropagate it to the CA in order to update all the parameters. In the end, to save the fine-tuned model on the device, all layers in the TA are encrypted and transferred back to the CA.

4 EXPERIMENT SETTINGS

4.1 Models and Datasets
We first use two popular DNNs, namely AlexNet and VGG-7, to measure the system's performance. AlexNet has five convolutional layers (i.e., with kernel sizes 11, 5, 3, 3, and 3) followed by a fully-connected and a softmax layer, and VGG-7 has eight layers (i.e., seven convolutional layers with kernel size 3, followed by a fully-connected layer). Both AlexNet and VGG-7 use ReLU (Rectified Linear Unit) activation functions for all convolutional layers. The number of neurons for AlexNet's layers is 64, 192, 384, 256, and 256, while the number of neurons for VGG-7's layers is 64, 64, 124, 124, 124, 124, and 124. We train the networks and conduct inference on CIFAR-100 and ImageNet Tiny. We use image classification datasets, as a recent empirical study shows that the majority of smartphone applications (70.6%) that use deep learning are for image processing [55]. Moreover, the state of the art MIA we are considering was demonstrated against such datasets [38]. CIFAR-100 includes 50k training and 10k test images of size 32×32×3 belonging to 100 classes. ImageNet Tiny is a simplified ImageNet challenge that has 100k training and 10k test images of size 64×64×3 belonging to 200 classes. In addition to this, we use six available DNNs (Tiny Darknet (4 megabytes (MB)), Darknet Reference (28 MB), Extraction [49] (90 MB), Resnet-50 [20] (87 MB), Densenet-201 [23] (66 MB), and Darknet-53-448 (159 MB)), pre-trained on the original ImageNet [11] dataset, to measure DarkneTZ's performance during inference (all pre-trained models are available at https://pjreddie.com/darknet/imagenet/). ImageNet has 1000 classes, and consequently these DNN models' last layers occupy larger memory that can exceed the TEE's limits, compared to models with 100/200 classes. Therefore, for these six models, we only evaluate the condition that their last layer is in the TEE.

To evaluate the defence's effectiveness against MIAs, we use the same models as those used in the demonstration of the attack [38] (AlexNet, VGG-7, and ResNet-110). This ResNet with depth 110 is an existing network architecture that has three blocks (each with 36 convolutional layers) in the middle, another convolutional layer at the beginning, and one fully connected layer at the end [20]. We use published models trained (with 164 epochs) on CIFAR-100 [31], available at https://github.com/bearpaw/pytorch-classification. We also train three models on ImageNet Tiny (https://tiny-imagenet.herokuapp.com/) with 300 epochs as target models (i.e., victim models during attacks). The models with the highest validation accuracy are used after training. We follow the methodology of [38]: all training and test datasets are randomly split into two parts of equal size so that the MIA model learns both Member and Non-member images. For example, 25K training images and 5K test images from CIFAR-100 are chosen to train the MIA model, and then the model's test precision and recall are evaluated using 5K training images and 5K test images from the rest of CIFAR-100.

4.2 Implementation and Evaluation Setup
We develop an implementation based on the Darknet [46] DNN library. We chose this particular library because of its high computational performance and small library dependencies, which fit within the limited secure memory of the TEE. We run the implementation on Open Portable TEE (OP-TEE), which provides the software (i.e., operating systems) for an REE and a TEE designed to run on top of Arm TrustZone-enabled hardware.

For TEE measurements, we focus on the performance of deep learning, since secret provisioning only happens once when updating the model from servers. We implement 128-bit AES-GCM for on-device secure storage of sensitive layers. We test our implementation on a Hikey 960 board, a widely used device [4, 8, 12, 58] that promises to be comparable with mobile phones (and other existing products) due to its Android Open Source Project support. The board has four Arm Cortex-A73 cores and four Arm Cortex-A53 cores (pre-configured to 2362 MHz and 533 MHz, respectively, by the device OEM), 4 GB LPDDR4 SDRAM, and provides 16 MiB of secure memory for trusted execution, which includes 14 MiB for the TA and 2 MiB for the TEE run-time. Another 2 MiB of shared memory is allocated from non-secure memory. As the Hikey board adjusts the CPU frequency automatically according to the CPU temperature, we decrease and fix the frequency of the Cortex-A73 cores to 903 MHz and keep the frequency of the Cortex-A53 cores at 533 MHz. During experiments we introduce a 120-second system sleep per trial to make sure that the CPU temperature begins under 40°C, to avoid underclocking.

Edge devices suffer from limited computational resources, and as such, it is paramount to measure the efficiency of deep learning models when partitioned to be executed partly by the OS and partly by the TEE. In particular, we monitor and report CPU execution time (in seconds), memory usage (in megabytes), and power consumption (in watts) when the complete model runs in the REE (i.e., the OS), and compare it with different partitioning configurations where more sensitive layers are kept within the TEE. CPU execution time is the amount of time that the CPU was used for deep learning operations (i.e., fine-tuning or inference). Memory usage is the amount of the mapping that is currently resident in main memory (RAM) occupied by our process for deep learning related operations. Power consumption is the electrical energy consumption per unit time required by the Hikey board.

More specifically, we utilized the REE's /proc/self/status for accessing the process information to measure the CPU execution time and memory usage of our implementation. CPU execution time is the amount of time for which the CPU was used for processing instructions of software (as opposed to wall-clock time, which includes input/output operations) and is further split into (a) time in user mode and (b) time in kernel mode. The REE kernel time captures together (1) the time spent by the REE's kernel and (2) the time spent by the TEE (in both user mode and kernel mode). This kernel time gives us a direct perception of the overhead when including TEEs for deep learning versus using the same REE without a TEE's involvement.
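The per-process accounting described above can be reproduced with a short parser. The sketch below assumes standard Linux procfs formats (note that on stock Linux the utime/stime counters live in /proc/self/stat, while VmRSS is reported in /proc/self/status); the sample lines are fabricated for illustration:

```python
import os

# Length of a kernel clock tick differs per system; 100 Hz is typical.
CLK_TCK = os.sysconf("SC_CLK_TCK") if hasattr(os, "sysconf") else 100

def cpu_times(stat_line):
    """Return (user_seconds, kernel_seconds) from a /proc/<pid>/stat line.
    Fields 14 and 15 (1-indexed) are utime and stime in clock ticks; we
    split after the closing ')' because the comm field may contain spaces."""
    fields = stat_line.rsplit(")", 1)[1].split()
    utime, stime = int(fields[11]), int(fields[12])  # overall fields 14 and 15
    return utime / CLK_TCK, stime / CLK_TCK

def rss_kib(status_text):
    """Return the resident set size (VmRSS, in KiB) from /proc/<pid>/status."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])
    raise ValueError("VmRSS not found")

# Fabricated snapshots of the two proc files:
stat = "1234 (darknetz) R 1 1 1 0 -1 4194304 500 0 0 0 4200 180 0 0"
status = "VmPeak:\t  9000 kB\nVmRSS:\t  5120 kB\n"
user_s, kernel_s = cpu_times(stat)
print(rss_kib(status))  # -> 5120
```

As the paper notes, the kernel-mode component of these counters is what absorbs the TEE's execution time, so comparing kernel time with and without a TA gives the TEE overhead directly.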
Memory usage is represented using resident set size (RSS) memory in the REE, but the memory occupied in the TEE is not counted by the RSS, since the REE does not have access to gather memory usage information of the TEE. The TEE is designed to conceal this sensitive information (e.g., both CPU time and memory usage); otherwise, the confidentiality of TEE contents could easily be breached by side-channel attacks [53]. To overcome this, we trigger an abort from the TEE after the process runs stably (memory usage tends to be fixed) to obtain the memory usage of the TEE.

To accurately measure the power consumption, we used a Monsoon High Voltage Power Monitor (https://www.msoon.com/), a high-precision power metering hardware capable of measuring the current consumed by a test device with a voltage range of 0.8 V to 13.5 V and up to 6 A continuous current. We configured it to power the Hikey board using the required 12 V voltage while recording the consumed current at a 50 Hz sampling rate. For conducting the MIA, we use a machine with 4 Intel(R) Xeon(R) E5-2620 CPUs (2.00 GHz), an NVIDIA Quadro RTX 6000 (24 GB), and 24 GB DDR4 RAM. PyTorch v1.0.1 [43] is used as the DNN library.

4.3 Measuring Privacy in MIAs
We define the adversarial strategy in our setting based on state-of-the-art white-box MIAs, which observe the behavior of all components of the DNN model [38]. White-box MIAs can achieve higher accuracy in distinguishing whether one input sample is present in the private training dataset compared to black-box MIAs, since the latter only have access to the model's output [48, 57]. Besides, white-box MIAs are also highly plausible in on-device deep learning, where a model user can observe not only the output, but also fine-grained information such as the values of the cost function, gradients, and activations of layers.

We evaluate the membership information exposure of a set of the target model's layers by employing the white-box MIA [38] on these layers. The attacker feeds the target data to the model and leverages all possible information in the white-box setting, including the activations of all layers, the model's output, the loss function, and the gradients of the loss function with respect to the parameters of each layer. It then separately analyses each information source, extracting features from the activation of each layer, the model's output, and the loss function via fully connected neural networks with one hidden layer, while using convolutional neural networks for the gradients. All extracted features are combined in a global feature vector that is later used as an input for an inference attack model. The attack model predicts a single value (i.e., Member or Non-member) that represents the membership information of the target data (we refer interested readers to [38] for a detailed description of this MIA). We use the test accuracy of the MIA model trained on a set of layers to represent the advantage of adversaries, as well as the sensitivity of these layers.

To measure the privacy risk when part of the model is in the TEE, we conduct this MIA on our target model in two different settings: (i) starting from the first layer, we add the later layers one by one until the end of the network, and (ii) starting from the last layer, we add the previous layers one by one until the beginning of the network. However, the information available about one specific layer during the fine-tuning phase and during the inference phase differs when starting from the first layers. Inference only has a forward propagation phase, which computes the activation of each layer. During fine-tuning, because of the backward propagation, the gradients of the layers are also visible in addition to the activations. In contrast, attacks starting from the last layers can observe the same information in both inference and fine-tuning, since layers' gradients can be calculated based on the cost function. Therefore, in setting (i), we utilize activations, gradients, and outputs. In setting (ii), we only use the activation of each layer to evaluate inference, and use both activations and gradients to evaluate fine-tuning, since the outputs of the model (e.g., confidence scores) are not accessible in this setup.

5 EVALUATION RESULTS
In this Section we first evaluate the efficiency of DarkneTZ when protecting a set of layers in the TrustZone, to answer RQ1. To evaluate system efficiency, we measure the CPU execution time, memory usage, and power consumption of our implementation for both training and inference on AlexNet and VGG-7 trained on the two datasets. We protect the last layers (starting from the output), since they are more vulnerable to attacks (e.g., MIAs) on models. The cost layer (i.e., the cost function) and the softmax layer are considered as a separate layer, since they contain highly sensitive information (i.e., confidence scores and cost function). Starting from the last layer, we include the maximum number of layers that the TrustZone can hold. To answer RQ2, we use the MIA success rate, indicating the membership probability of the target data (the more DarkneTZ limits this, the stronger the privacy guarantees). We demonstrate the effect on performance and discuss the trade-off between performance and privacy using MIAs as one example.

5.1 CPU Execution Time
As shown in Figure 2, the results indicate that including more layers in the TrustZone results in increasing CPU time for deep learning operations, where the most expensive addition is putting in the maximum number of layers. Figure 2a shows the CPU time when training AlexNet and VGG-7 with TrustZone on the CIFAR-100 and ImageNet Tiny datasets, respectively. This increasing trend is significant and consistent for both datasets (CIFAR-100: F(6,133)=29.37, p<0.001; F(8,171)=321.3, p<0.001. ImageNet Tiny: F(6,133)=37.52, p<0.001; F(8,171)=28.5, p<0.001). We also observe that protecting only the last layer in the TrustZone has a negligible effect on CPU utilization, while including more layers to fully utilize the TrustZone during training can increase CPU time (by 10%). For inference, the increasing trend is also significant (see Figure 2b). Protecting only the last layer increases CPU time by only around 3%, which can grow up to 10× when the maximum possible number of layers is included in the TrustZone.

To further investigate the increasing CPU execution time effect, we analyzed all types of layers (both trainable and non-trainable) separately in the TrustZone. Trainable layers have parameters (e.g., weights and biases) that are updated (i.e., trainable) during the training phase. Fully connected layers and convolutional layers are trainable. Dropout, softmax, and maxpooling layers are non-trainable.
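The REE/TEE layer splits whose cost is measured here follow the partitioned forward pass of Section 3.4, a_l = f(w_l a_{l-1} + b_l). A minimal numerical sketch (the CA/TA roles are simulated as plain functions, with no real TrustZone, secure memory, or output normalization involved):

```python
import numpy as np

def relu(x):
    # Non-linear activation f used at every toy layer.
    return np.maximum(x, 0.0)

def forward(layers, a):
    """Sequential forward pass: a_l = f(W_l @ a_{l-1} + b_l)."""
    for W, b in layers:
        a = relu(W @ a + b)
    return a

rng = np.random.default_rng(0)
# Toy 4-layer network; in DarkneTZ terms, layers 1..l run in the CA (REE)
# and layers l+1..L run in the TA (TEE).
layers = [(rng.standard_normal((8, 8)), rng.standard_normal(8)) for _ in range(4)]
split = 2  # l: index of the last layer executed in the untrusted REE

a0 = rng.standard_normal(8)
a_l = forward(layers[:split], a0)    # CA: non-sensitive prefix, normal memory
out = forward(layers[split:], a_l)   # TA: sensitive suffix (would run in secure memory,
                                     # receiving a_l through the shared-memory buffer)

# Partitioned execution is numerically identical to the unpartitioned pass.
assert np.allclose(out, forward(layers, a0))
```

Moving `split` toward 0 corresponds to the configurations evaluated in this section: more of the suffix inside the TrustZone, at increasing CPU cost.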
DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments. MobiSys '20, June 15–19, 2020, Toronto, ON, Canada

Figure 2: The CPU time of each step of training models or conducting inference on CIFAR-100 and ImageNet Tiny, protecting consecutive last layers using TrustZone. (a) CPU time of training; (b) CPU time of inference. (For example: when putting the last layers in the TrustZone, 1 refers to the cost function and the softmax layer, 2 includes 1 and the previous fully-connected layer, 3 includes 2 and the previous convolutional layers, etc. Horizontal dashed lines represent the baseline where all layers are out of the TrustZone. 20 times for each trial, and error bars are 95% CI. Several error bars are invisible as they are too small to be shown in this figure as well as the following figures.)

As shown in Figure 3, different turning points exist where the CPU time significantly increases (p<0.001) compared to the previous configuration (i.e., one more layer is moved into the TrustZone); Tukey HSD [2] was used for the post hoc pairwise comparison. When conducting training, the turning points appear when putting the maxpooling layer in the TrustZone for AlexNet (see Figure 3a) and when putting the dropout layer and the maxpooling layer for VGG-7 (see Figure 3b). All these layers are non-trainable. When conducting inference, the turning points appear when including the convolutional layers in TrustZone for both AlexNet (see Figure 3c) and VGG-7 (see Figure 3d), which are one step behind the corresponding points for training. One possible reason for the increased CPU time during inference is that the TrustZone needs to conduct extra operations (e.g., related secure memory allocation) for the trainable layer, as shown in Figure 3c and Figure 3d, where all increases happen when one trainable layer is included in the TrustZone.

Since we only conduct one-time inference during experiments, the operations of invoking TEE libraries, creating the TA, and allocating secure memory for the first time significantly increase the execution time compared to subsequent operations. Every subsequent inference attempt (run continuously, without rebuilding the model) does not incur additional CPU time overhead. Figure 4 also shows that most of the increased CPU execution time (from ∼0.1 s to ∼0.6 s) is observed in kernel mode, which includes the execution in TrustZone. Operations that need to create the TA (to restart the TEE and load TEE libraries from scratch), such as one-time inference, should be handled by preloading the TA before conducting inference in practical applications.

During training, the main reason for the increased CPU time is that protecting non-trainable layers in the TrustZone results in an additional transmission of their previous trainable layers from the REE to the TrustZone. Non-trainable layers (i.e., dropout and maxpooling layers) are processed using a trainable layer as the base, and the non-trainable operation manipulates its previous layer (i.e., the trainable layer) directly. To hide the non-trainable layer and to prevent its next layer from being transferred to the REE during backward propagation (as mentioned in Section 3.4), we also move the previous convolutional layer to the TrustZone, which is why the turning points for training are one layer ahead of the turning points for inference. Therefore, in practical applications, we should protect the trainable layer and its preceding non-trainable layer together, since protecting only the non-trainable layer still requires moving its trainable layer into TrustZone and does not reduce the cost.

5.2 Memory Usage

Training with the TrustZone does not significantly influence memory usage (in the REE), as it is similar to training without TrustZone (see Figure 5a). Inference with TrustZone uses less memory (in the REE) (see Figure 5b), but there is still no difference when more layers are placed into TrustZone. Memory usage (in the REE) decreases because layers are moved to TrustZone and occupy secure memory instead. We measure the TA's memory usage using all mapping sizes in secure memory based on the TA's abort information. The TA uses five memory regions with sizes of 0x1000, 0x101000, 0x1e000, 0xa03000, and 0x1000, which is 11408 KiB in total for all configurations. The mapping size of secure memory is fixed when the TEE runtime allocates memory for the TA, and it is not affected by moving more layers into that memory. Therefore, because model sizes differ, a good setting is to maximize the TA's memory mapping size in TrustZone in order to hold several layers of a possibly large model.

5.3 Power Consumption

For training, the power consumption significantly decreases (p<0.001) when more layers are moved inside TrustZone (see Figure 5c). In contrast, the power consumption during inference significantly increases (p<0.001), as shown in Figure 5d.

Figure 3: The CPU time of each step of training models or conducting inference on CIFAR-100, protecting consecutive last layers using TrustZone. (a) Training with AlexNet; (b) Training with VGG-7; (c) Inference with AlexNet; (d) Inference with VGG-7. (Note: The x-axis corresponds to the several last layers included in the TrustZone. CT, SM, FC, D, MP, and C refer to the cost, softmax, fully connected, dropout, maxpooling, and convolutional layers. 1, 2, 3, 4, and 5 on the x-axis correspond to the x-axis of Figure 2. Horizontal dashed lines represent the baseline where all layers are out of the TrustZone. 20 times for each trial, and error bars are 95% CI.)

In both training and inference settings, the trend of power consumption is likely related to the change of CPU time (see Figure 2). More specifically, their trajectories in the figures have the same turning points (i.e., decreases or increases when moving the same layer to the TEE). One reason for the increased power consumption during inference is the significant increase in the number of CPU executions for invoking the required TEE libraries, which consume additional power. When a large number of low-power operations (e.g., memory operations for mapping areas) are involved, the power consumption (i.e., energy consumed per unit time) can be lower compared to when a few CPU-bound, computationally intensive operations are running. This might be one of the reasons behind the decreased power consumption during training.

System performance on large models. We also test the performance of DarkneTZ on several models trained on ImageNet when protecting the last layer only, including the softmax layer (or the pooling layer) and the cost layer in TrustZone, in order to hide confidence scores and the calculation of the cost. The results show that the overhead of protecting large models is negligible (see Figure 6): increases in CPU time, memory usage, and power consumption are lower than 2% for all models. Among these models, the smaller models (e.g., Tiny Darknet and the Darknet Reference model) tend to have a higher rate of increase in CPU time compared to the larger models (e.g., the Darknet-53 model), indicating that with larger models, the influence of TrustZone protection on resource consumption becomes relatively smaller.

Figure 4: The CPU execution time in user mode and kernel mode of each step of training the model or conducting inference on CIFAR-100, protecting consecutive last layers using TrustZone. (a) Training on CIFAR-100; (b) Inference on CIFAR-100. (Note: Horizontal dot-dashed lines represent the baseline where all layers are out of the TrustZone. 20 times for each trial. CPU time in user mode in Figure 4b is too small to be shown.)

System performance summary. In summary, it is practical to process a sequence of a sensitive DNN model's layers inside the TEE of a mobile device. Putting the last layer in the TrustZone does not increase CPU time and only slightly increases memory usage (by no more than 1%). The power consumption increase is also minor (no more than 0.5%) when fine-tuning the models. For inference, securing the last layer does not increase memory usage but increases CPU time and power consumption (by 3%). Including more layers to fully utilize the TrustZone during training can further increase CPU time (by 10%) but does not harm power consumption. One-time inference with multiple layers in the TrustZone still requires further development, such as utilizing preliminary loading of the TA, in practical applications.

5.4 Privacy

We conduct the white-box MIA (Section 4.3) on all target models (see Section 4.1 for the choice of models) to analyze the privacy risk while protecting several layers in the TrustZone. We use the standard precision and recall metrics, similar to previous works [48]. In our context, precision is the fraction of records that an attacker infers as being members that are indeed members of the training set. Recall is the fraction of training records that have been identified correctly as members. The performance of both models and MIAs is shown in Table 1. Figure 7 shows the attack success precision and recall for different configurations of DarkneTZ. In each configuration, a different number of layers is protected by TrustZone before we launch the attack. The configurations with zero layers
protected correspond to DarkneTZ being disabled (i.e., with our defense disabled). In particular, we measure the MIA adversary's success under two main configuration settings of DarkneTZ.

Figure 5: The memory usage and power consumption while conducting training or inference on CIFAR-100 and ImageNet Tiny, protecting consecutive last layers using TrustZone. (a) Memory usage of training; (b) Memory usage of inference; (c) Power consumption of training; (d) Power consumption of inference. (Note: Horizontal dashed lines represent the baseline where all layers are outside the TrustZone. 20 times for each trial; error bars are 95% CI.)

Figure 6: Performance when protecting the last layer of models trained on ImageNet in TrustZone for inference. (Note: 20 times per trial; error bars are too small to be visible in the plot.)

Table 1: Training and testing accuracy (Acc.) and corresponding MIA precision (Pre.) with or without DarkneTZ (DTZ) for all models and datasets.

Dataset        Model       Train Acc.  Test Acc.  Attack Pre.  Attack Pre. (DTZ)
CIFAR-100      AlexNet     97.0%       43.9%      84.7%        51.1%
CIFAR-100      VGG-7       83.8%       62.7%      71.5%        50.5%
CIFAR-100      ResNet-100  99.6%       72.4%      88.3%        50.6%
ImageNet Tiny  AlexNet     40.3%       31.5%      56.7%        50.0%
ImageNet Tiny  VGG-7       57.1%       48.6%      54.2%        50.8%
ImageNet Tiny  ResNet-110  62.1%       54.2%      54.6%        50.2%

In the first setting, we incrementally add consecutive layers in the TrustZone, starting from the front layers and moving towards the last layers until the complete model is protected. In the second setting we do the opposite: we start from the last layer and keep adding previous layers into TrustZone for each configuration. Our results show that when protecting the first layers in TrustZone, the attack success precision does not change significantly. In contrast, hiding the last layers can significantly decrease the attack success precision, even when only a single layer (i.e., the last layer) is protected by TrustZone. The precision decreases to ∼50% (random guessing)
MobiSys’20,June15–19,2020,Toronto,ON,CanadaF.Mo,A.S.Shamsabadi,K.Katevas,S.Demetriou,I.Leontiadis,A.Cavallaro,andH.HaddadiFigure7:Precisionandrecallofwhite-boxmembershipinferenceattackswhenfirstorlastlayersofthemodel,trainedonCIFAR-100,areprotectedusingTrustZone.(Note:Forfirstlayerprotection,1referstothefirstlayer,2referstothefirstandthesecondlayer,etc.Forlastlayerprotection,1referstothelastlayer(i.e.,theoutputlayer),2referstothelastandsecondlastlayer,etc.0meansthatalllayersareoutoftheTrustZone.Dashedlinesat50%representbaselines(i.e.,randomguess).Eachtrialhasbeenrepeated5times,anderrorbarsare95%CI).nomatterhowaccuratetheattackisbeforethedefense.Forex-ample,fortheAlexNetmodeltrainedonCIFAR-100,theprecisiondropsfrom85%to∼50%whenweonlyprotectthelastlayerinTrustZone.Precisionismuchhigherthanrecallsincethenumberofmembersintheadversary’strainingsetislargerthanthatofnon-members,sotheMIAmodelpredictsmemberimagesbetter.Theresultsalsoshowthatthemembershipinformationthatleaksduringinferenceandfine-tuningisverysimilar.Moreover,accord-ingto[38]and[48],theattacksuccessprecisionisinfluencedbythesizeoftheattackers’trainingdataset.Weusedrelativelylargedatasets(halfofthetargetdatasets)fortrainingMIAmodelssothatitishardfortheattackertoincreasesuccessprecisionsignificantlyinourdefensesetting.Therefore,byhidingthelastlayerinTrust-Zone,theadversary’sattackprecisiondegradesto50%(randomguess)whiletheoverheadisunder3%.WealsoevaluatedtheprivacyriskwhenDarkneTZprotectsthemodel’soutputsinTrustZonebynormalizingitbeforeoutputtingpredictionresults.Inthisconfigurationweconductthewhite-boxMIAswhenallotherlayers(intheuntrustedREE)areaccessiblebytheadversary.Thismeansthatthecostfunctionisprotected,andtheconfidencescore’soutputsarecontrolledbyTrustZone.Threecombinationsofmodelsanddatasets,includingAlexNet,VGG-7,andResNetonCIFAR-100areselectedastheywereidentifiedasmorevulnerable(i.e.,withhighattackprecisionseeTable1)toMIAs[38].DarkneTZissettocontrolthemodel’soutputsinthreedifferentways:(a)top-1classwit
hitsconfidencescore;(b)top-5classeswiththeirconfidencescores;(c)allclasseswiththeirconfidencescores.AsshowninFigure8allthreemethodscansignificantly(p<0.001)decreasetheattacksuccessperformancetoaround50%(i.e.,randomguess).Therefore,wefoundthatitishighlypracticaltouseDarkneTZtotackleMIAs:itincurslowresourceconsumptioncostwhileachievinghighprivacyguarantees.Figure8:Precisionofwhite-boxmembershipinferenceat-tacksonmodelstrainedonCIFAR-100whenonlyoutputsareprotectedusingTrustZone(Dashedlinesat50%repre-sentbaselines(i.e.,randomguess).5timesforeachtrial,anderrorbarsare95%CI).6DISCUSSION6.1SystemPerformanceEffectsofthemodelsize.Weshowedthatprotectinglargemod-elswithTrustZonetendstohavealowerrateofincreaseofCPUexecutiontimethanprotectingsmallmodels(seeFigure6).Onepos-sibleexplanationisthatthelastlayerofalargermodelusesalowerproportionofcomputationalresourcesinthewholemodelcom-paredtothatofasmallermodel.Wehavealsoexaminedtheeffectofdifferenthardware:weexecutedourimplementationofDark-neTZwithsimilarmodelsizesonaRaspberryPi3ModelB(RPi3B) 
and found it to have a lower rate of increase in cost (i.e., lower overhead) than when executed on the Hikey board [36]. This is because the Hikey board has much faster processors optimized for matrix calculations, which makes the additional operations of utilizing TrustZone more noticeable relative to the normal executions (e.g., deep learning operations) in the REE. Moreover, our results show that a typical configuration of the TrustZone (16 MiB secure memory) is sufficient to hold at least the last layer of practical DNN models (e.g., trained on ImageNet). However, it is challenging to fit multiple layers of large models in a significantly smaller TEE. We tested a TEE with 5 MiB secure memory on a Grapeboard (https://www.grapeboard.com/): only 1,000 neurons (corresponding to 1,000 classes) in the output layer already occupy 4 MiB of memory when using floating-point arithmetic. In such environments, model compression, such as pruning [18] and quantization [27, 52], could be one way to facilitate including more layers in the TEE. Lastly, we found that utilizing TEEs to protect the last layer does not necessarily lead to resource consumption overhead, which deserves further investigation in future work. Overall, our results show that utilizing TrustZone to protect the outputs of large DNN models is effective and highly efficient.

Extrapolating to other mobile-friendly models. We have used Tiny Darknet and Darknet Reference for testing DarkneTZ's performance on mobile-friendly models (for ImageNet classification). Other widely used DNNs on mobile devices, SqueezeNet [26] and MobileNet [22], define new types of convolutional layers that are not currently supported in the Darknet framework. We expect these to have a similar privacy and TEE performance footprint because of their comparable model sizes (4 MB, 28 MB, 4.8 MB, and 3.4 MB for Tiny Darknet, Darknet Reference, SqueezeNet, and MobileNet, respectively), floating-point operations (980M, 810M, 837M, and 579M), and model accuracy (58.7%, 61.1%, 59.1%, and 71.6% Top-1; see https://github.com/albanie/convnet-burden and https://pjreddie.com/darknet/tiny-darknet/).

Improving performance. Modern mobile devices are usually equipped with a GPU or specialized processors for deep learning such as an NPU. Our current implementation only uses the CPU but can be extended to utilize faster chips (i.e., GPUs) by moving the first layers of the DNN, which are always in the REE, to these chips. By processing several layers of a DNN in a TEE (SGX) and transferring all linear layers to a GPU, Tramèr et al. [51] obtained a 4× to 11× speedup for verifiable and private inference on VGG16, MobileNet, and ResNet. For edge devices, another way to expedite the deep learning process is to utilize TrustZone's AXI bus or peripheral bus, which carries an additional secure bit on the address. Accessing a GPU (or NPU) through the secure bus enables the TrustZone to control the GPU, so that the confidentiality of DNN models on the GPU cannot be breached, and to achieve faster execution for partitioned deep learning on devices.

6.2 Models' Privacy

Defending against other adversaries. DarkneTZ is not only capable of defending against MIAs by controlling information from outputs, but is also capable of defending against other types of attacks, such as training-based model inversion attacks [16, 56] or the GAN attack [21], as they also highly depend on the model's outputs. In addition, by controlling the output information during inference, DarkneTZ can provide different privacy settings corresponding to servers' different privacy policies. For example, options included in our experiments are outputting the Top-1 class only with its confidence score, outputting the Top-5 classes with their ranks, or outputting all classes with their ranks, all of which achieve a strong defense against MIAs. Recent research [29] also manipulates confidence scores (i.e., by adding noise) to defend against MIAs, but that protection can be broken easily if the noise-addition process is visible to the adversary on a compromised OS. DarkneTZ also protects layers while training models and conducting inference. The issue of private information leaking from layers' gradients becomes more serious considering that DNN models' gradients are shared and exchanged among devices in collaborative/federated learning. The work in [34] successfully reveals private (e.g., membership) information about participants' training data using their updated gradients. Recent research [67] further shows that it is possible to recover images and text from gradients at the pixel level and token level, respectively, and that the last layers have a low loss for this recovery. By using DarkneTZ to limit the information exposure of layers, this type of attack could be weakened.

Preserving model utility. By "hiding" (instead of obfuscating) parts of a DNN model with TrustZone, DarkneTZ preserves a model's privacy without reducing the utility of the model. Partitioning the DNN and moving its more sensitive part into an isolated TEE maintains its prediction accuracy, as no obfuscation technique (e.g., noise addition) is applied to the model. As one example of obfuscation, applying differential privacy can decrease the prediction accuracy of the model [61]. Adding noise to a model with three layers trained on MNIST leads to an accuracy drop of 5% for small noise levels (ϵ=8) and 10% for large noise levels (ϵ=2) [1, 5]. The drop increases to around 20% for large noise levels when training on CIFAR-10 [1]. To obtain considerable accuracy when using differential privacy, one needs to train the model for more epochs, which is challenging for larger models since more computational resources are needed. In recent work, carefully crafted noise is added to confidence scores by applying adversarial examples [29]. Compared to the inevitable utility loss of adding noise, DarkneTZ achieves a better privacy-utility trade-off than differential privacy.

7 CONCLUSION

We demonstrated a technique to improve model privacy for a deployed, pre-trained DNN model using an on-device Trusted Execution Environment (TrustZone). We applied the protection to the individually sensitive layers of the model (i.e., the last layers), which encode a large amount of private information about the training data with respect to Membership Inference Attacks. We analyzed the performance of our protection on two small models trained on the CIFAR-100 and ImageNet Tiny datasets, and six large models trained on the ImageNet dataset, during training and inference. Our evaluation indicates that, despite memory limitations, the proposed framework, DarkneTZ, is effective in improving models' privacy at a relatively low performance cost. Using DarkneTZ adds a minor overhead of under 3% in CPU time, memory usage, and power consumption for protecting the last layer, and of 10% for fully utilizing a TEE's
MobiSys’20,June15–19,2020,Toronto,ON,CanadaF.Mo,A.S.Shamsabadi,K.Katevas,S.Demetriou,I.Leontiadis,A.Cavallaro,andH.Haddadiavailablesecurememorytoprotectthemaximumnumberoflayers(dependingonthemodelsizeandconfiguration)thattheTEEcanhold.WebelievethatDarkneTZisasteptowardsstrongerprivacyprotectionandhighmodelutility,withoutsignificantoverheadinlocalcomputingresources.ACKNOWLEDGMENTSWeacknowledgetheconstructivefeedbackfromtheanonymousreviewers.KatevasandHaddadiwerepartiallysupportedbytheEPSRCDataboxandDADAgrants(EP/N028260/1,EP/R03351X/1).ThisresearchwasalsofundedbyagiftfromHuaweiTechnologies,agenerousscholarshipfromtheChineseScholarshipCouncil,andahardwaregiftfromArm.REFERENCES[1]MartinAbadi,AndyChu,IanGoodfellow,HBrendanMcMahan,IlyaMironov,KunalTalwar,andLiZhang.2016.Deeplearningwithdifferentialprivacy.InProceedingsofthe2016ACMSIGSACConferenceonComputerandCommunicationsSecurity.ACM,308–318.[2]HervéAbdiandLynneJWilliams.2010.Tukey’shonestlysignificantdifference(HSD)test.EncyclopediaofResearchDesign.ThousandOaks,CA:Sage(2010),1–5.[3]AbbasAcar,HidayetAksu,ASelcukUluagac,andMauroConti.2018.Asurveyonhomomorphicencryptionschemes:Theoryandimplementation.ACMComputingSurveys(CSUR)51,4(2018),79.[4]FrancisAkowuah,AmitAhlawat,andWenliangDu.2018.ProtectingSensitiveDatainAndroidSQLiteDatabasesUsingTrustZone.InProceedingsoftheInter-nationalConferenceonSecurityandManagement(SAM).TheSteeringCommitteeofTheWorldCongressinComputerScience,227–233.[5]GalenAndrew,SteveChien,andNicolasPapernot.2019.TensorFlowPrivacy.https://github.com/tensorflow/privacy[6]YoshinoriAono,TakuyaHayashi,LihuaWang,ShihoMoriai,etal.2018.Privacy-preservingdeeplearningviaadditivelyhomomorphicencryption.IEEETransac-tionsonInformationForensicsandSecurity13,5(2018),1333–1345.[7]AArm.2009.Securitytechnology-buildingasecuresystemusingTrustZonetechnology.ARMTechnicalWhitePaper(2009).[8]FerdinandBrasser,DavidGens,PatrickJauernig,Ahmad-RezaSadeghi,andEm-manuelStapf.2019.SANCTUARY:ARMingTrustZonewithuser-spaceenclaves..InNet
workandDistributedSystemsSecurity(NDSS)Symposium2019.[9]RichCaruana,SteveLawrence,andCLeeGiles.2001.Overfittinginneuralnets:Backpropagation,conjugategradient,andearlystopping.InAdvancesinNeuralInformationProcessingSystems.402–408.[10]VictorCostanandSrinivasDevadas.2016.IntelSGXExplained.IACRCryptologyePrintArchive2016,086(2016),1–118.[11]JiaDeng,WeiDong,RichardSocher,Li-JiaLi,KaiLi,andLiFei-Fei.2009.Imagenet:Alarge-scalehierarchicalimagedatabase.InProceedingsoftheIEEEconferenceonComputerVisionandPatternRecognition.Ieee,248–255.[12]PanDong,AlanBurns,ZheJiang,andXiangkeLiao.2018.TZDKS:ANewTrustZone-BasedDual-CriticalitySystemwithBalancedPerformance.In2018IEEE24thInternationalConferenceonEmbeddedandReal-TimeComputingSystemsandApplications(RTCSA).IEEE,59–64.[13]AlexeyDosovitskiyandThomasBrox.2016.Invertingvisualrepresentationswithconvolutionalnetworks.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition.4829–4837.[14]CynthiaDwork,AaronRoth,etal.2014.Thealgorithmicfoundationsofdiffer-entialprivacy.FoundationsandTrends®inTheoreticalComputerScience9,3–4(2014),211–407.[15]Jan-ErikEkberg,KariKostiainen,andNAsokan.2014.Theuntappedpotentialoftrustedexecutionenvironmentsonmobiledevices.IEEESecurity&Privacy12,4(2014),29–37.[16]MattFredrikson,SomeshJha,andThomasRistenpart.2015.Modelinversionattacksthatexploitconfidenceinformationandbasiccountermeasures.InPro-ceedingsofthe2015ACMSIGSACConferenceonComputerandCommunicationsSecurity.ACM,1322–1333.[17]ZhongshuGu,HeqingHuang,JialongZhang,DongSu,HaniJamjoom,AnkitaLamba,DimitriosPendarakis,andIanMolloy.2018.YerbaBuena:SecuringDeepLearningInferenceDataviaEnclave-basedTernaryModelPartitioning.arXivpreprintarXiv:1807.00969(2018).[18]SongHan,HuiziMao,andWilliamJDally.2015.Deepcompression:Com-pressingdeepneuralnetworkswithpruning,trainedquantizationandhuffmancoding.arXivpreprintarXiv:1510.00149.InInternationalConferenceonLearningRepresentations(ICLR).https://arxiv.org/abs/1510.00149[19]LucjanHanzlik,YangZhang,KathrinGrosse,Ah
medSalem,MaxAugustin,MichaelBackes,andMarioFritz.2018.Mlcapsule:Guardedofflinedeploymentofmachinelearningasaservice.arXivpreprintarXiv:1808.00590(2018).[20]KaimingHe,XiangyuZhang,ShaoqingRen,andJianSun.2016.Deepresiduallearningforimagerecognition.InProceedingsoftheIEEEconferenceonComputerVisionandPatternRecognition.770–778.[21]BrilandHitaj,GiuseppeAteniese,andFernandoPerez-Cruz.2017.DeepmodelsundertheGAN:informationleakagefromcollaborativedeeplearning.InPro-ceedingsofthe2017ACMSIGSACConferenceonComputerandCommunicationsSecurity.ACM,603–618.[22]AndrewGHoward,MenglongZhu,BoChen,DmitryKalenichenko,WeijunWang,TobiasWeyand,MarcoAndreetto,andHartwigAdam.2017.Mobilenets:Efficientconvolutionalneuralnetworksformobilevisionapplications.arXivpreprintarXiv:1704.04861(2017).[23]GaoHuang,ZhuangLiu,LaurensVanDerMaaten,andKilianQWeinberger.2017.Denselyconnectedconvolutionalnetworks.InProceedingsoftheIEEEconferenceonComputerVisionandPatternRecognition.4700–4708.[24]TylerHunt,CongzhengSong,RezaShokri,VitalyShmatikov,andEmmettWitchel.2018.Chiron:Privacy-preservingMachineLearningasaService.arXivpreprintarXiv:1803.05961(2018).[25]NickHynes,RaymondCheng,andDawnSong.2018.Efficientdeeplearningonmulti-sourceprivatedata.arXivpreprintarXiv:1807.06689(2018).[26]ForrestNIandola,SongHan,MatthewWMoskewicz,KhalidAshraf,WilliamJDally,andKurtKeutzer.2016.SqueezeNet:AlexNet-levelaccuracywith50xfewerparametersand<0.5MBmodelsize.arXivpreprintarXiv:1602.07360(2016).[27]BenoitJacob,SkirmantasKligys,BoChen,MenglongZhu,MatthewTang,AndrewHoward,HartwigAdam,andDmitryKalenichenko.2018.Quantizationandtrainingofneuralnetworksforefficientinteger-arithmetic-onlyinference.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition.2704–2713.[28]BargavJayaramanandDavidEvans.2019.EvaluatingDifferentiallyPrivateMachineLearninginPractice.In28thUSENIXSecuritySymposium(USENIXSecurity19).USENIXAssociation,SantaClara,CA,1895–1912.https://www.usenix.org/conference/usenixsecurity19/presentation/jayaraman[29]Jinyua
nJia,AhmedSalem,MichaelBackes,YangZhang,andNeilZhenqiangGong.2019.MemGuard:DefendingagainstBlack-BoxMembershipInferenceAttacksviaAdversarialExamples.InProceedingsofthe2019ACMSIGSACCon-ferenceonComputerandCommunicationsSecurity.259–274.[30]HugoKrawczyk.2003.SIGMA:The‘SIGn-and-MAc’approachtoauthenticatedDiffie-HellmananditsuseintheIKEprotocols.InAnnualInternationalCryptologyConference.Springer,400–425.[31]AlexKrizhevsky,VinodNair,andGeoffreyHinton.[n.d.].CIFAR-100(CanadianInstituteforAdvancedResearch).http://www.cs.toronto.edu/~kriz/cifar.html[32]YannLeCun,YoshuaBengio,andGeoffreyHinton.2015.Deeplearning.nature521,7553(2015),436–444.[33]NinghuiLi,WahbehQardaji,DongSu,YiWu,andWeiningYang.2013.Member-shipprivacy:aunifyingframeworkforprivacydefinitions.InProceedingsofthe2013ACMSIGSACconferenceonComputerandCommunicationsSecurity.ACM,889–900.[34]LucaMelis,CongzhengSong,EmilianoDeCristofaro,andVitalyShmatikov.2019.ExploitingUnintendedFeatureLeakageinCollaborativeLearning.InProceedingsof40thIEEESymposiumonSecurity&Privacy.IEEE,480–495.[35]IlyaMironov.2017.Rényidifferentialprivacy.In2017IEEE30thComputerSecurityFoundationsSymposium(CSF).IEEE,263–275.[36]FanMo,AliShahinShamsabadi,KleomenisKatevas,AndreaCavallaro,andHamedHaddadi.2019.Poster:TowardsCharacterizingandLimitingInformationExposureinDNNLayers.InProceedingsofthe2019ACMSIGSACConferenceonComputerandCommunicationsSecurity.ACM,2653–2655.[37]MichaelNaehrig,KristinLauter,andVinodVaikuntanathan.2011.Canhomo-morphicencryptionbepractical?.InProceedingsofthe3rdACMworkshoponCloudcomputingsecurityworkshop.ACM,113–124.[38]MiladNasr,RezaShokri,andAmirHoumansadr.2019.ComprehensivePrivacyAnalysisofDeepLearning:Stand-aloneandFederatedLearningunderPassiveandActiveWhite-boxInferenceAttacks.InProceedingsof40thIEEESymposiumonSecurity&Privacy.IEEE.[39]OlgaOhrimenko,FelixSchuster,CedricFournet,AasthaMehta,SebastianNowozin,KapilVaswani,andManuelCosta.2016.ObliviousMulti-PartyMachineLearningonTrustedProcessors.In25thUSENIXSecuritySymposium(USEN
IXSecurity16).USENIXAssociation,Austin,TX,619–636.https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/ohrimenko[40]SeyedAliOsia,AliShahinShamsabadi,AliTaheri,KleomenisKatevas,SinaSajadmanesh,HamidRRabiee,NicholasDLane,andHamedHaddadi.2020.Ahybriddeeplearningarchitectureforprivacy-preservingmobileanalytics.IEEEInternetofThingsJournal(2020).[41]SeyedAliOsia,AliShahinShamsabadi,AliTaheri,HamidRRabiee,andHamedHaddadi.2018.PrivateandScalablePersonalDataAnalyticsUsingHybridEdge-to-CloudDeepLearning.Computer51,5(2018),42–49. DarkneTZ:TowardsModelPrivacyattheEdgeusingTrustedExecutionEnvironmentsMobiSys’20,June15–19,2020,Toronto,ON,Canada[42]HeejinPark,ShuangZhai,LongLu,andFelixXiaozhuLin.2019.StreamBox-TZ:securestreamanalyticsattheedgewithTrustZone.In2019{USENIX}AnnualTechnicalConference19.537–554.[43]AdamPaszke,SamGross,SoumithChintala,GregoryChanan,EdwardYang,ZacharyDeVito,ZemingLin,AlbanDesmaison,LucaAntiga,andAdamLerer.2017.AutomaticDifferentiationinPyTorch.InNIPSAutodiffWorkshop.[44]AdityanarayananRadhakrishnan,MikhailBelkin,andCarolineUhler.2018.DownsamplingleadstoImageMemorizationinConvolutionalAutoencoders.arXivpreprintarXiv:1810.10333(2018).[45]MdAtiqurRahman,TanzilaRahman,RobertLaganière,NomanMohammed,andYangWang.2018.MembershipInferenceAttackagainstDifferentiallyPrivateDeepLearningModel.TransactionsonDataPrivacy11,1(2018),61–79.[46]JosephRedmon.2013–2016.Darknet:OpenSourceNeuralNetworksinC.http://pjreddie.com/darknet/.[47]AhmedSalem,YangZhang,MathiasHumbert,PascalBerrang,MarioFritz,andMichaelBackes.2018.Ml-leaks:Modelanddataindependentmember-shipinferenceattacksanddefensesonmachinelearningmodels.arXivpreprintarXiv:1806.01246.InNetworkandDistributedSystemsSecurity(NDSS)Symposium2018.https://arxiv.org/abs/1806.01246[48]RezaShokri,MarcoStronati,CongzhengSong,andVitalyShmatikov.2017.Mem-bershipinferenceattacksagainstmachinelearningmodels.InProceedingsof38thIEEESymposiumonSecurity&Privacy.IEEE,3–18.[49]ChristianSzegedy,WeiLiu,YangqingJia
,PierreSermanet,ScottReed,DragomirAnguelov,DumitruErhan,VincentVanhoucke,andAndrewRabinovich.2015.Goingdeeperwithconvolutions.InProceedingsoftheIEEEconferenceonComputerVisionandPatternRecognition.1–9.[50]ShrutiTople,KaranGrover,ShwetaShinde,RanjitaBhagwan,andRamachandranRamjee.2018.Privado:PracticalandsecureDNNinference.arXivpreprintarXiv:1810.00602(2018).[51]FlorianTramèrandDanBoneh.2019.Slalom:Fast,VerifiableandPrivateExecu-tionofNeuralNetworksinTrustedHardware.arXivpreprintarXiv:1806.03287.InInternationalConferenceonLearningRepresentations(ICLR).https://arxiv.org/abs/1806.03287[52]KuanWang,ZhijianLiu,YujunLin,JiLin,andSongHan.2019.Haq:Hardware-awareautomatedquantizationwithmixedprecision.InProceedingsoftheIEEEconferenceonComputerVisionandPatternRecognition.8612–8620.[53]WenhaoWang,GuoxingChen,XiaoruiPan,YinqianZhang,XiaoFengWang,VincentBindschaedler,HaixuTang,andCarlAGunter.2017.Leakycauldrononthedarkland:Understandingmemoryside-channelhazardsinSGX.InPro-ceedingsofthe2017ACMSIGSACConferenceonComputerandCommunicationsSecurity.ACM,2421–2434.[54]ChuguiXu,JuRen,DeyuZhang,YaoxueZhang,ZhanQin,andKuiRen.2019.GANobfuscator:MitigatinginformationleakageunderGANviadifferentialprivacy.IEEETransactionsonInformationForensicsandSecurity14,9(2019),2358–2371.[55]MengweiXu,JiaweiLiu,YuanqiangLiu,FelixXiaozhuLin,YunxinLiu,andXuanzheLiu.2019.Afirstlookatdeeplearningappsonsmartphones.InTheWorldWideWebConference.2125–2136.[56]ZiqiYang,JiyiZhang,Ee-ChienChang,andZhenkaiLiang.2019.NeuralNetworkInversioninAdversarialSettingviaBackgroundKnowledgeAlignment.InPro-ceedingsofthe2019ACMSIGSACConferenceonComputerandCommunicationsSecurity.ACM,225–240.[57]SamuelYeom,IreneGiacomelli,MattFredrikson,andSomeshJha.2018.Privacyriskinmachinelearning:Analyzingtheconnectiontooverfitting.In2018IEEE31stComputerSecurityFoundationsSymposium(CSF).IEEE,268–282.[58]KailiangYing,AmitAhlawat,BilalAlsharifi,YuexinJiang,PriyankThavai,andWenliangDu.2018.TruZ-Droid:IntegratingTrustZonewithmobileoperatingsystem.InProce
edingsofthe16thAnnualInternationalConferenceonMobileSystems,Applications,andServices.ACM,14–27.[59]JasonYosinski,JeffClune,YoshuaBengio,andHodLipson.2014.Howtransfer-ablearefeaturesindeepneuralnetworks?.InAdvancesinNeuralInformationProcessingSystems.3320–3328.[60]JasonYosinski,JeffClune,AnhNguyen,ThomasFuchs,andHodLipson.2015.Understandingneuralnetworksthroughdeepvisualization.arXivpreprintarXiv:1506.06579.InDeepLearningWorkshopinInternationalConferenceonMachineLearning.https://arxiv.org/abs/1506.06579[61]LeiYu,LingLiu,CaltonPu,MehmetEmreGursoy,andStaceyTruex.2019.Differentiallyprivatemodelpublishingfordeeplearning.InProceedingsof40thIEEESymposiumonSecurity&Privacy.IEEE,332–349.[62]MatthewDZeilerandRobFergus.2014.Visualizingandunderstandingconvolu-tionalnetworks.InEuropeanconferenceoncomputervision.Springer,818–833.[63]ChiyuanZhang,SamyBengio,MoritzHardt,BenjaminRecht,andOriolVinyals.2017.Understandingdeeplearningrequiresrethinkinggeneralization.arXivpreprintarXiv:1611.03530.InInternationalConferenceonLearningRepresentations(ICLR).https://arxiv.org/abs/1611.03530[64]C.Zhang,P.Patras,andH.Haddadi.2019.DeepLearninginMobileandWirelessNetworking:ASurvey.IEEECommunicationsSurveysTutorials21,3(2019),2224–2287.[65]ShengjiaZhao,JiamingSong,andStefanoErmon.2017.Learninghierarchicalfeaturesfromdeepgenerativemodels.InInternationalConferenceonMachineLearning.4091–4099.[66]ShijunZhao,QianyingZhang,YuQin,WeiFeng,andDengguoFeng.2019.SecTEE:ASoftware-basedApproachtoSecureEnclaveArchitectureUsingTEE.InProceedingsofthe2019ACMSIGSACConferenceonComputerandCommunica-tionsSecurity.ACM,1723–1740.[67]LigengZhu,ZhijianLiu,andSongHan.2019.Deepleakagefromgradients.InAdvancesinNeuralInformationProcessingSystems.14747–14756.[68]ÚlfarErlingsson,VasylPihur,andAleksandraKorolova.2014.RAPPOR:Ran-domizedAggregatablePrivacy-PreservingOrdinalResponse.InProceedingsofthe2014ACMSIGSACConferenceonComputerandCommunicationsSecurity.Scottsdale,Arizona.https://arxiv.org/abs/1407.6981View publication stats 
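As a concrete illustration of the precision and recall metrics the paper uses for membership inference attacks in Section 5.4 (precision: fraction of records inferred as members that truly are; recall: fraction of true members correctly identified), the following minimal Python sketch computes both. The predictions and membership labels are hypothetical, not data from the paper:

```python
def mia_precision_recall(predictions, is_member):
    """Compute MIA precision and recall.
    predictions: 1 if the attack infers the record is a member, else 0.
    is_member:   1 if the record truly is in the training set, else 0."""
    inferred = [m for p, m in zip(predictions, is_member) if p]
    tp = sum(inferred)                       # true positives
    precision = tp / len(inferred) if inferred else 0.0
    recall = tp / sum(is_member)
    return precision, recall

# Hypothetical attack output over eight records.
preds   = [1, 1, 1, 1, 0, 0, 1, 0]
members = [1, 1, 0, 1, 1, 0, 0, 0]
p, r = mia_precision_recall(preds, members)  # p = 0.6, r = 0.75
```

A perfect defense (as reported when the last layer is hidden) would push an attacker's precision toward 0.5 on a balanced member/non-member set, i.e., no better than random guessing.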
Main sources: https://github.com/mofanv/darknetz
Other paper attached: DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments.
Other attachment: Project info. Focus on Secure Federated Learning within ARM TrustZone.

Research Paper Introduction
The objective of the research paper is for students to become more familiar with the evaluation of "design alternatives/technologies" in the area of Cyber Security Engineering. A secondary objective is to facilitate the critical thinking and writing skills necessary for research and subsequent documentation of the technology. The paper should provide an opportunity for individual students to further understand and document the problem provided in the DarkneTZ project and to evaluate research and methods in design alternatives/technologies for the given problem. Preliminary research will include reading articles from refereed journals (representative journals listed below), Internet browsing, interviews with experts, and any other fact-gathering technique that will provide you with information regarding your topic.

Research Paper Description
The research paper should follow this outline.
1. Overview of the DarkneTZ Project
2. My Research Area for this Paper and the Project: Secure Federated Learning within ARM TrustZone (Give an overview of why you chose this research area. Include what you are researching and how it will contribute to the project.) PPT attached
3. Detail of your Findings in the Research Area (Provide a detailed review of how the technology will be used in your CYSE 492 Senior Design Project.)
4. Conclusion
5. Technology Paper Evaluation (Describe the contribution of this research paper to you and the course. How would you improve this semester project?)

You should explain everything in your own words, and your discussion should be understandable to someone who is not familiar with the topic! Do not assume that your reader is aware of the design alternatives/technologies; therefore, make the effort to succinctly introduce basic concepts and ideas. This paper should be on design alternatives/technologies, 4 pages long, using IEEE format. Each student is requested to select at least two refereed journal articles regarding their chosen design alternatives/technologies that have been published after 2015 as references. Each journal
article should be selected from one of the recommended refereed journals listed below or another refereed journal that covers the topics listed below. The two articles should not be from the same journal. The journal article needs to provide a theoretical advancement, an innovative implementation or application, or both. It is most probable that you will have many more references; however, two are required.

Recommended Refereed Journals
• IEEE Transactions on Engineering Management
• National Cybersecurity Institute Journal
• Journal of Cyber Security Technology
• IEEE Systems Engineering
• System Dynamics Review
• Computers and Industrial Engineering
• IEEE Software
• Journal of Cybersecurity, Oxford Press
• INCOSE Systems Engineering Journal
• IIE Transactions
• Journal of Computer Security
• Operations Research
• Journal of Information Assurance & Cybersecurity (JIACS)
• Journal of Cyber-Security and Digital Forensics (IJCSDF)
• International Journal of Cyber Warfare and Terrorism (IJCWT)
• International Journal of Information and Network Security (IJINS)

Émpistos

ARM TrustZone
• Hardware-based security solution designed to create secure and non-secure execution environments on ARM-based processors.
• Separates the processor into a Normal world and a Secure world.
• The memory address spaces of the Normal and Secure worlds are distinguished by the NS-bit, which indicates the type of memory access.
• The Normal world can call into the Secure world through a Secure Monitor Call (SMC) instruction.

OP-TEE "Trusted Execution Environment"
• Allows a trusted application (TA) to be run in the secure kernel world, away from the non-secure OS.
• Provides generic OS-level functions such as interrupt and thread handling, crypto services, and shared memory.
• How it works: a non-secure application calls the TEE API library, which calls the host OS's OP-TEE driver to send a request to the TEE, which in turn invokes a TA binary in the secure world to execute and return the result.

DarkneTZ
• DarkneTZ is an application that allows people to run multiple layers of a Deep Neural Network in ARM TrustZone.
//Simplified version of Darknet, but is configured to take advantage of TrustZone. • Allows for secure execution of neural network layers, particularly the final output layer, to execute in ARM TrustZone safely away from unsecure OS • Protects model and input data from outside adversarial attacks, such as power analysis to determine what the model is doing and poisoning the data to mess up accuracy of the model Deep Learning • Machine based learning on artificial neural networks to recognize complex patterns • Made from layers of ‘neurons’ which learn to recognize patterns and predict/classify things in an image • DarkneTZ helps prevent attacks on models by having some layers of a network execute in a trusted zone Deep Learning : Layers • Layers – different parts of a neural network that have different purposes • Input Layer – input data • Hidden Layers • Fully Connected layers • Convolutional layers • MaxPooling layers • Output layer – final prediction Deep Learning: the Learning Part How a network learns is dependent on a few things: • Learning rate • Loss function • Optimizer Other important terms: • Training vs Testing • Underfitting vs Overfitting • Dropout • Data augmentation Keras & TensorFlow • Keras is an API that is bundled with the TensorFlow library that allows for easy construction, modification, and testing of neural networks • Designed to be human-readable • Provides very easy modules and functions to implement a neural network. 
Keras Example – Regular Neural Network Keras Example – Convolutional Neural Network Goal Going Forward • Learn from Keras and implement an easy-to-use Python frontend to construct and train neural networks in C that can then be securely executed using DarkneTZ • Demonstrate basic models functioning within DarkneTZ on the Raspberry Pi 3B+ • Next task: examine darknet code and learn how to create a basic network with c instead of tensorflow Week 9 - Setting Up DarknetZ Setting Up DarknetZ on Raspberry Pi 3B+ A Reproducible Guide + issues • To ssh into the Raspberry pi3, go to /etc/ssh/sshd_config • set "PermitEmptyPasswords yes" to log in as test user • To log in as root user, additionally set "PermitRootLogin yes" • Running on MNIST Dataset • grid.cfg • In a new directory named ‘grid’ • In a new directory named ‘images’ inside the directory ‘grid’ • Create and run the following python program WEEK 9 - SETTING UP DARKNETZ SETTING UP DARKNETZ ON QEMU A REPRODUCIBLE GUIDE + ISSUES • Ubuntu 22.04 or 20.04 running VM. *Note: make sure to have sufficient VM Memory Space (Recommended: 50+ GB) • In $Optee_Path$/optee_os/core/include/mm/pgt.cache.h • Include the changes in green below • In $Optee_Path$/optee_os/core/arch/arm/plat-vexpress/conf.mk • Remove the changes highlighted in red and add the changes highlighted in green below
