
Wind River Linux User's Guide, 3.0

Wind River Linux

USER'S GUIDE

3.0

Copyright 2009 Wind River Systems, Inc.

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means without the prior written permission of Wind River Systems, Inc.

Wind River, Tornado, and VxWorks are registered trademarks of Wind River Systems, Inc. The Wind River logo is a trademark of Wind River Systems, Inc. Any third-party trademarks referenced are the property of their respective owners. For further information regarding Wind River trademarks, please see: www.windriver.com/company/terms/trademark.html

This product may include software licensed to Wind River by third parties. Relevant notices (if any) are provided in your product installation at the following location: installDir/product_name/3rd_party_licensor_notice.pdf.

Wind River may refer to third-party documentation by listing publications or providing links to third-party Web sites for informational purposes. Wind River accepts no responsibility for the information provided in such third-party documentation.

Corporate Headquarters
Wind River
500 Wind River Way
Alameda, CA 94501-1153
U.S.A.

Toll free (U.S.A.): 800-545-WIND
Telephone: 510-748-4100
Facsimile: 510-749-2010

For additional contact information, see the Wind River Web site: www.windriver.com
For information on how to contact Customer Support, see: www.windriver.com/support

Wind River Linux User's Guide 3.0

24 Feb 09 Part #: DOC-16337-ND-00

Contents

PART I: INTRODUCTION, DESIGN, AND BUILD


1     Introduction ........................................................................................... 3

1.1     Introduction ........................................................................................ 3
1.2     Wind River Linux Documentation ................................................... 4
1.3     Roadmap to the Wind River Linux User's Guide .......................... 5
1.4     Document Conventions ..................................................................... 6
1.5     Overview of Wind River Linux ........................................................ 6
1.6     Platform Developer and Application Developer ........................... 7
          Platform Developer ............................................................................ 7
          Application Developer ....................................................................... 7
1.7     Kernel and File System Components .............................................. 8
          Kernel Feature Profiles ....................................................................... 8
          Four File Systems ................................................................................ 9
          Combinations of File System and Kernel Feature Profiles ........... 10
1.8     Cross Development Tools .................................................................. 10
1.9     Supported Run-time Boards .............................................................. 11
1.10    Additional Resources ......................................................................... 11
          Online Support .................................................................................... 11
          Use Cases .............................................................................................. 11
          Installation ............................................................................................ 11

2     Development Workflow ........................................................................ 13

2.1     Introduction ......................................................................................... 13
2.2     Installing, Configuring, and Deploying Run-Time Software ....... 14
          Creating Platform and Application Projects ................................... 14
          Configuring and Building the Platform Project .............................. 14
          Configuring for a Custom Target ...................................................... 15
          Deploying Runtime Software ............................................................ 15
2.3     Updating and Debugging .................................................................. 15
          Updating Packages .............................................................................. 15
          Updating the Kernel Configuration ................................................. 16
          Debugging Runtime Software ........................................................... 16
2.4     Preparing a Product Deployment ..................................................... 16

3     The Development Environment ........................................................... 17

3.1     Introduction ......................................................................................... 17
3.2     Development Environment Directory Structure ............................ 18
          startWorkbench.sh and the Workbench Directories ....................... 18
          The updates Directory ........................................................................ 19
3.2.1   The wrlinux-3.0 Directory ................................................................. 19
          The sysroots Directory ........................................................................ 19
          The wrlinux Directory ........................................................................ 19
          The ldat Directory ............................................................................... 19
          The layers Directory ........................................................................... 20
3.3     Templates and Layers ......................................................................... 20
3.3.1   What is a Template? ........................................................................... 20
3.3.2   What is a Layer? ................................................................................. 20
3.4     Layers in the Development Environment ....................................... 21
          The layers Directory ........................................................................... 21
          Layer Structure and Replaceability .................................................. 23
3.5     Templates in the Development Environment ................................. 23
3.5.1   Template Configuration Files ........................................................... 23
3.5.2   Core Layer Template Directory Structure and Contents .............. 24
          profile Templates ................................................................................. 25
          rootfs Templates .................................................................................. 26
          board Templates (BSPs) ...................................................................... 27
          test Templates ...................................................................................... 28
          feature Templates ................................................................................ 28
          extra Templates ................................................................................... 28
3.5.3   Toolchain Layer Template Directory Structure and Contents ..... 28
          arch Templates ..................................................................................... 28
          cpu Templates ...................................................................................... 28
          multilib Templates .............................................................................. 28
3.5.4   Kernel Layer Template Directory Structure and Contents ........... 29
          default Templates ................................................................................ 29
          feature Templates ................................................................................ 29
          karch Templates ................................................................................... 29
          kernel Templates ................................................................................. 29

4     Configuring and Building ..................................................................... 31

4.1     Introduction ......................................................................................... 31
4.1.1   Design Benefits ................................................................................... 31
          Post-installation ................................................................................... 32
4.2     Configuring Your Platform Project ................................................... 32
4.2.1   Creating the Project Build Directory ............................................... 32
4.2.2   Configuring the Build Environment ............................................... 33
4.2.3   Build Environment Directory Structure ......................................... 34
          Local Custom Layer Directories ....................................................... 35
4.2.4   Configure Options ............................................................................. 35
          Configuring with Profiles .................................................................. 36
          Complete Run-time System ............................................................... 36
          Kernel-only ........................................................................................... 36
          File System-only .................................................................................. 36
4.2.5   Configure Option Rules .................................................................... 36
4.2.6   Configure Option Examples ............................................................. 37
          Common PC Complete Run-time ..................................................... 38
          Common PC Kernel Only .................................................................. 38
          Common PC File System Only ......................................................... 38
          ARM Versatile AB-926EJS Complete Flash-Capable Run-time .... 39
          ARM Versatile AB-926EJS Complete Debug-Capable Run-time . 39
          Adding Analysis Tools Support to Projects .................................... 40
4.2.7   Some Common Configure Options ................................................. 40
          Basic Configure Options .................................................................... 40
          Some Additional Configure Options ............................................... 41
          Rebuilding the Toolchain or Libc From Source .............................. 42
          Configuring Rebuilding of Host Tools ............................................. 43
4.3     Building Your Platform Project ......................................................... 43
4.3.1   Two Build Methods ............................................................................ 43
4.3.2   Using the RPM Build Method (make fs) ......................................... 44
          Looking More Closely at the RPM Build Process ........................... 44
4.3.3   Using the Source Build Method (make build-all) .......................... 46
          Looking More Closely at the Source Build Process ....................... 46
4.3.4   Building Parts of the Run-Time from Source ................................. 47
          Building Only the Kernel from Source ............................................ 47
          Building Only the File System from Source .................................... 48
          Building Individual Packages from Source .................................... 48


5     Layer and Template Processing ............................................................ 49

5.1     Introduction ......................................................................................... 49
5.2     Understanding Layers ........................................................................ 50
5.2.1   Creating the Layer Search List ......................................................... 50
          Layer Path Environment Variable .................................................... 51
          Basic Layer Contents .......................................................................... 51
5.3     Understanding Templates ................................................................. 52
          Identifying Explicit and Implicit Templates .................................... 52
5.3.1   Template Search Order ...................................................................... 52
          The Initial Template Search List ........................................................ 53
          The Final Template Search Paths ...................................................... 53
          Template Processing ........................................................................... 54
          An Example of Template Processing Order .................................... 54
5.3.2   Processing Template include Files ................................................... 56
          Including Templates of the Same Name .......................................... 58
5.4     Processing Template Components ................................................... 60
          Processing File Fragments .................................................................. 60
          Processing File System Components ............................................... 60
          Processing Package Lists ................................................................... 60
5.5     Constructing the Target File System ................................................ 62
          Configure Time File System Construction (filesystem/fs) ............ 62
          Build Time File System Construction (export/dist) ....................... 63
          Viewing the Target File Settings ........................................................ 64
          Determining Which Package Contributes a File ............................ 65

6     Custom Layers and Templates ............................................................. 67

6.1     Introduction ......................................................................................... 67
6.2     Creating Custom Templates .............................................................. 67
          Naming Your Templates ..................................................................... 68
          The Structure of Templates ................................................................ 69
6.3     Using Custom Templates ................................................................... 71
          Configuration with Templates .......................................................... 71
          Verifying Template Processing .......................................................... 72
          Creating Custom Profiles ................................................................... 72
6.4     Creating Custom Layers .................................................................... 73
          The Structure of Layers ...................................................................... 74
6.4.1   Workflow and the Local Custom Layer .......................................... 74
          Creating Exportable Layers with make export-layer ..................... 75
6.4.2   Manually Creating Layers ................................................................ 77
6.5     Using Custom Layers ......................................................................... 78
          Configuration with Layers ................................................................. 78
          Verifying Layer Processing ................................................................ 79
6.6     Combining Custom Layers and Templates ..................................... 79
          Another Custom Profile Example ..................................................... 80
          Specifying Templates in a Custom Layer ........................................ 82

7     Application Development ..................................................................... 83

7.1     Introduction ......................................................................................... 83
7.2     Working with Sysroots ....................................................................... 83
7.2.1   Exporting Sysroots ............................................................................. 84
          Exporting Sysroots .............................................................................. 84
7.2.2   Using sysroots in Application Development ................................. 84
7.2.3   sysroots and Multilibs ....................................................................... 86
7.3     Adding Custom Applications to Platform Projects ....................... 87
          Referencing External Application Code from a Project ................ 87
          Including the Source in the Package dist Directory ....................... 88

PART II: CONFIGURING AND CUSTOMIZING


8     Changing Basic Linux Configuration Files ......................................... 91

8.1     Introduction ......................................................................................... 91
8.2     Creating Basic Linux Configuration Files ....................................... 91
8.3     Changing Preset Linux Configuration Files ................................... 92
8.4     Moving Changes to a Custom Layer ............................................... 93
8.5     Moving Changes to a Custom Template ......................................... 94
8.6     Tutorial: Configuring Robust Networking and NTP .................... 94

9     Configuring the Kernel ......................................................................... 97

9.1     Introduction ......................................................................................... 97
9.2     Initial Creation of the Kernel Configuration File ........................... 97
9.3     Kernel Configuration Fragment Auditing ...................................... 99
          More on Kernel Configuration Fragment Auditing ....................... 99
          Audit Reporting ................................................................................... 100
          Example of Auditing Output ............................................................. 100
9.4     Reconfiguring and Rebuilding the Kernel ...................................... 103
          Using GUI Tools for Kernel Modification ........................................ 103
          Adding a Kernel Fragment File in a Template ................................ 104
          Adding a Config Fragment in Your Project Build Directory ........ 105
9.4.1   Resetting the Original Kernel Configuration ................................. 106

10    Adding Packages ................................................................................... 107

10.1    Introduction ........................................................................................ 107
10.2    Before Adding a Package .................................................................. 108
10.3    Adding a Package: rpmbuild with a Source RPM ......................... 109
          Preparing to Add an SRPM Package ................................................ 109
10.3.1  Adding a Third-Party SRPM Package ............................................ 109
          Detailed Procedure for Adding Third-Party SRPMs ...................... 110
          Install the Package in the File System .............................................. 111
          Necessary Makefile Contents ............................................................ 112
          Necessary spec File Changes ............................................................. 113
          Lua Scripting in Spec Files ................................................................. 114
10.3.2  Older Method of Adding SRPMs .................................................... 114
          Create the Local Layer Package Environment ................................ 115
          Create the Patch ................................................................................... 116
          Build with the Patch ............................................................................ 119
10.4    Adding a Package: rpmbuild with a Classic Package ................... 120
          Preparing to Add a Standard Source Archive with rpmbuild ...... 120
          Adding a Standard Source Archive with rpmbuild ....................... 120
10.5    Adding a Package: the Classic Method ........................................... 121
          Preparing to Add a Source Archive with the Classic Method ...... 121
          Adding a Source Archive with the Classic Method ....................... 121
10.6    Removing a Package .......................................................................... 121
10.7    Adding a Package to a Running Target ........................................... 122

11    Configuring PREEMPT_RT ................................................................... 123

11.1    Introduction ........................................................................................ 123
11.2    Enabling Real Time ............................................................................ 123
11.3    Application Programming Considerations for PREEMPT_RT .... 124
11.4    Configuring the Preemption Level .................................................. 124
          No Forced Preemption (Server) ........................................................ 125
          Voluntary Kernel Preemption (Desktop) ......................................... 125
          Preemptible Kernel (Low-latency Desktop) .................................... 125
          Complete Preemption (Real-Time) ................................................... 126
11.5    Interrupt Service Routine (ISR) Payload Execution Context ........ 126
          Thread Softirqs .................................................................................... 127
          Thread Hardirqs .................................................................................. 127
          Preemptible RCU ................................................................................. 127
11.6    Run-time Scheduler Debug Instrumentation ................................. 128
          Debug preemptible kernel ................................................................. 128
          Wakeup latency histogram ................................................................ 128
          Non-preemptible critical section latency timing ............................ 128
          Interrupts-off critical section latency timing ................................... 128
          RT Mutex Integrity Checker .............................................................. 129

12    Configuring Scalable Features ............................................................. 131

12.1    Introduction ........................................................................................ 131
12.2    BusyBox ................................................................................................ 131
          Configuring BusyBox .......................................................................... 132
          Configuring Busybox with a Custom Layer ................................... 133
12.3    Static Link Option ............................................................................... 133
          Static Link Implementation ............................................................... 134
12.4    Library Optimization Option ............................................................ 134
          Implementing Library Optimization ................................................ 135
12.5    Reducing Kernel Boot Time .............................................................. 135
12.5.1  An Overview of the Boot Process ................................................... 135
          Identifying Sources of Boot Latency ................................................. 136
12.6    Analyzing and Optimizing Boot Time ............................................ 138
          Collecting Boot-Time Data with bootlogger .................................... 138
12.6.1  Analyzing Early Boot Time .............................................................. 139
12.6.2  Analyzing and Optimizing Late Boot Time ................................... 141
          Visualizing Late Boot Time ................................................................ 141
          An Example of Investigating the other Category ........................... 143
          An Example of Investigating Idle Time ........................................... 145
12.7    Analyzing and Optimizing Runtime Footprint ............................. 152
12.7.1  Querying the RPM Installation Database ....................................... 152
          Script Usage ......................................................................................... 152
          Querying Package Sizes ..................................................................... 152
          Querying for Package File Lists ........................................................ 153
          Smart Querying for Dependencies ................................................... 153
12.7.2  Getting a Footprint Snapshot ........................................................... 154

13    Patch Management ................................................................................ 157

13.1    Introduction ........................................................................................ 157
          Patching Models in Wind River Linux ............................................. 157
13.2    Patch Principles and Workflow ........................................................ 158
13.2.1  Applying and Resolving Patches ..................................................... 158
          Deploying Patches ............................................................................... 159
13.3    The Quilt Patching Model ................................................................. 160
13.3.1  Patching SRPMs with Quilt .............................................................. 160
          Configure Your Environment and Project ....................................... 160
          Create a New RPM Patch ................................................................... 161
          Create a Layer and Save Your Patch ................................................. 162
          Copy and Modify the Existing Spec File .......................................... 163
          Testing Your Patches ........................................................................... 164
13.4    git and the Kernel ............................................................................... 165
13.4.1  An Overview of git's Role in the Kernel ......................................... 165
          The Kernel Build Workflow ............................................................... 166
          The kernel-cache .................................................................................. 166
          The Kernel Source Tree ....................................................................... 167
13.4.2  Starting to Use git .............................................................................. 168
          Types of Commands ........................................................................... 168
          Tools Overview .................................................................................... 169
          The Kernel Lifecycle and Developer Workflow .............................. 170
13.4.3  Examples ............................................................................................. 174
          Adding a Patch to the Kernel ............................................................. 174
          Patch Management .............................................................................. 176
          BSP Example ........................................................................................ 177
          Patch Merge .......................................................................................... 179
          Sharing a Kernel .................................................................................. 180
13.5    Kernel Patching with scc ................................................................... 180
          Kernel Patching Design Philosophy ................................................. 180
          scc Facilities .......................................................................................... 181
          scc Files ................................................................................................. 181
          scc File Examples ................................................................................. 183

PART III: DEPLOYING YOUR PLATFORM PROJECT


14    Simulated Deployment with QEMU .................................................... 187

14.1    Introduction ........................................................................................ 187
          Internals ................................................................................................ 187
14.2    Deployment ......................................................................................... 187
          Accessing the Simulation ................................................................... 188
14.3    Configuration ...................................................................................... 189
          Ending the Simulation ........................................................................ 190
          Command Line Options ..................................................................... 190
          Enabling TUN/TAP Networking ...................................................... 191
14.4    QEMU Example: Deploying initramfs ............................................ 193
          Building and Running initramfs ....................................................... 194
          Switching the file system from initramfs ......................................... 194

15    Network Server Configuration ............................................................ 199

15.1    Introduction ........................................................................................ 199
          Boot Process Overview ....................................................................... 199
          Network Services During Boot ......................................................... 200
          Network Configuration on Different Hosts .................................... 200
          Setting Target and Server Host Names ............................................ 201
15.2    Configuring DHCP ............................................................................. 201
          The DHCP Configuration File ........................................................... 201
          The DHCP Leases File ........................................................................ 202
          Starting the DHCP Server .................................................................. 202
15.3    Configuring TFTP ............................................................................... 202
          Making the Kernel Available for Download ................................... 202
          The TFTP Configuration File ............................................................. 203
15.4    Configuring NFS ................................................................................ 203
          Making the Root File System Available for Export ........................ 203
          Configuring /etc/exports .................................................................. 204

16    Deploying Your Board from a Network ............................................. 205

16.1    Introduction ........................................................................................ 205
16.2    Configuring a Serial Connection to the Board ............................... 206
          Setting up the Workbench Terminal ................................................. 206
          Setting-up cu and UUCP .................................................................... 206
16.3    Example Network Deployments with RedBoot ............................ 207
16.3.1  Deploying with Flash ........................................................................ 207
          Deploying with JFFS2 ......................................................................... 208
          Deploying with CRAMFS .................................................................. 210
          Deploying with YAFFS ....................................................................... 211
16.4    Example Ramdisk Deployment with U-Boot ................................. 213
          Create the initrd Image ....................................................................... 213
          Configure U-Boot ................................................................................ 213
          Deployment .......................................................................................... 214

17    Deploying Your Board with PXE .......................................................... 215

17.1    Introduction ........................................................................................ 215
          Process Overview ................................................................................ 215
17.2    Preparing the Downloaded Files ...................................................... 216
          The PXELinux Boot Loader File ........................................................ 216
          The PXELinux Configuration File .................................................... 216
17.3    Configuring DHCP for PXE .............................................................. 217
17.4    Setting up and Booting the Target ................................................... 218
          Configuring PXE Boot On the Target ............................................... 218
          Booting the Target ............................................................................... 219

18    Stand-Alone Deployment With Flash Devices ................................... 221

18.1    Introduction ........................................................................................ 221
18.2    Process Overview ............................................................................... 222
18.3    Preliminaries ....................................................................................... 222
18.4    Setting up Hosts ................................................................................. 222
18.5    Stand-alone Deployment with a Ramdisk ...................................... 223
          Loading the Ramdisk Image .............................................................. 223
          Booting the Target ............................................................................... 223
18.6    Stand-alone Deployment with JFFS2 ............................................... 224
          Booting the Target ............................................................................... 224
          Simplifying Your Network and U-Boot Environment ................... 224
18.7    Stand-alone Deployment with CRAMFS ........................................ 225
          Booting the Target ............................................................................... 225
          Simplifying Your Network and U-Boot Environment ................... 226

19    Stand-Alone Deployment to Disk ........................................................ 227

19.1    Introduction ........................................................................................ 227
19.2    Server-Based Installation of Wind River Linux ............................. 227
          An Example of a Self-Contained Server Installation ..................... 228
          Configuring and Building the Server Install ................................... 228
          Booting and Installing ........................................................................ 229
19.3    Booting Standalone with LinuxLive ................................................ 230
          Before You Begin ................................................................................. 231
19.3.1  Creating a Platform Project .............................................................. 231
19.3.2  Preparing the Target's Hard Drive .................................................. 233
19.3.3  Placing the File System and Kernel on the Hard Disk ................. 235
          Copying from the Wind River CD-ROM ......................................... 235
          Copying from a USB Disk .................................................................. 235
          Downloading from a Network Host ................................................ 236
19.3.4  Configuring Target System Files and Booting ............................... 237
19.4    Creating ISO and USB Flash Drive Images .................................... 238

20    Deploying SELinux ................................................................................ 241

20.1    Introduction ........................................................................................ 241
20.1.1  Configuring an SELinux Platform Project ...................................... 241
          Configuring SELinux on the Target .................................................. 241
          Booting the Target and Loading the Policy ..................................... 241
          Building the policy store .................................................................... 242

PART IV: USE CASES


21    Building Run-times with RPM and Source ......................................... 247

21.1    Introduction ........................................................................................ 247
21.2    Tutorial One: RPM Build for Common PC ..................................... 248
21.3    Tutorial Two: Source Build for Common PC .................................. 250
21.4    Tutorial Three: Building ISO Images and Partial Run-time Systems ... 251
          Building an ISO Image ....................................................................... 251
          Building a File System Only .............................................................. 251
          Building a Kernel Only ....................................................................... 252
21.5    Tutorial Four: RPM Build on ARM Versatile AB-926EJS .............. 252
21.6    Tutorial Five: Source Build on ARM Versatile AB-926EJS ............ 253
21.7    Tutorial Six: Building Ramdisk and Flash File Systems ................ 254
          Building a Ramdisk Image ................................................................. 254
          Building a JFFS2 Image ...................................................................... 254
          Building a CRAMFS Image ................................................................ 254

22    Examples of Adding Packages ............................................................. 255

22.1    Introduction ........................................................................................ 255
22.2    Adding SRPM Packages .................................................................... 256
22.2.1  Adding the logwatch SRPM ............................................................. 256
22.3    Adding Spec Packages ....................................................................... 259
          Adding mm .......................................................................................... 259
22.4    Adding Classic Packages ................................................................... 261
22.4.1  Adding Classic Packages with configure ....................................... 261
          Adding links ........................................................................................ 261
22.4.2  Adding Classic Packages without configure ................................. 263
          Adding schedutils ............................................................................... 264
22.5    Adding Packages with a GUI Tool ................................................... 268
22.6    Adding an RPM Package to a Running Target ............................... 270

23    Using Custom Templates and Layers .................................................. 271

23.1    Introduction ........................................................................................ 271
23.1.1  Examples in this Use Case ................................................................ 272
23.1.2  The Layers Used in the Example ..................................................... 272
23.2    Adding a Layer to a Platform Project .............................................. 273
23.3    Adding Another Layer ....................................................................... 274
23.4    Overriding Layer Contents with Another Layer ........................... 275
23.5    Patching a Host Tools Package ......................................................... 276
23.6    Configuring and Patching the Kernel ............................................. 277
          Enabling CONFIG_BINFMT_AOUT ................................................ 277
          Patching the Kernel ............................................................................. 278
          Configuring and Building .................................................................. 278
23.7    Using Feature Templates in Layers .................................................. 279
23.8    Modifying a BSP ................................................................................. 280

24    Kernel Use Cases .................................................................................... 283

24.1    Introduction ........................................................................................ 283
24.2    Adding a Feature to a Supported Kernel ........................................ 283
24.3    Using KVM .......................................................................................... 285
          Overview Of KVM .............................................................................. 285
          KVM Host Requirements ................................................................... 286
24.3.1  Configuring the KVM Host .............................................................. 286
          Boot KVM on common_pc_64 (with TAP) ....................................... 287
24.3.2  Configuring the KVM Guest ............................................................ 288
          Start the KVM guest (linux) from the KVM host (linux) ............... 289
24.3.3  Run apache or boa ............................................................................. 289
24.4    Collecting Kernel Core Dumps with Kdump ................................. 290
24.4.1  Kdump Example with x86 ................................................................ 290
          Using kexec for Quick Reboot ........................................................... 292
          Issues and Limitations ........................................................................ 292

PART V: APPENDIXES


A     Open Source Documentation .............................................................. 295

A.1     Introduction ......................................................................................... 295
A.2     Carrier Grade Linux ........................................................................... 295
A.3     Networking ......................................................................................... 296
A.4     Security ................................................................................................ 296
A.5     Linux Development ........................................................................... 296
A.1 A.2 A.3 A.4 A.5 Introduction ...................................................................................................................... 295 Carrier Grade Linux ........................................................................................................ 295 Networking ....................................................................................................................... 296 Security .............................................................................................................................. 296 Linux Development ........................................................................................................ 296

B     Common make Command Targets ..................................................... 299

B.1     Introduction ......................................................................................... 299

C     File System Layout Configuration ...................................................... 303

C.1     Introduction ......................................................................................... 303
C.2     changelist.xml Commands ................................................................ 304
          General Attributes ............................................................................... 304
          Removing a File, Directory, Pipe, Symlink, or Device ................... 304
          Adding a File ........................................................................................ 304
          Adding a Directory ............................................................................. 305
          Adding a Symlink ............................................................................... 306
          Adding a Device .................................................................................. 307
          Adding a Pipe ...................................................................................... 307
C.3     The fs_final.sh Script .......................................................................... 308

D     KGDB Debugging and the Command Line ....................................... 309

D.1     Introduction ......................................................................................... 309
D.2     Debugging with KGDB from the Command Line ........................ 309
D.2.1   Enabling and Disabling KGDB in the Kernel ................................ 311
          Using the Command Line .................................................................. 311
D.3     KGDB Debugging Using the Serial Console (KGDBOC) ............. 312

E     Connecting with TIPC ........................................................................... 313

E.1     Introduction ......................................................................................... 313
E.2     Configuring TIPC Targets .................................................................. 314
E.2.1   Adding the TIPC Utilities ................................................................. 314
E.2.2   Installing the TIPC Kernel Module and Utilities ........................... 314
E.2.3   Running the usermode-agent .......................................................... 315
E.3     Configuring a TIPC Proxy ................................................................. 315
E.4     Configuring Your Workbench Host .................................................. 316
E.5     Using usermode-agent with TIPC .................................................... 317

F     Control Groups (cgroups) ..................................................................... 321

F.1     Introduction ......................................................................................... 321
F.2     CPUSETS .............................................................................................. 322
F.3     cgroups ................................................................................................. 323

G     Build Variables ....................................................................................... 325

G.1     Introduction ......................................................................................... 325
          Additional Notes on Build Variables ................................................ 329

H     Cavium Simple Executive Integration and Debugging .................... 331

H.1     Introduction ......................................................................................... 331
H.1.1   Components of Wind River Simple Executive Support ............... 332
          Cavium Simple Executive SDK RPM ............................................... 332
          Cavium Simple Executive Linux RPM ............................................. 332
          WRLinux Simple Executive Layer .................................................... 332
          Workbench 3.x Simple Executive Debug Integration Patch ......... 333
H.1.2   Provided "Feature Templates" .......................................................... 333
H.2     Preparing the Host .............................................................................. 334
H.2.1   Installing the Simple Executive Layer Prerequisites ..................... 334
H.2.2   Available Documentation ................................................................. 335
H.3     Configuring and Building from the Command Line .................... 335
H.3.1   Configuring your Project .................................................................. 335
H.3.2   Customize your Package List ........................................................... 335
          Using the Feature Templates ............................................................. 336
          Making Changes Manually ................................................................ 336
H.3.3   Building the Project ........................................................................... 336
H.3.4   Specifying Build Types ...................................................................... 337
H.4     Running Simple Executive Applications ........................................ 337
H.4.1   Linux Usermode Applications ......................................................... 338
H.4.2   Standalone Applications ................................................................... 338
H.5     Simple Executive Layer Technical Notes ......................................... 339
H.5.1   Simple Executive Applications as wrlinux Packages ................... 339
          Application Wrapper Makefiles ........................................................ 339
          Application Wrapper Support Files ................................................. 340
H.5.2   Miscellaneous Simple Executive Details ........................................ 340
H.6     Configuring and Building with Workbench ................................... 341
H.6.1   Adding the SDK Path ........................................................................ 341
H.6.2   Overriding the OCTEON_MODEL Value ...................................... 341
H.6.3   Starting Workbench ........................................................................... 341
H.6.4   Configure a Platform Project with Simple Executive Support .... 342
H.6.5   Building the Platform Project ........................................................... 343
H.6.6   Working with the Package List ........................................................ 343
H.6.7   Changing the OCTEON_TARGET Value for a Package ............... 344
H.7     Configuring the Kernel with Workbench ........................................ 345
H.8     Debugging from the Command Line .............................................. 347
H.8.1   Overview ............................................................................................. 347
H.8.2   Prerequisites ....................................................................................... 348
H.8.3   Available Documentation ................................................................. 348
H.9     Setting Up the Target .......................................................................... 348
H.9.1   Review: Starting a Standalone Application .................................... 348
H.9.2   Starting an Application for Debugging .......................................... 349
H.10    Setting up the Host ............................................................................. 350
H.10.1  Starting GDB ...................................................................................... 350
H.10.2  Connecting to a Target ...................................................................... 350
H.11    Debugging Caveats ............................................................................ 351
H.11.1  Single-step and Atomic Operations ................................................ 351
H.11.2  Debugging Multiprocessor Applications ....................................... 351
H.11.3  Debugging Standalone Images with Linux Running ................... 352
H.11.4  Debugging the Linux Kernel ........................................................... 352
H.12    Debugging with Workbench ............................................................. 353
H.12.1  Prerequisites ....................................................................................... 353
H.12.2  Importing the Application to a C/C++ Project (optional) ........... 353
H.12.3  Creating a Launch Configuration ................................................... 354
H.12.4  Debugging the Application .............................................................. 355
H.12.5  Note(s) on Workflow ......................................................................... 357
H.13    Known Issues, Limitations, and Tips ............................................... 357

Glossary ...................................................................................................... 359 Index ................................................................................................................ 363


PART I

Introduction, Design, and Build

1 Introduction .............................................................................. 3
2 Development Workflow ................................................................ 13
3 The Development Environment ...................................................... 17
4 Configuring and Building ............................................................. 31
5 Layer and Template Processing ...................................................... 49
6 Custom Layers and Templates ....................................................... 67
7 Application Development ............................................................. 83


1 Introduction

1.1 Introduction 3
1.2 Wind River Linux Documentation 4
1.3 Roadmap to the Wind River Linux User's Guide 5
1.4 Document Conventions 6
1.5 Overview of Wind River Linux 6
1.6 Platform Developer and Application Developer 7
1.7 Kernel and File System Components 8
1.8 Cross Development Tools 10
1.9 Supported Run-time Boards 11
1.10 Additional Resources 11

1.1 Introduction
Welcome to the Wind River Linux User's Guide. Wind River Linux is a software development environment that creates optimized Linux distributions for embedded devices. Development environments are available on a number of host platforms, and support a large and ever-growing set of targets. For details on particular host support refer to the Release Notes. For supported target boards, refer to Wind River Online Support.


1.2 Wind River Linux Documentation


The following is a list of the documentation provided by Wind River that supports development of Linux targets. Much of this documentation is available through the installation host's start menu, for example under Applications > Wind River > Documentation on Red Hat Enterprise Linux.

Wind River Linux User's Guide (this document)

This guide describes Wind River Linux: how to configure it and customize it for your needs. It is primarily oriented toward command-line usage, but it is also useful to Workbench developers who want to understand some of the underlying design and implementation of the build system. It provides both explanatory and procedural use case material.

Wind River Linux Getting Started

The Getting Started provides a few brief procedures that you can perform on the command line or with Workbench. Its primary purpose is to orient you to the main ways of using Wind River Linux and to point you to the documentation areas that focus most on the way you will be using the product.

Wind River Workbench User's Guide

This guide describes how to use Workbench to develop projects, manage targets, and edit, compile, and debug code.

Wind River Workbench by Example, Linux Version

This guide is for Linux-specific use of Workbench, and provides examples on how to configure and build application, platform, and kernel module projects.

Wind River Workbench Online Help

Wind River Workbench provides context-sensitive help. To access the full help set, select Help > Help Contents in Wind River Workbench. To see help information for a particular view or dialog box, press the help key when in that view or dialog box. See 1.4 Document Conventions, p.6 for details on the help key.

Wind River Linux Reference Pages

Reference manual pages (man pages) for the GNU commands on the Wind River Linux development host. Accessible through Workbench help with Help > Help Contents > Wind River Documentation > References > Wind River Linux Operating System Reference.

Wind River Analysis Tools documentation

This is a set of documents that describe how to use the Wind River Analysis tools that are provided with Workbench. The tools include a memory use analyzer, an execution profiler, and System Viewer, a logic analyzer for visualizing and troubleshooting complex embedded software. The Wind River System Viewer API Reference is also included.

Wind River Workbench Host Shell User's Guide

The host shell is a host-resident shell, provided with Workbench, that offers a command-line interface for debugging targets.


Most of the documentation is available online as PDFs or HTML accessible through Wind River Workbench online help. Links to the PDF files are available by selecting Wind River > Documentation from your operating system start menu. The documentation is also available below your installation directory (called installDir) through the command line as follows:

PDF Versions - To access the PDF, point your PDF reader to the *.pdf file, for example: installDir/docs/extensions/eclipse/plugins/com.windriver.ide.doc.wr_linux_platforms/wr_linux_users_guide_3.0/wr_linux_users_guide_3.0.pdf.

HTML Versions - To access the HTML, point your web browser to the index.html file, for example: installDir/docs/extensions/eclipse/plugins/com.windriver.ide.doc.wr_linux_platforms/wr_linux_users_guide_3.0/html/index.html.

1.3 Roadmap to the Wind River Linux User's Guide


This document is divided into the following parts:

Part I. Introduction, Design, and Build - Provides an overview of Wind River Linux including kernel and file system combinations, descriptions of the development and build environments, development workflow, and use of custom templates, layers, and sysroots.

Part II. Configuring and Customizing - Provides information on configuring kernels and file systems, configuring conditional real-time Linux, configuring scalable features to control the size of your run-time, and application and kernel patch management.

Part III. Deploying your Platform Project - Describes how to deploy your run-time in a networked or standalone environment, including how to use the QEMU simulator, multiple network servers, PXE and U-Boot boot loaders, and stand-alone deployment issues including deployment with Ramdisk, JFFS2, CRAMFS, hard disks, and USB disks.

Part IV. Use Cases - A variety of examples of how to develop with Wind River Linux, including creating platform projects, customizing them, adding and removing packages, and using layers and templates.

Part V. Appendixes - Provides miscellaneous information including open source documentation pointers, make target summary, build variables, and further details on various features.


1.4 Document Conventions


In this document, placeholders for which you must substitute a value are shown in italics. Literal values are shown in bold. For example, this document uses the placeholder installDir to refer to the location where you have installed Workbench. By default, this is C:\WindRiver on Windows hosts and $HOME/WindRiver on Linux and Solaris hosts. The placeholder prjbuildDir refers to the project build directory in which much of your work takes place. Menu choices are shown in bold, for example File > New > Project means to select File, then New, then Project. Commands that you enter on a command line are also shown in bold and system output is shown in typewriter text, for example:
$ pwd
/home/mary/WindRiver/workdir/prjbuildDir
$

Long command lines that would normally wrap are shown using the backslash (\) followed by ENTER, which produces a secondary prompt, at which you may continue typing. (The secondary prompts are not shown to make it easier to cut and paste from the examples.) In the following example you would enter everything literally except the $ prompt:
$ configure --enable-board=sun_niagara2_sun4v \
--enable-kernel=standard \
--enable-rootfs=glibc_std

If a command requires root privileges to run, the prompt is displayed as #. The path to the configure script used to configure a project is generally omitted for brevity. The script is found in installDir/wrlinux-version/wrlinux/. The following naming conventions are used throughout the guide:

- /home/user/WindRiver is referred to as installDir.
- The directory or folder where you build your projects, for example /home/user/workdir/common_pc (common_pc_prj in Workbench), is referred to as prjbuildDir.

1.5 Overview of Wind River Linux


Wind River Linux supports many leading commercial off-the-shelf (COTS) boards. The build system is a complete development environment that includes a full set of standard Linux run-time components, both as binary and source packages. It also includes cross-development tools that can be used to configure and build customized run-time systems and applications for a range of COTS hardware. Wind River supports boards according to customer demand. Please contact Wind River if yours is not yet officially supported. Wind River Workbench is included as part of Wind River Linux to provide a robust application development and debugging environment. For more information about Wind River Linux, see http://www.windriver.com/products/.


1.6 Platform Developer and Application Developer


You may purchase Wind River Linux in two different packages, depending on the type of development work you intend to perform and on your host system.

Platform Developer

The Platform Developer package is for developers who are intimately concerned with the Linux operating system including:

- configuring and rebuilding the kernel and file system
- developing or adding device drivers or kernel modules
- deploying the kernel and file system to target boards

The Platform Developer package is available for the Linux host systems specified in the release notes.
Platform Developer Package Contents

The Platform Developer package includes the full Wind River Linux:

- reference source
- reference file system
- target libraries
- cross-build system
- host utilities
- GNU toolchain
- Board Support Package (BSP) components for supported boards

Included is Wind River Workbench, with debugging and analysis features:


- KGDB kernel mode agent debugging
- ptrace user mode agent debugging
- Wind River System Viewer
- Wind River Analysis Tools
- core file analysis

The platform developer can export a sysroot (a portable set of libraries, include files, and other resources) as well as a toolchain to be used by the application developer.

Application Developer

The Application Developer package is for the developer of user-level applications only. It is available for Linux, Solaris, and Windows host systems as listed in the release notes.
Application Developer Package Contents

The Application Developer package includes a subset of Wind River Linux:


- target libraries
- Wind River host utilities
- Wind River GNU GCC 4.3.x toolchain


Included is Wind River Workbench, with a debugging and analysis tools subset:

- ptrace user mode agent debugging
- Wind River System Viewer
- Wind River Analysis Tools

NOTE: The sections of this book that deal mainly with the Wind River Linux cross-build system, reference source and file system, and BSP components, are not relevant to the Application Developer Package.

1.7 Kernel and File System Components


Wind River Linux supports user-configurable combinations of kernel profiles and file systems.
NOTE: Not all kernel feature and file system combinations are supported on any particular board. Refer to Wind River Online Support for information on supported root file system and kernel combinations for your board.

The kernel-BSP-filesystem feature matrix, available on Wind River Online Support, is the foundation of the kernel feature profiles, and documents the supported configurations as tested by Wind River. The matrix consists of kernel feature profiles, supported BSPs, and the types of file systems supported.

Kernel Feature Profiles

A kernel feature profile implements a supported set of kernel features. Each contains features that are compatible with each other and excludes features that are not compatible. Kernel profiles use a combination of kernel configuration, kernel patches, and build system changes to support their features.
NOTE: Kernel feature profiles are not the same as profile templates. Profile templates (or simply profiles) are described in profile Templates, p.25.

The kernel profiles are layered to build a set of increasingly specific or enhanced functionality. The set of features that is available and tested on all boards is called the standard kernel profile. Kernel feature profiles that add or modify the functionality of the standard profile are called enhanced kernel profiles. Enhanced profiles are available on a selected set of boards and are mutually exclusive with other enhanced profiles. A single board may be supported by multiple mutually exclusive (runtime) enhanced profiles along with the standard profile.
NOTE: All features of the standard kernel profile work within any particular enhanced profile.

Wind River Linux provides the following kernel profiles:


standard - All boards support the standard profile; fundamental kernel features are implemented in this profile to provide a common platform for all boards.

small - The small kernel profile represents a configuration suitable for resource-constrained deployment. Available on selected boards.

cgl - This is the Carrier Grade Linux profile, designed to support the Linux Foundation's CGL 4.0 specification. See http://www.linux-foundation.org/en/Carrier_Grade_Linux for a summary and details on the CGL specification. Available on selected boards. Not available for ARM or MIPS based boards.

ecgl - This is an extended CGL profile. Kernels with this profile provide the CGL features plus extensions. Available on a subset of CGL boards. Refer to Wind River Online Support for more information.

preempt_rt - This kernel profile provides the PREEMPT_RT kernel patches to enable conditional hard real-time support for selected boards. For details, refer to http://rt.wiki.kernel.org/index.php.

rtcore - Kernels with this profile support the Real-Time Core guaranteed real-time core extensions. Real-Time Core is an optional product available from Wind River.

NOTE: A single board may be in one or more enhanced kernel profiles.

For detailed instructions on reconfiguring and customizing Wind River Linux kernels, see chapter 9. Configuring the Kernel.

Four File Systems

There are four basic file systems:

Glibc Standard (glibc_std) - A full file system, with Glibc but without CGL-relevant packages or extensions.

Glibc CGL (glibc_cgl) - A full file system, with CGL-relevant packages and CGL extensions.

Glibc Small (glibc_small) - A much smaller, BusyBox-based file system, with Glibc.

uClibc (uclibc_small) - The same BusyBox-based file system as glibc_small, but with uClibc, a small C library intended specifically for very small footprint systems.

Run-time components are available both as binary RPMs and source tar files.


Combinations of File System and Kernel Feature Profiles

Table 1-1 shows which file systems are available with each kernel profile.
Table 1-1 Kernel Profiles and Supported File Systems

Kernel Feature Profile       glibc_std   glibc_cgl   glibc_small   uclibc_small

standard                     Yes (a)     No (b)      Yes           No
small                        No          No          Yes           Yes
cgl                          No          Yes         Yes           No
ecgl                         No          Yes         No            No
preempt_rt                   Yes         No          Yes           No
rtcore (optional product)    Yes         No          Yes           Yes

a. In some cases this combination may not be supported as it is not needed on purely networking equipment. Individual board readme files contain details.
b. In cases where a board cannot support the cgl kernel profile (for example MIPS boards) it instead supports the standard kernel profile and the glibc_cgl root file system with some features of the userspace gracefully failing for lack of kernel support.

NOTE: Refer to Wind River Online Support for the latest kernel-filesystem-BSP feature matrix to determine which kernel features and file systems are supported for your board.

1.8 Cross Development Tools


You can use the Wind River Linux build system to create a Linux kernel and a separate target root file system with all necessary configuration and initialization files for a deployed Linux platform. You can add or remove source RPM and traditional tar archive packages for customized solutions. You can also add or remove RPM binary packages from the target file system, automatically checking dependencies and flagging missing libraries, components, or version mismatches. The build system provides a version-controllable development environment, separate from the host file system, which is protected from inadvertent damage. Cross-development is supported by the inclusion of the GNU cross-toolchain, and enhanced by the addition of Wind River Workbench. Workbench supports kernel mode debugging through the Kernel GNU Debugger (KGDB), and user mode debugging through the ptrace agent. For detailed information on using Wind River Workbench, see the Wind River Workbench User's Guide and the Wind River Workbench by Example, Linux Version.



1.9 Supported Run-time Boards


Wind River Linux comes complete with pre-built Linux kernels and pre-built run-time file system packages (and will build identical and configurable kernels and file systems from source) for many boards from a variety of manufacturers. For the most recent list of supported boards, see Wind River Online Support. Information on setting up target servers and booting supported boards, as well as details on booting with ISO, hard disk, and flash RAM, can be found in Part III. Deploying your Platform Project.
NOTE: Wind River strongly recommends that you read your board's README file, located within installDir/wrlinux-3.0/wrlinux/templates/board/boardname/. This file contains important information on board bring-up, boot loaders, board features, and board limitations. The board README and other README files can also be found in your prjbuildDir/READMES directory when you configure a project.
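For example, with the default installation location and an example board name of common_pc (substitute your own installation path and board name), you could view the README with a command such as:

$ less $HOME/WindRiver/wrlinux-3.0/wrlinux/templates/board/common_pc/README

After you configure a project, the copies placed in prjbuildDir/READMES can be viewed the same way.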

1.10 Additional Resources


Online Support

Wind River Online Support provides updates and enhancements to packages as they become available, which can be downloaded and added to Wind River Linux. Tutorials designed to illustrate Wind River integration with Workbench, as well as sample configuration files to simplify the target board boot process, are also available.

Use Cases

Besides step-by-step instructions in several chapters, this User's Guide includes several tutorial examples, in Part IV. Use Cases.
NOTE: Detailed Workbench tutorials are available in the Wind River Workbench User's Guide and Wind River Workbench by Example, Linux Version.

Installation

Complete installation instructions can be found in the Wind River product installation and licensing guides. Go to http://www.windriver.com/licensing and then choose the Site Configuration Documentation link on that page. Any last-minute changes in the installation procedure or host requirements can be found in the Wind River Linux Release Notes or at Wind River Online Support.




2 Development Workflow

2.1 Introduction 13
2.2 Installing, Configuring, and Deploying Run-Time Software 14
2.3 Updating and Debugging 15
2.4 Preparing a Product Deployment 16

2.1 Introduction
This chapter presents an overview of the development workflow for application and platform development using Wind River Linux with Wind River Workbench. The cycle starts at product installation and ends at product deployment. This chapter provides basic instructions for building the run-time system. Each section refers to subsequent chapters for detailed explanations and step-by-step tutorials. Figure 2-1 illustrates the basic stages of product development.
Figure 2-1 Overview of the Product Development Lifecycle

[Product requirements (applications, files, packages, unit tests, docs, layers, sysroots, profiles, kernel, file system) flow through the Setup stage (templates, layers, packages, RPMs, CVS, ...), the Develop stage (configure, edit, compile; on-host with Workbench), the Diagnose stage (deploy, debug, test; on-host with Workbench and the target), and the Optimize stage (system layout views, profiles, static analysis, cores, footprint; Workbench, Eclipse, target), producing the product deliverables: kernel image, file system image, layers, and sysroots.]



This document is largely concerned with the Setup and Develop phases shown in Figure 2-1. Refer to the Analysis Tools documentation listed in 1.2 Wind River Linux Documentation, p.4 for details on the Diagnose and Optimize phases. You can perform most of the operations involved in the development cycle within the GUI environment of Workbench, although most of the operations performed in this book occur at the command line. For full details, see the Wind River Workbench User's Guide and Wind River Workbench by Example, Linux Version.

2.2 Installing, Configuring, and Deploying Run-Time Software


For full instructions on installing Wind River Linux, please see your Release Notes, the Developer Install Guide (a fold-out included with the product), and Wind River Online Support. Initial deployment includes configuring and building the run-time system, deploying to a target, and debugging with Workbench or some other tool. Building a working run-time system (make fs) for the first time takes roughly thirty minutes, depending on your configuration and resources. As long as the basic hardware and system infrastructure is in place for target deployment, target bring-up can be performed in an additional five minutes. To connect with Workbench and start debugging can take even less time. Refer to 4. Configuring and Building for details on building from prebuilt binaries and default kernels, as well as creating more custom configurations.

Creating Platform and Application Projects

Platform projects consist of default or customized kernel and file system combinations, and application projects are targeted for specific platforms. Platform developers create a platform project and then produce a sysroot (with make export-sysroot) for application developers. The sysroot provides the target runtime libraries and header files for use by the application developers on their development hosts. Because the sysroot duplicates application dependencies of the eventual runtime environment, applications are easily deployed after development. Platform developers can incorporate developed applications in a project by placing the application under prjbuildDir/filesystem/fs/ or by using the file system layout feature, and then rebuilding the file system (see 8. Changing Basic Linux Configuration Files and C. File System Layout Configuration for more information).
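For example, assuming a project has already been configured and built in prjbuildDir, the sysroot export step might look like the following sketch (see 4. Configuring and Building and the application development chapters for the complete procedure):

$ cd prjbuildDir
$ make export-sysroot

The exported sysroot, together with the toolchain, can then be handed to application developers for use on their own hosts.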

Configuring and Building the Platform Project

By configuring and building a platform project, you create a complete run-time system. You typically create a platform project within a work directory, called in this document workdir. Within workdir you create a subdirectory for the particular



project, which will be referred to in this document as prjbuildDir. Your prjbuildDir would typically have some name indicating its contents, for example common_pc_small, or new_powerpc. Within your prjbuildDir, issue a configure command with the necessary options to configure the appropriate build environment and makefiles. You then issue a make command to build a complete platform including the kernel and root file system. For more detailed instructions on the configuration and build process, see chapter 4. Configuring and Building.
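As a sketch of that flow, assuming the example board, kernel, and file system names used elsewhere in this guide (adjust the values and the path to the configure script for your own installation and target):

$ cd prjbuildDir
$ $HOME/WindRiver/wrlinux-3.0/wrlinux/configure \
--enable-board=common_pc \
--enable-kernel=standard \
--enable-rootfs=glibc_std
$ make fs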

Configuring for a Custom Target

If your target does not exactly match one of the supported boards, you can create a custom board support package, generally based on one of the provided definitions. See the Wind River Linux BSP Developer's Guide for detailed instructions.

Deploying Runtime Software

The process of initially deploying a target entails providing the kernel and file system, setting up the network infrastructure, configuring the bootloader, and so on. Note that the QEMU simulated deployment method (supported only for some boards) makes network infrastructure, bootloaders, and even target hardware unnecessary.

2.3 Updating and Debugging


Wind River Linux is designed to be easily modified both at the kernel and package (file system) level. Debugging of both kernel and userspace applications can be performed in traditional fashion at the command line or with the additional debug tools provided by the Workbench GUI.

Updating Packages

You can add or remove RPM or source packages from your system, and these packages may be ones provided by Wind River or available from third parties. Package configuration is covered in detail in chapters 8. Changing Basic Linux Configuration Files, 10. Adding Packages, and Part IV. Use Cases. In addition, Wind River Workbench by Example, Linux Version describes how to use the Workbench GUI to add and remove packages, apply patches, and much more.



Updating the Kernel Configuration

You can reconfigure, rebuild, and test the kernel. See chapter 9. Configuring the Kernel.

Debugging Runtime Software

Wind River Linux platforms and applications can be developed and debugged with the wide range of tools available with Linux in general. In addition, Wind River Linux includes the powerful set of tools provided with Wind River Workbench, which consists of the Eclipse-based Workbench GUI customized with debugging and target management tools, as well as a set of analysis tools. There are several advantages to using Workbench for development, rather than just using command-line tools such as gdb and others on existing source code. For example, you can take advantage of the integration of the Editor and source code navigation facilities, create launch configurations, and in general use the GUI and its many different views to control breakpoints, monitor threads and processes, and so on.

2.4 Preparing a Product Deployment


Prior to deploying your production version, you will generally go through the process of recompiling packages and applications without the debug flags, removing any such development add-ons and support. By default, the configure command is set to --enable-build=production, which does not include debugging or profiling information in the binaries and libraries. If you have been configuring with --enable-build=debug or --enable-build=profiling, remove that option from the configure line when creating your final platform for deployment. You can also explicitly include the --enable-build=production option (an example appears at the end of this section). To turn off kernel debugging support, within the project build directory, change directory to prjbuildDir/build, and enter:
$ make linux.menuconfig

Go to the Kernel hacking menu item, and disable Compile the kernel with debug info. Alternatively, you can use the Kernel Configuration tool in a Workbench platform project as described in the Wind River Workbench by Example, Linux Version. (If you are enabling this option in a *.cfg file, turn it off there. Refer to 9. Configuring the Kernel for more on kernel configuration.) You may want to optimize libraries, if appropriate, as described in 12. Configuring Scalable Features, or configure for stand-alone deployment as described in Part III. Deploying your Platform Project. Final test and release completes the process.
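As an example of the deployment configuration described above, a production platform might be configured from a clean project build directory along these lines (the board, kernel, and file system values are placeholders for your own):

$ $HOME/WindRiver/wrlinux-3.0/wrlinux/configure \
--enable-board=common_pc \
--enable-kernel=standard \
--enable-rootfs=glibc_std \
--enable-build=production
$ make fs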


3 The Development Environment

3.1 Introduction 17
3.2 Development Environment Directory Structure 18
3.3 Templates and Layers 20
3.4 Layers in the Development Environment 21
3.5 Templates in the Development Environment 23

3.1 Introduction
You build Wind River Linux run-time systems using two different environments:

- the development environment
- the build environment

The development environment is the installed Wind River code, in its own directory and subdirectory structure. The build environment is a completely separate area, where you actually build the run-time system. When building a run-time system for a supported target board or simulation, you should not have to enter or modify the development environment in any way. The separation of the development and build environments keeps the development environment pristine, and also supports parallel builds. This chapter introduces the structure and content of the development environment. It includes a discussion of the function and contents of the two prominent structural features of Wind River Linux: layers and templates. The build environment, including the use of the configure and make commands to build runtime software, is described in 4. Configuring and Building.



3.2 Development Environment Directory Structure


Wind River Linux's development environment is typically installed in the WindRiver subdirectory of your home directory, that is, /home/user/WindRiver. This is referred to as installDir.
Figure 3-1 Overview of Development Environment

installDir/
    startWorkbench.sh and Workbench directories
    updates/                        Patches, etc.
    wrlinux-3.0/                    Linux Product Directory
        ldat/                       Build System
        wrlinux/                    configure front-end
        sysroots/
        layers/
            wrll-analysis-version/      Analysis Tools Layer
            wrll-host-tools/            Host Tools Layer
            wrll-linux-version/         Kernel Layer
            wrll-toolchain-version/     Toolchain Layer
            wrll-wrlinux/               Core Layer

The structure shown in Figure 3-1 makes it clear what is part of the build system and therefore cannot be overridden by other layers, and what is actually a layer and therefore can be overridden by your custom configurations.
NOTE: Not all layers are shown in Figure 3-1 and additional layers may be added.

The directories and executables shown in Figure 3-1 are discussed in the following sections.

startWorkbench.sh and the Workbench Directories

The startWorkbench.sh executable starts the Workbench GUI. You can start Workbench by clicking a desktop icon or, from the command line, by entering the path and name of the executable. Workbench is introduced in the Wind River Workbench User's Guide, and examples of its use are in Wind River Workbench by Example, Linux Version. Not specifically shown in the diagram are several directories of interest to Workbench users:

workbench-3.1 - The Wind River Workbench installation.

workspace - The default Workbench workspace. You can specify an alternate location at Workbench startup or switch workspaces during use.



docs - The documentation for the online help system. The .html and .pdf files may also be accessed directly by browsing.

wrlinux-3.0/scripts - Scripts useful for Workbench and otherwise, including a script to help in adding packages to a project (see 22.5 Adding Packages with a GUI Tool, p.268).

wrlinux-3.0/samples - Sample projects that can be used in Workbench as well as from the command line.

The updates Directory

Use this location for patches and other updates from Wind River.

3.2.1 The wrlinux-3.0 Directory


The wrlinux-3.0 directory contains the Wind River Linux development environment, with the contents as shown in Figure 3-1 and discussed in this section.

The sysroots Directory

The sysroots directory contains several pre-built sysroots that are available as build specs out of the box. Sysroots provide board-specific, pre-built target libraries to link against when building a package from source.

The wrlinux Directory

This directory contains the configure script you use when configuring a project and a config directory that contains files setting default configure script behavior. The configure script is a front end to the product-agnostic ldat/configure script. You run the wrlinux/configure script which calls ldat/configure with the proper parameters.

The ldat Directory

ldat is an acronym for Linux Distribution Assembly Tool. This directory contains the package- and product-agnostic build infrastructure. It includes text files that list required host tools for supported development hosts.

The scripts subdirectory contains a number of shell scripts for the build system that perform various build tasks, including defining build macros and rules. Within the tools subdirectory are makefiles and configuration scripts to build Wind River-modified host tools including the rpm, bzip2, and elfutils tools used to build target packages; the qemu simulator; the patch and quilt patching tools; and many more.



The layers Directory

The wrlinux-3.0/layers directory is described in detail in 3.4 Layers in the Development Environment, p.21.

3.3 Templates and Layers


Wind River Linux is mainly organized as templates and layers. The organization and contents of the provided layers and templates are described in this chapter. How the build system processes these layers and templates is described in 5. Layer and Template Processing.

3.3.1 What is a Template?


A template is a collection of configuration settings and files that you use when creating a project; the build system combines multiple templates to build a complete project. Templates can add packages to the file system, change kernel configuration flags, provide kernel or package patches, or even override existing definitions of other templates. You can specify templates directly on the command line, or through include files in templates, which specify additional templates for inclusion. Templates are often categorized into types. For instance, templates that control the contents of the root file system are called rootfs templates. A BSP is a board template which specifies a cpu template and a kernel architecture (karch) template in its include file.

3.3.2 What is a Layer?


Wind River Linux provides for multiple independent collections of templates, code, configuration files, and packages, called layers. Multiple layers may be included in a single project, and each layer can provide any combination of features, ranging from kernel patches to new user space packages. A layer allows the addition of new files, such as the templates that define a board for a new BSP, without modifying the original development environment. Just as templates can include other templates, layers can include other layers using the include file mechanism.



3.4 Layers in the Development Environment


Layers provide a mechanism for separating functional components of the development environment as described in this section. You can also create your own layers to provide similar separation of your components when developing products as described in 6. Custom Layers and Templates.

The layers Directory

The Wind River Linux development environment contains separate layers for the kernel, toolchain, and build source files. These layers are directories in installDir/wrlinux-3.0/layers. The Wind River Linux layers have the prefix wrll, for example wrll-wrlinux, which is the layer containing the kernel and file system sources as well as associated configuration directories (templates). A kernel layer (wrll-linux-version) and toolchain layer (wrll-toolchain-version) are also provided in the layers directory. The layers directory may also contain other layers including optional products, such as the Real-Time Core product from Wind River (wrll-rtcore-version). The basic layered development structure is shown in Figure 3-2.
Figure 3-2 Overview of Development Environment Layers

installDir/wrlinux-3.0/layers/
    wrll-analysis-version/
        dist/  packages/  templates/  tools/
    wrll-host-tools/
        host-tools/  templates/  tools/
    wrll-linux-version/
        board/  dist/  docs/  packages/  templates/
    wrll-toolchain-version/
        include  wrll-toolchain-*/
    wrll-wrlinux/
        RPMS/  dist/  host-tools/  packages/  templates/  tools/

The sections below describe the contents of the primary layers provided with Wind River Linux.
The wrll-analysis-version Layer

The analysis layer includes Workbench and command-line analysis tools support. The subdirectories and contents are:

dist/ - Makefiles and source files for building analysis tools.

packages/ - Source packages for the Workbench analysis tools. Refer to the Workbench analysis tools documentation for details on these tools.

templates/ - Various analysis tools templates for inclusion during configuration.



tools/ - Makefiles and source files for the boottime analysis tools. See 12. Configuring Scalable Features for more information on the boottime analysis tools.

The wrll-host-tools Layer

The host tools layer contains the host tool binaries in host-tools, and the makefiles and patches in tools that create the binaries from the source code provided in layers/wrll-wrlinux/packages. A default template adds the host tools to your build configurations.
The wrll-linux-version Layer

The kernel layer contains the default kernel for different board combinations under the boards subdirectory. The packages directory contains the source file archives for host and target kernel tools; host makefiles and patches are in tools, and target makefiles and patches are in dist. The templates directory provides configuration files for different kernel configurations.
The wrll-toolchain-version Layer

The toolchain layer (wrll-toolchain-version) contains toolchain layers for each supported architecture, a common toolchain layer, source, and an include file that includes the architecture-specific toolchain layers. This is an example of a layer using an include file to include additional layers. In this include file, the toolchains for the individual architectures are preceded with a minus (-) sign, meaning that a particular toolchain is not required for a configuration to proceed. The common toolchain entry does not have a minus sign, meaning that it is required. The toolchain subdirectories contain the complete GNU tool chain including cross-compiler and GNU documentation. The toolchain layer was formerly supplied by the wrlinux-version/gnu/ directory.
The wrll-wrlinux Layer

This directory contains the open source files, target package patches and makefiles, and the resulting binaries and has the following subdirectories:

The packages directory contains compressed tar files and source RPMs for Wind River Linux run-time system applications and host tools. These are copied from their open source project repositories.

The dist directory contains makefiles and patches for the run-time application source files, each in their own package subdirectory. The makefiles and patches integrate the original pristine source files from the open source community stored in packages into the Wind River Linux build system.

The RPMS directory contains the patched run-time applications packaged as binary RPMs. These pre-built packages are used to build the run-time file system unless you make source modifications.

The templates directory contains a set of templates that control the architecture, CPU, board, and kernel configuration (including patches); the file system configuration; and the package list of each board. For more information on templates, see section 3.5 Templates in the Development Environment, p.23.



Layer Structure and Replaceability

The kernel (wrll-linux-version) and core (wrll-wrlinux) layers show some of the standard subdirectories that layers can contain, including custom layers that you create. Layers make the development and build environments highly configurable. Layers are replaceable: if you have a different kernel layer, for example, you could specify it as an option to the configure command and override the default kernel layer. For more information on layers, refer to 5. Layer and Template Processing.

3.5 Templates in the Development Environment


Templates provide a way for Wind River Linux to support five basic architectures and many processor families and target boards. Templates reduce repetition of common configuration information, enhance standardization of supported features, and simplify reconfiguration and the addition of new software. A template is simply a directory containing a collection of text configuration files. Development directory templates are typically organized as directory/subdirectory pairs, for example rootfs/glibc_small is a template of the root file system type for the glibc_small root file system, and kernel/standard is a template of the kernel type for the standard kernel. The configuration files in a template define the various file systems and kernels available with different boards. File system templates reside primarily within layers/wrll-wrlinux/templates (the core layer) and kernel templates within layers/wrll-linux-version/templates (the kernel layer). This section describes the typical contents of templates and their basic types. The way the build system processes templates is described in 5. Layer and Template Processing.

3.5.1 Template Configuration Files


A template may contain any of the following types of configuration files:

config.sh - Defines build environment variables.

include - Lists other templates to include.

*.cfg - Kernel config fragments (partial kernel configuration files).

uclibc.cfg - A build configuration file for uClibc-based file systems.

pkglist.* - This set of files controls the ultimate contents of the prjbuildDir/pkglist file (a minimal example appears after the note below):

    pkglist.add - Lists packages to be added to the target package list.
    pkglist.remove - Lists packages to be removed from the target package list.
    pkglist.only - A list of packages that are to be the beginning of a new target package list, replacing any pkglist assembled up to this point.

toolslist.* - This set of files controls the ultimate contents of prjbuildDir/toolslist:

    toolslist.add - Lists packages to be added to the host tools package list.
    toolslist.remove - Lists packages to be removed from the host tools package list.
    toolslist.only - A list of packages that are to be the beginning of a new host-tools package list, replacing any toolslist assembled up to this point.

modlist.* - This set of files controls the ultimate contents of the prjbuildDir/filesystem/fs/etc/modules file:

    modlist.add - Lists kernel modules to add to the target for optional loading (for example with the modprobe command).
    modlist.remove - Lists modules to be removed from the target module list.
    modlist.only - A list of modules that are to be the beginning of a new target module list, replacing any modules file assembled up to this point.

fs/ files - Templates may contain an fs subdirectory, the contents of which are used to help assemble the final target file system. For example, an fs/etc/inittab file may contribute the inittab for the target file system.

bootloader/ files - Files associated with bootloader requirements for the specific template.

README - Text describing the template. This is copied to prjbuildDir/READMEs/ when you configure a project that includes the template.
NOTE: Each board/boardname template includes an important README file. This file provides detailed information on the particular board, including information on board-specific features, its bootloader, and booting procedures. This README file is available in the prjbuildDir/READMES directory after you configure a project.
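As an illustration only, a minimal custom template might consist of nothing more than a package list fragment and an include file. The template name, the included template, and the package shown below are hypothetical examples rather than provided templates or guaranteed package names:

$ cat templates/feature/mytools/pkglist.add
strace
$ cat templates/feature/mytools/include
feature/some_other_template

Creating your own templates, and pointing the build at them with the --with-template-dir option or a custom layer, is covered in 6. Custom Layers and Templates.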

3.5.2 Core Layer Template Directory Structure and Contents


The wrll-wrlinux templates define the boards, features, and file systems as shown in Figure 3-3. Additional templates define tests and add-on products.



Figure 3-3 Structure of the Core Templates Directory

wrll-wrlinux/
    templates/
        board/  extra/  feature/  profile/  rootfs/  test/

The following sections describe the contents of the five primary template directories and the three supplementary template directories of the wrll-wrlinux core layer.

profile Templates

Profiles combine kernel, file system, and other features into groups tailored for specific uses. They can be used as is, used as a template to which you may add or subtract features, or used as a model for creating custom profiles. Pre-defined profiles provide you with a starting point to create tailored solutions that best fit your needs. They are not necessarily intended to be used as an actual product, but may serve as a model or basis for a product. Pre-defined profiles include:

consumer_premise_equipment - A consumer device profile for customer premise equipment. This profile provides, for example, possible configurations for a set top box, or a home network gateway.

industrial_equipment - A consumer device profile for industrial personal computers. This profile provides, for example, possible configurations for a network monitor device, or an industrial control device.

mobile_multimedia_device - A consumer device profile for mobile and multimedia devices. This profile provides, for example, possible configurations for a personal media player, or a mobile internet device.

pne - Carrier Grade Linux. The Carrier Grade Linux specification is a standard, owned by the Linux Foundation, that defines features and performance of a Linux distribution suitable for use in carrier grade equipment. Enabling this profile will provide a configuration that implements the requirements of the Carrier Grade Linux specification as published at http://www.linuxfoundation.org/en/Carrier_Grade_Linux. The pne profile provides a CGL registered kernel and userspace configuration. Documentation describing which features are enabled in this configuration can be found on the CGL registration page at http://www.linuxfoundation.org/en/Registration.

epne - Enhanced Carrier Grade Linux. This profile provides certain enhancements that are not currently part of the Carrier Grade Linux specification but are of use in carrier environments. This includes customized OOM killer behavior, network traffic statistics gathering, specialized communications mechanisms for high-performance applications inside the same machine, and modified default configuration values.

lpne - Limited Carrier Grade Linux. This profile is intended for environments where some carrier grade features are required but a full CGL registered kernel and userspace is not necessary. Expected environments for this profile are small networking appliances and consumer electronics devices where specific components of the CGL specification are required (for example, applications where only the CGL Security requirements are necessary).

For additional information on the profiles and their supported features, refer to the README files and templates associated with each profile in installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/profile. For information on how to configure a project to include a profile, refer to Configuring with Profiles, p.36.

rootfs Templates

The templates/rootfs directory contains two types of templates:

*libc* - Template subdirectories of rootfs that contain libc in their names, for example glibc_std, provide the package lists of the particular root file system type in pkglist.add files. They also contain include files that include the appropriate *_fs structure template for the file system (described next), and config.sh files which provide environment variables for the build system.

file_system_fs - Template subdirectories of rootfs that end in the suffix _fs provide file system structure. For example, glibc_small_fs contains some target /etc files and subdirectories as well as fs-install and pre-cleanup scripts used by the build system as described in 5.5 Constructing the Target File System, p.62.

These are discussed in more detail in the following subsections.


rootfs/*libc* Templates

The four supported file system templates are:

glibc_cgl This provides a full suite of Glibc-based run-time packages, including CGL-relevant packages and CGL extensions.

glibc_std This provides a full suite of Glibc-based run-time packages, but without CGL-relevant packages and CGL extensions.

glibc_small This provides a reduced suite of Glibc-based packages in a BusyBox-based run-time system.



uclibc_small This provides a reduced suite of uClibc-based packages in a BusyBox-based run-time system.
NOTE: Note that for the uclibc_small and glibc_small file systems, additional capabilities such as debug and demo tools must be added at configure time if you configure from the command line; debug and demo features are added by default when you configure using Workbench. For examples of how to add debug and demo features from the command line, see 4.2 Configuring Your Platform Project, p.32.

The templates for the four supported Wind River run-time file systems include a package list file and an include file; the templates listed in the include file (which always include a structure template, discussed next) are processed before the package list. (For details on include file processing, see 5.3.2 Processing Template include Files, p.56.)
rootfs/file_system_fs Templates

The file system structure templates are generally processed first, because they are found at the top of the include files within the four supported file system templates. The three structure templates are:

glibc_fs This template provides a directory structure for all glibc_std and glibc_cgl-based file systems.

glibc_small_fs This template provides a directory structure for all glibc_small-based file systems.

uclibc_fs This template provides a directory structure for all uclibc_small-based file systems.

board Templates (BSPs)

This directory contains a template for each supported board, defining kernel and file system configurations specific to that board. A board template generally includes a kernel configuration file, a config.sh file and an include file. It may also have rootfs and kernel subdirectories for more detailed, board-specific configurations. The board template contains the board README file.
NOTE: Some board templates also include a bootloader subdirectory containing bootloader binaries and flashing instructions.



test Templates

This directory contains several templates for different test suites. These tests are designed to validate a specific kernel and run-time system on a specific board. Some tests are board-specific, some are feature-specific, some are kernel and file system specific. There is also a common_tests template, for validation tests common to most boards, kernels and file systems.

feature Templates

The feature directory contains templates for special run-time features, some of which, like BusyBox, are automatically added to specific kernel and file system configurations, and others which you must add manually during the configuration phase, using either the --with-template-dir and --with-template options, or a custom layer.
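For example, a feature template can be named on the configure command line with a command along these lines. The feature template name here is only a placeholder; see the feature directory, and 4.2 Configuring Your Platform Project, p.32, for the templates actually provided:

$ $HOME/WindRiver/wrlinux-3.0/wrlinux/configure \
--enable-board=common_pc \
--enable-kernel=standard \
--enable-rootfs=glibc_small \
--with-template=feature/some_feature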

extra Templates

The extra templates come from products other than the core Wind River Linux distribution, providing, for example, SNMP support.

3.5.3 Toolchain Layer Template Directory Structure and Contents


The following summarizes the templates in the toolchain layer.

arch Templates

There is an arch templates directory for each supported architecture: arm, ia32, mips, ppc, and sparc. With one exception, their contents are not standard. Although they all contain a build environment variable file (config.sh), some also have package list remove files, some have kernel configuration files, and so forth.

cpu Templates

There is a cpu templates directory for each supported architecture: arm, ia32, mips, ppc, and sparc. This directory contains a template for each supported CPU, defining CPU-specific configurations. Unlike the arch templates, the contents of the cpu templates tend to be standard. Each has a config.sh file and an include file; in addition, the CPUs that support uClibc have a uClibc build configuration file (uclibc.cfg).

multilib Templates

This directory provides information and environment variable settings (config.sh files) and includes architecture templates (with include files) to provide multiple library support. Information in the configuration files includes, for example, the valid combinations of the available versus the compatible CPU variants, and information on the available soft versus hard floating point libraries.



3.5.4 Kernel Layer Template Directory Structure and Contents


The basic structure of the kernel layer is shown in Figure 3-4, Structure of the Kernel Templates Directory, p.29.
Figure 3-4 Structure of the Kernel Templates Directory

wrll-linux-version/
    templates/
        board/  default/  karch/  kernel/

The sections below describe the contents of the wrll-linux-version kernel layer template directories.

default Templates

The templates/default directory contains a toolslist.add file. Default templates are always included with the layer, so these host tools are added to your configuration whenever the kernel layer is included.

feature Templates

The templates/feature directory provides configurations for various kernel features including boot-time tracing and kernel debugfs.

karch Templates

These templates provide makefile variables for the different kernel architectures.

kernel Templates

These templates configure the supported kernel types (cgl, ecgl, preempt_rt, rtcore, small, and standard). For more information on templates, see 5. Layer and Template Processing.




4 Configuring and Building

4.1 Introduction 31
4.2 Configuring Your Platform Project 32
4.3 Building Your Platform Project 43

4.1 Introduction
The Wind River Linux Distribution Assembly Tool, or LDAT, is the Wind River Linux cross-build system for producing optimized embedded device software. LDAT has been designed specifically to benefit distributed development environments (many to one), and environments in which there are multiple projects leveraging common code.

4.1.1 Design Benefits


The design of the Wind River Linux build system offers several important benefits:

- If a pre-built kernel and file system is satisfactory for deployment, or for current testing and development, you can build a complete run-time file system in minutes using pre-built kernel and file system binaries (the RPM build method).
- You can build specific parts from source files, saving time by building only the file system, only the kernel, or a specific package, whichever element is of current interest.
- Your builds cannot contaminate the original pre-built kernels, RPMs, configuration files, and source packages, because the development environment is kept separate from the build environment.
- By using custom layers and templates (see 6. Custom Layers and Templates), you can add packages, modify file systems, and reconfigure kernels for repeatable, consistent builds, yet still keep your changes confined for easy removal, replacement, or duplication.


These last two features allow multiple builds, customized builds, and a strict version control system, while keeping the Wind River development environment pristine and intact.

You create the build environment as a regular user with the configure command. It is in this environment that you build (make) the Wind River Linux run-time system, either default or customized, using software copied or linked from the development environment.

This chapter describes how you use the configure script to configure your project for the run-time software targeted for your board or simulation. It then describes the various make commands you use to build all or selected parts of the run-time. Although this chapter is oriented toward the command line, it will also give Workbench users a better understanding of the process of creating a Wind River Linux platform project.

Post-installation

Before you create your first run-time system, there is an immediate post-installation step that you must perform only once: installing host updates (if required).
Installing Host Updates

After product installation, you must install any necessary host updates. Read the required-*.txt text files for your host, within installDir/wrlinux-3.0/ldat/. Use the rpm command's -q option to check the versions installed on your host; the version numbers within the required text files are the minimum acceptable.
NOTE: The configure command checks for required host updates, and notes them in config.log, a text file within the build directory.
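For example, to verify that an installed host package meets the minimum version listed in the required-*.txt file for your host, you might query the RPM database as follows (the package name is illustrative; substitute the packages listed for your host):
$ rpm -q rpm-build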

4.2 Configuring Your Platform Project


Creation of a platform project is basically a two-step process: configuring your build with the configure command, and then performing the build with the make command. This section describes how to configure your build, and 4.3 Building Your Platform Project, p.43 describes the build itself.

4.2.1 Creating the Project Build Directory


The build environment should be kept separate from the development environment. Wind River recommends you create a separate work directory with a project build subdirectory holding the build environment. Figure 4-1 shows one example of this directory structure.


Figure 4-1 common_pc as the Project Build Directory, Holding the Build Environment

/home/user/
    workdir/
        common_pc/
    WindRiver/
        wrlinux-3.0/

In this example, the new work directory is named workdir. Within workdir is the common_pc project build directory which will hold the build environment, in this case for a common PC board. Directory names have been chosen for clarity; you can name them as you like. In this document, the variable prjbuildDir refers to your project build directory, which is common_pc in the example in the figure.
NOTE: When using Workbench to create a platform project, by default Workbench creates the installDir/workspace work directory, and the project build directory, with a _prj suffix, beneath it. You may override this default behavior, and select whatever work directory and project build directory structure you choose. For the example shown in Figure 4-1, the Workbench project directory would be in /home/user/WindRiver/workspace/common_pc_prj.
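For example, assuming the directory layout shown in Figure 4-1 (the directory names are illustrative; choose your own), you could create the work and project build directories as follows:
$ mkdir -p /home/user/workdir/common_pc
$ cd /home/user/workdir/common_pc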

4.2.2 Configuring the Build Environment


This is the first step in both the RPM and the source method of building the run-time system, and is performed by running the configure script.
NOTE: Do not run configure, subsequent builds (make target), or Workbench as root because this may interfere with the operations of the build system.

The configure script, found within installDir/wrlinux-3.0/wrlinux/, must be run from within the project build directory, in this example, from within workdir/common_pc/. The configure script is the most important of several key configuration files: it initiates the entire configuration process. It creates a subdirectory structure within the project build directory and populates it with the script framework, configuration files, and tools necessary to build the run-time system. It processes board templates and initial package files, and copies basic run-time file system configuration files (for the etc and root directories) from the development environment.

The script is always run with options. The options you supply depend on which kernel and file system you wish to build for your board, which features you want to include, and whether you wish to build a complete run-time system, only a kernel, or only a file system.


The configure script produces a plain text log file, config.log, within the project build directory, in this case, workdir/common_pc. This is a very useful file, recording configure options, automatic checking of host RPM updates, and so on. Workbench saves a similar log file, creation.log, which contains the screen output of the configure command.

4.2.3 Build Environment Directory Structure


An illustration of some of the subdirectories the configure script creates within a project build directory is in Figure 4-2.
Figure 4-2 Partial Contents of the Project Build Directory

prjbuildDir/
    build/
    build-lib/
    build-tools/
    export/
        RPMS/
        dist/
    filesystem/
        fs/
    host-cross/
    scripts/

Selected directories and their contents are described in further detail below:

build

Contains target packages for the default CPU of the build. During an RPM build (make fs), source code for the kernel and its patches is copied to the linux-version subdirectory within build/. During a source build (make build-all), the source code for all packages is copied to, and built within, each package's named subdirectory within build.

build-lib

This directory appears in cases where the project supports multilibs. For example, for the common_pc_64 board type, the build-x86_32 directory appears here. This is how the build of each multilib for the respective packages is kept separate.

build-tools

A special build directory where any required host tools are built.

export

The build stores its end products in export/. During an RPM build (make fs), the pre-built kernel image is copied to export/; the run-time file system, built from RPMs, is copied to the dist subdirectory; and a compressed run-time file system (a tar.bz2 file) is placed in export/. During a source build (make build-all), this directory also contains the new kernel, the vmlinux file, the System.map file, and a compressed modules file, and the newly created RPMs are placed within the RPMS subdirectory, in addition to the contents placed in export/.


filesystem/fs

Contains run-time system files such as configuration files for etc and boot.

host-cross

Contains tools that run on the host and assist in cross-compiling and using the build environment. This is basically the infrastructure for the project build and includes the toolchain wrappers, toolchain, host tool binaries, and libraries.

scripts

Contains macros and scripts for the build system.

Local Custom Layer Directories

Three directories not shown in Figure 4-2 are dist, packages, and tools. These directories allow you to add packages and tools to experimental builds without first creating a custom template or layer. Your project build directory essentially becomes the highest-level layer containing these directories. To use them, populate them with the following:

- dist: Makefiles and patches for target packages.
- tools: Makefiles and patches for host-tool packages.
- packages: the SRPM and classic packages themselves.

The contents of these directories will augment (if unique) or replace (if identically named) the contents of the dist, packages, and tools directories in lower-level layers, including the layers installed with the product. Once validated, the contents should be moved to a custom layer. Each of these directories contains a README file with additional information. Instructions for using these directories, and for adding packages in general, can be found in 10. Adding Packages.
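As a sketch of how you might use these directories (the package name mypackage and its files are hypothetical), the project build directory could look like this before a build:

prjbuildDir/
    dist/mypackage/Makefile
    dist/mypackage/patches/mypackage-cross-build.patch
    packages/mypackage-1.0.tar.gz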

4.2.4 Configure Options


Many common options are explained in 4.2.7 Some Common Configure Options, p.40. Necessary options are determined by what you wish to build: a complete run-time system, just a kernel, or just a file system. The rules for these three types of build are shown below.
NOTE: With the exception of the uclibc_small and glibc_small file systems, the configure command creates file systems that by default contain debugging functionality. See 4.2.6 Configure Option Examples, p.37 for details on adding debugging capabilities to small file systems.


Configuring with Profiles

Profiles provide a convenient way to specify multiple options for pre-defined configurations. These configurations typically serve as starting points for more custom development. Pre-defined profiles include the following:

consumer_premise_equipment
industrial_equipment
mobile_multimedia_device
pne
epne
lpne

This makes configuration a simple matter, for example:


$ configure \
    --enable-board=common_pc \
    --enable-profile=industrial_equipment

For more on the profile configurations, see profile Templates, p.25.

Complete Run-time System

To configure a complete run-time system, necessary options are:


--enable-board
--enable-kernel
--enable-rootfs

Kernel-only

To configure only a kernel, necessary options are:


--enable-board
--enable-kernel

File System-only

To configure only a file system, necessary options are:


--enable-rootfs
--enable-cpu

4.2.5 Configure Option Rules


The following configure option rules follow from the Wind River Linux build system design philosophy, in which file systems are closely identified with CPUs, and kernels are closely identified with boards.

- You can specify --enable-kernel, --enable-rootfs, or both.
- If you specify --enable-kernel, you must also specify a board.
- It is an error to specify both --enable-cpu and --enable-board; a board implies a CPU.


Note that when you use a profile, which contains a kernel and root file system specification, you only need to specify the board and the profile on the configure command line. The use of profiles is discussed in more detail in profile Templates, p.25 and 6. Custom Layers and Templates.
NOTE: Do not repeat arguments to configure, because only the last one will be used. For example, if you specify:
... --with-toolchain-version=x --with-toolchain-version=y
configure just sets the version to y. If you want to specify multiple non-exclusive features, you can use comma-separated lists, for example:
... --with-template=template1,template2
or add features to the root file system with the + shorthand, such as:
... --enable-rootfs=glibc_small+feature1+feature2+feature3
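For example, a single configure invocation can combine the comma-separated template list and the + shorthand; the particular board, kernel, and templates below are purely illustrative:
$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard \
    --enable-rootfs=glibc_small+debug \
    --with-template=feature/analysis,feature/demo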

4.2.6 Configure Option Examples


The following example demonstrates configuring the Intel Common PC board with the standard kernel and standard Glibc file system. At a minimum, you should specify a board, kernel, and root file system. The configure options used below will build the system with a full suite of board validation tests.
NOTE: For examples of configuring Wind River Linux platform projects with Workbench, see Wind River Linux Getting Started, and Wind River Workbench by Example, Linux Version.

Within the project build directory, type the following on the command line:
$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std \
    --enable-test=yes \
    --with-test=bsp

The configure command is a script located in installDir/wrlinux-3.0/wrlinux/. If this directory is not in your PATH, include the absolute or relative path to the configure command in the examples given in this guide.
NOTE: The configure command fails with an error if you have "." in your PATH environment variable. In addition to being a security issue, having a "." in your PATH can cause problems with the build. Remove "." from your PATH (for example, by editing and reinitializing your .bashrc, .cshrc, or other startup file) before issuing the configure command.

Press ENTER to configure the project build directory. This takes two or three minutes. When configuration is finished, type:
$ make fs

This creates a complete run-time file system from pre-built RPMs, in approximately 20 to 30 minutes, depending on your configuration and environment. The system copies the pre-built kernel for this project from installDir/wrlinux-3.0/layers/wrll-linux-version/boards/board-name/kernel-type/ to the export/ directory within the project build directory.


Three basic sets of configure options for building run-time systems for the common PC are shown below. An additional set is shown for building a run-time system for the Platform CD ARM Versatile AB-926EJS.
NOTE: For details on developing platform projects for the optional Real-Time Core product, refer to the Wind River Real-Time Core Programmer's Guide.

Common PC Complete Run-time

The following command configures a complete run-time system (kernel and file system) for the common_pc:
$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std

Alternatively, you could use a profile instead of specifying the kernel and root file system on the configure command line as described in Configuring with Profiles, p.36.

Common PC Kernel Only

The following command configures only a kernel for the common_pc BSP.
$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard

You can then use the source build method (make build-all) to build the kernel.

Common PC File System Only

The following command configures a file system only for the common_pc:
$ configure \
    --enable-cpu=x86_32_i686 \
    --enable-rootfs=glibc_std

NOTE: Correct CPU codes for each board can be found in the wrll-wrlinux/templates/board/boardname/include file.

Enter make build-all to build the file system packages in export/RPMS/.


ARM Versatile AB-926EJS Complete Flash-Capable Run-time

You can configure a complete run-time system (kernel and file system) for the ARM Versatile AB-926EJS, with subsequent creation of a flash file system enabled, using either the RPM (make fs) or source build (make build-all) method.
$ configure \
    --enable-board=arm_versatile_926ejs \
    --enable-kernel=small \
    --enable-rootfs=glibc_small \
    --enable-bootimage=flash

NOTE: In this example no debug or demo templates have been added to the small file system configuration, which makes for a smaller run time, but it is one that does not have debug capabilities, such as usermode-agent, built in. In the next example, debug capabilities are added.

ARM Versatile AB-926EJS Complete Debug-Capable Run-time

You can configure a complete run-time system (kernel and file system) for the ARM Versatile AB-926EJS, with subsequent debugging enabled, using either the RPM (make fs) or source build (make build-all) method.
$ configure \
    --enable-board=arm_versatile_926ejs \
    --enable-kernel=small \
    --enable-rootfs=glibc_small \
    --with-template=feature/debug

The final option in the example, --with-template=feature/debug, adds application debugging features to the file system. Note that a shorthand way of adding file system profile templates is available, and you could specify the file system with --enable-rootfs=glibc_small+debug. Therefore, an equivalent configuration command for this example is:
$ configure \
    --enable-board=arm_versatile_926ejs \
    --enable-kernel=small \
    --enable-rootfs=glibc_small+debug

Similarly, to add demo capability (graphics capabilities) to a uclibc_small file system, you could either include the --with-template=feature/demo option on the configure command, or just specify the file system as --enable-rootfs=uclibc_small+demo:
$ configure \
    --enable-board=arm_versatile_926ejs \
    --enable-kernel=small \
    --enable-rootfs=uclibc_small+demo

NOTE: For more information on the features provided by the debug and demo templates, see the installDir/wrlinux-3.0/wrll-wrlinux/templates/feature/demo and debug directories.


Adding Analysis Tools Support to Projects

Analysis tools are primarily used with Workbench as documented in online Analysis Tools and Workbench documentation. Like other Wind River Linux configuration commands, you can perform the following through Workbench or the command line. Note that if you create projects through the command line, you then have to import them into Workbench for them to become visible. Backtracing, which is used by the analysis tools, is performed differently by MIPS boards than by non-MIPS boards, so the following presents two examples of configuring builds for analysis tools.
Non-MIPS Targets

You can use the following configure command to add analysis tools support to, for example, a SUN CP3020 target:
$ configure --enable-board=sun_cp3020 \
    --enable-rootfs=glibc_std \
    --enable-kernel=standard \
    --with-template=feature/analysis \
    --enable-build=profiling

NOTE: The --enable-build=profiling option enables frame pointers for the backtrace code. (The --enable-build=debug option also enables frame pointers, which enables backtrace.) To build this project, you must perform a build-all to rebuild all the packages with the new flag.

MIPS Targets

You can use the following configure command to add analysis tools support to, for example, the following Cavium Octeon target:
$ .../configure --enable-board=cavium_octeon_cn38xx_evb_nic4 \
    --enable-rootfs=glibc_std \
    --enable-kernel=standard \
    --with-template=feature/analysis

NOTE: The MIPS boards do not need additional configure options because they use a different method for backtracing. You can use make fs to build this configuration.

4.2.7 Some Common Configure Options


The configure command can be run with a large number of options. You can display a complete list with the following command:
$ installDir/wrlinux-3.0/wrlinux/configure --help

This section describes some of the more commonly used configure options.

Basic Configure Options

A full platform configuration requires that you specify a board, kernel, and root file system.

--enable-board=boardname
Specifies the target board. The list of board support packages that are currently installed is given in the --help output. A full list of supported boards can be found at Wind River Online Support. A board specification implicitly includes cpu and arch because the board template includes defaults through include files. (For details on include files, see 5.3.2 Processing Template include Files, p.56.) This option is equivalent to specifying --with-template=board/boardname.

--enable-kernel=kernel
Specifies the kernel. This option is equivalent to specifying the --with-template=kernel/kernel option.

--enable-rootfs=rootfs
Specifies the file system. This option is equivalent to specifying the --with-template=rootfs/rootfs option.

Some Additional Configure Options

--enable-ldat-checksum=[yes|no]
Rebuilds packages from source when the checksum of package meta data changes, instead of using the prebuilt RPMs. Meta data includes build system makefiles, the tar packages, patches, version, toolchain information, and so on. The default is yes.

--enable-cpu=cpu
Specifies the CPU. Typically, you do not specify this option because there is a default CPU for the board you choose with --enable-board.

--enable-jobs=number
Specifies the maximum number of parallel jobs that make should perform. This should be set to the number of CPUs your system has available.

--enable-bootimage=iso
Enables the subsequent build of an ISO boot image. Note that after the build completes, you must run a further command, make boot-image, to actually build the .iso image (found within export).

--enable-bootimage=flash
Enables the subsequent build of a flash file system. Note that after the build completes, you must run a further command, make boot-image, to actually build the image file (found within export).

--enable-test=yes
Includes that file system's and kernel's standard suite of test packages.

--with-test=testname
Includes a specific test.

--with-template=template1,template2,template3...
Appends the specified templates to the usual template list created by the configure options. When used with the --with-template-dir option, it can be used to include a custom template.

--with-template-dir=templatedirectory
Specifies a user-selected directory for a custom template, to be processed after the usual templates in wrll-wrlinux/templates.

--with-layer=layer1,layer2,layer3...
Specifies custom layers. The system will process any template of the same name found within a layer instead of the regular template within the development environment. (The regular template may, however, be included by the template in the custom layer.)

--enable-quilt=yes
Applies the quilt model instead of patch when applying patches. This is the default.

--with-package-dir=packagedirectory
Specifies the location and name of the directory containing package source files. Without this option, configure defaults to wrlinux-3.0/layers/wrll-wrlinux/packages/.

--with-toolchain-dir=toolchaindirectory
Specifies the location and name of the directory containing the toolchain. Without this option, configure defaults to installDir/wrlinux-3.0/layers/wrll-toolchain-version/.

--enable-build=debug or --enable-build=production
When doing a source build (make build-all), debug compiles and installs binaries and libraries with debugging information (-g). This also lowers default optimizations. Use production (the default) to optimize and strip installed libraries and binaries.

Rebuilding the Toolchain or Libc From Source

Arguments to the configure command allow you to rebuild the toolchain or libc from source.

WARNING: While building the toolchain or libc from source is supported, the resulting binaries are not supported: all defects must be reproduced using the prebuilt binary toolchain and libc (glibc or uclibc).
Building libc from Source

To build libc from source, rather than using the pre-built version from the toolchain, add the feature/build_libc template with --with-template=feature/build_libc, or use the shorthand method of adding templates when specifying your root file system, for example: --enable-rootfs=glibc_std+build_libc
NOTE: The --enable-build-libc argument is deprecated. Use the build_libc template to build libc.
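For example, a complete configuration that rebuilds glibc from source using the shorthand might look like the following (the board and kernel choices are illustrative):
$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std+build_libc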

To change the options for the C library, specify them in the package list (prjbuildDir/pkglist). For example, if you wanted to build glibc with frame pointers, you would modify the glibc entry in pkglist to read:
glibc EXTRA_CFLAGS=-fno-omit-frame-pointer


Building the Toolchain

To build the toolchain from source, use --with-template=feature/build_toolchain. When using this option, once the project is configured, perform a make toolchain. The system will give you further instructions.
NOTE: The --enable-build-toolchain argument is deprecated. Use the build_toolchain template to build the toolchain.

Configuring Rebuilding of Host Tools

To configure to build the host tools, use --enable-prebuilt-tools=no. Note that this does not rebuild the toolchain components. Use make host-tools to build the host tools.
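For example, a configuration that rebuilds the host tools might look like the following (the board, kernel, and file system choices are illustrative), followed by the host tools build itself:
$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std \
    --enable-prebuilt-tools=no
$ make host-tools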

4.3 Building Your Platform Project


This section describes the RPM and source build methods, and provides examples of each.
NOTE: Time estimates for the various commands are supplied in this section. Your times will differ depending on the particular configuration you are building and development host resources.

4.3.1 Two Build Methods


The first step in creating your run-time software is to create the build environment by running the configure command with its necessary options. You then build your run-time software from source or pre-built RPMs using one of two build methods. Wind River Linux's Platform Developer Package builds board-, kernel-, and file system-specific run-time systems from pre-built Linux kernels and pre-built RPMs. The build system can also build board-, kernel-, and file system-specific run-time systems from source. There are thus two methods for creating a run-time system:

The RPM build method (make fs). This method uses pre-built kernels, and builds run-time file systems from pre-built RPMs where available, otherwise from source packages.

The source build method (make build-all). This method always builds both kernels and file systems from source packages.
NOTE: You do not have to rebuild everything if you modify individual package sources or meta data. See Rebuilding Packages with Changed Checksums, p.45 for more information.


4.3.2 Using the RPM Build Method (make fs)


This is the fastest way to create a run-time system. It uses a pre-built kernel and builds both a compressed and an uncompressed version of the file system from pre-built RPMs. To include a prebuilt kernel and build the file system, do the following:
Step 1: Create the build environment.

Run the configure command within the project build directory to create a board, kernel, and file system-specific build environment, separate from the development environment created upon product installation. As part of this process, makefiles, configuration files, and a directory structure are configured for the new build environment. For example, you could configure a complete platform project as follows:
$ configure --enable-board=common_pc --enable-kernel=standard \
    --enable-rootfs=glibc_std

Step 2: Build the run-time file system.

Run make fs within the project build directory:


$ make fs

This will typically take 10 to 20 minutes, depending on the complexity of the file system.
NOTE: The commands make, make fs, and make all do the same thing: build the file system from RPMs where possible, from source otherwise, and create a link to the default kernel. These and associated files are placed in prjbuildDir/export.

Looking More Closely at the RPM Build Process

The output you see in the RPM build follows this sequence:

1. The sysroot contents are updated.

2. The kernel source from the development environment is unpacked into prjbuildDir/build/linux-version-type/, and all of the platform, file system, board, and feature-specific patches are applied.

3. It creates a list of the package RPMs necessary to build the run-time file system. It then creates the run-time file system by extracting the package RPMs into prjbuildDir/export/dist/. File system information from prjbuildDir/filesystem/fs is extracted and copied to prjbuildDir/export/dist/ last, to be able to overwrite files from the RPMs.

4. It compresses that run-time file system and installs the file into prjbuildDir/export, as a tar.bz2 file, so that it can be easily copied to an NFS-exported, or other, directory. For example, the compressed run-time file system for the common_pc BSP is prjbuildDir/export/common_pc-rootfs-kernel-dist.tar.bz2.

For convenience, the prebuilt kernel for the particular platform and board is automatically copied from its location in the development environment (installDir/wrlinux-3.0/layers/wrll-linux-version/boards/board/kernel) to prjbuildDir/export/.
Rebuilding Packages with Changed Checksums

If you modify the source or meta data of a package that is part of your current configuration, this will be detected when you rebuild your file system as long as you have the configuration option --enable-ldat-checksum set to yes (the default). You will be prompted to perform a distclean and rebuild of the package. You can set LDAT_FORCE_CLEAN=distclean on the command line to distclean it without intervention, for example:
$ make package_name LDAT_FORCE_CLEAN=distclean

If you have LDAT_FORCE_CLEAN=distclean set in your environment, you will not be prompted to distclean and will not have to put it on the command line; the build system will automatically rebuild packages with changed checksums.

The meta data of each package contains the following:

Configuration Data Variables

package_MD5SUM
package_DEPENDS
package_CONFIG_VAR
package_CONFIG_OPT
package_MAKE_VAR
package_MAKE_OPT
package_EXTRACONFIGS
package_TEMPLATE_DIRS
package_DIST
TARGET_FUNDAMENTAL_CFLAGS

Configuration Data Files

dist/package/patches/*
dist/package/Makefile*
dist/package/specfile

Other Configuration Data

Toolchain: toolchain version/wrapper
Dependent packages: sums of all packages listed in package_DEPENDS

The package meta data thus covers the tar packages, patches, version, toolchain information, and so on. Note that packages that are dependencies are part of the checksum, so that, for example, a change to glibc will affect the checksum of all dependent packages.
Dynamically Removing a Package

If you want to dynamically remove a particular package from the build, remove it from the pkglist file before the build.
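For example, one way to remove a package before building is to delete its line from prjbuildDir/pkglist; the package name below is hypothetical, and this assumes the package appears on a line of its own:
$ sed -i '/^mypackage/d' pkglist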


4.3.3 Using the Source Build Method (make build-all)


This builds both the run-time file system and the kernel from source files. The initial steps are identical to the RPM build method, above. However, after running configure, run make build-all instead of make fs. To build the kernel and file system from source, use the following procedure.
Step 1: Create the build environment.

This step is identical to the RPM method with the configure command as shown in 4.3.2 Using the RPM Build Method (make fs), p.44.
Step 2: Build the kernel and run-time file system.

Run the command make build-all to build a new board-specific kernel from source, and generate a new set of RPMs from source files to build a compressed run-time file system image:
$ make build-all

This may easily take an hour or more for the first build, although subsequent builds should be faster. The make build-all command generates a new set of RPMs from open source archive files and source RPMs: it compiles the source files to generate new binary executables, bundles the executables into RPMs, and finally builds the run-time file system from the new RPMs.

Looking More Closely at the Source Build Process

The process of building from source is described in more detail and sequentially below.
Step 1: Unpack, patch and compile packages.

This two-phase step proceeds package-by-package, that is, each phase is completed for one package, and the system then moves on to the next package.
Phase One

The source files for a specific board are unpacked from the original development environment (wrlinux-3.0/layers/wrll-wrlinux/packages/), directly to their own named subdirectory within the build directory. They are then patched if necessary (patches integrate packages into the build environment, add functionality, enable cross-compilation, or repair defects in the original source).
Phase Two

Each unpacked and patched package is configured and compiled. The binaries and any configuration files are then installed into each package's prjbuildDir/build/INSTALL_STAGE/package/ directory.
Step 2: Generate RPMs.

One or more binary RPMs are generated from each package, and installed in prjbuildDir/export/RPMS/processor or prjbuildDir/export/RPMS/noarch. The noarch subdirectory holds binaries designed to run on all architectures, instead of a given processor type.
Step 3: Build the run-time system.

The next step builds the kernel and the compressed run-time system file from the newly created RPMs, and installs them in export/. It creates the Linux kernel, compresses the file system and modules into a tar file, and compresses the modules alone into a separate bzipped tar file. It also creates a separate vmlinux file for debugging and a System.map file. Examples of these files, for the common_pc board, are:

common_pc-System.map-WR3.0zz_standard@
common_pc-default_kernel_image-WR3.0zz_standard@
common_pc-linux-modules-WR3.0zz_standard.tar.bz2
common_pc-bzImage-WR3.0zz_standard@
common_pc-vmlinux-stripped-WR3.0zz_standard*
common_pc-vmlinux-symbols-WR3.0zz_standard@
common_pc-glibc_std-standard-dist.tar.bz2

The run-time system's kernel and file system are ready to be exported to a target once the file system is unpacked and the kernel and file system are copied to their respective download and export directories.
NOTE: QEMU simulates deployment of supported boards directly from the project build directory, using the file system in export/dist. See chapter 14. Simulated Deployment with QEMU.
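For example, to unpack the compressed run-time file system into an NFS-exported directory, you might use commands like the following (the export path is illustrative, and root privileges may be needed to preserve ownership and device nodes):
$ sudo mkdir -p /exports/common_pc
$ sudo tar -C /exports/common_pc -xjf prjbuildDir/export/common_pc-glibc_std-standard-dist.tar.bz2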

4.3.4 Building Parts of the Run-Time from Source


You may want to use the source method to build a complete run-time system, or the kernel alone, or the file system alone, or single packages. This can be a useful and time-saving feature if, for example, you have an acceptable kernel but want to modify the file system or, alternatively, only want to modify the kernel.

Building Only the Kernel from Source

To build only the kernel from source, perform the following procedure.
Step 1: Create the build environment.

This step is identical to the RPM method.


Step 2: Build the custom kernel.

Run make -C build linux to build a new board-specific kernel from source, typically in approximately 20 to 30 minutes. The first build can easily take longer, an hour or more; subsequent builds should be considerably faster.


Building Only the File System from Source

To build only the file system from source, specify the file system and the CPU, for example:
$ configure --enable-rootfs=glibc_small --enable-cpu=x86_32

You can find the default CPU for a board by viewing the include file in the board template. For example:
$ cat \
    installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/board/common_pc/include
cpu/x86_32_i686
karch/i386

Building Individual Packages from Source

To build a single package from source, do the following:


Step 1: Create the build environment.

This step is identical to the RPM method.


Step 2: Build the custom package.

Run make -C build package_name. For example, to build the tar package, enter make -C build tar.


5
Layer and Template Processing
5.1 Introduction 49
5.2 Understanding Layers 50
5.3 Understanding Templates 52
5.4 Processing Template Components 60
5.5 Constructing the Target File System 62

5.1 Introduction
Wind River Linux includes layers, and some optional products from Wind River are implemented as layers. In addition, you can create your own custom layers, and include layers created by others. The layers that are provided by the Wind River Linux development environment were described in 3.4 Layers in the Development Environment, p.21.

This chapter will help you understand how layers and templates are used in the Wind River Linux build system. Refer to 6. Custom Layers and Templates for details on how you can customize the default build environment with your own layers and templates. You have already used layers and templates to build kernels and file systems if you have performed any of the examples in the Getting Started, or in 4. Configuring and Building.

When you create a project with the configure utility, you do so using the available templates and layers. The configuration process creates a list of available layers, and then searches them to obtain any required templates. If a required template is not found, it is an error. Layers provide templates and packages, while templates provide configuration. For example, a new package becomes available to the build system when you add it to a layer, but it only becomes part of a given project when you configure in the template that selects it. The template does not contain the package, it merely marks the package for inclusion.

Throughout this discussion, the terms higher and lower are used to describe the priority layers or templates have. A higher-level template (or layer) takes precedence over a lower-level one, and is thus more specific, rather than less specific.


When configure searches for components, it selects higher-level components first. When configure applies multiple components, it applies lower-level components first; this design allows higher-level components to override lower-level components. For example, a given BSP's kernel configuration fragment is at a higher level than the generic standard kernel configuration. The BSP-specific kernel configuration settings can then override more generic kernel configuration settings.

5.2 Understanding Layers


This section describes how the build system configures layers in a hierarchical relationship. When this is combined with the configuration of templates discussed in the next section (5.3 Understanding Templates, p.52), it results in a powerful and flexible mechanism for you to use in producing run-time software for your targets.

5.2.1 Creating the Layer Search List


The ordered list of layers that configure creates is based on configure command line options and various defaults. The list is created from the following sources, listed in priority from highest to lowest:

1. Configure layers: the list of layers you optionally provide to the configure command with the --with-layer argument. You may specify one or more layers by separating them with commas. The first layers you list are the highest priority, for example:
--with-layer=/path/toplayer,/path/middlelayer,/path/bottomlayer

2. Install layers: the layers included automatically based on the installDir/install.properties file. Added products may modify the install.properties file to automatically include layers in the build.

3. Analysis layers: the wrll-analysis-version layer contains analysis tools and associated files. For Workbench there is also an analysis/wrlinux layer to provide certain backward compatibility.

4. Kernel layer: by default, this is layers/wrll-linux-version.

5. Toolchain layer: by default, this is layers/wrll-toolchain-version. For consistency in build operations, all of the toolchains are linked to (shadowed) in each project build area, regardless of the particular architecture of the project.

6. Host tools layer: by default, this is layers/wrll-host-tools.

7. Core layer: by default, this is layers/wrll-wrlinux.

As layers are added to the layer search list, any include files they provide are processed, inserting the included layers on the list below the layer that includes them.


As an example of layer processing, consider the following configure command line:


$ configure --enable-board=common_pc --enable-kernel=standard \
    --enable-rootfs=glibc_std \
    --with-layer=/mylayers/toplayer,/mylayers/middlelayer,/mylayers/bottomlayer

The result is a prjbuildDir/layers file, ordered from top (highest priority) to bottom (lowest priority) that looks like this:
/path/toplayer
/path/middlelayer
/path/bottomlayer
/path/scopetools-version/wrlinux
/path/wrll-linux-version
/path/wrll-toolchain-version
/path/wrll-toolchain-version/wrll-toolchain-version-arm
/path/wrll-toolchain-version/wrll-toolchain-version-ia
/path/wrll-toolchain-version/wrll-toolchain-version-mips
/path/wrll-toolchain-version/wrll-toolchain-version-powerpc
/path/wrll-toolchain-version/wrll-toolchain-version-sparc
/path/wrll-toolchain-version/wrll-toolchain-version-common
/path/wrll-wrlinux
/path/wrll-host-tools

Note that some layers are included by default, for example a default kernel layer, because no alternative was specified. And note the toolchain layers for each architecture: these specific layers are included by an include file in the wrll-toolchain-version layer. Layers are searched for specific templates based on your configuration command, as described next.

Layer Path Environment Variable

You may also have an environment variable called LDAT_LAYER_PATH, which is a comma-separated list of locations to look for layers before the default installDir/wrlinux-3.0/layers directory.
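For example, you might point the build system at additional layer locations before configuring (the directory paths are illustrative):
$ export LDAT_LAYER_PATH=/home/user/mylayers,/opt/shared-layers
$ configure --enable-board=common_pc --enable-kernel=standard \
    --enable-rootfs=glibc_std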

Basic Layer Contents

Layers contain various configuration and source directories as well as directories containing pre-built packages that are recognized by the build system.
Configuration and Source Directories

templates/: configuration templates.
dist/: Makefiles and patches for tools and packages.
tools/: host tool source packages.
packages/: target packages.

Pre-Built Directories

host-tools/: pre-built host tools, mirrored into prjbuildDir/host-cross/.
boards/: pre-built kernels, mirrored into prjbuildDir/export/.
RPMS/: pre-built target packages, mirrored into prjbuildDir/export/RPMS/.
toolchain/: toolchains, mirrored into prjbuildDir/host-cross/toolchain/.

Any layer can have any or all of these components, and these components then augment or override components in lower layers.

5.3 Understanding Templates


Templates are searched for and processed in a specific order based on configure command line options and various defaults as described in this section.

Identifying Explicit and Implicit Templates

When you specify a configure command line option, such as the following:
--enable-rootfs=glibc_std

that is equivalent to providing the following configure command line option:


--with-template=rootfs/glibc_std

Similarly, when you specify a kernel with --enable-kernel, you are specifying a kernel template such as kernel/standard, and when you specify a board with --enable-board you are specifying a board template such as board/fsl_hpcii. Therefore, the following configure options:
--enable-rootfs=glibc_std --enable-kernel=standard --enable-board=fsl_hpcii

are equivalent to:


--with-template=rootfs/glibc_std,kernel/standard,board/fsl_hpcii

In addition, when you specify a typical board, you implicitly specify additional templates because the specified board template includes them. For example, the fsl_hpcii board template contains an include file, which causes it to include cpu, multilib, and arch templates. These templates are included implicitly to save you from having to specify them explicitly on the configure command line. Exactly how templates include other templates with include files is described in 5.3.2 Processing Template include Files, p.56.

5.3.1 Template Search Order


Each layer in the layer search list is searched in order from highest to lowest priority for the implicitly or explicitly specified template. When a template is found, if it contains an include file, the templates in the include file are searched for using the same procedure, and so on for any additional include files in the templates found. (See 5.3.2 Processing Template include Files, p.56 for details on include file processing.) The configure process first creates a prioritized list of template names to search for. It then places each of these template names in a sequence of template paths. It then searches for each template path in the ordered list of layers. The details of this process are described in this section.


The Initial Template Search List

The configure process builds the initial list of templates to search for from explicit and implicit configure command options, arranging them in the following order:

1. rootfs
2. kernel
3. profile
4. board
5. default
6. command line

where later templates have higher priority than earlier templates. This would be equivalent to putting them all on the command line in the order:
--with-template=rootfs/type,kernel/type,profile/name,board/type,default,whatever...

because the last templates specified in this syntax have the highest priority.
NOTE: You do not have to place your arguments on the configure command line in the correct order; they will be ordered correctly in the template list that the configuration process constructs.

What is important is to recognize the priority of templates. So, for example, if profile/name contained an include file listing rootfs and kernel templates, the templates listed in the profile/name include file would override the --enable-rootfs and --enable-kernel templates you gave on the command line, because profile templates have a higher priority than the rootfs and kernel templates in the ordered search list constructed by configure.

The Final Template Search Paths

Templates are searched for in the following order:

1. template as a path relative to your project build directory (or an absolute path, but absolute paths can only come from the --with-template option or include files)
2. board/template in each layer from top to bottom
3. cpu/template in each layer from top to bottom
4. arch/template in each layer from top to bottom
5. template in each layer from top to bottom

Each template is searched for as each member of this template path list until it is found. If it is not found in any form in any layer, it is an error. (But some included templates may be specified as optional as described in Marking an Included Template Optional, p.59.)


Template Processing

A template is processed when it is found, unless the template contains an include file listing one or more other templates. The templates listed in any include files must be processed before the template that contains the include file. Once a template has been found and all include files processed, it is processed, and then the search for the next template (if any) begins. While template processing order and therefore priority may seem at first confusing, it is this design that gives templates their power, allowing you to replace system and other templates, or selectively add and remove components from them. You can always determine the order in which your templates were processed by viewing the templates and template-paths files as described in the next section.

An Example of Template Processing Order

As an example, suppose you entered the following configure command:


$ configure --enable-kernel=standard --enable-board=common_pc \
    --enable-rootfs=glibc_small --with-template=feature/glibc_small_debug

Based on this command line, configure creates an initial ordered search list:

1. rootfs/glibc_small
2. kernel/standard
3. board/common_pc
4. default
5. feature/glibc_small_debug

It is as if you had entered the following command:


$ configure --with-template=rootfs/glibc_small,kernel/standard, \
    board/common_pc,default,feature/glibc_small_debug

An Example of the Final Template Search List

The configure process then inserts each template from the initial list into the ordered template path search list:

1. prjbuildDir/template
2. templates/board/template in each layer
3. templates/cpu/template in each layer
4. templates/arch/template in each layer
5. templates/template in each layer

So, for our command line example, configure first searches for rootfs/glibc_small as:

1. rootfs/glibc_small in your project build directory
2. board/rootfs/glibc_small in the templates/ directory of the highest priority layer
3. cpu/rootfs/glibc_small in the templates/ directory of the highest priority layer
4. arch/rootfs/glibc_small in the templates/ directory of the highest priority layer
5. rootfs/glibc_small in the templates/ directory of the highest priority layer

If a rootfs/glibc_small template is found and it does not contain an include file, the template is processed and the next template is searched for. If it does contain an include file, the include chain is processed as described in 5.3.2 Processing Template include Files, p.56 and then the template is processed. If a glibc_small template is not found in any template path variation after searching in the highest priority layer, configure then searches for it in the next highest priority layer, and so on until a rootfs/glibc_small has been found. If it is not found and all layers have been searched, it is a configuration error.
An Example of Template Search Results

The result of the search for the templates in the layers is recorded in your prjbuildDir, in the layers, templates, and template_paths files. The template_paths file for the example is given in Example 5-1. In the template_paths file shown, templates that contain include files are shown in bold text.
Example 5-1 Example of template_paths and include Files

installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/rootfs/glibc_small_fs
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/feature/busybox
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/rootfs/glibc_small
installDir/wrlinux-3.0/layers/wrll-linux-version/templates/kernel/standard
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/arch/ia32
installDir/wrlinux-3.0/layers/wrll-toolchain-version/i586/templates/multilib/x86_32
installDir/wrlinux-3.0/layers/wrll-toolchain-version/i586/templates/cpu/x86_32_i686
installDir/wrlinux-3.0/layers/wrll-linux-version/templates/karch/i386
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/board/common_pc
installDir/wrlinux-3.0/layers/wrll-host-tools/templates/default
installDir/wrlinux-3.0/layers/wrll-linux-version/templates/default
installDir/workbench-3.1/analysis/wrlinux/templates/default
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/feature/debug
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/feature/small_debug
installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/feature/glibc_small_debug

For example, starting at the top of the template_paths file, we can see in the third line that configure found a rootfs/glibc_small template in the core layer (wrll-wrlinux). That template contains an include file listing a glibc_small_fs structure template and a busybox feature template, so they were found and processed first, before the rootfs/glibc_small template. Figure 5-1 illustrates the prjbuildDir/templates file for this example, showing which templates included others.


Figure 5-1 An Illustrated Example of a templates File

rootfs/glibc_small_fs
feature/busybox
rootfs/glibc_small
kernel/standard
arch/ia32
multilib/x86_32
cpu/x86_32_i686
karch/i386
board/common_pc
default
default
default
feature/debug
feature/small_debug
feature/glibc_small_debug
Note that in addition to the templates explicitly provided by configure command line arguments, there are several included templates, and also all default templates encountered in the layers. The highest priority template is listed at the bottom of the templates and template_paths files. This is feature/glibc_small_debug from the command line in the example.

5.3.2 Processing Template include Files


Templates are processed depth-first: if a template contains an include file, any templates listed in the include file are processed before the template that contains the include file. Included templates are processed before including templates. For example, suppose you specify the template template1 to configure, and template1 has an include file listing templateA and templateB:

template1/include
    templateA
    templateB

and templateA includes template2:


templateA/include
    template2

Then the templates are processed in this order:

1. template2
2. templateA
3. templateB
4. template1


A slightly more complex example illustrates this more clearly. This time, template1 includes four templates in its include file:
template1/include
    templateA
    templateB
    templateC
    templateD

The included templateA in turn includes template2:


templateA/include
    template2

And templateC also has an include file:


templateC/include
    template3
    template4

In this case, the depth-first search of template1 encounters an include file, which causes it to include templateA. But templateA also has an include file, containing an entry for template2, so it includes template2. In template2 there is no include file, so template2 is processed, then templateA is processed, and then the next entry in template1's include file, templateB, is examined. templateB is then processed (unless it contains an include file, in which case the included template(s) are first examined for include files, and so on). The end result for our example is a processing of templates in this order:

1. template2
2. templateA
3. templateB
4. template3
5. template4
6. templateC
7. templateD
8. template1

This gives template1 the highest priority of these templates, and it can override actions performed by any of the templates processed before it. The highest priority template is processed last.

As an example of how priority of processing can affect outcome, consider two files, pkglist.add and pkglist.remove, as they might occur in some of these templates. These files cause packages to be added or removed from a package list that is created as the templates are processed. When processing of all templates is complete, the result is the contents of the prjbuildDir/pkglist file.

When processing the templates in the order shown in the example above, packages in any pkglist.add file in template2 will be added to the package list, then packages in any pkglist.remove file in template2 will be removed from the list. Then any packages in any pkglist.add file in templateA will be added, then packages in any pkglist.remove file in templateA will be removed. This may cause packages added in either template2 or templateA to be removed from the list by the pkglist.remove in templateA, although they could be added back by a later (higher priority) template.


Then templateB is processed and so on. Finally, the packages in any pkglist.add file in template1 are added, and the packages in any pkglist.remove in template1 are removed. Note that you can also completely control the entire package list with a custom template that includes a pkglist.only file, which restarts the package list with the contents of the file. See 5.4 Processing Template Components, p.60 for more details on template component processing.
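As a hypothetical illustration of this interaction (the package names foo and bar are invented), suppose the templates contain these package list fragments:

template2/pkglist.add:
    foo
    bar
templateA/pkglist.remove:
    bar
template1/pkglist.add:
    bar

Processing template2 adds foo and bar, processing templateA removes bar, and the higher-priority template1 adds bar back, so the final prjbuildDir/pkglist contains both foo and bar.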

Including Templates of the Same Name

Templates are protected from multiple inclusion while processing through any included templates (called processing the include chain). When configure first looks for a template it will find the first version with that name. If that template includes a template of the same name, it will then find the next one with the same name after the first one it found, and so on. In other words, it will go through the same search algorithm again, but this time skipping the one it already found and finding the next one of that name.
NOTE: Note that templates are not protected against inclusion at other times, so the same template may be included more than once in an overall configuration.

In practice, including templates of the same name is a common and useful technique. For example, if a board template contains a rootfs/glibc_std directory which in turn has an include file naming the rootfs/glibc_std template, the process is this:

1. When the search for rootfs/glibc_std begins, the board template containing a rootfs/glibc_std directory is found and its include file processed.
2. The include file contains an entry for rootfs/glibc_std, so a search is made once again for rootfs/glibc_std.
3. The same rootfs/glibc_std in the board directory is found again, but since it has already been found in this search, it is skipped.
4. The search continues and the next rootfs/glibc_std template found is processed (unless it contains an include file, the contents of which would be processed first).
5. Finally, the board directory's rootfs/glibc_std template (which was the first one found) is processed.

Note that if the second rootfs/glibc_std template does contain an include file, the templates listed in that file are processed before the other contents of the template, and so on for each template encountered. The result is that the entire include chain is processed depth-first, so that the first template that was found in the chain is processed last, giving it the highest priority. The depth-first application of include files ensures that the more-specific versions of templates are able to override or replace components of more generic ones. Note that you cannot specify which layer's version of a template to include; the build system automatically seeks out the highest-level layer containing a template which has the right name, and which is not already being processed.


Marking an Included Template Optional

A minus sign (-) at the beginning of an include file line means "It is not an error if this included template does not exist." For example, if you have an include file that has this line:

-feature/superfeatures

configure will search for the template feature/superfeatures, as always, using the usual search procedure, and include it if it exists, but will not produce an error if it does not exist. (See the toolchain layer in the development environment for an example of an include file with optional templates.) If an include file lists a template that does not exist and is not preceded by a minus sign, an error results. Figure 5-2 summarizes template include file processing.
Figure 5-2 Processing Template include Files


5.4 Processing Template Components


How a component of a template is processed depends on the type of component it is. The different types of components are:

- File fragments, such as config.sh or *.cfg.
- File system changes, in the fs directories.
- Package list handling *.add, *.remove, and *.only files.

Processing of each of these different kinds of template components is discussed in this section.

Processing File Fragments

The config.sh and *.cfg files are fragments that are concatenated to produce the final config.sh and kernel .config files used by the build system. The config.sh fragments contain build environment variables as described in G. Build Variables, and the *.cfg fragments contain kernel configuration options and are discussed in 9. Configuring the Kernel. Template components processed first appear first in the concatenations, so, for example, if an earlier template (lower priority) sets a kernel config option that is set differently by a later template, the setting in the later template will override the earlier setting, due to the way the final .config file is processed.
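As an illustrative sketch (the kernel option shown here is only an example, not one any particular template actually sets), a lower-priority template might enable an option in its *.cfg fragment while a higher-priority template disables it; because the higher-priority fragment is concatenated later, its setting wins:

# fragment from an earlier (lower priority) template
CONFIG_IKCONFIG=y

# fragment from a later (higher priority) template, appended afterwards
# CONFIG_IKCONFIG is not set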

Processing File System Components

Files found in the fs subdirectories of templates are applied in bottom-first order, with files created by the last template processed overriding files created by previously-processed templates. There is no concatenation of fragments, only replacement, so the fs files in a template override identically-named files of previously processed templates.

Processing Package Lists

These are the *.add, *.remove, and *.only files for packages, host tools, and kernel modules. They determine the list of packages that will comprise the set of packages, host tools, or kernel modules, and are:

pkglist.add
pkglist.remove
pkglist.only
toolslist.add
toolslist.remove
toolslist.only
modlist.add
modlist.remove
modlist.only

As each template is processed, first any files in *.add are added to the package list, then any files in *.remove are removed from it. If there is a *.only file, its contents become the start of a new package list, effectively making any *.add or *.remove in the template or any preceding templates meaningless.

Note that the order of processing of the *.add and *.remove files may produce results noticeably different from simply appending all of the *.add and *.remove files. For example, imagine a pair of templates, called A and B, that contain pkglist.add and pkglist.remove files. A's include file specifies B, so B is included from A. Thus, B's pkglist.add and pkglist.remove files are processed first. Here are the files:

A/pkglist.add:    package_2
A/pkglist.remove: package_3
B/pkglist.add:    package_1
B/pkglist.remove: package_2

If the package list files were simply appended, the results would be that pkglist.add would add packages 1 and 2, then pkglist.remove would remove packages 2 and 3. However, this would result in the included template (B) overriding the including template (A). Instead, each pkglist.add and pkglist.remove pair is processed in turn. Thus, after B is processed, package 1 has been added, and package 2 has been removed. When A is processed, package 2 is added back to the project, and package 3 is removed. This produces the desired result; the including template overrides the included template.

Figure 5-3 illustrates how combining templates in layers can contribute to a final product.


Figure 5-3    Combining Layers and Templates

5.5 Constructing the Target File System


File system construction begins with the processing of the templates, and is completed after the project is built and any necessary compilation of the file system packages or kernel has taken place.

Configure Time File System Construction (filesystem/fs)

In addition to constructing the package lists, config.sh, and .config files during template processing, the configure process performs the following steps for each template:
Step 1: Runs the pre-cleanup script.

During the processing of a template, the configuration process changes directory to the prjbuildDir/filesystem/fs directory and runs the path_to_template/fs/pre-cleanup script if it exists.
Step 2: Populates the fs directory.

During the processing of a template, the configuration process copies the path_to_template/fs directory to prjbuildDir/filesystem/fs.


Step 3: Runs the post-cleanup script.

During the processing of a template, the configuration process changes directory to the prjbuildDir/filesystem/fs directory and runs the path_to_template/fs/post-cleanup script if it exists.
Step 4: Processes the fs-install* scripts.

During the processing of a template, the configuration process appends the contents of path_to_template/fs/fs-install if it exists to the prjbuildDir/filesystem/fs/fs-install script. If a path_to_template/fs/fs-install-only script exists, any existing prjbuildDir/filesystem/fs/fs-install script is overwritten with its contents.
NOTE: Using these scripts is the preferred way of adding to or overwriting pieces of the target file system. They are part of the work of the configuration utility, and therefore are included in the RPM configuration database as discussed in the next section. Note, however, that they can't remove things from the target file system (the contents of export/dist/ and *.dist.tar.gz); for that you would have to use an fs_final script as described in Build Time File System Construction (export/dist), p.63.
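As a rough sketch of what a template's fs/fs-install fragment might contain (the file name, ownership, and mode below are purely illustrative, and the fragment assumes the script runs relative to the root of the file system being assembled):

# example fs-install fragment appended from a template's fs/ directory
chown root:root etc/myapp.conf
chmod 0600 etc/myapp.conf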

Build Time File System Construction (export/dist)

The final steps of file system construction occur during the build process in the following order:
Step 1: Determine that all RPMS are available.

The build process uses rpm to install the file system in prjbuildDir/export/dist. If source needs to be recompiled for the RPM, for example, if the package metadata has changed (as described in Rebuilding Packages with Changed Checksums, p.45), or only source exists at this point, the source is compiled and the RPM produced.
Step 2: Begin populating the export/dist directory.

The package RPMs are installed in the prjbuildDir/export/dist directory. This directory will ultimately contain the file system that is exported when using QEMU, and it is also compressed into the tar file export/*.dist.bz2, which you can download to your target.
Step 3: Run the fs-install script.

The fs-install script is executed (called from prjbuildDir/filesystem/Makefile) in a pseudo-root environment so it can operate as the root user and do such things as assign root permissions to files and directories. (See Viewing the Target File Settings, p.64 for more on the pseudo-root environment.)
Step 4: Install the configuration rpm.

The configuration rpm contains the prjbuildDir/filesystem constructed by the configuration utility (see Configure Time File System Construction (filesystem/fs), p.62). It is installed over the contents of export/dist so that it can overwrite anything required.


Step 5: Run the fs_final scripts.

The fs_final script, if it exists, is run. Note that this script can do whatever you want it to do. It can, for example, remove files from the target file system, unlike fs-install which can only add or replace contents.
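A minimal fs_final sketch might remove documentation to save space on the target; the paths are illustrative and assume the script runs from the root of the assembled file system:

#!/bin/sh
# hypothetical fs_final script: trim documentation from the target file system
rm -rf usr/share/man usr/share/doc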
CAUTION: Actions performed by fs-install are reflected in the RPM configuration database, which is a database of files and the packages that own those files. Actions performed by fs_final are not reflected in the database, so they may cause it to differ from the contents of export/dist. It is assumed you know what you are doing if you use fs_final scripts.

Step 6: Create the new compressed tar file.

The target file system assembled in the prjbuildDir/export/dist directory is also stored as a compressed tar file in prjbuildDir/export/arch-rootfs-kernel.dist.tar.bz2.

Viewing the Target File Settings

Ownership and permission settings are managed in parallel by the pseudo tool, so that files you create are owned by you on the host, but may become root files on the target. You can enter this pseudo environment on your host to examine the target-specific ownership and permission settings. For example, if you were to examine target file ownership without pseudo, it might look like this:
$ pwd
prjbuildDir
$ ls -l export/dist/bin/sh
lrwxrwxrwx 1 user user 7 Feb  2 10:38 export/dist/bin/sh -> busybox*

To examine the file with its settings as they will appear on the target, you can supply the same command to pseudo:
$ host-cross/bin/pseudo ls -l export/dist/bin/sh
pseudo: Warning: PSEUDO_PREFIX unset, defaulting to prjbuildDir/host-cross.
lrwxrwxrwx 1 root root 7 Feb  2 10:38 export/dist/bin/sh -> busybox

Note that ownership now shows as root. You can also enter a pseudo shell to move around the target file system and view multiple settings:
$ host-cross/bin/pseudo sh
pseudo: Warning: PSEUDO_PREFIX unset, defaulting to prjbuildDir/host-cross.
$ cd export/dist/bin
$ ls -l s*
lrwxrwxrwx 1 root root 7 Feb  2 10:38 sh -> busybox
lrwxrwxrwx 1 root root 7 Feb  2 10:38 sleep -> busybox
$ exit

Exit the pseudo shell to return to your normal shell. See 8. Changing Basic Linux Configuration Files for information on making changes to filesystem/fs/ configuration files.


Determining Which Package Contributes a File

You can query the RPM database to find which RPM supplies a particular file, for example:
# rpm -qf /bin/hostname
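The query prints the name of the package that owns the file. For instance, querying the /bin/sh link shown earlier would report the busybox package; the version-release portion of the output shown below is only a placeholder:

# rpm -qf /bin/sh
busybox-<version-release>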

There may be more than a single file with the same name supplied by different packages. In that case, the file associated with the default CPU type will be the only version installed. If more than one RPM in the same CPU configuration supplies a file, the file must be identical in the two versions or it produces a hard error.


6
Custom Layers and Templates
6.1 Introduction 67
6.2 Creating Custom Templates 67
6.3 Using Custom Templates 71
6.4 Creating Custom Layers 73
6.5 Using Custom Layers 78
6.6 Combining Custom Layers and Templates 79

6.1 Introduction
This chapter describes how you can create and use custom templates, create and use custom layers, and how you can combine your custom templates and layers.

6.2 Creating Custom Templates


You can create custom templates to configure your run-time software. You can create the same kinds of templates available in the development environment, for example, board, feature, and profile templates, and you can even create templates to override or include development environment templates as described in this section.

Development environment templates are typically identified by pairs of directory names in the form general/specific, such as rootfs/glibc_std, feature/debug, and board/my_board. To create a template, first create a template directory, for example, my_templates. You can then create template directory/subdirectory pairs modelled after the development environment. For example, if you want to create a custom feature template called my_feature, you would create a my_templates/feature/my_feature/ directory structure.
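For instance, the following commands sketch out such a template and add a hypothetical package name to its package list (the package named here is only an example):

$ mkdir -p /home/user/my_templates/feature/my_feature
$ echo "strace" > /home/user/my_templates/feature/my_feature/pkglist.add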


You can populate your custom templates with the same types of files used in templates in the development environment including:

- *.cfg kernel configuration fragments
- pkglist.*, toolslist.*, and modlist.* package lists
- include files listing other templates
- config.sh files listing environment variables

Refer to 3.5 Templates in the Development Environment, p.23 for more details on the contents of templates. Although you can place custom templates anywhere, you would typically locate them outside of both the development and build environments. Figure 6-1 illustrates one possible example.
Figure 6-1 A Possible Organization of Development, Build, and Template Directories

/home/
    user/
        my_templates/                 (custom templates)
            feature/my_feature/
            profile/my_profile/
            board/my_board/
        workdir/
            prjbuildDir/              (build environment)
        installDir/
            wrlinux-3.0/              (development environment)

In Figure 6-1, each template is shown with only one instance, but you could have multiple feature templates under feature/, for example, and multiple profiles under profile/, and so on.

Naming Your Templates

Note that the development environment template naming convention does not limit how you can name templates. For example, you could create a template named whatever in your home directory, or create custom directory/subdirectory pairs under my_templates such as options/one-option, options/another, and so on. You just need to inform configure about your custom template names on your configure command line as described in 6.3 Using Custom Templates, p.71.


Duplicating Other Template Names

You may want to override existing development environment templates. For example, you may want to specify your own debug feature and not use the one supplied in the development environment, or you may want to customize the supplied version. You can replace the development environment version by creating your own feature/debug template and informing configure about it on the command line. The most common reason for duplicating a development environment template name, however, is to incorporate, override, or enhance functionality that it provides. By creating a custom template with the same name, you can modify the action of the development environment template without making modifications in the development environment itself. For example, in the case of a custom feature/debug template, you could include the feature/debug template normally found by configure, and add some packages to it. Your template could look like this:
/home/user/my_templates/feature/debug/
    pkglist.add
    include

The include file in your custom template would list feature/debug. When configure found your custom template, it would first process the template listed in the include file. This causes it to search again for feature/debug. It would first find your custom template again but skip it because it was already found, then process the next feature/debug template found, in this case a development environment template with no include file. So the development environment template would be processed; it just contains a pkglist.add file, so the contents of that would be appended to the package list being assembled by configure. It would then finally process your custom feature/debug template, adding the contents of your pkglist.add file. (For details on template processing see 5.4 Processing Template Components, p.60). Of course, you could also remove packages from the list added by the development environment template with a pkglist.remove file in your custom template, and in general perform any of the actions templates can perform.
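The contents of the two files in this sketch could be as simple as the following; the extra debug package named here is hypothetical:

The include file:
feature/debug

The pkglist.add file:
ltrace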

The Structure of Templates

The best way to know where you can put things in templates is to look at working templates. The installDir/wrlinux-3.0/layers/wrll-wrlinux layer and other layers at the same level in the development environment provide examples of many types of templates. Additional points to note are that include files and default templates can help you structure your templates and reduce duplication of work.
include Files

You can include other templates by listing them in an include file in your custom template. One common use of this is to create a template with the same name as a template in a lower priority layer as described in Duplicating Other Template Names, p.69. You can list multiple templates in include files, and include development environment as well as custom templates.


The default Template

The contents of a default template are applied if the layer it is in is included; you do not need to specify the default template, and it will be applied if you select the layer.
NOTE: It will also be included if you specify the templates directory it is in on the configure command line with --with-template-dir as discussed in 6.3 Using Custom Templates, p.71.

You could, for example, have a set of feature templates for various but related purposes where each feature adds a set of packages, but many of the packages are added by all features. Rather than maintain the set of common packages across each feature, you could have a pkglist.add file in a default template, and then just maintain the unique packages in the pkglist.add files in each feature. Figure 6-2 illustrates such a scenario and also a system file that is common to all the features. Default templates are processed just before the templates you specify on the command line as described in 5.3 Understanding Templates, p.52.
Figure 6-2 An Example of default Template Usage

templates/
    default/
        pkglist.add
        fs/etc/rcS.d/S99test.sh
    feature/
        feature_1/
            pkglist.add
        feature_2/
            pkglist.add
        feature_3/
            pkglist.add

NOTE: When you configure in the layer that contains the default template shown in Figure 6-2, you also configure in the default pkglist.add and the S99test.sh startup script, regardless of whether or not you also configure in any of the feature or other templates in the layer. (See 6.5 Using Custom Layers, p.78 for details on configuration with layers.)


6.3 Using Custom Templates


This section describes how to configure in your custom templates, including custom profiles.

Configuration with Templates

When you specify a development environment template to configure, you use the --with-template option, and configure searches the development environment for the template. For example, to add the debug feature to a configure command you would specify:
$ configure ... --with-template=feature/debug ...

Your custom templates will typically reside outside of the development environment (to keep the development environment pristine), so you would also specify the directory location of your templates with the --with-template-dir option. For example, to specify the my_profile template shown in Figure 6-1, you would specify the following:
$ configure ... --with-template-dir=/home/user/templates \
    --with-template=profile/my_profile ...

You can specify multiple templates with a comma separated list, for example:
$ configure ... --with-template-dir=/home/user/templates \
    --with-template=profile/my_profile,feature/my_feature,board/my_board ...

You can also specify specific templates directly, for example:


$ configure ... --with-template=/tmp/template1,/opt/templates/template2 ...

In summary, the syntax for common template configure options is:

--with-template-dir=fulldirectorypath
--with-template=template-dir-pair1,template-dir-pair2...
--with-template=template-path1,template-path2...

Custom Template Processing

Custom templates are processed last, after all other templates. Because of this, they are especially useful for:

- Kernel configuration file fragments that override default kernel options.
- Additions to the file system configuration files under fs/, such as the networking configuration files under /etc/sysconfig, that may have to override default values.
- Package list files that you want to override previous package list files.

Refer to 5. Layer and Template Processing for details on the order of template processing. You can check the order in which your templates are applied as described next.


Verifying Template Processing

Two files in your project build directory show you the order of template processing: templates and template_paths. Both list the templates processed in the order of their priority, from the lowest priority at the top to the highest priority at the bottom. The templates file lists only the template names, and template_paths lists the full paths. For more details on these files, refer to An Example of Template Search Results, p.55. If you are making changes with a template that are not taking effect, check to see if a higher-priority template is overriding your settings.

Creating Custom Profiles

Wind River Linux release 3.0 introduced profiles based on templates. A profile is a template that typically includes a kernel, a root file system and various other template components. You only need to specify a valid board and a profile that includes a kernel and root file system to the configure script to configure a full platform build environment. When you create a custom profile, you will probably create it in some custom template location or custom layer of your own, so your configure command line will include a --with-template-dir or --with-layer specification as well.
A Simple Custom Profile

As a simple example, consider the following. You create a profile called small_plus that includes the small kernel, the glibc_small file system, and the demo feature. Your directory structure would look like that shown in Figure 6-3.
Figure 6-3 A Simple Custom Profile

$HOME/
    templates/
        profile/
            small_plus/
                include

The include file would look like this:


kernel/small
rootfs/glibc_small
feature/demo


The following configure command line creates a platform project for the common_pc using this profile:
$ configure --enable-board=common_pc \
    --with-template-dir=$HOME/templates \
    --enable-profile=small_plus

Refer to Another Custom Profile Example, p.80 for a more powerful example of the use of custom profiles in conjunction with layers.

6.4 Creating Custom Layers


Creating your own layers makes it easy to locate and review all the changes you or others have made, back out of undesirable changes, and neatly share your changes with others. You can, for example, add packages, remove other packages, and add and remove different kernel features with a single layer. You can distribute your layer to other developers who can then easily include or exclude your changes with a single configure command switch. You might use a custom layer, for example, to add packages to the build system and then create custom templates within the layer to provide various configurations of the packages. Although custom layers can be placed anywhere, they typically reside outside of both the development and build environments. Figure 6-4 illustrates a structuring in which custom layers are separate from the build and development environments.
Figure 6-4 A Possible Organization of Development, Build, and Custom Layer Directories

/home/
    user/
        layers/                       (custom layers)
            layer_name1/
            layer_name2/
        workdir/
            prjbuildDir/              (build environment)
        installDir/
            wrlinux-3.0/              (development environment)

In Figure 6-4, two custom layers are placed in a directory named layers. One or both could be configured into a project, and additional layers could be added from elsewhere.


The Structure of Layers

Layers contribute to your build environment exactly what you want them to contribute; no additional files or directories are required. A layer (like a template) has no minimum structure, and it can even be an empty directory, useless as that would be. The installDir/wrlinux-3.0/layers/wrll-wrlinux layer is a good example of what a layer can do. By following the structure of the development environment layers in your layer, using only the parts that you want with the contents that you want, you can layer-over, or overlay, the contents of your layer on the contents of wrll-wrlinux during your project build. You can create layers manually (as described in 6.4.2 Manually Creating Layers, p.77) or you can make changes in your build environment and then package those changes as a custom layer as described in the following section.

6.4.1 Workflow and the Local Custom Layer


Performing common tasks, such as reconfiguring a kernel, adding or customizing packages, or configuring a file system layout, can be multi-step processes, marked by trial and error. To save time, you may wish to perform these tasks entirely within your project build directory, only adding the necessary files to a custom layer once you are sure things are working as you expect. Portions of the project build directory function as an immediately available custom layer; you do not need to provide a configure option to include this layer. You can take advantage of this local custom layer to prototype your custom layers, developing in your project build directory and then extracting your changes to a layer when desired.

For example, you may want to create a layer that adds one or more packages. The dist/ and the packages/ directories within your project build directory (which contain only readme files by default) are layer directories that you can populate to build additional packages. These directories function the same as the dist/ and packages/ directories in other layers, for example, in installDir/wrlinux-3.0/layers/wrll-wrlinux/. They only differ in priority; custom layers and your project build layer have higher priorities than the development environment layers.

To add a package using the local custom layer, use this procedure (a command sketch follows below):

1. Add the new package name to the pkglist file and makefiles with the pkgname.addpkg target. This also adds the names of any known required packages.
2. Copy the new package to the packages/ directory.
3. Populate the dist/package directory with the makefile and patches.
4. Within the build directory, run make packagename. This should unpack, patch, compile, and install the new package.

Detailed instructions for adding packages can be found in chapter 10. Adding Packages.
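As a rough sketch of the procedure above, run from the project build directory (the package name mypkg and its archive name are hypothetical, and the makefile contents are covered in 10. Adding Packages):

$ cd prjbuildDir
$ make -C build mypkg.addpkg
$ cp /path/to/mypkg-1.0.tar.gz packages/
$ mkdir -p dist/mypkg/patches        # add the package makefile and any patches here
$ make mypkg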


Once your packages are building and installing correctly, you can move them to a custom layer outside of your build environment. You can use the export-layer target as described next, or manually create a layer as described in 6.4.2 Manually Creating Layers, p.77 and move the package files to it.

Creating Exportable Layers with make export-layer

After making a number of changes in your build environment it is very useful to be able to create a layer capturing those changes. The layer then provides a way to recreate your current customized build; you just enter the original configure command, this time also specifying the layer that contains the changes. Similarly, other developers can recreate your customized build environment in the same way, or you can modify your build environment by including their layers. To capture your changes to the original configuration of your project build directory, do the following:
$ cd prjbuildDir
$ make export-layer

The first time you create a layer this way, make export-layer must create a reference project for comparison, so it takes longer than it will for future layer creations. The reference project is created from the configure command in the config.log file. Once that is done, the layer is created by comparing the original (reference) configuration with the current configuration. The layer itself is created in export/export-layer/ with a name comprised of your project build directory and a timestamp, for example common_pc.Wed_Aug_22_102239_PDT_2007. A tar file of the layer is also created in export/export-layer/. The following items are captured by make export-layer:

- dist/, packages/, and tools/ additions
- pkglist changes
- modlist changes
- filesystem/fs/ modifications
- fs-final modifications
- fs-install modifications
- config.sh modifications
- kernel .config modifications

An Example Layer

When you create a layer with make export-layer, you create a directory structure that contains the changes that occurred between the time the project build directory was configured and the make export-layer command was issued.


If, for example, in the course of a project development you had modified a few system files, changed some kernel configuration parameters, and added and removed a few packages, when you created a layer the contents might look like the following:
common_pc.Sun_Sep_16_042116_PDT_2007
common_pc.Sun_Sep_16_042116_PDT_2007/conf_cmd.ref
common_pc.Sun_Sep_16_042116_PDT_2007/README
common_pc.Sun_Sep_16_042116_PDT_2007/templates
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/README
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/etc
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/etc/rc.d
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/etc/rc.d/rc.local
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/etc/sysconfig
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/etc/sysconfig/network
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/root
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/root/.bash_logout.hide
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/root/.profile.hide
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs/fs-install-only
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/pkglist.add
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/pkglist.remove
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/modlist.add
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/modlist.remove
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/fs_final.sh
common_pc.Sun_Sep_16_042116_PDT_2007/templates/default/config.sh
common_pc.Sun_Sep_16_042116_PDT_2007/templates/kernel
common_pc.Sun_Sep_16_042116_PDT_2007/templates/kernel/knl-frag.cfg
common_pc.Sun_Sep_16_042116_PDT_2007/dist
common_pc.Sun_Sep_16_042116_PDT_2007/dist/testpkg3
common_pc.Sun_Sep_16_042116_PDT_2007/packages
common_pc.Sun_Sep_16_042116_PDT_2007/packages/testpkg1
common_pc.Sun_Sep_16_042116_PDT_2007/tools
common_pc.Sun_Sep_16_042116_PDT_2007/tools/testpkg2

You can untar the tar file for the layer anywhere that is accessible and reference it there, or copy it into your source management system. You, or others, could now reference this directory with the --with-layer option to the configure command to include your changes into a project. Refer to 6.5 Using Custom Layers, p.78 for examples of configuring layers into new projects. See 22.2 Adding SRPM Packages, p.256 for a use case for adding packages and then using make export-layer to create a layer that includes the new packages and associated changes.
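For example, using the layer name shown earlier, a new project could be configured to pick up those changes like this (the directory where you untarred the layer is illustrative):

$ configure ... \
    --with-layer=/home/user/layers/common_pc.Wed_Aug_22_102239_PDT_2007 ...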


6.4.2 Manually Creating Layers


You can also create layers from scratch without the need to modify your project environment. Model your directory structure after the structure in installDir/wrlinux-3.0/layers/wrll-wrlinux/, using only the parts you need.
NOTE: If you prefer not to create each individual directory you will require for your layer, the script layer_tool.sh in installDir/wrlinux-3.0/ldat/scripts/ will create a layer infrastructure for you when supplied the -c layer_dir option. Supply the tool with the --help option for information on additional options.

The following provides a high-level example of manually creating a layer that adds packages and feature templates to add kernel and userspace functionality and also modify the file system.
Adding Packages

You add packages to a custom layer as follows. First, create a directory to serve as your custom layer, for example layers/new_stuff. Then add packages and dist directories to the custom layer. Include the package's subdirectory within dist, and the patches subdirectory, as in Figure 6-5.
Figure 6-5 Directory Structure: Adding a Package in a Custom Layer

layers/
    new_stuff/
        packages/
        dist/
            package_x/
                Makefile
                patches/

Within packages, add each package's tar file or SRPM package. Within dist/packagename, add the package's makefile. Add the patches to the patches subdirectory.


Adding Features and a BSP

Add a templates directory and then add board and feature subdirectories as shown in Figure 6-6.
Figure 6-6 Directory Structure: Adding Features and a BSP in a Custom Layer

layers/
    new_stuff/
        packages/           package_1, package_2, package_3
        dist/
            package_x/
                patches/
        templates/
            board/
                my_board/       pkglist.add, knl-frag.cfg
            feature/
                a_feature/      pkglist.add
                another/        pkglist.add

Your feature directories might make use of various packages in your custom layer, and include kernel configuration settings to support your board. You must specify your layer to the configure command along with any templates you want to include, as described in the next section.

6.5 Using Custom Layers


You must inform the configure script of the location of your custom layer(s) as described in this section.

Configuration with Layers

To configure a platform project with a custom layer, use the --with-layer option. By default, the configure command will look for the layer specified with the --with-layer option in installDir/wrlinux-3.0/layers/. If you want to include a custom layer that you have in a different location, specify its full path to the --with-layer option. To include multiple layers, separate them by commas as in --with-layer=layer1,layer2,layer3. For example, to configure the layer new_stuff into your project, you would enter the following:
$ configure ... --with-layer=/fullpath/layers/new_stuff ...


Specify the full path to the layer. Specify multiple layers with a comma-separated list:
$ configure ... --with-layer=/fullpath/layers/new_stuff,/fullpath/layers/old_stuff

Verifying Layer Processing

Your custom layers are processed first, giving them the highest priority. You can view the order that the layers were processed for your configuration in the layers file in your project build directory; highest priority layers are listed first. A configure command line like this:
$ configure ... --with-layer=/fullpath/layers/new_stuff,/fullpath/layers/old_stuff

Results in a prjbuildDir/layers file that looks something like this:


/fullpath/layers/new_stuff
/fullpath/layers/old_stuff
installDir/workbench-3.1/analysis/wrlinux
installDir/wrlinux-3.0/layers/wrll-linux-2.6.27
installDir/wrlinux-3.0/layers/wrll-toolchain-4.3-30
installDir/wrlinux-3.0/layers/wrll-toolchain-4.3-30/arm
installDir/wrlinux-3.0/layers/wrll-toolchain-4.3-30/i586
installDir/wrlinux-3.0/layers/wrll-toolchain-4.3-30/mips
installDir/wrlinux-3.0/layers/wrll-toolchain-4.3-30/powerpc
installDir/wrlinux-3.0/layers/wrll-toolchain-4.3-30/sources
installDir/wrlinux-3.0/layers/wrll-toolchain-4.3-30/sparc
installDir/wrlinux-3.0/layers/wrll-toolchain-4.3-30/common
installDir/wrlinux-3.0/layers/wrll-wrlinux
installDir/wrlinux-3.0/layers/wrll-host-tools

Note that the custom layers are listed first.


A Note on the Local Custom Layer

Neither the project build directory's templates nor its layers file shows the local custom layer (your project build directory). When you add packages or kernel configuration fragments using the local custom layer, check the results in the pkglist and .config files to determine if your work is being applied correctly.

6.6 Combining Custom Layers and Templates


Custom layers are processed differently from custom templates. During processing, the system looks first to the custom layer in an attempt to find the template, package, or file it is looking for. Thus, custom layers can be used to add templates, packages, or files, or to override the templates, packages or files within the development environment. This section provides examples of project configurations that make use of profile and feature templates to illustrate some ways to use the templates in a layer.


Another Custom Profile Example

The profile example given in Creating Custom Profiles, p.72 was based on a profile included in a template directory, but not in a layer. Now consider a somewhat more complex example that combines custom profiles and layers with some of the previous discussion concerning template processing (see 5.3 Understanding Templates, p.52). This example includes custom profiles, features, and a BSP.
NOTE: See the Wind River Linux BSP Developer's Guide for details on creating your own BSP templates. For this example, you could substitute the name of any supported board, for example, common_pc, for the custom BSP, my_board.

The following discussion is based on a custom layer called phones, where you have made profiles for a basic phone and a smart phone. You have also added custom features to your layer, and include or exclude them based on the profiles. Your directory structure might look something like the one shown in Figure 6-7.
Figure 6-7 Profile, Feature, and Board Templates Example

$HOME/
    layers/
        phones/
            templates/
                my_board/            include, pkglist.add, knl-frag.cfg, glibc_small/
                profile/
                    basicphone/      include
                    smartphone/      include
                feature/
                    lcd_display/     pkglist.add
                    wireless/        pkglist.add
                    touchscreen/     include, pkglist.add

The include file in the my_board template includes CPU and architecture templates so that when you specify your profile along with the board, you provide the necessary board, CPU, architecture, root file system, and kernel that configure requires. The pkglist.add files in the feature templates might include packages from your custom layer or from some other layer including wrll-wrlinux/. Similarly, the include file in the touchscreen feature template might include additional feature templates; in this example, it includes lcd_display.


The include file for the basicphone profile might look like this:
kernel/small
rootfs/glibc_small
feature/debug
feature/demo
feature/lcd_display

The include file for the smartphone profile might look like this:
kernel/small
rootfs/glibc_small
feature/debug
feature/demo
feature/wireless
feature/touchscreen

You just choose a different profile to configure the different phones. To configure the basic phone for your board, enter:
$ configure --enable-board=my_board --with-layer=$HOME/layers/phones \
    --with-profile=basicphone

To configure the smart phone:


$ configure --enable-board=my_board --with-layer=$HOME/layers/phones \
    --with-profile=smartphone

The templates file in your project directory shows the order of template processing. Figure 6-8 illustrates where templates have included other templates for the smart phone profile example.
Figure 6-8 Template Processing with Profile Example (templates File)

(lowest priority at the top, highest priority at the bottom)

kernel/standard
arch/ia32
multilib/x86_32
cpu/x86_32_i686
karch/i386
board/my_board
kernel/small
rootfs/glibc_small_fs
feature/busybox
rootfs/glibc_small
feature/debug
feature/demo
feature/wireless
feature/lcd_display
feature/touchscreen
profile/smartphone
default
default
default

Figure 6-8 may appear complicated, but that is because it illustrates the recursive nature of template processing. The templates file itself (as well as the template_paths file) simply lists the results of the recursion, so that the last template in the list has been processed last, and has the highest priority. As you can see in Figure 6-8, the profile template is one of the last processed (included templates are processed first) and therefore can override lower priority templates.


Note that your custom profiles are not limited to include files, but may contain components like any other template such as pkglist.* files, and fs/ files and scripts as well. Typically, however, they do not include hardware configuration information.

Specifying Templates in a Custom Layer

You can supply configure with your templates in a custom layer as long as you specify the layer as well. For example, the following configuration would include the lcd_display feature:
$ configure --enable-board=my_board \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std \
    --with-layer=$HOME/layers/phones \
    --with-template=feature/lcd_display

The template feature/lcd_display will be found in the layer $HOME/layers/phones, as you can verify in your prjbuildDir/template_paths file after running configure. You can also specify templates that have the same name as other templates. For example, your BSP might be named common_pc instead of my_board, and your common_pc template would include the template board/common_pc. Refer to The Structure of Templates, p.69 for a discussion of the use of this functionality. Note that you can specify a mixture of custom and standard templates which come from custom and standard layers. Refer to 5. Layer and Template Processing for details on how priorities are determined and components processed.


7
Application Development
7.1 Introduction 83
7.2 Working with Sysroots 83
7.3 Adding Custom Applications to Platform Projects 87

7.1 Introduction
This chapter discusses how application developers use sysroots, which are provided by the platform developer, to build applications, and how applications can be incorporated into platform projects.

7.2 Working with Sysroots


Wind River Linux provides sysroots. A sysroot is a prototype target directory that contains the necessary library, header, and other files as they would appear on the target, and it also includes toolchain wrappers for each of the supported development hosts. In general, pre-built libraries and toolchain wrappers are not provided for application development because they may not accurately reflect the actual platform prepared by the platform developer. Instead, the platform developer generates and exports a sysroot for the configured platform project. However, sample sysroots are provided for each architecture with the glibc_std target file system configuration in installDir/wrlinux-3.0/sysroots/. These sysroots enable application developers to get started, but a sysroot based on the actual platform target configuration should be used for any real application development.


Workbench automatically finds sysroots located in the installDir/wrlinux-3.0/sysroots directory, or you may point the developer environment at an arbitrary alternate directory where you have located an exported sysroot. It is also possible to point the application developer environment to the unexported sysroot from an existing platform build. This sysroot is located in prjbuildDir/host-cross/arch on the development host, but note that it is not suitable for export to other hosts, for example, to a Windows application development environment. To produce an exported sysroot environment from a configured build directory, use make export-sysroot as described in the following section.

7.2.1 Exporting Sysroots


You can export a sysroot to create a relocatable directory that contains the necessary header and library files, as well as the wrappers required to access the platform installation toolchains, for application development.

Exporting Sysroots

Wind River provides sysroots supporting application development for four different architectures in installDir/wrlinux-3.0/sysroots/. These contain the necessary build specs to run Workbench examples and may be sufficient to get started on application development, but you should export a sysroot based on the specific platform you configure and build. The exported sysroot can then be used by application developers on any supported host. To create a sysroot, run the make export-sysroot command in your prjbuildDir directory, for example, the following creates a sysroot/ directory in export/:
$ cd arm_versatile_926ejs/
$ make fs
$ make export-sysroot

The resulting sysroot, for this example, is export/sysroot/arm_versatile_926ejs-926ejs_glibc-std. You can now copy this directory (for example, tar and untar it) to installDir/wrlinux-3.0/sysroots on a development host, or to any arbitrary location, for example /sysroots, that developers will use.
NOTE: If you are creating a sysroot for a multilib-capable target, see 7.2.3 sysroots and Multilibs, p.86 for additional information.

7.2.2 Using sysroots in Application Development


To develop your cross-compiled applications, you must initialize your Wind River environment and specify the proper cross-compiler tools. The following provides an example of how to do this on the command line. Sysroots contain one or more of the build specs that you use when developing application projects with Workbench. For details on using sysroots and build specs in Workbench refer to Wind River Workbench by Example, Linux Version.


In the following example, the platform developer has created an exportable sysroot (see 7.2.1 Exporting Sysroots, p.84) for an arm_versatile_926ejs target and placed it in an arbitrary location, /sysroots/arm, on the development host. The application developer builds the supplied sample multithread program and directs the executable output to the exported target root file system.
Step 1: Set up your environment.

Because in this example you are not using Workbench, you must set up your environment properly.

1. Initialize the Wind River environment:
$ cd installDir
$ ./wrenv.sh -p wrlinux-3.0

2. Add the appropriate cross-build tools to your path. For example, if you are developing the application on the same host as the one with the platform install, add the path to the toolchain in your project build directory, for example:
$ export PATH=prjbuildDir/host-cross/arm-wrs-linux-gnueabi/bin:$PATH

Step 2: Create an application project.

Create a project build directory such as mthread-app:


$ mkdir mthread-app
$ cd mthread-app

Write your source code. In this case, we will just copy the existing mthread example source to our application project:
$ cp installDir/wrlinux-3.0/samples/mthread/mthread.c .

Step 3: Compile and link the program.

Specify the gcc wrapper from your sysroot and, for the mthread application, you must also specify the pthread library for the linker when building:
$ /sysroots/arm_versatile_926ejs-glibc_std/x86-linux2/arm-wrs-linux-gnueabi-armv5tel_vfp-glibc_std-gcc \
    -g -lpthread -o ../arm_versatile_926ejs/filesystem/fs/mthread.out mthread.c

Note that in the example command line shown, the output is placed in filesystem/fs of the platform project so that it will be included when the runtime file system is built. Alternatively, the application developer could inform the platform developer of the location of applications ready for inclusion in a platform build. Some ways platform developers might include applications in their projects are described in 7.3 Adding Custom Applications to Platform Projects, p.87.
Step 4: Test the program.

You can now build the file system, download the compressed file system to the target, and test the program. Alternatively, if you are running an emulation, you can skip the step of building the file system by placing the mthread build output in export/dist instead of filesystem/fs as shown in the previous step, for example:
$ /sysroots/arm_versatile_926ejs-glibc_std/x86-linux2/arm-wrs-linux-gnueabi-armv5tel_vfp-glibc_std-gcc \
    -g -lpthread -o ../arm_versatile_926ejs/export/dist/mthread.out mthread.c


Then start the emulator if it is not already running and execute the program:
root@localhost:/root> /mthread.out

7.2.3 sysroots and Multilibs


Multilib Targets

Wind River supports multiple libraries on certain targets. With these multilib targets, it is possible, for example, to compile an application against both 32- and 64-bit libraries, and not just one or the other. In cases where a board supports multilibs, a reasonable default library has been chosen, but you may need a different library. For example, common_pc_64 targets may include the x86_64 or x86_32 CPU types, with x86_64 being the default. If you want to provide for development with the x86_32 CPU type on a common_pc_64 target, you need to take additional action to be sure the appropriate packages are included in the sysroot you export.
Default and Variant CPU Types

When you configure a multilib-capable target, the default CPU type packages are listed in the pkglist file as normal, for example, glibc. If you have configured a common_pc_64 target, glibc would be the 64-bit glibc, because that corresponds to the CPU default for that target. With a multilib-capable target, packages included for other (non-default) libraries are called variants, and variants are listed as package.variant in pkglist. For example, glibc.x86_32 is the name of the 32-bit variant of glibc when you are building the common_pc_64 platform. The glibc package is built in build/, as is normally the case. Packages for the variant are built in build-variant. Continuing with the same example, the glibc.x86_32 package would be built in build-x86_32/.
Adding Application Development Support for Variant Packages

The default build includes the proper packages for the default CPU type, but not all packages. It would take considerably more space on the target, for example, to include all library versions even though many are not used. Therefore, to create the proper sysroot for use in an application development environment that supports variant versions of packages, you must specifically include the additional libraries and packages of the variant in your platform build. For example, you will need libgcc/glibc (or uclibc) for each of the variants you want to be able to use, and you'll often need more (ncurses, openssl, and so on) depending on the application you want to build using the variant. You can add the variant packages by including them with a template at configure time, or adding them to the build system with make -C build pkgname.addpkg after configuration.


For example, if you want to develop with the 32-bit version of vim on the common_pc_64 target, you would need to add vim.x86_32, ncurses.x86_32, libgcc.x86_32, and any other packages required, to pkglist. After you have added to the pkglist with a template or with make -C build pkgname.addpkg, a fragment of the pkglist might look like this:
glibc
libgcc
ncurses
vim
glibc.x86_32
libgcc.x86_32
ncurses.x86_32
vim.x86_32

Once you have added the files to pkglist, you can then build the platform and export the sysroot. It will include the variant packages you specified.
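As a sketch, the 32-bit vim example above might be added with the addpkg target and then built as follows; whether addpkg pulls in the other variant packages automatically may vary, so each one is added explicitly here:

$ make -C build libgcc.x86_32.addpkg
$ make -C build ncurses.x86_32.addpkg
$ make -C build vim.x86_32.addpkg
$ make fs
$ make export-sysroot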
NOTE: The platform configurations for multilib-supported targets include the necessary toolchain wrappers, RPM macros, variant-specific variables, build directories (as mentioned above), and so on. You only need to add the packages.

7.3 Adding Custom Applications to Platform Projects


The Wind River Linux build system provides a flexible solution for user applications that are managed separately from the Wind River Linux installation, as well as applications that are under active development, and therefore currently not appropriate for keeping archived or packaged in the way open source applications are typically provided to the system. The following discusses two examples of applications contained in Wind River Linux that illustrate how you can add your user applications to your projects.

Referencing External Application Code from a Project

In this example, the source code is external to the project, but a wrapper package references it and builds it in the local project. This is an excellent solution when the source is under a configuration management system. The usermode-agent package is an example of this. The source is located in installDir/linux-2.x/usermode-agent/src, and the wrapper package makefile is located in installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/wdbagent-ptrace/Makefile. The following makefile code shows the unpack rule, which copies the source from the external location.
wdbagent-ptrace.unpack:
	@$(ECHO) "Copying $(wdbagent-ptrace_CLEARCASE) ..."; \
	if test ! -d $(wdbagent-ptrace_CLEARCASE); then \
		$(ECHO) "Agent src not found in $(wdbagent-ptrace_CLEARCASE)"; \
		exit 1; \
	fi; \
	if test ! -d $(wdbagent-ptrace_BUILD); then \
		$(MKDIR) $(wdbagent-ptrace_BUILD) || exit 1; \
	fi; \
	d=$$(cd $(wdbagent-ptrace_CLEARCASE); $(ECHO) $$PWD); \
	$(CP) -r $$d/* $(wdbagent-ptrace_BUILD)
	@$(MAKE_STAMP)

Typically, you would place your dist/app_name/Makefile that contains your unpack rule in a shared layer directory.

Including the Source in the Package dist Directory

For small or local applications, you may want to directly include the source in the layer, specifically within the dist/app_name/src sub-directory. The op_agent code in the analysis layer is an example of this. The source code is located in installDir/wrlinux-3.0/layers/wrll-analysis-1.0/dist/wr-opagent/src. The Makefile is located in installDir/wrlinux-3.0/layers/wrll-analysis-1.0/dist/wr-opagent/. The following makefile code shows the unpack rule for this application:
wr-opagent.config: wr-opagent.unpack
	@$(call echo_action,$@,nothing to do)

wr-opagent.unpack:
	@$(call echo_action,Unpacking,$*)
	$(MKDIR) -p $(wr-opagent_SRC)
	$(CP) -r `echo "$($*_PATCH_DIRS)/*" | sed -e "s/patches/src/"` $(wr-opagent_SRC)
	@$(MAKE_STAMP)

In this case, the source is kept in "open", unpacked form for easy development. Instead of, for example, untar-ing a tar archive, the build system directly copies the source tree into the build directory. A "virtual" package wraps the source for the purposes of the build system, but the application is not bundled into a package or archive because it is under active local development.


PART II

Configuring and Customizing


8   Changing Basic Linux Configuration Files ............................ 91
9   Configuring the Kernel ............................................ 97
10  Adding Packages ...................................................... 107
11  Configuring PREEMPT_RT ...................................... 123
12  Configuring Scalable Features .............................. 131
13  Patch Management ................................................... 157


8
Changing Basic Linux Configuration Files
8.1 Introduction 91
8.2 Creating Basic Linux Configuration Files 91
8.3 Changing Preset Linux Configuration Files 92
8.4 Moving Changes to a Custom Layer 93
8.5 Moving Changes to a Custom Template 94
8.6 Tutorial: Configuring Robust Networking and NTP 94

8.1 Introduction
Most basic configuration files within every Linux system are within the /etc directory and its subdirectories. The run-time file system of each Wind River Linux system comes complete with a set of preconfigured files within the /etc and /root directories. As with kernel reconfiguration, you may make changes to these files within the build environment for testing. You may also backport them, when and if you want them to be permanent, to either the development environment or to your own template.

8.2 Creating Basic Linux Configuration Files


Within each build environment is a filesystem directory holding an fs subdirectory. Within it are etc and root subdirectories. The etc directory holds a number of standard configuration files, such as inittab, fstab, and hosts, as well as subdirectories such as sysconfig and rc.d. The root directory holds the hidden files .bash_logout and .profile. Figure 8-1 below shows this structure graphically. Note that most of the subdirectories within the project build directory are not shown.


Figure 8-1    Subdirectories under the filesystem Directory

prjbuildDir
    export
        dist
    filesystem
        fs
            etc
                sysconfig
                rc.d
            root

The contents of fs/ originate within templates in the development environment. During a make fs or a make build-all, the build system copies the contents of the fs directories in the templates to prjbuildDir/filesystem/fs. These files and directories are then copied to the complete run-time file system within prjbuildDir/export/dist. Finally, the build system compresses this file system to a single file within export/. For a typical NFS deployment, you would manually uncompress this file in the NFS export directory.
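For example, assuming the compressed file system that make fs leaves in export/ is named projectname-dist.tar.bz2 and that /export/target is your NFS export directory (both names are placeholders for whatever your build and host actually use), the manual step might look like this:

$ cd prjbuildDir
$ sudo tar -C /export/target -xjf export/projectname-dist.tar.bz2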
NOTE: You may need root privileges when performing this tar command, depending on the permission settings of the exported NFS directory.

You erase the existing file system within export/dist when you make a new one. However, any changes you made to files within prjbuildDir/filesystem/fs remain, and are copied over to the rebuilt file system. In this manner, changes to files within prjbuildDir/filesystem/fs migrate to each rebuild. For details on making changes to the runtime file system see C. File System Layout Configuration.

8.3 Changing Preset Linux Configuration Files


You may wish to make changes to configuration files to enable more robust networking, to change your bash profile, to enable additional services, or for many other reasons. Use the following procedure to modify the target configuration files:
1. With any text editor, edit or add the configuration files within filesystem/fs/ (for example, etc/inittab, etc/hosts, and so on).
2. Rerun make fs or make build-all.


3. Check to make sure the changes have migrated to export/dist/.

If they have migrated to export/dist/, then they have also migrated to the compressed file system archive file within export/.
NOTE: Although you may add and change files in fs/ within the project build directory, you may not add directories. Added directories will not migrate.

8.4 Moving Changes to a Custom Layer


Moving your changes to a custom layer has multiple advantages:

- You keep the templates within the development environment pristine.
- You are not restricted to just files; you may add additional directories to the file system as well.
- You can create board-, kernel-, and file system-specific layers.

The directory structure of the layer will depend on how restrictive you wish the changes to be. For example, you may want:

- Your changes migrated to every Glibc-based file system. You can create the directory structure customlayer/templates/rootfs/glibc_fs/fs.
- Your changes migrated only to every Glibc CGL file system. You can create the directory structure customlayer/templates/rootfs/glibc_cgl/fs.
- Your changes restricted to the current project's board, CPU, and file system. You can create the directory structure customlayer/templates/board/boardname/rootfs/rootfsname/fs.

NOTE: Refer to 5. Layer and Template Processing for details on template processing.

The procedure has three steps:
1. Using the cp * -f -r command, copy the entire template structure (examples above) from the templates subdirectory within the development environment to your custom layer.
2. Edit or add the configuration files, and add any directories you wish, within the layer's fs directory.
3. Within the project build directory, run configure with the --with-layer= option.

Check to make sure the changes have migrated to filesystem/fs within the project build directory.
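As a sketch of these three steps, assuming a custom layer at /home/user/customlayer and the Glibc CGL case from the list above (the source path under the installation and the other paths shown are assumptions; use the templates subdirectory of your own development environment):

$ mkdir -p /home/user/customlayer/templates/rootfs
$ cp -f -r installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/rootfs/glibc_cgl \
      /home/user/customlayer/templates/rootfs/
(edit or add files under /home/user/customlayer/templates/rootfs/glibc_cgl/fs)
$ cd prjbuildDir
$ configure your_project_config_options --with-layer=/home/user/customlayer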


8.5 Moving Changes to a Custom Template


This is a simpler procedure, and more suitable when making changes to a single project. (This method, like the layer method, also allows you to create additional file system subdirectories.) After making changes to filesystem/fs within the project build directory, copy your entire fs directory to your custom template:
1. In the project build directory, edit or add the configuration files and add any directories you wish within filesystem/fs.
2. Make a new directory, fs, within your custom template.
3. Copy the contents of filesystem/fs to yourtemplate/fs.
4. Run configure, with the --with-template-dir and --with-template options.

Check to make sure the changes have migrated to filesystem/fs within the project build directory.
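A compact sketch of these four steps, using a hypothetical template named mytemplate under /home/user/templates (the names are placeholders; the tutorial in 8.6 walks through a concrete case):

$ cd prjbuildDir
(edit or add files and directories under filesystem/fs)
$ mkdir -p /home/user/templates/mytemplate
$ cp -rp filesystem/fs /home/user/templates/mytemplate/
$ configure your_project_config_options --with-template-dir=/home/user/templates \
      --with-template=mytemplate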

8.6 Tutorial: Configuring Robust Networking and NTP


This tutorial enhances the networking capability of the run-time file system by adding the server's and target's hostnames and IP addresses to the board's hosts file, and the target's hostname to its network file. In addition, it sets up the Network Time Protocol daemon on the target, so that the target's time is synchronized with the server's.
NOTE: This tutorial assumes that NTPD has already been set up and is running on an RHEL server, using the standard Red Hat packages and methods. Path names and files for other hosts may differ.

In order not to disturb the development environment, the tutorial uses a custom template to add the necessary files and directories to the run-time file system. The following assumes you have created a platform project and performed a make fs.
Step 1: Add the network host names to the target's hosts file.

First, use vi or another text editor to add the server's and target's hostnames and IP addresses to prjbuildDir/filesystem/fs/etc/hosts. An example addition to the hosts file is below:
192.168.10.1 server1.lab.org
192.168.10.2 target.lab.org

Step 2: Add the target's hostname to its network file.

In a similar fashion, add the target's hostname to filesystem/fs/etc/sysconfig/network. An example network file is below:
NETWORKING=yes
HOSTNAME=target


Step 3: Create several new directories within filesystem/fs:


$ cd /home/user/workdir/sbc8560/filesystem/fs
$ mkdir etc/ntp
$ mkdir -p etc/rc.d/init.d
$ mkdir -p var/lib/ntp

Step 4: Copy NTP files from the host machine to their identical directories within fs.

Copy these files from the host machine to filesystem/fs in the project build directory. In the case of etc/ntp, all files within the directory should be copied over.
$ cp /etc/rc.d/init.d/ntpd etc/rc.d/init.d/ntpd
$ cp /etc/ntp.conf etc/ntp.conf
$ cp /var/lib/ntp/drift var/lib/ntp/drift
$ cp /etc/ntp/* etc/ntp/

Step 5: Edit the target's etc/ntp.conf file to reflect the host's IP address.

An example is below:
# IP address of host (NTP server)
server 192.168.10.1
driftfile /var/lib/ntp/drift
server 127.127.1.1
fudge 127.127.1.1 stratum 10

Step 6: Within the target's etc/ntp directory, change the hostnames in two files.

Within etc/ntp, edit the ntpservers and step-tickers files, replacing their hostnames with the host's (NTP server's) hostname.
Step 7: Copy the timezone information to the target's etc/localtime file.

The target's timezone information must be copied to the target's etc/localtime file. As an example, if the timezone is Edmonton, Alberta, Canada, then the /usr/share/zoneinfo/America/Edmonton file on the host must be copied to the target's etc/localtime.
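For example, still working from the filesystem/fs directory you changed into in Step 3 (substitute the zoneinfo file for your own timezone):

$ cp /usr/share/zoneinfo/America/Edmonton etc/localtime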
Step 8: Copy the entire fs directory to your custom template.

For example, you might make a directory /home/user/templates to hold your templates, and name this template networking. So you would now create a directory /home/user/templates/networking/fs:
$ cd prjbuildDir
$ cp -rp filesystem/fs /home/user/templates/networking/

Step 9: Rerun configure.

To finish, re-run configure with the --with-template-dir and --with-template options:


$ configure --enable-board=common_pc --enable-kernel=standard \
      --enable-rootfs=glibc_std --with-template-dir=/home/user/templates \
      --with-template=networking

Step 10: Run make fs and check to make sure that your changes have propagated to export/dist.

For example, prjbuildDir/export/dist/etc/hosts should now contain the server and target addresses you added.
Step 11: Reboot the target.

Reboot the target with the new file system.


Step 12: Set the target's date by querying the host.

In this example the server's hostname is server1.lab.org.


NOTE: The server must be running ntpd.

On the target, enter:


# ntpdate server1.lab.org

Step 13: Synchronize the target's time by starting NTPD.


# service ntpd start

Step 14: Set NTPD to automatically start at boot on the target.

On the target, enter:


# chkconfig --level 3 ntpd on

The new file system, with more robust networking and the Network Time Protocol daemon, will be propagated from the custom template every time this run-time system is built with the template.


9
Configuring the Kernel
9.1 Introduction 97
9.2 Initial Creation of the Kernel Configuration File 97
9.3 Kernel Configuration Fragment Auditing 99
9.4 Reconfiguring and Rebuilding the Kernel 103

9.1 Introduction
You can reconfigure kernels using standard Linux command line or GUI tools. You may want to start by making modifications to an existing BSP's configuration through simple additions to your build area first, and then move them to a custom template or layer when they prove successful. This chapter provides examples of how to perform kernel configurations in these ways. This chapter's examples are based on the SBC8560 board built with the standard kernel and Glibc file system in the project build directory sbc85x0/.

9.2 Initial Creation of the Kernel Configuration File


Wind River aims to provide uniformity across BSPs of a given platform and across given architectures. To do so, non-hardware specific kernel options (for example, supported file systems) are generally chosen on a per-platform basis, and then the hardware specific options (for example, device drivers) are chosen on a per-BSP basis. To achieve this, fragments of kernel configuration files (called config files or config fragments) are placed among the other files that determine the content of a particular platform, architecture, feature, or BSP. These fragments contain just the relevant kernel settings that pertain to that area where they are placed.


A kernel configuration is generated any time the linux.config rule is processed, for example when you perform a make linux.reconfig. The kernel configuration is not performed by the configure command, so you do not have to reconfigure your project just to update the kernel configuration when changing a fragment.

When you configure your project and make an initial selection of a platform and a BSP (board), you implicitly choose a subset of the various layers and feature templates that are available to be included in your build. Config files that are found in these layers and templates are collected together, and this concatenation of fragments forms the initial input to the Linux Kernel Configurator (LKC). The kernel config fragments are collected, starting from the generic and proceeding to the specific, to assemble platform- and board-specific kernel configuration options into a format that is suitable for the LKC. This produces a link to an intermediate version of the .config file in prjbuildDir, with the name board_kernel-config-version. The intermediate version of the file is a flat file created by a concatenation of all the fragments that are used. It has a preamble listing all the fragments that were used and the order in which they were used. At this stage, only basic sanity checks on the config fragment inputs have been performed, for example filtering of duplicate settings.

LKC evaluates the input and applies dependency information (contained in Kconfig files in your prjbuildDir/build/linux/ subdirectories). The LKC then creates the kernel configuration file, .config, in prjbuildDir/linux-version-standard/ (for our example), which is the list of options used to build the kernel. The last instance of an option that is found overrides any earlier instance: duplicates are filtered out, and the last instance of a parameter is its only instance in the top-level kernel configuration file. Note that the top-level kernel configuration file is transient; any manual changes to it are ignored. Instead of editing this large file you can create a kernel config fragment file in your project build directory. The kernel config fragment in your project build directory is processed last, so any options you set in it will override any settings of the same options in any other kernel configuration file fragments.
NOTE: Specifying a particular setting in a config fragment does not automatically guarantee that the option appears in the final .config file. Wind River still uses the built-in part of the default kernel.org configuration (usually referred to as the LKC) to process the fragments and produce the final .config, and the final dependency check may discard or add options as required, for example, due to dependency reasons.

The config file that is used to generate a new kernel is prjbuildDir/linux-version-*/.config. It is created from default kernel.org option settings, plus the option settings from all the kernel configuration files in the distribution and build environments.


9.3 Kernel Configuration Fragment Auditing


Wind River provides an informational audit that takes place when the configuration file is generated; it looks for the following:
1. Non-hardware specific settings in the BSP fragments.
2. Settings specified in the BSP fragments that it is necessary to change or remove in the final .config to satisfy LKC's dependency information.
3. Settings that were duplicated in more than one fragment.
4. Settings that simply don't match any currently available option.

The intent of this on-the-fly audit of the fragment content and the generated .config file is to warn you when it looks like a BSP may be doing things it should not be doing. For example, filtering is performed to identify duplicate entries, and warnings are issued when options appear to be incorrect due to being unknown or being ignored for dependency reasons. Because there are many kernel options available and many kernel configuration fragments, the auditing mechanism provides summary output to the screen and collects detailed information in a directory relevant to kernel configuration fragment processing. The warnings are captured in files in the audit data directory prjbuildDir/build/linux/wrs/cfg/build_name/*. The following section provides more detail on auditing.

More on Kernel Configuration Fragment Auditing

Kernel options are all sourced from Kconfig files placed in various directories of the kernel tree that correspond to the locations of the code that they enable or disable. This logical grouping has the effect of making each Kconfig's content either primarily hardware specific (for example, options to enable specific drivers) or non-hardware specific (for example, options to choose which file systems are available). Auditing is implemented by the two scripts generate_cfg and kconf_check located in layers/wrll-linux-version/tools/kern-tools/. The auditing takes place in two steps, since the input first needs to be collated and sanitized, and then the final output in the .config file from the LKC must be compared to the original input in order to produce warnings about dropped or changed settings. These scripts are responsible for assembling the fragments, filtering out duplicates, and auditing them for hardware and non-hardware content. The files of interest under the build/linux/wrs/ directory include the following:

hardware.cfg - Items listed here are explicitly considered as hardware items, regardless of which Kconfig file they are found in.
hardware.kcf - The list of hardware Kconfig files.
non-hardware.cfg - Items listed here are explicitly considered as non-hardware items, regardless of which Kconfig file they were found in.
non-hardware.kcf - The list of non-hardware Kconfig files.


By the end of this process, Wind River has sorted all the existing Kconfig files into hardware and non-hardware, and this forms the basis of the audit criteria.

Audit Reporting

The audit takes place at the linux configuration step and reports on the following:

- Items in the BSP that don't look like they are really hardware related. Having a non-hardware item in a BSP is not treated as an error, since there may be applications where something like a memory-constrained BSP wants to turn off certain non-hardware items for size reasons alone.

- Items in one fragment that are re-specified again in another fragment, or even in the same fragment later on. Again this is not treated as an error, since there are several use cases where an override is desired (e.g. the customer-supplied fragment described below). Normally there should be no need for doing this, but if someone does, the usual rule applies: the last set value takes precedence.

- Hardware-related items that were requested in the BSP fragment(s) but not ultimately present in the final .config file. Items like this are of the highest concern. These items output a warning as well as a brief pause in display output to enhance visibility.

- Invalid items that don't match any known available option. This is for any CONFIG_OPTION item in a fragment that is not actually found in any of the currently available Kconfig files. Usually this reflects a use of data from an older kernel configuration where an option has been replaced, renamed, or removed.

See Example 9-1 for a commented example of auditing output.

Example of Auditing Output

Example 9-1 provides comments on some sample kernel configuration auditing screen output.
Example 9-1    Commented Kernel Fragment Auditing Output

$ make -C build linux.reconfig
make: Entering directory `prjbuildDir/build'
There were 2 instances of config options redefined within a single fragment.
The full list can be found in your workspace at:
    build/linux/wrs/cfg/build_name/fragment_duplicates.txt

Duplicate instances of options, whether across fragments or in the same fragment, will generate a warning. You can view the indicated fragment_duplicates.txt file to see the specific options.
There were 1 kernel config options redefined during processing this BSP.
These config options are defined in more than one config fragment.
The full list can be found in your workspace at:
    build/linux/wrs/cfg/build_name/redefinition.txt


This is much like the previous warning, only this time the duplicates that were detected occurred in different fragments. Whenever duplicate options are encountered, only the last instance is included in the final configuration file.
This BSP sets 3 invalid/obsolete kernel options.
These config options are not offered anywhere within this kernel.
The full list can be found in your workspace at:
    build/linux/wrs/cfg/build_name/invalid.cfg

You should look at the indicated invalid.cfg file to determine which options are not recognized. It may be you are using obsolete options. A mis-spelling of an option name may trigger this warning also. (A mis-spelling that is a syntax error, for example COFNIG_OPTION=y is ignored and unreported.)
This BSP sets 11 kernel options that are possibly non-hardware related.
The full list can be found in your workspace at:
    build/linux/wrs/cfg/build_name/specified_non_hdw.cfg

The non-hardware options are meant to be in the domain of the platform, not the BSP. The provided BSP options are found to be non-hardware-related and so they are reported here.
WARNING: There were 1 hardware options requested that do not have a corresponding
value present in the final ".config" file. This probably means you aren't getting
the config you wanted. The full list can be found in your workspace at:
    build/linux/wrs/cfg/build_name/mismatch.cfg

View the indicated mismatch.cfg file for the option(s) causing this message. An example of a mismatch is a case where you have requested CONFIG_OPTION=y and you get the message Actual value set: "". In this case the option is not used because it is not valid for the input you provided. Another example is a case where you have an option CONFIG_OPTION=m, but you have not enabled modules. (In this case, LKC would provide CONFIG_OPTION=y, assuming that was a valid option.)
Contents of the Audit Data Directory

Audit data is stored in the prjbuildDir/build/linux/wrs/cfg/build_name/ directory. The contents of this directory are refreshed for every linux.config or linux.reconfig. Table 9-1 describes the contents of the files that appear in the audit data directory.
Table 9-1    Description of Files in Audit Data Directory

File                            Description

all.kcf                         Alphabetical listing of all Kconfig files found in this kernel.
known_current.kcf               List of previously categorized Kconfig files present in the patched linux tree about to be used for compilation.
known.kcf                       List of Kconfig files for which the build system already has information on whether they are classified as hardware or not.
non-hardware.kcf                List of Kconfig files known to contain non-hardware related items.
hardware.kcf                    Kconfig files that are to be treated as containing hardware options.
unknown.kcf                     List of Kconfig files present in the about-to-be-used linux tree that are not known by the build system to be either hardware or non-hardware items.
all.cfg                         Alphabetical listing of all the CONFIG_ items found in this kernel.
always_hardware.cfg             CONFIG_ items that are to be treated as always hardware, regardless of what Kconfig file they are in.
always_nonhardware.cfg          As above, but non-hardware.
avail_hardware.cfg              All the options from all the hardware-related Kconfig files, less those options found in always_nonhardware.cfg.
specified.cfg                   List of the CONFIG_ items specified by the BSP.
specified_hdw.cfg               List of the CONFIG_ items specified by the BSP which are hardware (ideally this should be almost all of them).
specified_non_hdw.cfg           List of the CONFIG_ items specified by the BSP which are non-hardware (ideally this should be almost always empty).
fragment_duplicates.txt         Settings which are specified multiple times within a single fragment.
redefinition.txt                List of options that are set in one fragment and then re-set in another later on.
invalid.cfg                     Configuration options specified in the BSP that don't match any known valid option, that is, the item isn't in any Kconfig file.
BSP-kernel_type-kernel_version  A concatenation of all the file fragments. The file of the same name in prjbuildDir is a symlink to this file.
config.log                      The output of the LKC processing as it creates the final .config file.


9.4 Reconfiguring and Rebuilding the Kernel


Wind River Linux uses the same kernel configuration file, .config, and the same kernel configuration tools, such as menuconfig or xconfig, as do stock Linux kernels. After a reconfiguration, a new wrs_sbc85x0-standard-config-version is assembled so that you may examine an up-to-date kernel configuration (with a text editor, for instance) without descending into the build directory. The original wrs_sbc85x0-standard-config-version is retained with a .orig suffix. You can use a make tool or a Workbench tool to edit the kernel configuration file as described in Using GUI Tools for Kernel Modification, p.103, or make your changes in kernel config fragments as described in Adding a Kernel Fragment File in a Template, p.104, and Adding a Config Fragment in Your Project Build Directory, p.105.
CAUTION: When you use menuconfig, xconfig, or the Workbench kernel configuration tool, you are directly editing the .config file. The .config file is replaced by config file fragment processing, so if you regenerate the .config file from fragments using make linux.config or make linux.reconfig, you will lose any changes you made to the .config file with your GUI tool. To ensure your changes appear in reconfigurations, you should place them in config file fragments.

Using GUI Tools for Kernel Modification

The following example uses the console tool make linux.menuconfig in the prjbuildDir/build/ directory to reconfigure the SBC8560 kernel. You may also use the X window system tool make linux.xconfig or, if you are using Workbench, you can use the advanced features available with the Kernel Configuration tool in your platform project. Within a terminal window in sbc85x0/build, run make linux.menuconfig and navigate to the General Setup submenu as shown in Figure 9-1. Reconfigure the kernel to increase the printk ring buffer (LOG_BUF_SHIFT) to 16 by selecting the entry and then entering the value. Save your configuration, and exit.


Figure 9-1

Enabling Early Serial Port Debugging in an SBC8560 Kernel

To build the new kernel, within sbc85x0/build run make linux.rebuild. Do not run make dep, or make kernelimage.

Adding a Kernel Fragment File in a Template

You add kernel configuration file fragments in *.cfg files, which may contain any number of kernel configuration options. You specify and control one or more *.cfg files in a .scc file.
NOTE: In previous versions, the config files were named knl-base.cfg or knl-kernel_version.cfg, but you can now assign arbitrary names with the use of SCC files (see 13.5 Kernel Patching with scc, p.180).

For example, suppose you wanted to disable KGDB options for certain product configurations. You can create a template that contains the necessary files and then include the template when you configure the project. In the following example, a custom template is located at /home/user/templates/features/no-kgdb. The features/no-kgdb template contains the standard linux subdirectory for kernel modifications, and contains two files, no-kgdb.cfg and no-kgdb.scc. The contents of the files are as follows:

no-kgdb.cfg:
# CONFIG_KGDB is not set

no-kgdb.scc:
kconf non-hardware no-kgdb.cfg

The contents of no-kgdb.scc say to include the kernel configuration fragment file no-kgdb.cfg, which sets non-hardware options. (The scc file is discussed in more detail in 13.5 Kernel Patching with scc, p.180.)


To configure your project with these options, add the following to your configure command line:
--with-template-dir=/home/user/templates --with-template=features/no-kgdb

After configure is run, you can see the following at the end of the prjbuildDir/templates file:
...
default
default
features/no-kgdb

To configure your kernel, run make -C build linux.config (or linux.reconfig). In this example, the KGDB options will be turned off even if the configuration otherwise turns them on, because your custom template, features/no-kgdb, is processed last. For example, your default configuration may include the features/kgdb template which enables these options, but your template will disable them. You can see the end result of your kernel configuration in your kernel config files. There is a link to a board-kernel-config-version file in your prjbuildDir, for example, common_pc-standard-config-version, that contains the settings of the kernel configuration options found during the configuration process.
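For example, for a common_pc project built with the standard kernel (the file name will match your own board, kernel type, and kernel version), a quick check along these lines should show the option disabled:

$ grep CONFIG_KGDB common_pc-standard-config-version
# CONFIG_KGDB is not set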
NOTE: The syntax shown:
# CONFIG_KGDB is not set

is not a comment, as it may appear to be because of the initial # symbol. The option as shown is the correct LKC syntax for turning off an option. Do not use, for example, CONFIG_KGDB=no, which is incorrect. Also note that a space is required between the # and the C.

Adding a Config Fragment in Your Project Build Directory

Another way to make changes to the kernel configuration is to add kernel configuration fragments in your prjbuildDir. You must then create an scc file in the project build directory to source it. For example, create a file in prjbuildDir called log_buf.cfg that holds the option or options you want to include in your kernel configuration. This file will be the last kernel configuration fragment processed, even after the config fragments in any custom templates you add. For example, to modify the same option you modified with the previous make menuconfig example, you could do the following:
1. Create a prjbuildDir/log_buf.cfg file with the following contents:
CONFIG_LOG_BUF_SHIFT=17

2. Create a prjbuildDir/my_options.scc file with the following contents:


kconf non-hardware log_buf.cfg

3. Reconfigure your kernel:


$ make -C build linux.reconfig

You can verify that the option has been set by finding the entry for it in the informational wrs_sbc85x0-standard-config-version file, for example:
$ grep CONFIG_LOG_BUF wrs_sbc85x0-standard-config-version
CONFIG_LOG_BUF_SHIFT=17


9.4.1 Resetting the Original Kernel Configuration


The original kernel configuration file, saved within the project build directory with the suffix .orig, is not rewritten during subsequent reconfigurations. Therefore the original configuration can always be remade by first copying the .orig file to the standard, hidden kernel configuration file build/linux-version/.config, and then rebuilding the kernel as before, with make linux.rebuild.
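As a sketch, for the SBC8560 example used in this chapter (linux-version below stands for the actual kernel directory name in your build/ tree, which depends on your kernel version):

$ cd prjbuildDir
$ cp wrs_sbc85x0-standard-config-version.orig build/linux-version/.config
$ make -C build linux.rebuild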


10
Adding Packages
10.1 Introduction 107
10.2 Before Adding a Package 108
10.3 Adding a Package: rpmbuild with a Source RPM 109
10.4 Adding a Package: rpmbuild with a Classic Package 120
10.5 Adding a Package: the Classic Method 121
10.6 Removing a Package 121
10.7 Adding a Package to a Running Target 122

10.1 Introduction
Although Wind River Linux comes with a full suite of standard, small foot-print, and Carrier Grade Linux packages, you may add or remove packages as the need arises. There are two ways to add a package:

The rpmbuild method: This uses rpmbuild and a spec file, in concert with a simple makefile and the Wind River Linux system, to drive the cross-compilation and installation of package source code. Preferably, the source code will come packaged as a source RPM file (also called an SRPM file, and ending in the suffix .src.rpm). SRPMs from other Linux distributions such as Fedora typically come complete with pre-written spec files, and often with distribution-specific patches. If the source code is not packaged as an SRPM, you can use ordinary source code, but in this case you will have to write your own spec file.

The classic method: This method uses a makefile and the Wind River Linux build system to drive the cross-compilation and installation of package source code. The source code typically comes packaged as a tar archive file.


The classic method often requires writing elaborate makefiles. The rpmbuild method uses simplified and largely boilerplate makefiles in combination with spec files. This results in easier and faster package integration, and easier package maintenance.
NOTE: This chapter gives general directions for both rpmbuild and classic build methods. For examples of adding specific packages using different methods, refer to 22. Examples of Adding Packages.

Following Wind River Linux design practice, you should use the local custom layer directories packages and dist within the project build directory during the development stage of adding a package. (See 6.4.1 Workflow and the Local Custom Layer, p.74 for more on the local custom layer.) Once they are set up, you may move the packages and files to a more permanent custom layer (see 6.4 Creating Custom Layers, p.73).
NOTE: The Wind River Linux build system shares basic similarities with RPM package management for systems that are not designed for embedded cross-development, so familiarity with those procedures is helpful in understanding this chapter. For detailed information on rpmbuild, spec files, source RPMs and other concepts discussed in this chapter see, for example, RPM Guide, available at http://fedora.redhat.com/docs/drafts/rpm-guide-en/index.html, and Maximum RPM at http://www.rpm.org/max-rpm/.

In the following discussion, as throughout this guide, installDir refers to /home/user/WindRiver and prjbuildDir refers to your project build directory.

10.2 Before Adding a Package


Before you go looking for the source of the package you want to add to your project, you should determine whether the package already exists in the Wind River Linux distribution. For example, if you are building a glibc_std file system, it will not include all of the packages in installDir/wrlinux-3.0/layers/wrll-wrlinux/packages/. You should check that directory first to see if the package you want to add is already available and integrated into the development system. Packages in the Wind River Linux distribution are packaged either as RPMs containing source, or as compressed source (*.tar.gz or *.tar.bz2) files. Adding a package from the development system to your project is typically as simple as using the pkgname.addpkg make target. This updates the package list and makefiles and checks for package dependencies. Execute the following command in the prjbuildDir/build directory:
$ make pkgname.addpkg

to add the package to the build system.


10.3 Adding a Package: rpmbuild with a Source RPM


The rpmbuild method varies slightly depending on whether the source is an SRPM or just a standard archive file. This section discusses SRPMs; archive files are discussed in section 10.4 Adding a Package: rpmbuild with a Classic Package, p.120. The general procedure for adding SRPMs to the Wind River Linux build system is described below.

Preparing to Add an SRPM Package

Before downloading a new SRPM package, inspect its maintainer's web page, or any other source you can find, to make sure it will build on your host, and cross-build if necessary for your target. Determine your package dependencies, that is, which packages are required by your package for it to build and function properly. Check to see if all dependencies are present within the target board's package list by checking the contents of the prjbuildDir/pkglist file. If a dependency is not included in the pkglist file, check to see if it is a standard Wind River Linux package, by inspecting installDir/wrlinux-3.0/layers/wrll-wrlinux/packages/. If not, it must be added. All packages that are dependencies must be present, and listed in the pkglist file.

10.3.1 Adding a Third-Party SRPM Package


The following sections describe how to add third-party packages to your platform project.
Simplified Version

As of Wind River Linux 3.0, a more simplified version of the procedure described in this section is supported. The older way of adding an SRPM (see 10.3.2 Older Method of Adding SRPMs, p.114 for details) still works, but the new way saves some steps by copying the spec file from the SRPM and editing it directly, rather than creating an integration patch to patch it within the SRPM. The basic steps of the simplified procedure are:
1. Download the SRPM into the packages/ directory. (This procedure assumes you are doing this in your prjbuildDir or a custom layer.)
2. Place a Makefile in dist/package_name/. You can copy an existing makefile from a package in installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/package_name.
3. Edit the makefile as described in Necessary Makefile Contents, p.112.
4. Unpack the SRPM to extract the spec file, and put the spec file in the dist/package_name/ directory.
5. Edit the spec file as described in Necessary spec File Changes, p.113, including references to any patches you may be adding.
6. If you are adding custom patches, place them in dist/package_name/patches and edit the spec file to reference them.


You do not need to make a patches.list file or integration patch, which were required in previous versions of Wind River Linux.
NOTE: With this new way of evaluating spec files in any layer, you can override the spec files of packages in the installation, including replacing or augmenting patches. The spec file in a dist/package directory of the same name overrides the spec file in a lower level layer (for example, the installed SRPMs). This works in the same way as using templates of the same name to override lower-level templates, as described in Naming Your Templates, p.68.

Detailed Procedure for Adding Third-Party SRPMs

You must set up the infrastructure that will include the necessary files and directories to build the package each time. Perform the following sequence of operations for each new third-party package you want to add:
Step 1: Create and configure your project build directory.

Typically, you will perform a sequence of commands like this:


$ configure project_config_options

The file system you use determines which packages are included by default. You can look in pkglist in your project build directory to see which packages are configured into your project.
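For example (the board, kernel, and file system choices, and the package name, are placeholders; use your own configure options), you might configure the project and then check whether a package is already in the list:

$ configure --enable-board=common_pc --enable-kernel=standard --enable-rootfs=glibc_std
$ grep thttpd pkglist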
Step 2: Get the package you want to add.

Place the package you want to add in your local packages directory:
$ cp package packages/

Download a package from the net or elsewhere and place it in prjbuildDir/packages.


Step 3: Prepare the infrastructure.
$ mkdir dist/package

This creates the directory structure to hold your Makefile and spec file. If you will be adding any custom patches to patch the SRPM, create a patches subdirectory as well:
$ mkdir dist/package/patches

Step 4: Create the makefile.

You need to create a new makefile or modify an existing one and place it in your newly-created infrastructure:
$ edit dist/package/Makefile

Use your preferred editor to create and edit the makefile. You can start with another Makefile you have copied from one of the package directories in installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/, or use your editor to create one with the contents as described in Necessary Makefile Contents, p.112. With SRPMs, these are small makefiles, typically 10 lines or so.


Step 5: Add the package to the package list and to the build Makefiles.

The best way to do this is by using the pkgname.addpkg make target:


$ make -C build pkgname.addpkg

This adds pkgname to the pkglist file, adds any known dependencies of pkgname to pkglist, and regenerates the prjbuildDir/build/Makefile.* files to include pkgname.
Step 6: Unpack the package to extract the spec file.

Unpack the package and copy the spec file to your dist/package/ directory:
$ make -C build package.unpack
$ cp build/package-version/SPECS/package.spec dist/package/

Step 7: Edit the spec file.

Edit the prjbuildDir/dist/package/package.spec file as described in Necessary spec File Changes, p.113.
Step 8: Build the package

Make a clean build and build the package:


$ cd build
$ make package.distclean
$ make package

Resolve any errors with the dist/package/Makefile or dist/package/package.spec files until the build succeeds.
Step 9: (Optional) Add patches.

If you are custom patching the package, add your patches to the dist/package/patches directory and reference them in your dist/package/package.spec file. Repeat the package build until it builds correctly with your custom patches. If you get errors that require other packages to be built, add and build those packages first. When your package builds without error, you can install it in the file system as described in Install the Package in the File System, p.111.

Install the Package in the File System

Create the file system:


$ cd prjbuildDir
$ make fs

In some cases, you will find that your package needs one or more other packages when you try to include it in the file system. You will have to add those packages as well. When you are able to build the file system without error, your new package is included in the compressed file system (and in prjbuildDir/export/dist).


Create a Layer with Your Changes

A convenient way to preserve the additions and other modifications you make to a platform project is to create a layer that captures the changes. You can then backup that layer someplace and use it at any time in combination with your original configure command to create your new build environment. A simple way to create a layer that will preserve the changes you make when you add packages is to use the make export-layer command in your project build directory:
$ cd prjbuildDir
$ make export-layer

This creates a layer directory in prjbuildDir/export/export-layer/projectname.date and also an archive file of the layer prjbuildDir/export/export-layer/projectname.date.tar. Relocate the layer as desired and include it with your configure command (--with-layer=path_to_layer) to recreate your current configuration. Be sure that your added package(s) are included in a pkglist.add file in the layer, for example in templates/default/pkglist.add. For examples of adding SRPMs, refer to 22. Examples of Adding Packages.

Necessary Makefile Contents

The makefile for a package using the rpmbuild method is simple and largely boilerplate. You can usually copy a Makefile from an SRPM package in the distribution. In such cases you usually only need to change the package name, version number, and MD5 sum.

The following variables must be defined:


PACKAGES+=

This is the name that extends pkglist with this package.


pkg_RPM_DEFAULT

Lists all of the produced binary packages that should be installed on the target file system (usually excludes development packages.)
pkg_RPM_ALL

Lists all of the packages produced (does not inherit from any other list.) This is used as a validation that the package is being produced properly. If this (and RPM_IGNORE) do not match what RPM tells the build system will be produced, a warning message is generated telling you that you should update your makefile.
pkg_TYPE=

For an SRPM package, this must be set to SRPM. For ordinary compressed source files, it must be set to spec.
pkg_VERSION=

Version-release of the src.rpm package.


pkg_ARCHIVE=

Complete file name of the src.rpm package.


pkg_MD5SUM=

The MD5 checksum of the src.rpm package.


pkg_UPSTREAM=

The download site of the src.rpm package.

The following variables may be defined, if necessary:


pkg_RPM_NAME=

Necessary if the produced RPM name is different from pkg_NAME, or if more than one binary is produced.
pkg_DEPENDS=

A list of dependencies which must be built before this package is built.


pkg_RPM_DEVEL

Lists all of the development packages. These plus the pkg_RPM_DEFAULT list are installed into the sysroot for development purposes. This is only required if the package produces development RPM's, that is, binary RPMs that contain information that must be installed into the sysroot for other programs to build properly.
NOTE: The sysroot is populated by installing both pkg_RPM_DEFAULT and pkg_RPM_DEVEL.
pkg_RPM_IGNORE

In a few cases, the RPM program reports it will generate a package that it doesn't actually generate. This is a way to capture those situations.
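Putting the required variables together, a minimal sketch of dist/foo/Makefile for a hypothetical SRPM package foo might look like the following (every name, version, checksum, and URL here is a placeholder, not a value from the distribution):

PACKAGES += foo

foo_TYPE = SRPM
foo_VERSION = 1.2.3-4
foo_ARCHIVE = foo-$(foo_VERSION).src.rpm
foo_MD5SUM = 0123456789abcdef0123456789abcdef
foo_UPSTREAM = http://downloads.example.org/foo/
foo_RPM_DEFAULT = foo
foo_RPM_ALL = foo foo-devel
foo_RPM_DEVEL = foo-devel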

Necessary spec File Changes

With source RPMs, you will need to modify a packagename.spec file. Few spec files will need every change listed below, because the lines that need replacement or deletion will not be present. Some spec files will only require the change numbered 1 and, for your records, changes 7 and 8. (You can also use Lua scripting as described in Lua Scripting in Spec Files, p.114.)
1. Immediately after every %build and %install section header, add the RPM macro %configure_target.
2. Remove any install scripts (scriptlets), such as %postun, %preun, %pretrans, %posttrans, %pre, %post, %triggerpostun, %triggerun, %triggerin, %trigger and %verifyscript.
3. If chkconfig is used, replace it with the macro %{_chkconfig_sh initscript} at the end of %install.
4. If the package uses %ifarch, replace it with %if_arch. (%ifarch still works on a CPU basis, but %if_arch works on a CPU family basis.)
5. Inspect any BuildRequires: and BuildPreReq: lines. If packages not supplied by Wind River Linux are listed, comment the lines out with #.
6. If the package's configure cannot use the system-wide config.cache, override it by adding %define config_cache config_cache immediately after:

   %build
   %configure_target

7. If you desire, add a change indicator (such as -WR) to the Release line.
8. If you desire, add an entry to the changelog.
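To illustrate change 1 (and, where needed, change 6) in a fragment of a hypothetical spec file; the %configure, make, and make install lines are ordinary spec file content, not Wind River requirements:

%build
%configure_target
%define config_cache config_cache
%configure
make

%install
%configure_target
rm -rf $RPM_BUILD_ROOT
make install DESTDIR=$RPM_BUILD_ROOT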


Lua Scripting in Spec Files

Lua is a scripting language with an interpreter built into rpm. This allows you to write %pre and %post lua scripts to be run at pre- and post-installation. (Note that bash scripts are not supported at installation time.) The wrs library is included in the lua interpreter from Wind River. It consists of three functions, wrs.groupadd, wrs.useradd, and wrs.chkconfig. The following provides an example of a post-install section that creates a group and user named named.
%post -p <lua>
wrs.groupadd('-g 25 named')
wrs.useradd('-c "Named" -u 25 -g named -s /sbin/nologin -r -d /var/named named')

Each function takes one argument, which is the string you would enter at the shell prompt if you were running the Linux command of the same name. Spec file macros are expanded within the string, so the following works as expected.
%pre -p <lua>
wrs.groupadd('-g %{uid} -r %{gname}')
wrs.useradd('-u %{uid} -r -s /sbin/nologin -d /var/lib/heartbeat/cores/hacluster -M -c "heartbeat user" -g %{gname} %{uname}')

As can be seen from the path names, when the lua script executes, the "root" directory is the root of the target file system. The base, table, io, string, debug, loadlib, posix, rex, and rpm libraries are also built into the lua interpreter. Their use, and general lua programming, is not covered here.
Additional Information

For more information on the Lua scripting language, see http://www.lua.org.

10.3.2 Older Method of Adding SRPMs


The following provides details on the older (pre-version 3.0) method of adding SRPMs to Wind River Linux. This method is still valid, and the creation of your basic directory structure, makefiles, and spec files must still be done as described for the new method. If you use the new method described in Simplified Version, p.109 however, you do not need to create the integration patch, or a patches.list file.


Figure 10-1    Overview of Adding a Package

Create Local Layer Package Environment:
    configure options
    make
    cp/wget package to packages/
    mkdir -p dist/package/patches
    edit dist/package/Makefile
    make -C build pkgname.addpkg

Create Patch:
    Create patch with quilt or manually

Build with Patch:
    edit ../../dist/package/patches.list
    make package

Install in File System:
    make
    make export-layer

The following sections describe each of these steps.

Create the Local Layer Package Environment

You must set up the infrastructure that will include your new patch and the necessary files and directories to build it each time. Perform the following sequence of operations for each new third-party patch you want to add:
Step 1: Create and configure your project build directory.

Typically, you will perform a sequence of commands like this:


$ configure project_config_options

The file system you use determines which packages are included by default. You can look in pkglist in your project build directory to see which packages are configured into your project.
Step 2: Get the package you want to add.

Place the package you want to add in your local packages directory:
$ cp package packages/

Download a package from the net or elsewhere and place it in prjbuildDir/packages.


Step 3: Prepare the infrastructure.
$ mkdir -p dist/package/patches

This creates the directory structure to hold your Makefile and patches.


Step 4: Create the makefile.

You need to create a new makefile or modify an existing one and place it in your newly-created infrastructure:
$ edit dist/package/Makefile

Use your preferred editor to create and edit the makefile. You can start with another Makefile you have copied from one of the package directories in installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/, or use your editor to create one with the contents as described in Necessary Makefile Contents, p.112. With SRPMs, these are small makefiles, typically 10 lines or so.
Step 5: Add the package to the package list.

The best way to do this is by using the pkgname.addpkg make target:


$ make -C build pkgname.addpkg

This adds pkgname to the pkglist file, and also adds any known dependencies of pkgname to pkglist. At this point, you can create a patch so that the package will be properly built each time you build the file system.

Create the Patch

You can use quilt to help you create and manage your patches, or you can create them manually by diffing your changes. Both methods are described here and it is a matter of personal preference which one you use. Use cases in 22.2 Adding SRPM Packages, p.256 present both methods. Figure 10-2 summarizes the two methods and the following sections provide details.


Figure 10-2    Overview of Creating a Patch

Creating a Patch with Quilt:
    Initialize your quilt environment
    cd build
    make package.patch
    cd fullPackageName
    quilt new package-wr-integration.patch
    quilt edit SPECS/package.spec
    quilt refresh
    cp wrlinux_quilt_patches/package-wr-integration.patch \
        ../../dist/package/patches

Creating a Patch Manually:
    cd build
    make package.unpack
    mv fullPackageName fullPackageName.ori
    make package.unpack
    edit fullPackageName/SPECS/package.spec
    diff -Nur fullPackageName.ori/SPECS/package.spec \
        fullPackageName/SPECS/package.spec > \
        ../dist/package/patches/package-wr-integration.patch

Create the Patch with quilt

The following steps describe how to use quilt to create a patch that integrates the SRPM package into the cross-build system.
Step 1: Initialize your quilt environment.

You can initialize the following environment variables as shown before starting to use quilt, or just add them to your shell startup file, for example .bashrc or .cshrc.
export QUILT_PATCHES=wrlinux_quilt_patches
export QUILT_PC=.pc
export WRLINUX_USE_QUILT=yes
export PATH=$PATH:prjbuildDir/host-cross/bin

Step 2: Unpack and patch the package.

Unpack the package source and necessary files for creating the patch in prjbuildDir/build:
$ cd build
$ make package.patch

Step 3: Change directory to the package directory and start the patch.

You now have a subdirectory named with the full package name including version number and suffix. cd into it and start a new patch:
$ cd full_package_name
$ quilt new package-wr-integration.patch


Step 4: Edit the spec file and refresh quilt.

Use quilt to make the changes to the spec file that will be the changes that the patch applies:
$ quilt edit SPECS/package.spec

At a minimum, you must add the following line immediately after the %build and %install lines:
%configure_target

Refer to Figure 10-2 for details.


NOTE: Wind River Specific RPM Macros: In order to assist with building RPM Spec files that are capable of easily cross compiling software, Wind River has added a number of macros to the rpmbuild portion of the build system. For a full list of macros specific to the Wind River environment see: prjbuildDir/scripts/rpm-macros.

Refresh quilt as follows:


$ quilt refresh

Step 5: Add the patch to the package build infrastructure.

Copy the patch to the patches directory:


$ cp wrlinux_quilt_patches/package-wr-integration.patch \
      ../../dist/package/patches

Add the patch to the list of patches:


$ edit ../../dist/package/patches.list

patches.list should contain the one line:


package-wr-integration.patch

For example:
$ cat ../../dist/thttpd/patches.list
thttpd-wr-integration.patch

You can now build the package with the new patch as described in Build with the Patch, p.119.
Create the Patch Manually

An alternative to using quilt to create the patch is described in the following steps.
Step 1: Unpack the package source.

Unpack the package source in prjbuildDir/build:


$ cd build
$ make package.unpack

Step 2: Save the original source.

Save your original package source directory so that you can perform a diff against it:
$ mv fullPackageName fullPackageName.ori


Step 3: Unpack the package again.

Unpack the source again, this time to get the source that you will modify:
$ make package.unpack

Step 4: Edit the spec file.

Make the changes to the spec file that will be the changes that the patch applies:
$ edit fullPackageName/SPECS/package.spec

At a minimum, you must add the following line immediately after the %build and %install lines:
%configure_target

Refer to Figure 10-2 for details.


Step 5: Create the patch, place it, and include it in the patch list.

Perform a diff command, with options -Nur, to create a patch. Redirect the diff output to create the patch in the patches directory:
$ diff -Nur fullPackageName.ori/SPECS/package.spec \
      fullPackageName/SPECS/package.spec > \
      ../dist/package/patches/package-wr-integration.patch

Add the patch to the list of patches:


$ edit ../../dist/package/patches.list

patches.list should contain the one line:


package-wr-integration.patch

For example:
$ cat ../../dist/thttpd/patches.list
thttpd-wr-integration.patch

You can now build the package with the new patch as described next.

Build with the Patch

You can now build the package and it will include the patch each time:
$ cd prjbuildDir/build
$ make package.distclean
$ make package

If you get errors, you need to repeat the patching cycle. When you edit the spec file, be sure to follow the directions in Necessary spec File Changes, p.113.


10.4 Adding a Package: rpmbuild with a Classic Package


Although a standard source archive file (usually a .bz2 or .tar.gz compressed tar archive) does not usually come with a pre-written spec file, you can still add the package to the system using the rpmbuild method. The major difference is that you will have to write the spec file yourself.

Preparing to Add a Standard Source Archive with rpmbuild

Before downloading a new package, inspect its maintainer's web page, or any other source you can find, to make sure it will build on your host, and cross-build if necessary for your target. Then ascertain its dependencies, and check if they are present within the target board's package list by checking the contents of the pkglist file. If a dependency is not included in the pkglist file, check to see if it is a standard Wind River Linux package, by inspecting installDir/wrlinux-3.0/layers/wrll-wrlinux/packages/. If not, it must be added. All dependencies must be present and in the pkglist file.

Adding a Standard Source Archive with rpmbuild

Follow this sequence to add a classic package with the rpmbuild method:
1. Put the compressed source file in prjbuildDir/packages.
2. Create the Makefile and patch directories within prjbuildDir/dist/packagename.
3. Create the package's Makefile and enter the MD5 checksum.
4. Add the package to the pkglist file with make -C build pkgname.addpkg, and remove any generated makefiles.
5. Write the spec file. (See Necessary spec File Changes, p.113 for details.)
6. Try to build the package, testing for proper compilation, and adding makefile and source patches as needed.
7. Add the package to the file system.
8. Add the package's RPM to the development environment.

This method, in general terms, applies equally to any package you wish to add or upgrade using the rpmbuild method with a standard source archive file. For examples of adding classic packages with the rpmbuild method, refer to 22. Examples of Adding Packages.


10.5 Adding a Package: the Classic Method


You can also add classic packages to the build system without using the rpmbuild method. This does not require you to create a spec file, but does require a more complex makefile.

Preparing to Add a Source Archive with the Classic Method

Before downloading a new package, inspect its maintainer's web page, or any other source you can find, to make sure it will build on your host, and cross-build if necessary for your target. Then ascertain its dependencies, and check if they are present within the target board's package list by checking the contents of the pkglist file. If a dependency is not included in the pkglist file, check to see if it is a standard Wind River Linux package by inspecting installDir/wrlinux-3.0/layers/wrll-wrlinux/packages/. If not, it must be added. All dependencies must be present and in the pkglist file.

Adding a Source Archive with the Classic Method

Follow this sequence to add a classic package with the classic method:

1. Install the compressed source package into packages.

2. Create the makefile and patch directories within dist/packagename.

3. Create the package's Makefile and MD5 checksum.

4. Add the package to the pkglist file with make -C build pkgname.addpkg, and remove any generated makefiles.

5. Unpack and build the package, testing for proper compilation, adding patches as needed, and including the name of each patch within the patches.list file in dist/packagename.

6. Add the package to the file system.

7. Add the package's RPM to the development environment.

This method, in general terms, applies equally to any package you wish to add or upgrade using the classic method. For examples of adding classic packages with the classic method, refer to 22. Examples of Adding Packages, p.255.

10.6 Removing a Package


You may want to remove a package because it conflicts with a more important package, or because it is simply not needed and is wasting space. Your goal, for initial testing, will probably be the removal of the package from the project build directory. This can be done by removing it from the pkglist, and running make reconfig, then repeating the make build-all.


A more permanent solution is to add it to a pkglist.remove file, within a custom template.
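As a minimal sketch of both approaches, assuming a hypothetical package named mypackage and a hypothetical custom template directory myTemplateDir (the exact location of the pkglist file in your project build directory may differ):

$ edit pkglist
(delete the line that names mypackage)
$ make reconfig
$ make build-all

$ echo "mypackage" >> myTemplateDir/pkglist.remove
(configure future projects with --with-template pointing at the custom template)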

10.7 Adding a Package to a Running Target


You may add a package directly to a target that is up and running, either using NFS or stand-alone. First, either obtain an RPM of the package, or make one by adding the package to the build system and building it (this automatically creates an RPM). Then copy the new RPM to a temporary directory on the target's file system, and install the RPM on the running target. This works for all targets except those built with either Glibc Small or uClibc file systems. The file system must include the RPM database.

In certain situations the RPM install can fail with an incorrect architecture message. If you suspect that the message itself is in error, try using the --ignorearch option:
$ rpm -ivh --ignorearch packageName.rpm
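In outline, the complete sequence might look like the following sketch. The architecture directory, package name, and target address are hypothetical, and the example assumes the target is reachable over the network. On the host:

$ scp export/RPMS/targetArch/mypackage-1.0-1.targetArch.rpm root@192.168.1.50:/tmp

Then, on the running target:

# rpm -ivh /tmp/mypackage-1.0-1.targetArch.rpm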

Refer to 22.6 Adding an RPM Package to a Running Target, p.270 for an example of adding an RPM.


11
Configuring PREEMPT_RT
11.1 Introduction 123
11.2 Enabling Real Time 123
11.3 Application Programming Considerations for PREEMPT_RT 124
11.4 Configuring the Preemption Level 124
11.5 Interrupt Service Routine (ISR) Payload Execution Context 126
11.6 Run-time Scheduler Debug Instrumentation 128

11.1 Introduction
Wind River Linux provides a conditional real-time kernel profile, preempt_rt, for certain board and file system combinations. The RT patch series is currently maintained by Steven Rostedt (see http://rt.wiki.kernel.org/index.php/Main_Page). The default scheduler for preempt_rt is CFS, which is described in F. Control Groups (cgroups).
NOTE: Conditional real-time support is not available for all boards. For further information on validated boards, refer to the BSP-kernel-filesystem matrix available on Wind River Online Support. Wind River Linux also supports guaranteed real-time with the Real-Time Core product. For details on Real-Time Core, contact your Wind River service representative.

11.2 Enabling Real Time


To enable the pre-emptible real-time feature, configure your project with the preempt_rt kernel profile. This is the configure command option --enable-kernel=preempt_rt.


For example, to configure a common PC board with a standard file system and conditional real-time, enter:
$ configure --enable-board=common_pc \
    --enable-kernel=preempt_rt --enable-rootfs=glibc_std

11.3 Application Programming Considerations for PREEMPT_RT


Applications running on a PREEMPT_RT or otherwise configured PREEMPT_HARDIRQS or PREEMPT_SOFTIRQS kernel need to be aware that in some cases they may be competing with kernel services running in scheduled task context. Various legacy test suites exercising privileged real-time scheduling policies at high priorities have also been found to fail, and in some cases have caused system lockup due to the changed scheduling dynamics in the kernel. These conditions are a result of kernel code which had been running in hard-exception context now running in task-scheduled context.

The cause of this issue is the ability of a privileged application or test task to elevate its scheduling priority above system daemons. The potential exists for such a task to halt system scheduling if it does not relinquish the CPU.

The work-around is to assure that system daemons schedule with a priority greater than any application task. This may be accomplished either by a chrt of the system daemons above the expected priority range of application usage, or by constraining the application to use priorities below that of system daemons.
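As an illustration of the first approach, the following sketch raises two system daemons to a high real-time priority with the chrt utility. The daemon names and the priority value 90 are hypothetical; choose a value above the range your application tasks will use, and note that chrt and pidof must be present on the target:

# chrt -f -p 90 `pidof syslogd`
# chrt -f -p 90 `pidof klogd`

Application tasks would then be constrained to real-time priorities below 90.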

11.4 Configuring the Preemption Level


You may configure the real-time kernel to run in one of four levels of increasingly aggressive preemption behavior. You may use Workbench, or the normal command line or GUI configuration utilities. The configuration options are within Processor type and features, and are shown as displayed by the Workbench Kernel Configuration tool in Figure 11-1. You can also use make linux.menuconfig or other tools from the command line.
NOTE: For details on which kernel features are compatible with preempt_rt, contact Wind River support.

Details on each option follow, presented in the order of least to most preemption.


Figure 11-1 Real-time Kernel Preemption Modes, with Complete Preemption Enabled

No Forced Preemption (Server)

The text kernel configuration entry is PREEMPT_NONE. This is the traditional Linux preemption model geared towards throughput. It will provide reasonable overall response latencies but there are no guarantees and occasional long delays are possible. This configuration will maximize the raw processing throughput of the kernel irrespective of scheduling latencies.

Voluntary Kernel Preemption (Desktop)

The text configuration entry is PREEMPT_VOLUNTARY. This configuration reduces the latency of the kernel by adding more explicit preemption points to the kernel code. The new preemption points break long non-preemptive kernel paths, minimizing rescheduling latency and providing faster application reactions, at the cost of slightly lower throughput. This offers faster reaction to interactive events by enabling a low priority process to voluntarily preempt itself during a system call. Applications run more smoothly even when the system is under load. A desktop system is a typical candidate for this configuration.

Preemptible Kernel (Low-latency Desktop)

This configuration applies to embedded systems with latency requirements in the milliseconds range.


The text configuration entry is PREEMPT_DESKTOP. This configuration further reduces kernel latency by allowing all kernel code that is not executing in a critical section to be preemptible. This offers immediate reaction to events. A low priority process can be preempted involuntarily even during syscall execution. This is similar to PREEMPT_VOLUNTARY, but allows preemption anywhere outside of a critical (locked) code path. Applications run more smoothly even when the system is under load, at the cost of slightly lower throughput and a slight run-time overhead to kernel code. (According to profiles when this mode is selected, even during kernel-intense workloads the system is in an immediately preemptible state more than 50% of the time.)

Complete Preemption (Real-Time)

This configuration applies to time-response critical embedded systems, with guaranteed latency requirements of 100 usecs or lower. The text configuration entry is PREEMPT_RT. This configuration further reduces the kernel latency by replacing virtually every kernel spinlock with preemptible (blocking) mutexes, and allowing all but the most critical kernel code to be involuntarily preemptible. The remaining low-level, non-preemptible code paths are short and have a deterministic latency of a few tens of microseconds, depending on the hardware. This enables applications to run smoothly irrespective of system load, at the cost of lower throughput and run-time overhead to kernel code. Testing indicates that with this mode selected, a system can be in an immediately preemptible state more than 95% of the time, even during kernel-intense workloads.

11.5 Interrupt Service Routine (ISR) Payload Execution Context


Historically, ISRs are executed in machine exception context as asynchronous unschedulable events with the highest system priority processing. The three kernel configuration options below allow ISR payloads to be run in schedulable task context, competing for CPU time among all other tasks, based upon scheduling policy and priority. This allows the system designer to determine application-specific scheduling parameters for interrupt payload processing.


Selecting PREEMPT_RT (complete preemption), automatically enables these configuration options. The migration of ISR payloads to task scheduled context is required for the locking (mutex) model. For other preemption models these configuration options are elective, and allow additional control of offloading interrupt processing from exception context to preemptive task context.
NOTE: Selection of PREEMPT_HARDIRQS and PREEMPT_SOFTIRQS, either directly or with selection of PREEMPT_RT, requires device drivers and other sources of hardware interrupts to comply with the changed rules in effect for this operational mode, specifically through the use of standard and published interrupt API primitives. Attempts to control CPU interrupt state through other means may violate assumptions in the code, cause assertions to be generated, or cause the kernel to panic. For this reason, boards which are known to function in this model are listed in the kernel feature matrix available at Wind River Online Support.

Thread Softirqs

The text configuration entry is: PREEMPT_SOFTIRQS. This option reduces the latency of the kernel by threading soft interrupts. This means that all softirqs will execute in the context of ksoftirqd. While this benefits latency, it can also reduce performance due to additional task context switching. The threading of softirqs can also be controlled using the /proc/sys/kernel/softirq_preemption run-time switch and the softirq-preempt=0/1 boot-time option.
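For example, on a target built with this option (a sketch only):

# echo 1 > /proc/sys/kernel/softirq_preemption
# echo 0 > /proc/sys/kernel/softirq_preemption

The first command enables threaded softirqs at run time and the second disables them again; softirq-preempt=0 on the kernel command line disables them from boot.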
NOTE: You will only see the *irq_preemption files if you have built a preempt-rt kernel but do not have CONFIG_PREEMPT_RT set.

Thread Hardirqs

The text configuration entry is: PREEMPT_HARDIRQS. This option reduces the latency of the kernel by threading hard irqs. This means that all (or selected) irqs will run in their own kernel thread context. While this helps latency, this feature can also reduce performance due to additional task context switching. The threading of hard irqs can also be controlled using the /proc/sys/kernel/hardirq_preemption run-time switch and the hardirq-preempt=0/1 boot-time option. Per-irq threading can be enabled and disabled using the /proc/irq/irqNumber/threaded run-time switch.
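For example, on a target built with this option, and using IRQ 16 purely as an illustrative number:

# echo 1 > /proc/sys/kernel/hardirq_preemption
# echo 0 > /proc/irq/16/threaded

The first command threads hard irqs at run time; the second returns IRQ 16 alone to exception context. The hardirq-preempt=0/1 boot-time option gives the same global control from the kernel command line.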

Preemptible RCU

The text configuration entry is: PREEMPT_RCU. This option reduces the latency of the kernel by making certain RCU sections preemptible. Normally RCU code is non-preemptible. If this option is selected, read-only RCU sections become preemptible. This helps latency, but may expose bugs due to now-naive assumptions about each RCU read-side critical section remaining on a given CPU through its execution.


11.6 Run-time Scheduler Debug Instrumentation


The following kernel configuration file options are in menuconfig or xconfig under Kernel hacking.

Debug preemptible kernel

The text configuration entry is CONFIG_DEBUG_PREEMPT. Enables the kernel to detect preemption count underflows, track critical section entries, and emit debug assertions should an illegal sleep attempt occur. Unsafe use of smp_processor_id() is also detected.

Wakeup latency histogram

The text configuration entry is CONFIG_WAKEUP_LATENCY_HIST. Logs all the wakeup latency timing to a histogram bucket, and factors out printk produced by wakeup latency timing.

Non-preemptible critical section latency timing

The text configuration entry is CONFIG_PREEMPT_TRACER. Measures the time spent in preemption disabled critical sections. Time units are in microseconds. The default measurement method is a maximum search, which is disabled by default and can be started during run-time by entering:
# echo 1 > /proc/sys/kernel/trace_use_raw_cycles
# echo 1 > /proc/sys/kernel/mcount_enabled
# echo 1 > /proc/sys/kernel/trace_enabled
# echo 0 > /proc/sys/kernel/preempt_max_latency

Note that kernel size and overhead increase with this option enabled. This option and the IRQSOFF_TRACER timing option, below, can be used together or separately.

Interrupts-off critical section latency timing

The text configuration entry is CONFIG_IRQSOFF_TRACER. Measures the time spent in interrupt disabled critical sections. Time units are in microseconds. The default measurement method is a maximum search, which is disabled by default and can be started during run-time using:
# echo 0 > /debugfs/tracing/tracing_max_latency

Note that kernel size and overhead increase with this option enabled. This option and the CONFIG_PREEMPT_TRACER option can be used together or separately. This is a default kernel option, and not specific to, or added by, the PREEMPT_RT patches.


RT Mutex Integrity Checker

When PREEMPT_RT is configured, most spinlocks and semaphores are converted into mutexes. There still exist true spin locks and older style semaphores. There are places in the kernel that pass the lock by pointer and typecast it back. This can circumvent the compiler conversions. This option will add a magic number to all converted locks and check to make sure the lock is appropriate for the function being used.


12
Configuring Scalable Features
12.1 Introduction 131
12.2 BusyBox 131
12.3 Static Link Option 133
12.4 Library Optimization Option 134
12.5 Reducing Kernel Boot Time 135
12.6 Analyzing and Optimizing Boot Time 138
12.7 Analyzing and Optimizing Runtime Footprint 152

12.1 Introduction
Many features that give Wind River Linux a small footprint, such as BusyBox, Static Link, and library optimization, are themselves scalable. This chapter continues directory conventions used in previous chapters: /home/user/WindRiver is referred to as installDir. The development environment consists primarily of the contents of installDir/wrlinux-3.0. The build environment is contained within the project build directory, which is under /home/user/workdir.

12.2 BusyBox
BusyBox merges tiny versions of standard Linux utilities into a single small executable. These utilities include a shell, compression utilities, a DHCP server, login utilities, archiving utilities like tar and rpm, core utilities like cat, df and ls, networking utilities like ping and tftp, system administration utilities like mount and more, and process utilities like free, ps, and kill.


These utilities have reduced functionality compared to their standard Linux counterparts, but they also have a much smaller footprint, and merging them into a single executable results in a smaller footprint still.

Configuring BusyBox

You may add or remove commands supported by the BusyBox executable in much the same way as you configure the Linux kernel. By removing commands you do not intend to use, you reduce the executable's size even further.
NOTE: It is not necessary to make the file system before configuring BusyBox. The initial make busybox.menuconfig command extracts the BusyBox source.

Within the build directory, enter:
$ make busybox.menuconfig

The resulting configuration utility is shown in Figure 12-1.


NOTE: If the BusyBox menuconfig program does not appear optimally within a standard console, run it within a terminal.
Figure 12-1 BusyBox Configuration

The BusyBox menuconfig program functions in exactly the same way as the kernel menuconfig. You can access help for each command, and discard or save your changes. After making your changes, run make busybox to rebuild BusyBox. Once you make busybox, you may check the busybox.links file within build/busybox-version, to confirm that your changes were made.
NOTE: If you are using a RAM or flash file system, you will have to remake it with the make boot-image command.
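Putting these steps together, a typical reconfiguration pass from the build directory might look like the following sketch; the grep pattern is illustrative only, and make boot-image is needed only if you boot from a RAM or flash image:

$ make busybox.menuconfig
$ make busybox
$ grep telnet build/busybox-version/busybox.links
$ make fs
$ make boot-image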


Configuring Busybox with a Custom Layer

The following example shows how you can create a custom busybox configuration and save it to a layer. 1. Configure a project, for example:
$ configure --enable-kernel=small --enable-rootfs=glibc_small \
    --enable-board=arm_versatile_926ejs

2. Configure busybox:

$ make -C build busybox.menuconfig

Set and unset the options you want. The resulting configuration will be saved in a .config file.

3. Build busybox with your new configuration:
$ make -C build busybox

4. Test your new configuration:

$ make fs

Deploy your kernel and file system on your hardware or in emulation. Once you have verified that you have the configuration you want, you can proceed to save it to a layer.

5. Save your configuration file and your new busybox .rpm files to a layer:
$ cp build/busybox-1.4.1/.config \
    ~/layers/busybox/templates/feature/my_busybox/busybox/config

Note that the file name in the layer is config, not .config.
$ cp export/RPMS/armv5tel_vfp/busybox-*.rpm \
    /home/user/layers/busybox/RPMS/glibc_small/armv5tel_vfp/

6. You can now use the layer to recreate the busybox configuration, for example:
$ mkdir new_project
$ cd new_project
$ configure --enable-kernel=small \
    --enable-rootfs=glibc_small \
    --enable-board=arm_versatile_926ejs \
    --with-layer=/home/user/layers/busybox/

Now, when you build your file system (make fs), it will build with your custom busybox configuration.

12.3 Static Link Option


Static linking is available with the uclibc_small file system only. Enable static linking with the --enable-scalable=staticlink option of configure. This statically links libraries to their binary executables, making it unnecessary to load the standard libraries, including Glibc, onto the target. Static linking offers considerable savings in the size of the file system, as long as very few executables are included. With BusyBox alone, for instance, this feature is well worth enabling if a small footprint is a priority. However, the more


applications you add, the more you duplicate standard routines. At a certain point (the exact point will depend on your setup), static linking will take up more space than dynamic linking.

Static Link Implementation

You must configure the project build directory with the staticlink option. An example configure command is:
$ configure \
    --enable-board=arm_versatile_926ejs \
    --enable-kernel=small \
    --enable-rootfs=uclibc_small+debug \
    --enable-bootimage=flash \
    --enable-scalable=staticlink

The next step is to perform a make build-all. Because static linking is only effective with very small numbers of executables, you should limit the number of applications included in the run-time system. An effective way of doing that is to use the PACKAGES_IN_FILESYSTEM environment variable when performing a make build-all. This environment variable allows you to compile only the applications you wish to install into the file system. The environment variable precedes the make build-all command. As an example, if you wished to install only the BusyBox application into your file system, your make build-all command (entered as usual within the project build directory) would be:
$ PACKAGES_IN_FILESYSTEM="setup filesystem busybox" make build-all

NOTE: The setup and filesystem applications are not optional; the linux application is also required if modules are enabled.

12.4 Library Optimization Option


Library optimization works only with glibc_small file systems. This feature optimizes Glibc and libm by removing library functions unneeded by the applications installed into the run-time file system. Library optimization also rebuilds libraries to relink them with only the object files necessary for chosen applications. You enable library optimization with the --enable-scalable=mklibs option in configure. Like static linking, library optimization offers the greatest savings in run-time file system size when very few applications are included. Although library optimization may not offer as great initial savings in size as static linking, the savings should not tail off quite so rapidly as more applications are added. As with static linking, the savings realized will depend on the run-time file system.


Implementing Library Optimization

Configure your project build directory with the --enable-scalable=mklibs option. An example configure command is:
$ configure \
    --enable-board=arm_versatile_926ejs \
    --enable-kernel=small \
    --enable-rootfs=glibc_small+debug \
    --enable-bootimage=flash \
    --enable-scalable=mklibs

The next step is to perform a make build-all. The resulting libraries should be smaller than the ones you'd get without the --enable-scalable option.
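To gauge the effect, you might compare the size of the exported root file system, or of the C library itself, built with and without the option. The paths below are illustrative and assume the export/dist staging directory used elsewhere in this guide:

$ du -sh export/dist
$ ls -l export/dist/lib/libc*

Running the same commands in a project configured without --enable-scalable=mklibs shows the difference.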

12.5 Reducing Kernel Boot Time


There are too many possible configurations of Wind River Linux to specify where your specific configuration might be able to gain system startup time, but this section discusses some common areas that may require attention. Refer to 12.6 Analyzing and Optimizing Boot Time, p.138 for details on analyzing how boot time is spent on your system.

12.5.1 An Overview of the Boot Process


The time it takes to boot the kernel is only part of the time it takes to go from power-on to application start-up. The overall boot sequence looks like this:

1. Power-on reset: The CPU does not begin operations until the power and the system clocks have stabilized. This can take from tens of milliseconds to seconds.

2. Power-on self-test: Depending on application and previous state, this can take from hundreds of milliseconds to seconds. For example, you might perform full POST diagnostics on a cold boot, but minimal or no diagnostics on a warm boot.

3. Bootloader initialization: This can include additional self-tests, device initialization, and various activities on the kernel to, for example, choose a specific kernel, load it from a device into RAM, configure the boot command, and transfer control to the kernel.

4. Kernel entry: This may include kernel image decompression and any architecture-specific setup required.

5. Kernel initialization: Includes printk output, delay loop calibration, device probes during driver initialization, and bus initialization.

6. Application launch: This is the kernel execve of the system init process to prepare for application processing.

7. exec of init: The conventional init or some other application program is the first userspace process after completion of the kernel boot.


This discussion of kernel boot-time primarily concerns steps 4 through 6, that is, from the time the kernel begins execution until the first application process (typically /sbin/init or /init) is started.

Identifying Sources of Boot Latency

In general, with embedded devices you can take advantage of the fact that you are working with a fixed topology and so do not need to discover it each time you boot, and your application may need only limited services and resources from the variety that are available. The following discussion is by no means exhaustive, but it presents some significant sources of boot latency which may be most profitable for you to examine. The following are discussed in this section:

Kernel image decompression
Delay loop calibration
Resource initialization
Device driver probe delay
Bus enumeration
IP autoconfig
Console output

Kernel Image Decompression

If you can afford the space, providing an uncompressed kernel eliminates decompression time, whether that decompression is performed by the bootloader or the kernel itself. You might, for example, make an uncompressed image available in direct mapped memory to allow for execution in place (XIP).
Delay Loop Calibration

At boot time, the kernel computes the software delay loop advisor. This is a time-intensive operation that is unnecessary for a deployed, embedded application where the value is constant. Refer to for an example of how to remove the repeated calculations and determine the amount of time you have saved.
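One common technique (a general Linux mechanism, not specific to Wind River Linux) is to capture the calibrated loops-per-jiffy value from a boot log of the same board and pass it back to the kernel with the lpj= boot parameter, which causes the calibration loop to be skipped. The value and the other parameters below are purely illustrative:

lpj=4980736 console=ttyS0,115200 root=/dev/nfs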
Resource Initialization

This is best addressed by removing resource generality unneeded by the embedded application. The number of pseudo TTYs, consoles, user consoles, RAM disks, and so on, should be minimized to reflect the actual resource need of the application. Some places to look at in your kernel configuration in addition to unnecessary drivers include:

CONFIG_LEGACY_PTY_COUNT
CONFIG_BLK_DEV_RAM
CONFIG_BLK_DEV_RAM_COUNT
CONFIG_IP_PNP
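As an illustration only, a trimmed kernel configuration fragment might set these options as follows; the values shown are hypothetical and must reflect the actual resource needs of your application:

CONFIG_LEGACY_PTY_COUNT=4
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=1
# CONFIG_IP_PNP is not set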


Also note that device drivers that are required by your application but have lengthy initialization times may potentially be built as modules and loaded after boot at a less latency-critical time. For still more aggressive gains, initialization of VFS and other structure caches may be reduced from system-calculated defaults with the associated kernel boot parameters.
Device Driver Probe Delay

Because of the nature of an embedded system, device driver probes for unnecessary devices can be eliminated, and if probes are required for existing devices, the timeouts should be minimized based on only what is required for the target hardware. It may even be possible to take the more aggressive approach of dispensing with busy-wait probing for some devices altogether. You may also be able to thread device probe and enumeration operations to maximize concurrent execution times. Note that this is more experimental because the driver routines must be conducive to such threading and you will probably be required to modify the driver code. Examining additional ways to maximize parallel execution of device drivers may well be justified depending on how much latency such operations introduce into the boot time of your system.
Bus Enumeration

This presents a similar situation to device probing as discussed above. Note that for externally accessible buses enumeration is unavoidable, but for some applications it may be useful to defer it until after kernel boot by containing the enumeration functionality in kernel modules.
IP Autoconfig

In most cases this is just something you may want to watch for in the development environment, and only applies to deployed embedded applications that must get their root file system from NFS. Configuring network parameters using DHCP/BOOTP and then NFS to mount the file system can add seconds to the boot process. Providing static IP parameters at the boot prompt (ip=address) can help. If you must use an NFS file system, it may be possible to boot with an initial RAM disk and then transfer to the NFS root file system during application boot up. But the trade-off is that this requires the extra time it takes to load the RAM disk image into memory prior to kernel boot.
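For example, the kernel's standard ip= parameter accepts a fully static configuration in the form client:server:gateway:netmask:hostname:device:autoconf. The addresses, host name, and NFS export path below are hypothetical:

ip=192.168.1.20:192.168.1.1:192.168.1.1:255.255.255.0:target1:eth0:off root=/dev/nfs nfsroot=192.168.1.1:/export/rootfs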
Console output

The kernel boot log that is sent to the console device through printk commands adds a significant contribution to boot latency. Due to the nature of printk, calling this function results in synchronous (unbuffered) data transmission that ties the boot process to the speed of the console device. This is most acute in the case of a UART serial device. Even at a rate of 115,200 bps, a single character (roughly 10 bits including framing) takes approximately 87 microseconds to transmit, so a typical boot log of 6000 characters would add over 500 ms of latency. For deployed applications the majority of kernel messages may be suppressed with the quiet kernel command-line flag or disabled by kernel configuration.
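As a simple sketch, a deployed system's kernel command line might therefore include the quiet flag; the console and root devices shown are hypothetical:

console=ttyS0,115200 root=/dev/mtdblock2 quiet

Booting once with and once without quiet is an easy way to measure how much of your boot latency comes from console output.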


12.6 Analyzing and Optimizing Boot Time


The following sections distinguish between two phases of system boot time:

early boot time - the time from when the kernel is launched to the time the init process (usually /sbin/init) is launched

late boot time - the time from when the init process is launched until the last start-up script is executed

You can collect data on both of these phases of boot time by using the bootlogger script as your init process. The script uses the Linux kernel's ftrace feature to capture profiling data from the Wind River Linux boot sequence.
NOTE: ftrace is documented in prjbuildDir/build/linux/Documentation/ftrace.txt.

Collecting Boot-Time Data with bootlogger

The bootlogger script overrides the regular /sbin/init as the first process, copies the early boot time data in /debug/tracing/trace to /var/log/kernel-init.log, and then configures ftrace to trace init processes. When the final init process is executed (/etc/rcS.d/S999stop-bootlogger), bootlogger copies the late boot time data to /var/log/post-kernel-init.log. The names and locations of these files are configurable in the target's /etc/bootlogger.conf file (prjbuildDir/export/dist/etc/bootlogger.conf). As a final step, bootlogger launches the regular init process.
NOTE: The bootlogger script is designed to be used in development and is not intended to be deployed in production systems.

To collect boot time data with bootlogger, do the following:


Step 1: Configure your platform project for boot logging.

To configure your platform project for boot logging, specify the feature/boottime template, for example:
$ configure --enable-board=common_pc --enable-kernel=small \
    --enable-rootfs=glibc_small \
    --with-template=feature/boottime

When you build your file system, you will have an /sbin/bootlogger script, an /etc/bootlogger.conf configuration file, and a stop-bootlogger script configured as the last init script to run.
Step 2: Configure your boot sequence to use bootlogger.

Configure your kernel boot command line to pass init=/sbin/bootlogger. This is typically done by passing a command to the bootloader, or as a compilable kernel option. If you are using QEMU to emulate your target, you can enter make config-target and then append init=/sbin/bootlogger to the TARGET0_QEMU_KERNEL_OPTS option, for example:
...
52: TARGET0_QEMU_BOOT_DEVICE=
53: TARGET0_QEMU_KERNEL_OPTS=clock=pit oprofile.timer=1
54: TARGET0_VIRT_UMA_START=yes


55: TARGET0_QEMU_OPTS=
56: TARGET0_VIRT_EXT_WINDOW=no
57: TARGET0_VIRT_EXT_CON_CMD=xterm -T Virtual-WRLinux -e
58: TARGET0_VIRT_CONSOLE_SLEEP=5
59: TARGET0_QEMU_HOSTNAME=
60: TARGET0_QEMU_USE_KQEMU=yes
61: TARGET0_VIRT_DEBUG_WAIT=no
62: TARGET0_VIRT_DEBUG_TIMEOUT_DEFAULT=40
Enter number to change (q quit)(s save): 53
New Value: other-options init=/sbin/bootlogger
Enter number to change (q quit)(s save): s
Enter number to change (q quit)(s save): q

Step 3: Boot the target to collect the data.

Boot your target or emulation. When it has finished the complete boot sequence there will be boot logs for both the early and late phases of the boot process in /var/log on the target. The following sections describe how to use the data collected by ftrace and bootlogger.

12.6.1 Analyzing Early Boot Time


Use the ftrace-bootgraph.pl script to analyze early boot time data as described in the following procedure.

1. Configure your platform, build it (make), and then set init to /sbin/bootlogger and boot it on a target or in emulation as described in Collecting Boot-Time Data with bootlogger, p.138.

2. Copy /var/log/kernel-init.log from the target to your development host for analysis, or, if you are using QEMU, analyze export/dist/var/log/kernel-init.log on the host.

3. Run ftrace-bootgraph.pl on the early boot log data, for example with the -p option for percentages:

$ host-cross/bin/ftrace-bootgraph.pl -p export/dist/var/log/kernel-init.log
Warning : Perl module SVG::TT::Graph::BarHorizontal is required for graphical output
ftrace-bootgraph.pl will fall back to text output
0.0045 sec    0.1348 %  pidmap_init
0.0060 sec    0.1818 %  init_rootfs
0.0033 sec    0.1006 %  select_idle_routine
0.0177 sec    0.5343 %  alternative_instructions
0.0034 sec    0.1039 %  restart_mce
0.0261 sec    0.7869 %  acpi_pic_sci_set_trigger
0.0051 sec    0.1526 %  kernel_init
0.0020 sec    0.0605 %  cpu_callback
...
0.0023 sec    0.0689 %  dm_mirror_init
0.0044 sec    0.1314 %  rpcauth_init_module
0.0043 sec    0.1312 %  pci_sysfs_init
1.5470 sec   46.7010 %  ic_bootp_recv
0.0059 sec    0.1779 %  ic_bootp_recv
0.0034 sec    0.1026 %  root_nfs_parse_addr
0.1890 sec    5.7044 %  Others
These 76 functions account for 94 percent of the time
Total time    3.4945


ftrace-bootgraph.pl finds which functions are taking most of the time. In the example shown, the remaining six percent or so of the time is spent in functions that individually did not take much time but would have made up much of the listing; the script removes them to make the output more helpful. If you have the SVG::TT::Graph::BarHorizontal perl module installed, you will get graphics output instead of text when you run the ftrace-bootgraph.pl script. Figure 12-2 illustrates some sample graphics output. The ftrace-bootgraph.pl script produces text output from the ftrace data such as the following:
0.0044 sec  rpcauth_init_module
0.0043 sec  pci_sysfs_init
1.5470 sec  ic_bootp_recv
0.0059 sec  ic_bootp_recv
0.0034 sec  root_nfs_parse_addr

Or, with the -p option:


0.0044 sec    0.1314 %  rpcauth_init_module
0.0043 sec    0.1312 %  pci_sysfs_init
1.5470 sec   46.7010 %  ic_bootp_recv
0.0059 sec    0.1779 %  ic_bootp_recv
0.0034 sec    0.1026 %  root_nfs_parse_addr

For example, the last entry means the root_nfs_parse_addr kernel function took 0.0034 seconds which was .1026% of the early boot time.
Figure 12-2 Example Partial Early Boot Time Graphical Output


12.6.2 Analyzing and Optimizing Late Boot Time


After the kernel loads, it launches the init process (usually /sbin/init). Eventually, the init process spawns a login shell, starts a desktop greeter program, or launches a custom application. At this point, the system is considered to be fully booted and ready to perform its intended function. The time between when the init process is launched and when it is fully booted is the late boot time. To optimize your late boot time, you may want to address questions such as the following:

How long is the late boot time phase?
What processes are consuming the CPU?
How much time is spent in the idle loop?
When the CPU is idle, why is it idle?

You can analyze the initialization of userspace for purposes of optimization with the ftchart script, which can produce graphic and text output based on bootlogger log files as described in the following sections.

Visualizing Late Boot Time

ftchart helps you answer your questions about late boot time by reconstructing the init process tree from kernel ftrace data. This allows you to visualize and analyze it in a number of different output formats. Further, you can selectively expand or prune the tree to achieve the desired view. This is useful for drilling down deeper into the tree to understand where time is being spent, ignoring irrelevant details. The following section provides examples of the use of some of the ftchart output options. For details on these and all ftchart options, use the --help option:
$ prjbuildDir/host-cross/bin/ftchart --help

Presenting the Collected Data with ftchart

The -o tree Option

Suppose you wanted to find out where all of your CPU time is being spent during the post-kernel init phase. First, collect a log using bootlogger and then transfer the post-kernel-init.log file to a convenient location on your development host. You might use the following command:
$ ftchart -o tree -d 3 ./post-kernel-init.log

In this command, ftchart is supplied some output presentation options, and the name of a log file produced by bootlogger. The -o tree option says to output the data as a text tree and the -d 3 option limits the tree depth to 3 levels.


Your ftchart command and output might look as follows:


$ ftchart -d 3 -o tree ./post-kernel-init.log
Total Post-Kernel Boot Time: cpu time: 3.830s
<idle>       : pid: 0    ppid: 0    cpu time: 1565.589ms (40.9%)
  init         : pid: 1    ppid: 0    cpu time: 10.478ms (0.3%)
    mingetty     : pid: 2168 ppid: 1    cpu time: 1.793ms (0.0%)
    rc           : pid: 2066 ppid: 1    cpu time: 290.577ms (7.6%)
    mingetty     : pid: 2167 ppid: 1    cpu time: 1.975ms (0.1%)
    polltester   : pid: 952  ppid: 1    cpu time: 98.252ms (2.6%)
    mingetty     : pid: 2169 ppid: 1    cpu time: 2.472ms (0.1%)
    mingetty     : pid: 2170 ppid: 1    cpu time: 1.826ms (0.0%)
    mingetty     : pid: 2171 ppid: 1    cpu time: 2.466ms (0.1%)
    mingetty     : pid: 2172 ppid: 1    cpu time: 2.824ms (0.1%)
    init         : pid: 957  ppid: 1    cpu time: 1746.506ms (45.6%)
  ksoftirqd/0  : pid: 3    ppid: 0    cpu time: 9.692ms (0.3%)
  rpciod/0     : pid: 932  ppid: 0    cpu time: 28.585ms (0.7%)
  khelper      : pid: 6    ppid: 0    cpu time: 0.054ms (0.0%)
    khelper      : pid: 2059 ppid: 6    cpu time: 3.706ms (0.1%)
    khelper      : pid: 2107 ppid: 6    cpu time: 46.549ms (1.2%)
    khelper      : pid: 955  ppid: 6    cpu time: 3.818ms (0.1%)
  polltester   : pid: 9    ppid: 0    cpu time: 0.011ms (0.0%)
  khubd        : pid: 138  ppid: 0    cpu time: 0.621ms (0.0%)
  pdflush      : pid: 177  ppid: 0    cpu time: 0.008ms (0.0%)
  nfsiod       : pid: 227  ppid: 0    cpu time: 12.189ms (0.3%)

The display shows parent and child processes, with the child processes indented under their parent. For example, mingetty is a child of init. Leaf nodes are nodes that do not show any child processes under them, although they may have child processes that are simply not displayed due to the supplied -d option. The CPU times of leaf nodes are the sums of the CPU time of that node and all of its undisplayed children, if any. From this output, it is clear that the init process and the idle process are consuming the biggest chunks of boot time. But this is still not sufficient to give a solid optimization target, or to identify some suspect package. In addition, the output shown contains much irrelevant detail. For example, there is little need to optimize processes that take up only tiny amounts of CPU time.
The -o cpu Option

To specifically identify optimization targets, the cpu summary output is more useful:
$ ftchart -o cpu ./post-kernel-init.log
Total Post-Kernel Boot Time: cpu time: 3.830s
<idle>  : pid: 0 ppid: 0 cpu time: 1565.589ms (40.9%)
<other>                  cpu time: 2159.169ms (59.1%)

By default, the cpu output simply summarizes how much cpu time is spent in the idle process, and how much cpu time is spent in other processes. (The other category is not an actual process, it is the sum of all un-expanded processes.) The cpu option does not reveal any specific optimization targets but invites two big questions. One, of course, is what's going on in the other category? Another is why is there so much idle time? To address the first question, expand the interesting parts of the process tree using the -e option as described in the next section. An Example of Investigating Idle Time, p.145 discusses how to investigate the second question.


An Example of Investigating the other Category

To uncover optimization targets, use the -e (expand) option. The argument to the -e option is a comma-separated list of expand paths. An expand path is a slash-separated list representing the lineage of an interesting tree element, much like a path in a file system directory tree. Suppose you are interested in expanding the bar process, which has parent process foo and a grandparent process with PID 0 (the ancestor of all processes). You would supply the following expand path:
-e "foo/bar"

You could also use a wildcard to expand all child processes of foo as follows:
-e "foo/*"

Note that the root process is the scheduler or idle process (PID 0, the parent process of the userspace processes starting with PID 1). Similarly, if you wanted to expand all child processes of the root process, you could pass the following expand path:
-e "*"

Start by drilling down into the other category. Do this by expanding all children of the <idle> process as follows:
$ ftchart -o cpu -e "*" ./post-kernel-init.log Total Post-Kernel Boot Time: cpu time: 3.830s <idle> : pid: 0 ppid: 0 cpu time: 1565.589ms (40.9%) init : pid: 1 ppid: 0 cpu time: 2159.169ms (56.4%) ksoftirqd/0 : pid: 3 ppid: 0 cpu time: 9.692ms (0.3%) rpciod/0 : pid: 932 ppid: 0 cpu time: 28.585ms (0.7%) khelper : pid: 6 ppid: 0 cpu time: 54.127ms (1.4%) polltester : pid: 9 ppid: 0 cpu time: 0.011ms (0.0%) khubd : pid: 138 ppid: 0 cpu time: 0.621ms (0.0%) pdflush : pid: 177 ppid: 0 cpu time: 0.008ms (0.0%) nfsiod : pid: 227 ppid: 0 cpu time: 12.189ms (0.3%)

Now it is clear (as expected) that the init process is the busiest. Continue drilling down into the init process by iteratively changing the -e option:
$ ftchart -o cpu -e "init/*" ./post-kernel-init.log Total Post-Kernel Boot Time: cpu time: 3.830s <idle> : pid: 0 ppid: 0 cpu time: 1565.589ms (40.9%) mingetty : pid: 2168 ppid: 1 cpu time: 1.793ms (0.0%) rc : pid: 2066 ppid: 1 cpu time: 290.577ms (7.6%) mingetty : pid: 2167 ppid: 1 cpu time: 1.975ms (0.1%) polltester : pid: 952 ppid: 1 cpu time: 98.252ms (2.6%) mingetty : pid: 2169 ppid: 1 cpu time: 2.472ms (0.1%) mingetty : pid: 2170 ppid: 1 cpu time: 1.826ms (0.0%) mingetty : pid: 2171 ppid: 1 cpu time: 2.466ms (0.1%) mingetty : pid: 2172 ppid: 1 cpu time: 2.824ms (0.1%) init : pid: 957 ppid: 1 cpu time: 1746.506ms (45.6%) <other> cpu time: 2159.169ms (2.9%)

Drill deeper by building up the expand path. Specifically, choose the child process that consumes the most CPU time and add its name at the end of the expand path. Continue this process on the data from this example to arrive at the following expand path:
$ ftchart -o cpu -e "init/init/rc.sysinit/start_udev" ./post-kernel-init.log Total Post-Kernel Boot Time: cpu time: 3.830s <idle> : pid: 0 ppid: 0 cpu time: 1565.589ms (40.9%) start_udev : pid: 978 ppid: 958 cpu time: 1587.921ms (41.5%) <other> cpu time: 2159.169ms (17.6%)


It is becoming clear that udev is going to be a good place to focus. At this point, determine if it is possible to eliminate the udev package. If the answer is yes, this 41.5% chunk of boot time can be eliminated. If the answer is no, use ftchart to dig still deeper. Inspect start_udev and all of its children:
$ ftchart -o cpu -e \
    "init/init/rc.sysinit/start_udev,init/init/rc.sysinit/start_udev/*" \
    ./post-kernel-init.log
Total Post-Kernel Boot Time: cpu time: 3.830s
<idle>      : pid: 0    ppid: 0   cpu time: 1565.589ms (40.9%)
start_udev  : pid: 978  ppid: 958 cpu time: 20.061ms (0.5%)
udevsettle  : pid: 1029 ppid: 978 cpu time: 2.321ms (0.1%)
udevcontrol : pid: 1976 ppid: 978 cpu time: 1.610ms (0.0%)
logger      : pid: 1977 ppid: 978 cpu time: 0.921ms (0.0%)
start_udev  : pid: 979  ppid: 978 cpu time: 1.098ms (0.0%)
start_udev  : pid: 983  ppid: 978 cpu time: 4.033ms (0.1%)
awk         : pid: 987  ppid: 978 cpu time: 1.110ms (0.0%)
fgrep       : pid: 988  ppid: 978 cpu time: 1.211ms (0.0%)
fgrep       : pid: 989  ppid: 978 cpu time: 0.813ms (0.0%)
mount       : pid: 990  ppid: 978 cpu time: 1.175ms (0.0%)
mkdir       : pid: 991  ppid: 978 cpu time: 1.332ms (0.0%)
mkdir       : pid: 992  ppid: 978 cpu time: 1.140ms (0.0%)
ln          : pid: 993  ppid: 978 cpu time: 0.862ms (0.0%)
ln          : pid: 994  ppid: 978 cpu time: 0.715ms (0.0%)
ln          : pid: 995  ppid: 978 cpu time: 0.707ms (0.0%)
ln          : pid: 996  ppid: 978 cpu time: 0.705ms (0.0%)
ln          : pid: 997  ppid: 978 cpu time: 0.702ms (0.0%)
ln          : pid: 998  ppid: 978 cpu time: 0.693ms (0.0%)
mkdir       : pid: 999  ppid: 978 cpu time: 1.173ms (0.0%)
start_udev  : pid: 1000 ppid: 978 cpu time: 332.547ms (8.7%)
cat         : pid: 1008 ppid: 978 cpu time: 1.047ms (0.0%)
pidof       : pid: 1009 ppid: 978 cpu time: 3.344ms (0.1%)
rm          : pid: 1010 ppid: 978 cpu time: 0.934ms (0.0%)
udevd       : pid: 1011 ppid: 978 cpu time: 1146.660ms (29.9%)
start_udev  : pid: 1013 ppid: 978 cpu time: 1.602ms (0.0%)
udevcontrol : pid: 1014 ppid: 978 cpu time: 1.934ms (0.1%)
udevtrigger : pid: 1015 ppid: 978 cpu time: 57.471ms (1.5%)
<other>                           cpu time: 2159.169ms (18.1%)

Clearly, there is lots of irrelevant detail here. Adjust the -e option to show only the two biggest chunks:
$ ftchart -o cpu -e "init/init/rc.sysinit/start_udev/start_udev,\ init/init/rc.sysinit/start_udev/udevd" ./post-kernel-init.log Total Post-Kernel Boot Time: cpu time: 3.830s <idle> : pid: 0 ppid: 0 cpu time: 1565.589ms (40.9%) start_udev : pid: 979 ppid: 978 cpu time: 1.098ms (0.0%) start_udev : pid: 983 ppid: 978 cpu time: 4.033ms (0.1%) start_udev : pid: 1000 ppid: 978 cpu time: 332.547ms (8.7%) udevd : pid: 1011 ppid: 978 cpu time: 1146.660ms (29.9%) start_udev : pid: 1013 ppid: 978 cpu time: 1.602ms (0.0%) <other> cpu time: 2159.169ms (20.4%)

Dig deeper and deeper into udevd to arrive at the following -e option and output:
$ ftchart -o cpu -e init/init/rc.sysinit/start_udev/udevd/udevd, \
    init/init/rc.sysinit/start_udev/udevd/udevd/* ./post-kernel-init.log
Total Post-Kernel Boot Time: cpu time: 3.830s
<idle> : pid: 0    ppid: 0    cpu time: 1565.589ms (40.9%)
udevd  : pid: 1012 ppid: 1011 cpu time: 215.464ms (5.6%)
udevd  : pid: 2061 ppid: 1012 cpu time: 0.513ms (0.0%)
udevd  : pid: 2109 ppid: 1012 cpu time: 0.552ms (0.0%)
udevd  : pid: 2110 ppid: 1012 cpu time: 0.528ms (0.0%)
udevd  : pid: 2111 ppid: 1012 cpu time: 0.488ms (0.0%)
udevd  : pid: 2173 ppid: 1012 cpu time: 0.557ms (0.0%)
udevd  : pid: 2174 ppid: 1012 cpu time: 0.507ms (0.0%)
udevd  : pid: 2175 ppid: 1012 cpu time: 0.795ms (0.0%)
...
udevd  : pid: 2031 ppid: 1012 cpu time: 30.999ms (0.8%)
udevd  : pid: 2033 ppid: 1012 cpu time: 30.316ms (0.8%)
<other>                       cpu time: 2159.169ms (29.2%)


It is now clear what's going on in udev: apparently, the 29.9% of the boot time that is spent in udevd is spent spawning many processes, each of which does a tiny bit of work. This is the limit of what the ftchart tool can tell us. Now would be the time to dive into the udevd initialization code and understand why all of these processes are being spawned, and whether this code can be optimized. You could repeat the steps demonstrated above to investigate what is going on in the 8.7% chunk that is consumed by start_udev and possibly identify another optimization target.

An Example of Investigating Idle Time

Now consider idle time. Identifying opportunities to reduce idle time is more difficult than reducing CPU usage. The reason is that idle time does not have a single root cause. That is, any given idle interval will likely have many processes waiting for many resources. Choosing which process to optimize and how to optimize it without introducing idle time elsewhere is not exactly simple. Further, it is unlikely that there are long stretches of pure idle time that can be optimized away; instead, the idle time is probably made up of many small fragments. That said, two popular techniques for reducing idle time are eliminating unnecessary delays and exploiting opportunities for parallelism. The following example shows how ftchart can help identify opportunities for applying these two techniques.
Eliminating Unnecessary Delays

The first technique involves identifying processes that sleep electively. These are called lazy sleepers. In principle, if these delays can be shortened or eliminated, idle time can be reduced. Consider the following output. In this command, ftchart reports the lazy sleepers (-o lazy) and limits the output to lazy sleepers with more than 0.3 seconds of sleep time (-t 0.3):
$ ftchart -o lazy -t 0.3 ./post-kernel-init.log
Total Post-Kernel Boot Time: 3.803s
Total Post-Kernel Idle Time: 1.801s (47.37%)
<idle>(0) init(1) init(959) rc.sysinit(960) start_udev(977)
udevsettle : pid: 1026 ppid: 977
Total Sleep Time: 1153.067ms (10.90% CPU idle, 99.75% elective)
  Application requested delay: 1150.222ms (99.75%) (10.92% CPU idle)
  RPC operation: 2.845ms (0.25%) (0.00% CPU idle)
<idle>(0) init(1) init(959) rc.sysinit(960) start_udev(977) udevd(1010) udevd(1011) udevd(1866)
modprobe : pid: 1867 ppid: 1866
Total Sleep Time: 422.856ms (56.72% CPU idle, 97.70% elective)
  Kernel space requested delay: 413.143ms (97.70%) (58.06% CPU idle)
  RPC operation: 4.836ms (1.14%) (0.00% CPU idle)
  Page fault: 3.222ms (0.76%) (0.00% CPU idle)
  Closing a file: 1.633ms (0.39%) (0.00% CPU idle)
  Unknown reason: 0.022ms (0.01%) (0.00% CPU idle)
<idle>(0)
khubd : pid: 138 ppid: 0
Total Sleep Time: 2061.569ms (64.74% CPU idle, 11.40% elective)
  Waiting for USB hub events: 1805.415ms (87.57%) (63.23% CPU idle)
  Kernel space requested delay: 234.956ms (11.40%) (77.03% CPU idle)
  Waiting for urb from USB: 21.198ms (1.03%) (56.84% CPU idle)


In the first block of data, the first line after the totals shows the parents of the process of interest. This helps identify exactly which instance of a process is interesting. Next comes the name of the process itself; in this case it is udevsettle. Next, the Total Sleep Time for udevsettle is about 1.2 seconds. While this process is sleeping, the CPU is idle for 10.92% of the time. Also, 99.75% of the Total Sleep Time is elective. The lines that follow the Total Sleep Time represent a breakdown of the idle time by reason, in decreasing order. As you can see, the major reason for this delay is Application requested delay. Shortening the application-requested delay in udevsettle would allow operations that depend on udevsettle to proceed. In principle, these operations could use the idle time resulting in part from udevsettle's delay. Whether or not this strategy is realistic depends on the specific reason for that delay, and you would have to investigate this. Similar investigations could be applied to the other lazy sleepers.
Increasing Parallelism

Opportunities for increasing parallelism are best identified graphically. For this purpose, ftchart has the png output feature. This output feature plots a horizontal bar graph whose x axis is time and whose bars are the processes of interest.
NOTE: The png output included in this section is available in larger images in the installation in installDir/wrlinux-3.0/layers/wrll-analysis-1.0/tools/ftchart/src/.

To generate a basic top-level view of activity, generate the output with no arguments:
$ ftchart -o png tests/post-kernel-init.log

See Figure 12-3 for the results of this command.


Figure 12-3 Top-level png Output Example

The output appears in the current directory as ftchart.png. It shows various intervals of green, gray, and red for each process as follows:

green - CPU runtime
gray - sleep time
red - elective sleep time


When the idle process is green, the CPU is idle. By default, all of the processes are grouped into the other bar at the bottom of the graph. This bar is the output of all of the unexpanded processes overlaid. Naturally, it is cluttered with more output than can be understood by inspection. It's time to drill down into this other category and tease out the relevant data. Now turn your focus to the second half of the boot period, which contains plenty of idle time, and may present some opportunities for parallelism. To drill down one layer deeper, expand all of the top-level child processes as with the cpu output (-e "*").
$ ftchart -o png -e "*" tests/post-kernel-init.log

See Figure 12-4 for the results of this command.


Figure 12-4 An Example of Expanded Output

You can immediately identify a thick band of red in khubd and init near the 9.75 seconds mark. Also, this band of red is accompanied by a similar green band for the idle process. Drilling deeper would reveal that this is the same idle time attributed to udevsettle in the lazy output analysis above. Because you know that the idle time in this region is due to elective sleeping, move on further to the right. As expected, the interesting process to drill into is the init process. But first, note that all of the child processes for an expanded parent (caused by the -e * option) are overlaid with their parent; they do not go into the other category. This is why the view for the init process is somewhat cluttered. Drilling down into the init process will help clarify things:
$ ftchart -o png -e "init/*" tests/post-kernel-init.log

See Figure 12-5 for the results of this command.


Figure 12-5 An Example of Expanding init

First, note that all of the unexpanded output has ended up in the other bar. By inspecting this bar, you can evaluate whether or not some important activity for the time interval of interest is being ignored. If so, you must tune the -e option. In this case, however, very little is happening in the other category during the interval of interest. Two observations emerge from this graph. First, the polltester process is spending almost all of its time in elective sleep. You can "prune" processes such as these from the analysis using the --prune (-p) option.
NOTE: polltester is a simple polling example and is not something that is interesting to optimize. It is used in these examples simply to demonstrate the prune option, and is not provided with Wind River Linux.

The next observation is that brief mingetty processes are not interesting, partially because they do not use the CPU, and partially because they are so small they are not good optimization targets. Eliminate these from the view and put them in the other category by setting a --threshold (-t) option. Applying these two revisions to the command line generates a cleaner picture:
$ ftchart -t 0.1 -o png -e "init/*" -p "init/polltester" \ tests/post-kernel-init.log

See Figure 12-6 for the results of this command.


Figure 12-6 An Example Using the Threshold Option

This view shows that much of what happens in the last half of the boot sequence happens in the rc process. It also shows that much of this time is spent idle. Perhaps if the processes launched by the rc process are not all contending for the same resources, they can be launched in parallel. To identify these processes, tune the --expand (-e) option. You may want to use the tree output option described in Visualizing Late Boot Time, p.141 to help develop your expand option. Skipping some intermediate drilling steps, you can arrive at the following command. Tuning the --threshold (-t) option brings some interesting processes out of the other category and back into the expanded view:
$ ./ftchart -t 0.01 -o png -e "init,init/init,init/rc,init/rc/*" \
    -p "init/polltester" tests/post-kernel-init.log

See Figure 12-7 for the results of this command.


Figure 12-7 An Example of Expanding Other Processes

From this output, some details of the launch scripts for the various services are clear. Many of them, such as the sshd, xinetd, and sendmail scripts, appear to spend much time sleeping. Perhaps these scripts could be launched in parallel to eliminate some idle time. At this point, you would investigate whether this makes sense, or whether launching these processes must be deferred for some reason. If the processes can be launched in parallel, you could make that change, re-profile the boot process, and repeat this analysis to determine if an improvement has been made. In these cases, an experiment is probably more revealing than more analysis. In short, try something and see what happens, because it can be hard to predict what the impact of a change may be. However, as an alternative to experimentation, you could also use ftchart to get information about why these processes sleep. This can be done using ftchart's idle output. This is textual output that summarizes the reasons why a process sleeps, much like the lazy output option. Here's an idle output example for the sshd launch script:
$ ftchart -t 0.1 -o idle -e "init/rc/S55sshd,init/rc/S55sshd/*" \
      tests/post-kernel-init.log
Total Post-Kernel Boot Time: 3.803s
Total Post-Kernel Idle Time: 1.801s (47.37%)
S55sshd : pid: 2115 ppid: 2067
  Total Sleep Time: 299.046ms (77.40% CPU idle, 0.00% elective)
    Waiting for a process to die: 286.026ms (95.65%) (79.16% CPU idle)
    RPC operation:                  5.791ms  (1.94%)  (3.26% CPU idle)
    Reading from a pipe:            5.320ms  (1.78%) (74.29% CPU idle)
    Fork() system call:             1.001ms  (0.33%)  (0.00% CPU idle)
    Writing a page to disk:         0.908ms  (0.30%) (98.13% CPU idle)
sshd : pid: 2119 ppid: 2115
  Total Sleep Time: 259.961ms (75.58% CPU idle, 0.00% elective)
    Page fault:                   141.668ms (54.50%) (91.26% CPU idle)
    Loading kernel module:         58.550ms (22.52%) (54.87% CPU idle)
    RPC operation:                 45.756ms (17.60%) (46.81% CPU idle)
    Writing a page to disk:        13.935ms  (5.36%) (97.96% CPU idle)
    Fork() system call:             0.052ms  (0.02%)  (0.00% CPU idle)

This reveals that the S55sshd launch script is mainly waiting for a process to die, which points to child processes as the fundamental reason why the process is sleeping. However, sshd, a child process of the launch script, seems to be waiting mainly on page faults, loading a kernel module, and performing RPC operations. Assuming that these cannot be easily optimized away, perhaps something else can be placed in parallel with them. Consider the sendmail launch process:
$ ftchart -t 0.05 -o idle -e "init/rc/S80sendmail,init/rc/S80sendmail/*" \
      tests/post-kernel-init.log
Total Post-Kernel Boot Time: 3.803s
Total Post-Kernel Idle Time: 1.801s (47.37%)
S80sendmail : pid: 2141 ppid: 2067
  Total Sleep Time: 313.646ms (75.17% CPU idle, 0.00% elective)
    Waiting for a process to die: 295.782ms (94.30%) (78.44% CPU idle)
    RPC operation:                 10.072ms  (3.21%) (27.37% CPU idle)
    Unknown reason:                 3.459ms  (1.10%)  (0.55% CPU idle)
    Fork() system call:             2.353ms  (0.75%)  (0.00% CPU idle)
    Writing a page to disk:         1.286ms  (0.41%) (77.29% CPU idle)
    Writing data to TTY:            0.562ms  (0.18%)  (0.00% CPU idle)
    sigprocmask system call:        0.064ms  (0.02%)  (0.00% CPU idle)
    Page fault:                     0.027ms  (0.01%)  (0.00% CPU idle)
    Sending TCP/IP data:            0.022ms  (0.01%)  (0.00% CPU idle)
    NFS operation:                  0.013ms  (0.00%)  (0.00% CPU idle)
    Reading from a pipe:            0.006ms  (0.00%)  (0.00% CPU idle)
makemap : pid: 2142 ppid: 2141
  Total Sleep Time: 125.242ms (84.28% CPU idle, 0.00% elective)
    Page fault:                    92.589ms (73.93%) (88.83% CPU idle)
    RPC operation:                 19.016ms (15.18%) (61.68% CPU idle)
    Writing a page to disk:         8.724ms  (6.97%) (79.06% CPU idle)
    Unknown reason:                 2.971ms  (2.37%) (93.40% CPU idle)
    NFS operation:                  1.942ms  (1.55%) (97.73% CPU idle)
newaliases : pid: 2145 ppid: 2141
  Total Sleep Time: 82.490ms (73.30% CPU idle, 0.00% elective)
    Page fault:                    47.746ms (57.88%) (94.79% CPU idle)
    RPC operation:                 25.467ms (30.87%) (24.88% CPU idle)
    Writing a page to disk:         4.436ms  (5.38%) (97.88% CPU idle)
    Unknown reason:                 2.915ms  (3.53%) (96.26% CPU idle)
    NFS operation:                  1.762ms  (2.14%) (97.56% CPU idle)
    Sending data over socket:       0.164ms  (0.20%)  (0.00% CPU idle)

It seems that, as in the case with S55sshd, S80sendmail is mainly waiting for another process. The biggest chunk of this wait time is taken up by makemap and newaliases. These children in turn spend most of their sleep time waiting for page faults to be handled and for RPC operations.

So, assuming that the sshd and sendmail processes do not have to be run serially for any reason, they could be launched in parallel. This may allow some page faults for sendmail to be handled while sshd loads its kernel module. On the other hand, both of these processes spend much of their time sleeping on page faults, so putting the processes in parallel may not reduce the total time spent waiting for page faults.
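As a simple illustration of that experiment, the following is a minimal sketch of how an rc-style script could start the two launch scripts in the background and wait for both to finish. The paths and invocation style are assumptions based on the example above, not part of a standard Wind River Linux file system.

# Hypothetical fragment of an rc script: start the two services in
# parallel rather than serially, then wait for both before continuing.
/etc/rc3.d/S55sshd start &
sshd_pid=$!
/etc/rc3.d/S80sendmail start &
sendmail_pid=$!

# Block until both launch scripts have completed.
wait $sshd_pid $sendmail_pid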


12.7 Analyzing and Optimizing Runtime Footprint


The following discussion refers to a Workbench feature that is also generally useful from the command line, as described here. For more on using this feature with Workbench, see Wind River Workbench by Example, Linux Version.

12.7.1 Querying the RPM Installation Database


You can use the rpm_query.sh script in your platform project to query the RPM installation database. The script serves both as an example of how to write scripts that query the database and as a ready-to-use tool for determining such things as the file lists and sizes of the packages produced by your builds. View the prjbuildDir/scripts/rpm_query.sh bash script itself for details on customizing it to meet your needs. The script provides examples of querying the RPM installation database with traditional RPM as well as with the more in-depth dependency checking of the smart package manager.
NOTE: smart provides more extensive dependency checking. See http://smartpm.org for more information.
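If you prefer to see the kind of query that the script wraps, you can also run rpm directly against the project's RPM database. The options shown here (-qa, --queryformat, --dbpath) are standard rpm options, but the database path is only a placeholder; substitute the actual location of the RPM database in your project build directory, which may differ.

$ rpm --dbpath /path/to/prjbuildDir/rpm-database -qa \
      --queryformat '%{NAME}-%{VERSION}-%{RELEASE} %{SIZE}\n'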

Script Usage

To see the usage syntax of the sample script, use the --help or --usage option, for example:
$ cd prjbuildDir
$ scripts/rpm_query.sh --help
scripts/rpm_query.sh [-1|--rpm-sizes] [-2|--rpm-files] [-3|--smart-requires-provides]
    (optional) [ --package] <package>
    [ --usage]

Querying Package Sizes

To get a list of all packages with their package sizes, use the -1 or --rpm-sizes option, for example:
$ scripts/rpm_query.sh --rpm-sizes
Package : libgcc
Size : 48544
Package : setup
Size : 434210
Package : filesystem
Size : 0
Package : wrsv-ltt
Size : 1284722
...


Querying for Package File Lists

You can get a list of the files contained in a package with -2 or --rpm-files, for example:
$ scripts/rpm_query.sh --rpm-files
Package : libgcc
List of files in rpm :
/lib/libgcc_s.so.1
Package : setup
List of files in rpm :
/etc/aliases
/etc/bashrc
/etc/csh.cshrc
/etc/csh.login
/etc/environment
/etc/exports
/etc/filesystems
/etc/fstab
/etc/group
/etc/gshadow
/etc/host.conf
/etc/hosts
/etc/hosts.allow
/etc/hosts.deny
/etc/inputrc
/etc/motd
/etc/mtab
/etc/passwd
/etc/printcap
/etc/profile
/etc/profile.d
/etc/protocols
/etc/securetty
/etc/services
/etc/shadow
/etc/shells
/usr/share/doc/setup-2.6.14
/usr/share/doc/setup-2.6.14/uidgid
/var/log/lastlog
...

Smart Querying for Dependencies

Use a smart query to find the dependencies required or provided by packages:


$ scripts/rpm_query.sh --smart-requires-provides
Package : libgcc
Loading cache...
Updating cache... ######################################## [100%]
libgcc-4.3_49-1_WR3.0zz@i686
  Provides:
    libgcc = 4.3_49-1_WR3.0zz@i686
      Required By:
        glibc-2.8-1_WR3.0zz@i686 (libgcc = 4.3_49-1_WR3.0zz@i686)
        libgcc-devel-4.3_49-1_WR3.0zz@i686 (libgcc = 4.3_49-1_WR3.0zz@i686)
    libgcc_s.so.1
      Required By:
        beecrypt-4.1.2-12_WR3.0zz@i686 (libgcc_s.so.1)
        db4-cxx-4.6.21-5_WR3.0zz@i686 (libgcc_s.so.1)
        fam-2.7.0-1_WR3.0zz@i686 (libgcc_s.so.1)
        libstdc++-4.3.2-1_WR3.0zz@i686 (libgcc_s.so.1)
        libusb-0.1.12-15_WR_3.0zz@i686 (libgcc_s.so.1)
        mesa-libGLU-7.0.1-5_WR3.0zz@i686 (libgcc_s.so.1)
        mysql-5.0.45-1_WR3.0zz@i686 (libgcc_s.so.1)
        pcre-7.3-3_WR3.0zz@i686 (libgcc_s.so.1)
        xerces-2.8.0-1_WR3.0zz@i686 (libgcc_s.so.1)
        xorg-x11-server-Xorg-1.3.0.0-24_WR3.0zz@i686 (libgcc_s.so.1)
    libgcc_s.so.1(GCC_3.0)
      Required By:
        beecrypt-4.1.2-12_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
        db4-cxx-4.6.21-5_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
        fam-2.7.0-1_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
        libstdc++-4.3.2-1_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
        libusb-0.1.12-15_WR_3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
        mesa-libGLU-7.0.1-5_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
        pcre-7.3-3_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
        xerces-2.8.0-1_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
        xorg-x11-server-Xorg-1.3.0.0-24_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.0))
    libgcc_s.so.1(GCC_3.3)
      Required By:
        libstdc++-4.3.2-1_WR3.0zz@i686 (libgcc_s.so.1(GCC_3.3))
    libgcc_s.so.1(GCC_3.3.1)
    libgcc_s.so.1(GCC_3.4)
    libgcc_s.so.1(GCC_3.4.2)
    libgcc_s.so.1(GCC_4.0.0)
    libgcc_s.so.1(GCC_4.2.0)
      Required By:
        libstdc++-4.3.2-1_WR3.0zz@i686 (libgcc_s.so.1(GCC_4.2.0))
    libgcc_s.so.1(GCC_4.3.0)
    libgcc_s.so.1(GLIBC_2.0)
      Required By:
        libstdc++-4.3.2-1_WR3.0zz@i686 (libgcc_s.so.1(GLIBC_2.0))
        mysql-5.0.45-1_WR3.0zz@i686 (libgcc_s.so.1(GLIBC_2.0))
        xorg-x11-server-Xorg-1.3.0.0-24_WR3.0zz@i686 (libgcc_s.so.1(GLIBC_2.0))

12.7.2 Getting a Footprint Snapshot


The following tool is best used from Workbench but details on the script used are provided here. By saving certain files in your project build directory, you can create a snapshot of the files that determine the current size of your runtime footprint. You can use this snapshot, for example, to evaluate which files you might want to remove to reduce size, or refer back to this snapshot when you find a later configuration has removed too many files. The files that determine the size of your footprint are pkglist and changelist.xml. pkglist contains the list of packages that are installed, and changelist.xml is typically modified by Workbench to configure the file system layout. Use the export-footprint.sh script to save or restore the build system files that are responsible for the size of the target file system. The script provides basic usage information:
$ cd prjbuildDir
$ scripts/export-footprint.sh --help
[-i|--import] [-e|--export] [-s|--save] <dir>
[-s|--save] <dir>
(optional) [-p|--project] <dir>
(optional) [-n|--note] "Message"

When you export your footprint, the script copies pkglist and filesystem/changelist.xml from your project build directory to the save directory. The project build directory is the prjbuildDir you are in, or you can specify a project build directory with -p or --project. Specify the save directory with the -s or --save option.


The script also creates an XML file called export-footprint.xml in the save directory. This is used to store the optional note and is used by the import to validate that the export had been successful. When you import a footprint, pkglist.in and changelist.xml are copied from the save directory, specified by -s or --save, to the appropriate location in the project directory, specified by -p or --project, or the prjbuildDir you are in. The existence of the export-footprint.xml file is used to validate that a successful export had previously occurred.
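For example, based on the usage summary above, a save-and-restore cycle might look like the following. The save directory and note text are arbitrary examples, and you should confirm the exact option combinations against the script's --help output.

$ cd prjbuildDir
$ scripts/export-footprint.sh --export --save $HOME/footprints/baseline \
      --note "baseline footprint before trimming"
$ scripts/export-footprint.sh --import --save $HOME/footprints/baseline \
      --project prjbuildDir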


13
Patch Management
13.1  Introduction 157
13.2  Patch Principles and Workflow 158
13.3  The Quilt Patching Model 160
13.4  git and the Kernel 165
13.5  Kernel Patching with scc 180

13.1 Introduction
This chapter introduces various patch management concepts to help in understanding the patch model used by Wind River Linux. For an example of how to use the Workbench patch manager GUI, refer to Wind River Workbench by Example, Linux Version.

Patching Models in Wind River Linux

Wind River Linux uses two open-source methods of patching code. LDAT, the Wind River Linux build system, uses the open-source quilt patching model. Wind River's use of the quilt command is discussed in 13.3 The Quilt Patching Model, p.160. The Wind River Linux kernel is now managed as a git tree, and patching makes use of associated tools such as git and guilt, as described in 13.4 git and the Kernel, p.165.


13.2 Patch Principles and Workflow


There are two main principles Wind River Linux uses in applying patches:

Wind River Linux keeps its source code pristine. Patches are only applied to project code when building a project.

Patch lists are rigorously maintained.

Patch workflow for Wind River developers follows this pattern:

1.  Product designers first decide on where (which template or layer) to insert the patch.

2.  The individual developer configures a project for the specific product, specifying the relevant layer or template in the configure command.

3.  The individual developer then works locally, developing new code and new patches to extend existing code.

4.  The developer then validates their local work with the central code base before folding back changes and patches. The more general the layer in which the patch is placed, the greater the scope of testing required to justify the acceptance of these changes. Automated test tools and procedures for the individual contributor help in keeping the code base correct.

5.  After successful validation, the developer checks in the changes.

13.2.1 Applying and Resolving Patches


During patch development, apply patches within a project created for that purpose.
Simple Reject Resolutions

Simple reject resolutions include resolving path names, fuzz factor, whitespace, and patch reverse situations. Some hunk rejects can be resolved by simple adjustments, including the following (see the example after this list):

Leading Path Names: the leading path directory names in the patch may not match the directory names of your targets. By removing some or all of the patch's leading path names, you may then match the local environment.

Fuzz Factor: each hunk has a leading and following number of lines around changes to provide a validating context for the hunk. If these leading or following lines do not exactly match the target file, the so-called "fuzz factor" can be loosened from an exact match (0) to a looser match (> 0).

White Space: sometimes the only difference in the leading and following context lines is in the exact whitespace. The patch apply can be adjusted to ignore white space differences when attempting to apply the patch.

Patch Reversal: sometimes the patch file was created backwards, meaning that it reflects the differences from the new version to the original, instead of the normal direction of the original to the new version. Reversing the patch will fix this and allow the patch to apply.
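As a rough illustration, these adjustments correspond to standard options of the patch(1) command. This is a generic sketch rather than a Wind River-specific procedure, and fix.patch is a placeholder name:

$ patch -p1 --dry-run < fix.patch   # strip one leading path component; test without applying
$ patch -p1 -F 3 < fix.patch        # loosen the fuzz factor to 3 lines of context
$ patch -p1 -l < fix.patch          # ignore whitespace differences in the context lines
$ patch -p1 -R < fix.patch          # apply a reversed (backwards) patch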


Preserving the Patch File, Fixing the Source

If a patch almost, but not quite applies, it can sometimes be fixed by adjusting the source target so that the context matches what the patch is looking for. After the patch is resolved, you can then create a new correct patch file based on the difference from the original target and the resolved target, and then throw away the original patch file. If the patch file must be maintained exactly as it was received, this is the preferred method. After rejects are resolved in this manner, you can always introduce an intermediate patch that takes the source to this adjusted state, allowing the original source and the acquired patch to be preserved, if that is required or desired.
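For example, assuming you keep an untouched copy of the source next to the copy in which you resolve the rejects, a replacement patch can be regenerated with a recursive diff; the directory names here are placeholders:

$ cp -r mypackage-1.0 mypackage-1.0.orig
# ...resolve the rejected hunks by hand in mypackage-1.0/...
$ diff -Nur mypackage-1.0.orig mypackage-1.0 > mypackage-1.0-resolved.patch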
Preserving the Source File, Fixing the Patch

Alternatively, you can adjust the patch file itself. This is more complicated because it involves modifying the patch file using the patching syntax. This method is preferred if the patch file is unlikely to be externally updated, and thus a localized version is acceptable. It also removes the need for any intermediate patch, as described in the previous section, or the undesirable situation of a patch to a patch.
Placing Unresolved Rejects into Files

Some rejects require study and so cannot be immediately resolved using the above methods. You should be able to accept the patch hunks that apply cleanly, and preserve a copy of the hunks that do not. These reject hunks can be saved to a file for analysis.
Placing Unresolved Rejects into the Source (Inline)

Alternatively, you may wish to place the rejected hunks directly in the target source file, so that they can be seen within the context in which they do (or should) apply. This reduces the potential clutter of multiple reject files (which might otherwise be lost or forgotten).

Deploying Patches

Kernel patches and package patches can be deployed in:

custom layers
custom templates
the installed development environment

Wind River suggests that custom patches be deployed within a custom template or layer, thereby leaving the development environment intact. For more information and examples, see chapters 9. Configuring the Kernel, and 10. Adding Packages.


13.3 The Quilt Patching Model


Quilt is a general-purpose patching mechanism that you can use whenever you are working with patches. The Wind River Linux build system uses it for patching SRPM packages as described in this section. Quilt is especially useful for dealing with a series of patches and with patches that contain multiple files. Open source SRPM packages typically contain the package source as well as multiple patches to be applied to that source to produce the binary in an RPM package. In addition, you may be modifying the source to make your own changes. The proper way to modify the source is to add one or more patches to the SRPM (rather than modifying original SRPM source files or patches). This keeps your changes distinct. Use of the quilt tool facilitates your work with patches. In the following example, we will patch an SRPM with quilt using some typical quilt commands, and then integrate that patch into the build system.

13.3.1 Patching SRPMs with Quilt


SRPM packages have a spec file that manages how a package is built on the host, and how it installs in the target file system. (The spec files are discussed in more detail in 10. Adding Packages and 22. Examples of Adding Packages). The basic build sequence is as follows:

1.  configure
2.  compile
3.  install
4.  pack into binary RPM package

During the patch phase, the SRPM package's source is patched by the Wind River Linux Quilt-based patch system. To patch a source file within the SRPM, you must do the following:

1.  Create a new top patch file to hold the changes.
2.  Save that patch file in the installation or layer.
3.  Register the new patch file in the package's spec file.

The following procedure shows how to create a simple patch that patches two source files in the mktemp package.

Configure Your Environment and Project

Configure a project as follows: 1. For this procedure, use a glibc_small file system:
$ cd prjbuildDir
$ configure --enable-kernel=standard \
      --enable-rootfs=glibc_small \
      --enable-board=common_pc


2.

Set up your Wind River Linux environment for quilt (the following command lines assume an sh-style shell):
$ export QUILT_PATCHES=wrlinux_quilt_patches

By default, quilt assumes patches are in a subdirectory named patches, so this variable overrides the default and states that the patches subdirectory will be wrlinux_quilt_patches.
$ export QUILT_PC=.pc

This is the name of a temporary working directory used by quilt.


$ export WRLINUX_USE_QUILT=yes

Specify the quilt patching model.


$ export PATH=$PATH:prjbuildDir/host-cross/bin

This includes the path to quilt and other Wind River-supplied host tools.

3.  Add the mktemp package (it is part of the installed development environment) and proceed as far as the patch phase of building the mktemp package:
$ make -C build mktemp.addpkg
$ make -C build mktemp.patch

Note that prjbuildDir/build/mktemp-version/ now contains the files and subdirectories with the unpacked and patched source.

Create a New RPM Patch

Create a new multi-file patch on the top of quilt's patch stack with the following procedure.

1.  Change directory to prjbuildDir/build/mktemp-version/:
$ cd build/mktemp-version

2.  In the package's build directory, start a new patch with a descriptive name, for example:
$ quilt new mktemp-version-my_custom.patch
Patch mktemp-version-my_custom.patch is now on top

Your new patch is now one of two in quilt's patches directory (wrlinux_quilt_patches):
$ quilt series
patches_links/installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/mktemp/patches/mktemp-wr-integration.patch
mktemp-version-my_custom.patch

(Refer to 10.3.2 Older Method of Adding SRPMs, p.114 for information about pkg-wr-integration.patch.) Your patch is on top, meaning changes you make now will apply to it:
$ quilt top
mktemp-version-my_custom.patch


3.

Edit the files you want to include in your patch. In this example you make minor changes to the README file and a source file:
$ quilt edit BUILD/mktemp-version/README
$ quilt edit BUILD/mktemp-version/mktemp.c

NOTE: quilt edit file uses the editor set in your EDITOR environment variable, or vi if none is set. As an alternative to using quilt edit file, you can use quilt add file and then edit the file as you normally would.

For the purposes of this procedure, you could, for example, add some text to the README file and modify the Usage statement in the mktemp.c file.

4.  Your current working patch now has two files:
$ quilt files
BUILD/mktemp-version/README
BUILD/mktemp-version/mktemp.c

5.

Before you save the patch, confirm that your source changes work by building the package:
$ cd prjbuildDir
$ make -C build mktemp

You can, for example, look at the build/mktemp-version/BUILD/mktemp-version/mktemp executable to see that it contains your patched usage statement:
$ strings build/mktemp-version/BUILD/mktemp-version/mktemp | grep Usage
Usage: %s [-V] | [-dqtu] [-p prefix] [template] [my patch test message]

Repeat these steps until your changes build successfully.

6.  Regenerate (refresh) the patch so that it includes your successful changes to the files:
$ cd build/mktemp-version
$ quilt refresh
Refreshed patch mktemp-version-my_custom.patch

Create a Layer and Save Your Patch

1.

Create a layer that you will use to store your patches and related files. For example, create a layer called mod_mktemp with the corresponding command_name/ and patches/ directories under dist/:
$ mkdir -p $HOME/layers/mod_mktemp/dist/mktemp/patches/

2.

Add a makefile to the layer. In this case you can just copy the existing one from the development environment:
$ cp installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/mktemp/Makefile \
      $HOME/layers/mod_mktemp/dist/mktemp/

3.

Save your patch to the layer:


$ cd prjbuildDir/build/mktemp-version
$ cp wrlinux_quilt_patches/mktemp-version-my_custom.patch \
      $HOME/layers/mod_mktemp/dist/mktemp/patches/

4.

Edit the patch file so that paths will match the context of the patch when it is applied:
$ editor $HOME/layers/mod_mktemp/dist/mktemp/patches/mktemp-version-my_custom.patch


Remove the initial paths and add a suffix to the original file name for backup. The differences between the patch before and after you edit it are shown in the following example:

Before:
--- a/BUILD/mktemp-1.5/README
+++ b/BUILD/mktemp-1.5/README
...
--- a/BUILD/mktemp-1.5/mktemp.c
+++ b/BUILD/mktemp-1.5/mktemp.c

After:
--- mktemp-1.5/README.orig
+++ mktemp-1.5/README
...
--- mktemp-1.5/mktemp.c.orig
+++ mktemp-1.5/mktemp.c

You now have a patch to apply to the package source, but you must modify the spec file to include it.

Copy and Modify the Existing Spec File

To inform the build system of your new patch, you must modify the package.spec file. This file registers the patches to be applied to the SRPM package.

1.  Copy the spec file to your layer:
$ pwd
prjbuildDir/build/mktemp-version/
$ cp SPECS/mktemp.spec $HOME/layers/mod_mktemp/dist/mktemp/

2.

Modify the spec file to add an entry for the patch:


$ editor $HOME/layers/mod_mktemp/dist/mktemp/mktemp.spec

List your patch(es) with a Patch number statement and then include them with a %patch macro. Wind River patches start with number 500, so if the spec file already includes Wind River patches you might start your patch numbers with the next available number, for example 504. Alternatively you may want to start your own numbering sequence, say in the 600s. For this example add the following to the mktemp.spec file somewhere before the %prep section:
Patch600: mktemp-version-my_custom.patch

And add the following after the %prep and before the %build to include it in the prepatch section of the spec file:
%patch600 -p1 -b .my_custom

In this typical example, the -p1 parameter means ignore the first directory name from each file name in the patch file, and the -b flag means generate a backup of the file before patching it. Also note that the %patch macro will automatically prefix the package name and version number and suffix the .patch extension, so they are not included.

3.  Add any other patches and sources that you created for this package to your layer:
$ cp SOURCES/your_patch_or_source_files $HOME/layers/mod_mktemp/dist/mktemp/patches/
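To summarize steps 2 and 3, the relevant portion of your copy of mktemp.spec might end up looking something like the following. This is only a sketch: the existing Wind River patch entries, the %setup line, and the surrounding content of the real spec file may differ.

Patch500: mktemp-wr-integration.patch
Patch600: mktemp-version-my_custom.patch

%prep
%setup -q
%patch500 -p1
%patch600 -p1 -b .my_custom

%build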


Testing Your Patches

Now that you have the spec and patch files in place, you can test your patch setup.

1.  Create a new project that includes the layer you created:
$ configure --enable-kernel=standard --enable-rootfs=glibc_small \
      --enable-board=common_pc --with-layer=$HOME/layers/mod_mktemp/

2.

Add the mktemp package:


$ make -C build mktemp.addpkg

Note that you could also make this step part of your layer with a pkglist.add file.

3.  Specify the distclean target to the make command. This will start the package's build directory with a clean slate.
$ pwd
prjbuildDir
$ make -C build mktemp.distclean

NOTE: You do not have to use the package.distclean target when you first perform this procedure, but you may be re-iterating this procedure until the patch works correctly. You should use the package.distclean target before each re-iteration.

4.  Specify the patch target. This will apply the spec file patch, and the spec file will then apply the new custom patch.
$ make -C build mktemp.patch

If there are errors, use the error messages to fix the respective patch file(s) that you saved. For example, confirm the registered and listed file name spellings, the syntax in the spec file, the patch ordering, and the before/after file name entries in the source patch file. Repeat the package distclean and patch rules until all errors are resolved.

5.  By stopping the package build at the patching stage, you can view your patched source to see that your patches were applied, for example:
$ editor build/mktemp-version/BUILD/mktemp-version/README
$ editor build/mktemp-version/BUILD/mktemp-version/mktemp.c

or
$ diff -Nur build/mktemp-version/BUILD/mktemp-version/README.my_custom \
      build/mktemp-version/BUILD/mktemp-version/README

You should see that the changes you made to the source files have been applied.

6.  You can now finish building your package:
$ make -C build mktemp

Or just build the file system:


$ make fs


Any project configured with your new layer will contain the patches for the mktemp package. To keep from rebuilding the package each time, you could build it once and then copy the mktemp* RPMs to your layer, for example:
$ make -C build mktemp
$ mkdir -p $HOME/layers/mod_mktemp/RPMS/glibc_small/i686/
$ cp prjbuildDir/export/RPMS/i686/mktemp* \
      $HOME/layers/mod_mktemp/RPMS/glibc_small/i686/

13.4 git and the Kernel


The development of the Wind River Linux 3.0 kernel focused on specific goals designed to do the following:

Deliver the kernel in a git tree. In the previous organization, patches were spread around multiple directories, and that provided no easy way to tell which directories had been incorporated or applied.

Deliver Wind River additions on top of the base kernel.org tree in a seamless fashion. This lets you browse file history and see Wind River changes as well as previous core kernel.org changes all together in a continuous fashion.

Create a history-clean, branched, and tagged git repository for transparent access to the logically divided features that comprise the 3.0 kernel.
NOTE: A history-clean git repository is one in which features are introduced in completion and do not include development history. Without this, you might also see, for example, test patches being applied and then reverted, and 50 patches fixing minor issues for the feature. The history-clean repository is a set of clearly-defined chunks that introduce functionality.

Use a single git repository to contain all the kernel types, features, and BSPs, using branches and tags to give clear boundaries to each of these.

Leverage community best practices and workflow around git source management.

13.4.1 An Overview of gits Role in the Kernel


The kernel in Wind River Linux 3.0 is constructed and managed as a git repository. Patches to the kernel are now represented by commits in the kernel repository. This has multiple effects:

The git tree presents a uniform interface to modifications to the upstream kernel. Major features are tagged, branched, and presented in a clean manner. In other words, git integrates Wind River, partner, and other patch sources seamlessly with the kernel.org git history.

Wind River Linux kernels are built directly from the git repository, which has been previously constructed, by checking out the appropriate BSP branch and compiling the kernel. This means that there are no patch failures when using the constructed tree, and:


Multiple boards and kernel combinations are present in a single git tree, significantly streamlining the maintenance of multiple board installations.

Differences between two BSPs are easily extracted by git, no longer requiring a recursive diff on two independent source trees (see the example following this list).

The kernel source and build directories are kept separate. When coupled with the integration of multiple board configurations, a single install can easily build and maintain multiple kernel variants.

Because the kernel is built directly from a git repository, any end-user changes to the source files during development are automatically tracked by git and can be committed, exported, and saved with a git-based workflow.
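For example, once the project's kernel tree has been cloned (with make linux.devprep, described later in this chapter), you can compare two BSP branches directly. The branch names below follow the bsp-kernel_type convention and are placeholders for branches that actually exist in your tree:

$ cd linux
$ git diff --stat bsp1-standard bsp2-standard
$ git diff bsp1-standard bsp2-standard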

The 3.0 kernel continues to leverage the advantages of having pristine source plus patches and blends it into a git repository. The 3.0 kernel uses git to provide patch and configuration sharing by using common branches as the base of more specific configurations. Using git in the 3.0 kernel means that the git workflow and tools can be leveraged to enhance development, seamlessly integrate with the external developer community, and employ distributed source management.

The Kernel Build Workflow

The high level kernel build workflow with Wind River Linux version 3.0 is as follows:

1.  Clone a fully patched, branched, and annotated git repository into a build directory.

2.  Check out the BSP or kernel type branch.

3.  Any additional patches and config files are detected and layered onto the appropriate leaf branches in the tree as additional commits. The core patches are not used because they have already been captured as commits, so the tree is often not patched at all.

4.  Build the kernel.

These steps are normally performed automatically by the build system. See also The Kernel Lifecycle and Developer Workflow, p.170.

The kernel-cache

Unlike previous releases of Wind River Linux, the focus of the changes to the kernel is not the patches themselves but is instead the constructed git repository. Wind River maintains a repository that contains the patches and encodes the information required to construct the kernel git repository. The checkout of this internal repository that was used to create the pre-constructed git tree is provided as a reference. There should be no need for you to directly manipulate anything in it. The same repository contains all the configuration fragments for the Wind River Linux kernel. This repository is called the kernel-cache, in the sense that it is the permanent store for the patches used to construct and re-construct the git repository.

You do not directly manipulate the kernel-cache, which is captured in the Wind River 3.0 kernel tree itself. This means that it is completely optional. The kernel-cache is processed by the build system to generate a meta-series that describes the steps required to create a fully branched, tagged and history-clean git repository. While Wind River maintains a master kernel-cache, you can create multiple kernel-caches, and use them to construct additional kernel trees. Using a custom kernel-cache allows patches to be shared and included between kernel-caches. This means that an add-on kernel-cache can reference the embedded Wind River cache and modify how it is used to construct a kernel tree. This would be an example of an optional, and more complex use case of a power user who is maintaining several BSPs with a shared feature set.

The Kernel Source Tree

The patches and other sources that are used to create the Wind River Linux source tree are grouped by functionality. These groupings translate to a tagged and branched git tree as shown in Figure 13-1.
Figure 13-1 The Wind River Linux Source Tree

[Figure 13-1 depicts a git tree rooted at kernel.org, branching through wrs_base and standard into tagged feature branches, BSP branches, and the cgl kernel type with its own BSP branches.]

The branches shown in Figure 13-1 are as follows:

wrs_base: branches from kernel.org at a defined point, for example at version 2.6.27.15.

standard: common kernel functionality for all boards is created in this branch.

feature (a), (b): tagged and separated features in the standard branch (for example, lttng, yaffs2).

bsp1,2-standard: BSP branches at the top of standard. Any board-specific changes are contained in these branches.


cgl: enhanced kernel type that branches at the top of standard; therefore, it inherits standard's features and adds new features.

bsp1-cgl...bsp3-cgl: BSP branches at the top of the cgl kernel. Although a separate branch from the -standard BSPs, the same patches are used to construct the branch for the -standard and -cgl kernel types and between all cgl boards, so the BSPs are identical in board-specific functionality.

The tags represent the completion of feature additions for a given kernel type on the branch of that respective kernel type.

13.4.2 Starting to Use git


The following serves as a general introduction to the use of git with Wind River Linux. Because git is an open-source, community-maintained tool, much information is available elsewhere. See, for example, http://git.or.cz/gitwiki/GitDocumentation for overall git documentation, and see http://git-users.googlegroups.com/web/gitfrombottomup.pdf for a good introduction to git concepts and usage. In addition, http://git.kernel.org/ provides links to a tutorial, overview, and more. Although the Wind River 3.0 kernel uses many custom additions in order to create this fully-populated git tree from thousands of changesets, nothing has been done to break the community git workflow, which remains fully supported.

Types of Commands

There are a few broad categories for the types of commands available:

Determine what has changed and look at the patches: git whatchanged, git branch, git checkout

Apply patches to the git tree: git fetch, git pull, git am, kgit import, kgit meta, git apply, git rebase, guilt push, guilt pop, guilt refresh

Send changes for upstream inclusion: git commit, kgit export, git request-pull, git format-patch, git send-email

Note that, in addition, there are some build targets that you can use to manipulate the meta series:

linux.rescc: regenerate the meta series used to construct a tree.
linux.reconfig: reconfigure the kernel from the config file fragments.


Tools Overview

A specific set of tools can be used to work with the Wind River Linux 3.0 kernel. These are summarized in this section.
NOTE: You can add prjbuildDir/host-cross/bin to your path to use the tools. The tools are linked from the host tools layer, for example:
prjbuildDir/host-cross/bin/git -> /home/user/WindRiver/wrlinux-3.0/layers/wrll-host-tools/host-tools/bin/git

git

git and the many commands that compose the toolset are used to manage the low-level details of the kernel git tree. git is used in a standard manner and Wind River follows the best practices of the kernel community.
guilt

Use guilt to track the patches that created the kernel git tree. guilt is a community add-on to git and adds the ability to manage a series of patches directly in a git repository. guilt provides the ability to maintain git branches in a manner similar to quilt and raw patches. The use of guilt gives the ability to manipulate commits as units or building blocks and to keep them contained and refreshed without using git internals directly.
scc

The patch management system of Wind River Linux version 2.0 (called smudge) has evolved to meet the demands of creating and managing the kernel in a git repository. The engine that meets those requirements in Wind River Linux version 3.0 is the series config compiler, called scc. scc unifies the information required to fully describe a kernel's features. It processes feature descriptions (.scc files) that contain patches, branching, tagging, and other manipulations. In its most basic use case, you can think of an scc file as equivalent to a patches.list file, or the series file of quilt.

scc works in a modular manner to compile each individual feature and link them into a script. When run, that script produces a meta-series that describes everything required to construct a git repository. The construction and generation of a meta series is one phase in creating a kernel git repository. There is a secondary phase that interprets the meta series to build the tree. The kgit tools (described below) process the meta-series and construct the 3.0 kernel git repository.

If you are just using the pre-generated git tree as is, and only layering your own changes on top of that, then you will likely never use the full functionality of scc that is deployed during a complete tree generation. See 13.5 Kernel Patching with scc, p.180 for more on scc.


kern_tools and kgit

The kern_tools are a set of scripts written by Wind River to create and manipulate the kernel git repository in a standard way. They provide the ability to import, export, and manage the commits that comprise the 3.0 kernel tree. The kern_tools are:

kgit: dispatches to sub-kgit commands. Also used to identify the type of a source repository.
kgit-clean: checks tree consistency and can optionally remove old branches.
kgit-import: imports patches and features in many formats. Interfaces with git, guilt, and the Wind River Linux git tree structure.
kgit-publish: takes a Wind River Linux kernel git tree and converts it into a tree that can be shared or used for build system integration.
kgit-scc: wrapper around scc, used for .scc file searching and for tree construction. Not normally run manually; it is part of the kernel build system.
kgit-checkpoint: converts the files that track the Wind River Linux kernel repository's internal structure into a commit. When the checkpoint is restored, the tree is available for development.
kgit-config: saves and reads configuration values specific to the Wind River Linux kernel git repository.
kgit-init: initializes the base of a Wind River Linux kernel git repository. Not normally run manually; it is part of the kernel build system.
kgit-pull: wrapper around git pull.
kgit-classify: manipulates the feature descriptions that are used to construct a Wind River Linux kernel git tree.
kgit-export: exports configuration and patches from the Wind River Linux kernel git tree.
kgit-meta: interprets a meta series to construct a set of branches, patches, and tags in a Wind River kernel git repository.
kgit-rebase: produces a rebase report that indicates which patches should be propagated between branches.

The Kernel Lifecycle and Developer Workflow

Step 1: The Kernel Source

The majority of the patching of the 3.0 kernel is already completed for you, and only additions to the default Wind River patches are performed on the fly. This is due to the fact that the 3.0 kernel git repository is constructed by applying patches on top of the kernel.org base and effectively capturing the patches as git commits. The patches and configuration files that were used to create the repository are captured within the repository itself. They can be found in the kernel source tree under the wrs/patches directory and are maintained on a per-branch basis.


For example, to see a list of the patches that make up the standard branch, look under linux/wrs/patches/standard/links/path_to_patches. Note that this is not the best way to see what changes are in a particular branch of the constructed tree; git whatchanged and other git commands are more effective and are described below. It is still useful to describe the processing required to create the constructed repository, since that leads to the branching strategy and is the same process required when adding new features to the kernel.

As discussed in The kernel-cache, p.166, the source for the kernel git repository is called the kernel-cache. The kernel-cache is processed by scc to generate the meta series used to construct the tree. The patches and configuration within the kernel-cache are organized in a similar manner to the organization of Wind River Linux 2.0, with the significant difference of integrated kernel configuration and patching.
NOTE: Although the kernel-cache can be found within the kernel tree itself, it is mainly informational and you should rarely (if ever) directly modify it.

Wind River organizes kernel modifications into logical categories to ease maintenance and visibility of changes. The actual on-disk organization is not important, but the representation as branches, tags, and commits in the constructed tree is important.
Construction of the git Repository

The Wind River kernel git repository is constructed by processing feature descriptions in a top-down manner. At the top of the tree are the leaf nodes, which represent features that do not have sub branches, and when checked out can be compiled into a valid kernel. In general, BSPs form the leaf nodes of the kernel tree. When constructing a tree, the leaf nodes are found, processed by scc, and used to construct the git repository. Leaf nodes are feature descriptions (in .scc files) that include sub-features and configuration data and are the entry point for .scc processing.

The high-level phases of tree construction are:

1.  scc compiles the leaf nodes. This means that all included kernel features are compiled, kernel configuration data is logged, and scripts are created to represent each leaf node. Transforms, patches, and conditionals are processed and compiled into the final script for later execution.

2.  The leaf node scripts are executed to produce a set of meta-series. Patch transforms and substitutions are performed at this point.

3.  Each meta-series is interpreted by kgit-meta. At this point branches are created, patches are converted into git commits, and tags are applied to the tree. Kernel configuration data is copied into the kernel tree structure for later use.

4.  The tree is checkpointed and published. Checkpointing captures the state of the tree, the patch to commit mappings, and anything else required to build or manipulate the tree. Publishing makes the tree available for use by the build system.


The result is the fully-branched and tagged kernel git tree that you use. The layout and organization used to construct the base tree is not meant to be modified by you; it is meant to be extended, as described in the next step. The published, fully-patched kernel git repository is placed as a bare clone in wrll-linux-2.6.27/git/default_kernel and is processed automatically by the kernel build system.
NOTE: A bare clone is one in which only the git repository data is present, not the actual files. So you will not see any source, although it is internally represented by git.

Step 2: The Kernel Tree Extension

At this point, any templates, profile additions, or other command line specified features are processed and used to extend the kernel git tree. (Kernel tree extension is the equivalent of the version 2.0 kernel patching phase.) Tree extension is done by processing the kernel feature descriptions that were requested, comparing against those which were used to construct the kernel git tree, and then applying any extensions to those features to the existing tree. The feature descriptions used to build the kernel tree are stored in linux/wrs/cfg/kernel-cache.

The first part of this phase creates a local clone of the default_kernel repository (mentioned in Step 1) into the local kernel source directory. In this phase, scc is invoked in a light-weight manner to re-compile the existing kernel feature descriptions and any add-on kernel features that have been passed with templates, profiles, or on the command line. Once the existing features and add-on features are linked into the executable, a new meta series is generated. The new meta-series is processed to detect differences from the constructed tree. Extensions are normally kernel configuration changes, or patches added to the end of the BSP branch. This matches standard git workflow as git commits are added in chronological order.

The content used to construct the Wind River common branches, such as wrs_base or standard, should not normally be modified. If they are modified, any sub-branches would have to be rebased in order to pick up the changes in the common branch.
NOTE: Patches (in the form of new commits) are layered on top of a parent commit.

This parent commit represents the state of the whole tree at the time of the new child commit. The context and content of a patch depends on the content of the associated parent files. If you rebase (that is, try to apply your patch to a different parent commit), then you will have to fix any context or content issues that may arise. The rebase is a manual process, but is detected during tree extension and reported to you. The reason this rebase is required is that any changes to the common branch will be after the branch point for the BSP node. If the BSP is to see the change, it must have its branch point updated to the new end-of-branch or new commit ID. At this point the tree is ready to build.


Step 3: Building the Kernel

To configure and build a kernel, the proper BSP branch is checked out and used. The branches for valid BSPs follow the naming convention of bsp-kernel_type. Only valid board and kernel type combinations are captured in the constructed tree. This is done automatically by the kernel build system and nothing needs to be done by the developer. See 9. Configuring the Kernel for details on how the kernel is configured. Note that the kernel source and build directories are separated. The linux source is in linux/ while the build directory is linux-board-kernel-build/. This means that switching to a different board or kernel combination can be accommodated in a single build directory.
Step 4: Kernel Development

You can perform iterative development on the kernel tree once it has been configured. Wind River does not enforce a particular development style and any workflow may be adopted. Due to the tight integration with git, Wind River recommends kernel.org-style workflows. Development should be done on the BSP branch starting as follows:
$ git checkout bsp-kernel

To see the changes in a particular branch, use:


$ git whatchanged branch

To see how Wind River categorized the patches for a particular feature, use:
$ kgit classify ls
$ kgit classify cat feature_name

To see all the tags and branches in the tree, use:


$ git tag -l
$ git branch

To check tree consistency, use:


$ kgit clean -c

Step 5: Saving Patches

Be careful to export any changes you make to the kernel in the board build directory (prjbuildDir/build), because the kernel source directory in a board build is a clone of a master kernel repository and the entire board build is transient by nature. It will be lost, for example, with a make linux.distclean. There are many ways to ensure that development is not lost:

git format-patch

Commit changes to the local tree and export them with git format-patch as follows:
$ git checkout branch

Identify commits of interest. If the tree was tagged before development:


$ git format-patch -o save_dir tag

173

Wind River Linux User's Guide, 3.0

If no tags are available:


$ git format-patch -o save_dir HEAD^      # last commit
$ git format-patch -o save_dir HEAD^^     # last 2 commits
$ git whatchanged                         # identify last commit
$ git format-patch -o save_dir commit_id
$ git format-patch -o save_dir rev_list

On the next tree construction you can manually apply the changes with git am, or you can add them to a custom kernel-cache or template to have them automatically applied.
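For example, to reapply the saved patches by hand after the tree has been reconstructed, something like the following works on the appropriate BSP branch; save_dir is the directory passed to -o above, and bsp-kernel_type is the branch naming convention described earlier.

$ cd linux
$ git checkout bsp-kernel_type
$ git am save_dir/*.patch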

git push

Commit changes to the local tree and export them with git push as follows:
$ git push ssh://userid@upstream/repository mybranch:remote_branch

Or you can even use a pull request


$ git-request-pull start_commit url end_commit

NOTE: If you rewrite the history, or reconstruct or rebase git commits that use the same branch names or tags, you have what is called a non-fast-forward situation. This means that the two commit trees do not share the same structure, and the remote branch must be rewritten to perform an update. Depending on the source repository, there can be problems performing pushes if the tree is being constructed in a non-fast-forward manner.
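If you have decided that the remote branch really should be replaced by your rewritten history, a forced push performs the non-fast-forward update. This is a generic git sketch, reusing the placeholder URL and branch names from above; use it with care, because it discards the commits previously on the remote branch.

$ git push --force ssh://userid@upstream/repository mybranch:remote_branch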

kgit and export to a cache

Commit changes to the local repository, classify them, and directly export them to a custom kernel cache.
$ kgit import -t treeish start ... end_commit

Note the start patch and end patch names and optionally classify the changes.
$ kgit export -p start ... end_patch dir

make

Again, commit any changes, and ensure they are in the guilt series and classified.
$ make linux.export export_dir=path_to_layer

13.4.3 Examples
The following sections demonstrate some ways to use git and associated tools with Wind River Linux.
NOTE: The following examples assume that prjbuildDir/host-cross/bin/ is in your path.

Adding a Patch to the Kernel

scc has been designed to work in a top-down fashion to provide explicit control of kernel patching and configuration. It is best to understand the kernel implementation and use the provided tools, rather than bolting-on patches and configuration files from external templates.


That being said, there is a way to do this and to keep the mechanism for explicit control.
Technique #1: Using a Template

1.  Create a template with a linux/ subdirectory, just as for previous releases:

$ mkdir templates/features/my_feature/linux

2.  In that directory place your feature description, your patch, and configuration files (if required):

$ ls templates/feature/my_feature/linux
version.patch  my_feature.scc  my_feature.cfg

The .scc file describes the patches, configuration files, and where in the patch order the feature should be inserted:
patch version.patch
kconf non-hardware my_feature.cfg

3.  Configure your build with the new template by supplying the --with-template=features/my_feature option to the configure command line.

4.  Build the kernel:

$ make linux

Technique #1a: An Alternate Method

If you do not require a full template, you can place a .scc file at the top of the build (prjbuildDir), along with configuration files and patches. The build system will pick up the .scc file and add it to the patch list automatically.
Technique #2: kernel-cache with the BSP Name Duplicated

1.  At the top of a layer, create a kernel cache. The build system will recognize any directory of the name kernel-*-cache as a kernel cache. For example, do the following:

$ cd my_layer
$ mkdir kernel-temp-cache

2.  Make a directory with the BSP or feature name:

$ mkdir kernel-temp-cache/my_feat

3.  Create the .patch, .cfg, and .scc files in kernel-temp-cache/my_feat instead of in templates/feature/my_feature/linux as you did in Technique #1: Using a Template, p.175.

4.  Configure the build with the feature added to the kernel type by passing --with-kernel=standard+my_feat/my_feature.scc to the configure command line.

5.  Build the kernel:

$ make linux

Technique #2a: An Alternate method

If your feature name overrides the name of a similar feature in the core kernel-cache, you can re-use the original version by including it. This allows a BSP to be overridden in a kernel-cache while continuing to include the original BSP configuration and patches.


This is similar to Duplicating Other Template Names, p.69 except that instead of templates, this is done with .scc files. You create a feature.scc and include in it the statement include feature.scc, where feature is the same feature name as one in the kernel-cache.
Technique #3: git

Here's how you can use git:


$ cd linux
$ git checkout bspname-kernel_name

Then:
$ git-am patch

or
$ kgit-import -t patch patch
$ cd ..
$ make linux

Patch Management

The constructed kernel trees are composed of branches, each of which was constructed from a distinct and separate patch series. To determine which patches were used to construct a branch, do the following:
$ make linux.devprep
$ cd linux
$ git checkout branch    # for example, standard
$ guilt applied

Typically, it is better to just use the commits to look at what built a branch:
$ git whatchanged branch

If you need to refresh a patch:


$ git checkout branch
$ guilt applied

Locate a patch of interest and then:


$ guilt pop patch_right_above_patch_of_interest

Make changes to the tree and finally:


$ guilt refresh

NOTE: The patch can be exported with kgit export -p top outdir at this point.
$ guilt push -a

You have now re-written the history and changed the commit IDs for all the patches that make up the series. No dependent branch will see those changes, since they have branched off the old commit ID of the patch you just refreshed. To make changes visible to other branches, you must propagate the change:
$ git checkout child_branch
$ guilt rebase parent_branch

This removes all patches and commits that are currently applied, creates a branch at the commit ID, and then re-applies all patches. Continue this up the chain to the leaf branch.


BSP Example

The following example illustrates the bootstrap of a BSP. Perform these steps before each of the following techniques:

1.  Create the required board template files to configure a build with the new BSP, for example, mylayer/templates/board/my_bsp/config.sh. (See the Wind River Linux BSP Developers Guide for more information on BSP files.)

2.  Configure a build.

3.  Clone the default_kernel tree:

$ make linux.unpack
$ make linux.devprep

Technique #1: git

1.

Create the BSP branch from the appropriate kernel type:


$ cd linux

The naming convention for auto-build is bsp-kernel_type, for example:


$ git checkout -b my_bsp-standard standard

2.

Make changes, import patches, and so on:


$ guilt init

wrs/patches/my_bsp-standard has now been created to manage the branch's patches.

Option #1: Edit files, guilt import.


$ guilt new extra-version.patch
$ vi Makefile
$ guilt refresh

Add a header:
$ guilt header -e

Describe the patch using best practices, as in Example 13-1:


Example 13-1  Header Example

From: John Doe <john.doe@windriver.com>

Adds an extra version to the kernel

Modify the main EXTRAVERSION to show our bsp name

Signed-off-by: John Doe <john.doe@windriver.com>

Option #2: Import patches.


$ git am patch

or
$ git apply patch
$ git add files
$ git commit -s

or
$ kgit import -t mbox mbox
$ kgit import -t dir path_to_directory_with_series
$ kgit import -t patch patch


3.  Configure the board, and save relevant options:

$ make ARCH=arch menuconfig

4.  Save the configuration changes for reconfiguration:

$ mkdir wrs/cfg/cache/my_bsp
$ vi wrs/cfg/cache/my_bsp/my_bsp.cfg

5.

Classify the patches:


$ kgit classify create kernel-foo-cache/my_bsp/my_bsp
$ kgit classify -v mv extra-version.patch my_bsp
$ kgit classify cat my_bsp

6.  Edit the category:

$ kgit classify ed my_bsp

7.  Link the configuration to the patches, and add this to the category:

scc_leaf ktypes/standard my_bsp-standard
kconf hardware my_bsp.cfg

8.

Export:
$ kgit export -v -b my_bsp-standard -x links \
      -p all -c my_bsp path_to_layer

9.

Test build:
$ cd ..
$ make linux TARGET_BOARD=my_bsp kprofile=my_bsp use_current_branch=t

Assuming the patches have been exported to the correct location, future builds will now find the board, apply the patches to the base tree, and make the relevant branches and structures. In addition, the special build options will no longer be required.
Technique #2: kernel-cache

1. Create the board template as in Technique #1: git, p.177.

2. Create a kernel-name-cache in a layer.

3. Manually create the directory to hold the .scc and .cfg files for the BSP (see Technique #1: git, p.177 for the example).

4. Add patches to the BSP directory, and add them to the .scc file with the patch directive.

5. Make linux.patch:

$ make linux.patch

6. Resolve any patch conflicts and allow the board to build.

Although this technique seems easier, it does not leverage the existing kernel.org workflow: patches must be applied and resolved in place and then exported before work can continue. The first technique allows a BSP to be started on an existing tree and worked on in place.



Patch Merge

Merge patches as follows:

1. Checkout a branch:

$ git checkout branch    # typically a BSP branch

2. Merge the patches.

a. Single patch:

$ git am mbox
$ git apply patch
$ kgit import -t patch patch

b. Multiple patches:

$ git am mbox
$ kgit import -t dir dir

If you use kgit import -t dir, you can use a patch resolution cycle such as this to locate and resolve rejects:

$ wiggle --replace path_to_file path_to_reject
$ guilt refresh

(wiggle helps resolve patch failures by using word-wise comparisons; see prjbuildDir/host-cross/share/man/man1/wiggle.1.) Or use manual resolution:
$ git add files
$ git commit -s

or
$ git apply --reject .dotest/0001
$ git add files
$ git am --resolved

or use your merge tool of choice.

3. Continue the series:

$ kgit import -t dir dir

or
$ git am --continue

4. Export patches:

$ kgit export -p first_patch...last_patch dir

or
$ git format-patch last_commit^ -o dir

or
$ git push ..

You can also import changes with git pull, git fetch or rebase, and so on. In this case you should follow standard git practices for resolving conflicts, with merge commits recording the results.



Sharing a Kernel

Once a tree has been constructed, built, and the changes deemed acceptable, you can reproduce the build without exporting the patches or reconstructing the tree:
$ make linux
$ kgit publish -a linux linux.published

You can now place the output directory linux.published in the kernel layer (wrll-linux-2.6.27/git) as the default_kernel repository or you can push it to a remote server. Once pushed, subsequent calls to make linux check out the previously constructed branch and build the kernel.
NOTE: You can push changes directly to remote trees without publishing.
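For example, to place the published tree on a remote server instead of copying it into the kernel layer, standard git commands apply; this is only a sketch, and the server and repository path below are hypothetical:

$ cd linux.published
$ git push --mirror user@gitserver.example.com:/git/default_kernel.git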

13.5 Kernel Patching with scc


The scc script is the logic that controls the selection of the kernel patches passed to the build system during the kernel patching phase. The following describes basic scc functionality.

The patches are largely self documenting. The .scc files document the overall patch strategy for the kernel or feature. The patches themselves have a header that describes the specifics of the patch.

Normally all interactions with scc are handled by the build system and it should rarely be invoked from the command line. The rich feature set of scc is primarily used in constructing the git tree from the kernel cache. Typical end users will, at most, simply list some of their custom add-on patches and configuration changes in a simple .scc file they create in their project or custom template.
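As a rough sketch of that end-user case, such a .scc file might do no more than name a configuration fragment and a patch; the file names below are hypothetical:

# my_feature.scc -- minimal add-on configuration and patch
kconf non-hardware my_feature.cfg
patch 0001-my-custom-change.patch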

Kernel Patching Design Philosophy

Unlike other packages in the build system, the kernel is not single purpose or targeted at a particular piece of hardware. It must perform the same tasks and offer the same APIs across many architectures and different pieces of hardware.

The key to managing feature-based patching of the Linux kernel is to remove both the distributed control of the patches (subdirectory-based patches.list files) and hand editing of the patch files. Replacing these two characteristics with script-based patch list generation and a method to control and describe the desired patches with a top-down approach eases the management of kernel patching complexity. Additionally, a direct mapping between BSPs and profiles can be easily made, increasing maintainability. The scc script has been implemented to control the process of patch list generation and feature-based patching.



In the simplest example, scc files look very similar to the patches.list of earlier releases. One notable difference is that the metadata concerning the license, source, and reviewers of the patch is contained inside the patch itself and not in the scc file. This information can be in the scc file, but only as a secondary source of information.

scc Facilities

scc provides the following facilities:

- Top down, feature-based control of patches, which allows a feature- and profile-based global view of functionality and compatibility to dictate which patches should be applied. It also allows feature- and architecture-specific patch context modifications to be created by each individual feature.

- Feature inheritance and shared patches, meaning that each feature may explicitly include other features and inherit their patches. Each feature can then modify the inherited patch list and substitute slightly different patches to work in its context. This allows the sharing and reuse of patches by changing only the minimum amount and context of existing feature patches.

- Upstream, feature-based patches can be logically grouped and used in many different patch stacks. This allows isolation and combination testing of features and allows a single set of patches to be used on multiple platforms.

- Modifications to a feature patch set are contained in the modifying top-level feature's directory, leaving the original patch in its pristine form. These are called patch context mods and can be architecture-, platform-, or feature-based. Patch context mods can be identified by the name of the original patch on which they are based plus a suffix of the feature name that required the modification of the original patch.

- Associates kernel configuration directly with the patches that comprise a kernel feature.

- Direct mapping of published kernel feature compatibility profiles to named patch stacks.

scc Files

scc files are small, sourced shell scripts. Not all shell features should be used in these scripts, and in particular no output should be generated, because the script is interpreted by the calling framework. You can use conditionals and any other shell commands, but you should be careful to use only basic, standard commands.

A feature script may denote where it should be located in the link order. This is only used by scripts that are not being included by a parent or entry point script and that you wish to be executed. The available sections are INIT, MAIN, and FINAL. Denote the section names in a .scc file as follows:
# scc.section section_name

Any variable passed to scc with the -D=macro is available in individual feature scripts. To see what variables are available, locate the invocation of scc and search for defines.
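As an illustration of the shell nature of these scripts, the following minimal sketch assumes a variable (here called KARCH) was passed to scc with -D, and uses a basic conditional to select a patch; both the variable and the patch name are hypothetical:

# scc.section MAIN
# apply this patch only when the assumed KARCH variable is "arm"
if [ "$KARCH" = "arm" ]; then
    patch 0001-arm-only-fix.patch
fi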



The following built-in functions are available:

dir - Changes the current working patch directory, and subsequent calls to patch use this as their base directory.

patch - Outputs a patch to be included in the feature's patch set. Only the name of the patch is supplied, and the path is calculated from the currently set patch directory.

patch_trigger - Indicates that an action should be triggered and performed on a patch. The syntax is:
patch_trigger condition action target_patch_name

The condition can be:

arch - a comma-separated list of architectures, or all.

plat - a comma-separated platform list, or all.

The action can be:

exclude - Use only in exceptional situations where a patch cannot be applied for certain reasons (architecture or platform). When the trigger is satisfied the patch is removed from the patch list.

include - Use to include a patch only for a specific trigger. Like exclude, this should only be used when necessary. It takes one argument: the patch to include.

transform - Modifies the patches in the patch set based on a sed substitution format: /match/replace/. Multiple transforms can be applied in a single feature or across many features.

ctx_mod - Indicates that a base patch has context modifications due to different patch stacks using a common feature. The base patch is almost always the pristine upstream patch, and the ctx_mods are context changes that allow the patch to apply in multiple stacks. ctx_mod takes one argument: the base_patch name to modify as it appears in the common feature. The ctx_mod patch is found in the directory of the feature adding the trigger and must have the name dictated by the condition indicated in the trigger. If platforms or architectures have been indicated in the conditional, the patch takes the form base_patch.archs, where archs is an underscore-separated list of architectures matching the comma-separated list used in the conditional. If all is the arch or plat trigger, the context patch takes the form base_patch.feature_with_the_trigger. A context patch should be version controlled, but not hand edited, and regenerated when required.

include - Indicates that a particular feature should be included and processed in order. There is an optional parameter, after feature_name, to indicate that the default processing order should not be used and the feature must be included after the feature feature_name. Include paths are relative to the root of the directories passed with -I.



Note that changing the default order of large feature stacks by forcing a different order with after can require significant effort to rebase the features' patches if they touch the same source files.

exclude - Indicates that a particular feature should not be included even if an include directive is found. The exclude must be issued before the include is processed.

set_kernel_version - Takes a new kernel version as its argument. This allows a feature to change the effective kernel version and allows other features to test this value with the KERNEL_VERSION variable.

check_board - Tests if a particular board is being patched. This allows a feature to change the patches on a board-specific basis. Logical actions should be based on the return value $?. A 1 indicates that the current board matches the test value; a 0 means that a different board is being patched.
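For example, a feature could use check_board and the return value in $? to apply a patch only when a particular board is being patched; the board and patch names in this sketch are hypothetical:

# apply a board-specific patch only when building for my_bsp
check_board my_bsp
if [ "$?" -eq 1 ]; then
    patch 0001-my-bsp-workaround.patch
fi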

scc File Examples

The following presents some examples on the use of scc. Note that you can get detailed help with scc --help=scc.
Specifying a Leaf Node

This is a BSP branch with no child branches, hence is a leaf on the tree (with comments):
# these are optional, but allow standalone tree construction
define WRS_BOARD name
define WRS_KERNEL kern_type
define WRS_ARCH arch

scc_leaf ktypes/standard common_pc-standard
#           ^                 ^
#           +-- parent        +-- branch name

include common_pc.scc
#   ^
#   +--- include features shared across all kernel types for this BSP

This file reflects common_pc-standard.scc, that is, the common_pc BSP with kernel type standard.
Specifying a Normal Node

Configuration files and patches are specified as shown (with comments):


#                +---- name of file to read
#                v
kconf hardware common_pc.cfg
#  ^      ^
#  |      +-- type: hardware or non-hardware
#  |
#  +--- kernel config

# patches
patch 0002-atl2-add-atl2-driver.patch
patch 0003-net-remove-LLTX-in-atl2-driver.patch
patch 0004-net-add-net-poll-support-for-atl2-driver.patch



Specifying Transforms

This section presents examples of various transforms.

The following changes the order of pending includes: if the passed feature is detected, the first feature is included after it:
include features/rt/rt.scc after features/kgdb/kgdb

The above also changes the order of existing branches.

The following prevents the named feature from ever being included:
exclude features/dynamic_ftrace/dynamic_ftrace.scc

The following causes it to inherit the standard kernel:


include ktypes/standard/standard

The following changes the named patches in the series into patch_name.patch.feature_name, where the substituted patch is in this directory:
patch_trigger arch:all ctx_mod dynamic_printk.patch patch_trigger arch:all ctx_mod 0001-Implement-futex-macros-for-ARM.patch

The following unconditionally excludes a patch:


patch_trigger arch:all exclude ftrace-fix-ARM-crash.patch

NOTE: If a transform (such as exclude or include...after) changes the patch content or patch order within a feature, or the feature order, then it will trigger an auto branch from the point of the last feature it shares in common with the pre-generated branch.


PART III

Deploying your Platform Project


14 Simulated Deployment with QEMU ......................................... 187
15 Network Server Configuration ................................................ 199
16 Deploying Your Board from a Network ................................... 205
17 Deploying Your Board with PXE ............................................. 215
18 Stand-Alone Deployment With Flash Devices ....................... 221
19 Stand-Alone Deployment to Disk ............................................ 227
20 Deploying SELinux ................................................................... 241




14
Simulated Deployment with QEMU
14.1 Introduction 187
14.2 Deployment 187
14.3 Configuration 189
14.4 QEMU Example: Deploying initramfs 193

14.1 Introduction
QEMU is a processor simulator for supported boards. (Refer to your Release Notes for a list of supported boards.) Using QEMU for simulated deployment, no actual target boards are required, and there are no networking preliminaries. QEMU and Workbench are compatible both in User Mode and Kernel Mode. QEMU deployment, for the supported boards, offers a suitable environment for application development and architectural level validation. User-space and kernel binaries are compatible with the real hardware.

Internals

When started, QEMU runs in a pseudo-root environment and starts the NFS server with alternate RPC ports. The simulated target is given a hard-coded IP address of 10.0.2.15, and localhost is visible from the simulated target as 10.0.2.2.

14.2 Deployment
The Getting Started provides an example of how to deploy a QEMU target for user mode debugging. You can also use QEMU to perform kernel mode debugging (KGDB) of supported Wind River Linux targets as described in this section.



Once you have built a platform project for one of the QEMU-supported boards and then built the file system (make fs), you can start an instance of QEMU for that target. Note that after a make fs, the pre-built kernel is automatically copied to the project build directory's export subdirectory. The QEMU simulator loads and executes the kernel found within the export subdirectory, and NFS-mounts the export/dist subdirectory as its root file system. The following example assumes you have built a platform project for one of the supported boards (the example uses the ARM Versatile AB-926EJS platform). When you have created the platform project you can start QEMU from the command line, load the KGDB kernel module, and then connect the debugger from Workbench as shown in the following procedure. In this example, the KGDBOE agent was set to start up automatically when you boot the simulated target; otherwise you are required to manually start this agent.

1. Enter make start-target in your project build directory. For example:
$ cd /home/user/WindRiver/workdir/arm_versatile
$ make start-target

2. If you have built a platform with a small file system just press ENTER; otherwise provide the user name root and password root to log in.

3. At the root prompt, load the KGDB Ethernet (kgdboe) module as follows:

# modprobe kgdboe kgdboe=@/,@10.0.2.2/

The module is loaded.


NOTE: Refer to Wind River Workbench by Example, Linux Version for details on loading Ethernet as well as Serial KGDB target modules on physical targets.

Accessing the Simulation

You can now use the simulator in various ways.


From the Command Line

Note that in the configuration information given above, the usual host ports are mapped to new port numbers so that you can access the features through the new port numbers. For example, KGDB is usually accessed at port 6443, but you used port 4445 when you connected in the previous procedure. Telnet port 23 has been mapped to port 4441, and ssh port 22 has been mapped to port 4440. You can access the running simulation through those ports with the appropriate tools. For example, from another terminal window on the same host, you could use ssh to log in to the running simulation with the following command:
$ ssh -p 4440 root@localhost
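Similarly, since telnet port 23 is mapped to port 4441, you could connect with a telnet client on the host (assuming the target file system provides a telnet service):

$ telnet localhost 4441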

From Workbench

You can now use Workbench to connect the debugger to the QEMU target.

1. In Workbench, right-click in the Remote Systems view and select New > Connection, then expand the Wind River Linux folder and select Wind River Linux KGDB Connection. Click Next.



2. Select Linux KGDB via Ethernet and click Next.

3. For Remote Host Settings enter the name localhost and change the Port to 4445. Click Next.

4. For Kernel image, browse to the location of your exported kernel image that contains symbols. This is the vmlinux-symbols file and is contained in the export/ subdirectory below your project directory. For example, your path might look something like this:
/home/user/workdir/arm_versatile/export/arm_versatile_926ejs-vmlinux-symbols-WR2-0ap_standard

Click OK and click Next twice until you are at the Object Path Mappings screen.

5. Click Add on the Object Path Mappings screen to add the path to your exported file system. Leave the target path blank and browse to the export/dist host path under your project build directory. For example, it might be:
/home/user/workdir/arm_versatile/export/dist

Click OK and then click Finish.

6. You now have a WRLinuxKGDB_localhost target connection in your Remote Systems view. Select it and click the green connection icon. After a few moments the connection is made. If you have identified the correct symbols file in step 4, the kgdb.c source should be displayed in the editor. Expand the debug context in the Debug View and you will see that System Context is Stopped. The terminal window where you launched QEMU will be frozen.

7. Select the operating system in the Debug View and click the green Resume button to continue system processing.

You can now continue to debug the QEMU target with Workbench. For more information on kernel mode debugging, refer to Wind River Workbench by Example, Linux Version. To disconnect, click the red Disconnect icon in the Remote Systems view. You can stop the QEMU simulator by entering CTRL-A x in the terminal window.

14.3 Configuration
At a terminal, and within the project build directory, you may enter an interactive menu to change default QEMU configurations by entering:
$ make config-target

The menu, with its numbered default configuration values, looks similar to the following:
===QEMU and or User NFS Configuration===
 1: TARGET_QEMU_BOOT_TYPE=usernfs
 2: NFS_EXPORT_DIR=/home/user/WindRiver/workspace/common_pc_prj
 3: NFS_MOUNTPROG=21111
 4: NFS_NFSPROG=11111
 5: NFS_PORT=3049
 6: TARGET_QEMU_BIN=qemu
 7: TARGET_QEMU_AUTO_IP=yes
 8: TARGET_QEMU_USE_STDIO=yes
 9: TARGET_QEMU_BOOT_CONSOLE=ttyS0
10: TARGET_QEMU_GRAPHICS=no
11: TARGET_QEMU_KEYBOARD=en-us
12: TARGET_QEMU_PROXY_PORT=4442
13: TARGET_QEMU_PROXY_LISTEN_PORT=4446
14: TARGET_QEMU_DEBUG_PORT=1234
15: TARGET_QEMU_AGENT_RPORT=udp:4444::17185
16: TARGET_QEMU_KGDB_RPORT=udp:4445::6443
17: TARGET_QEMU_TELNET_RPORT=tcp:4441::23
18: TARGET_QEMU_SSH_RPORT=tcp:4440::22
19: TARGET_QEMU_MEMSCOPE_RPORT=tcp:5698::5698
20: TARGET_QEMU_PROFILESCOPE_RPORT=tcp:5678::5678
21: TARGET_QEMU_KERNEL=bzImage
22: TARGET_QEMU_INITRD=
23: TARGET_QEMU_HARD_DISK=
24: TARGET_QEMU_CDROM=
25: TARGET_QEMU_BOOT_DEVICE=
26: TARGET_QEMU_KERNEL_OPTS=
27: TARGET_QEMU_OPTS=
Enter number to change (q quit)(s save):

There should not normally be a need to change these default configurations. The CTRL+A c command allows you to enter and exit the QEMU monitor, which provides commands from within the simulation. For example:
root@localhost:/root> CTRL+A c
(qemu) help
help|? [cmd] -- show the help
commit device|all -- commit changes to the disk images (if -snapshot is used) or backing files
info subcommand -- show various information about the system state
q|quit -- quit the emulator
. . .
(qemu) CTRL+A c
root@localhost:/root>

Ending the Simulation

You can quit the simulation from the (qemu) prompt with quit, or from the simulator command prompt (root@localhost:/root>) with CTRL+A x.

Command Line Options

Use make start-target TOPTS=option on the command line to pass various options when starting a simulation. Use -h to display the available options:
$ make start-target TOPTS="-h"
Usage ./scripts/config-target.pl [Options] <command>
Options:
 -c                  Use text console
 -gc                 Use graphics console
 -p                  Use telnet proxy as console
 -i #                Increment the remote port offsets by #
                     typically used when starting more than one target
 -d                  Extra script debug output
 -w                  Wait until debugger attaches to QEMU
 -x                  Use an external console defined by TARGET_VIRT_EXTERNAL_CONSOLE
                     and go into the background
 -o                  Output the target start command which you could use to start a debugger with
 -m #                Number of megs of RAM to use on the target
 -su                 Use "su -c" instead of "sudo" for root access
 -t                  Use tuntap
 -cd <iso_file>      Boot from CD (QEMU Only)
 -disk <disk_image>  Boot kernel with disk image
 -cow <cow_file>     COW file for (UML Only)
 -no-kqemu           Do not use the kqemu accelerator
Commands:
 start               Start target, NFS server and proxy (if needed)
 stop                Stop the target and NFS server...
 nfs-start           Start the NFS server
 nfs-stop            Stop the NFS server
 net-start           Start the network server (TUN/TAP)
 net-stop            Stop the network server (TUN/TAP)
 kqemu-start         Load the KQEMU kernel module
 kqemu-stop          Unload the KQEMU kernel module
 allstop              Stop target, NFS server and proxy
 config              Display or change the default configuration

For example, if another QEMU session is running on your host, you can start a second QEMU session by choosing different ports. The -i option does this by automatically incrementing port numbers by the specified amount:
$ make start-target TOPTS="-i 2"

To boot a CDROM image (.iso file) in QEMU, enter:


$ make start-target TOPTS=-cd prjbuildDir/export/image.iso

(See 19. Stand-Alone Deployment to Disk for more on creating and booting .iso images.) You can also boot a hard disk image:
$ make start-target TOPTS="-disk Hard_Disk_Image"

You can also combine options, for example:


$ make start-target TOPTS="-i 2 -cd prjbuildDir/export/image.iso"

to increment the port count and boot a .iso image.

Enabling TUN/TAP Networking

TUN and TAP are virtual network kernel drivers used to implement network devices that are supported entirely in software, making them ideal for use with a QEMU deployment. TAP, short for network tap, simulates an Ethernet device and works with layer 2 packets such as Ethernet frames. TUN, short for network tunnel, simulates a network layer device. It works with layer 3 packets, such as IP packets. Once enabled, TAP creates a network bridge while TUN provides the routing. You can use TUN/TAP networking to configure a network on your host that connects to the QEMU target simulation. If you wish to connect two or more QEMU simulations for testing and debugging, TUN/TAP lets you specify each simulations networking parameters.



Enabling TUN/TAP from Workbench

NOTE: Configuring TUN/TAP networking on the host requires root privileges. You can start the emulation as the root user, or start it as another user and you will be prompted for the root password.

If you used Workbench to create the QEMU target connection, TUN/TAP is enabled by default. It is possible to make changes to the default settings when you create a new target connection or from the Target Connection Properties dialog. The default settings include:

TARGET_TAP_DEV: The device number of the software network tap. The default setting is auto, but you may specify a number for the tap. For example, tap0, tap1, and so on.

TARGET_TAP_UID: The user ID name of the tap device. The default setting is auto.

TARGET_TAP_IP: The IP address of the tap interface. The default setting is auto.

TARGET_TAP_ROOTACCESS: The root access command for starting or making changes to TAP settings. The default setting is sudo, but su -c is also acceptable.

TARGET_TAP_HOST_DEV: The host ethernet interface. The default is eth0.

NOTE: You must configure the TUN/TAP interface once for each system boot.

To access these settings in the New Target Wizard:

1. In Workbench, select the New Connection button in the Remote Systems window to launch the New Connection Wizard.

2. Select the connection type. Since we are accessing TUN/TAP settings for a QEMU deployment, choose Wind River QEMU Connection, then click Next.

3. In the New Connection dialog, QEMU Simulator Configuration section, make changes as necessary to the default TUN/TAP settings.

4. Continue the wizard in accordance with your target connection requirements.

To access these settings for an existing target connection:

1. In the Remote Systems window, right-click on the QEMU target connection you want to make changes on, then click Properties.

2. In the Target Connection dialog, QEMU Simulator Configuration tab, make changes as necessary to the default TUN/TAP settings.

3. Click OK to save the settings.

Enabling TUN/TAP from the Command Line

Enter the following at the command line:


$ make net-start TOPTS="-t"



NOTE: This command must be run as root, or with the -su option from the command line. If you are not logged in as root, sudo will automatically run and prompt you for the root password.

When your simulation is running, view the routing information on the simulation:
root@localhost:/root> route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.200.0   *               255.255.255.0   U     0      0        0 eth0
default         192.168.200.1   0.0.0.0         UG    0      0        0 eth0
root@localhost:/root>

and on the host:


host_$ route
Kernel IP routing table
Destination     Gateway         Genmask          Flags Metric Ref    Use Iface
192.168.200.15  *               255.255.255.255  UH    0      0        0 tap0
192.168.200.0   *               255.255.255.0    U     0      0        0 tap0
190.0.2.123     *               255.255.255.0    U     0      0        0 eth0
default         gateway-02      0.0.0.0          UG    0      0        0 eth0
host_$

For example, the 192.0.2.0/24 IP block is assigned as "test net" for use in documentation and example code. It is often used in conjunction with domain names example.com or example.net in vendor and protocol documentation. Addresses within this block should not appear on the public Internet. Note that 192.168.200.1 is assigned to the host and 192.168.200.15 is assigned to the target. Network applications on the host, for example, may now access the target at 192.168.200.15.
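For example, from the host you could verify connectivity to the simulated target with standard tools; this is only a sketch, and the ssh step assumes the target file system includes an SSH server:

host_$ ping -c 3 192.168.200.15
host_$ ssh root@192.168.200.15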

14.4 QEMU Example: Deploying initramfs


Linux 2.6 and greater kernels contain a gzipped cpio format archive, which is extracted into the root file system when the kernel boots up. Once it extracts, the kernel checks to see if the root file system contains a file named init. If it does, the kernel executes this init file as PID 1. This init process is responsible for loading the rest of the system, including locating and mounting the real root device (if any). If the root file system does not contain an init file after the embedded cpio archive is extracted into it, the kernel uses older code to locate and mount a root partition, then execute some variant of /sbin/init out of that. initramfs is a method of having files available at boot time without having them in a persistent mountable file system. It is linked into the kernel image when the kernel is compiled, so the target board can be booted using only the kernel with initramfs.
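If you want to inspect the contents of such a cpio archive outside of the boot process, the standard cpio tool can list it. This is only a sketch; the archive file name below is hypothetical:

$ gunzip -c initramfs.cpio.gz | cpio -t | less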



Building and Running initramfs

To build and run initramfs, perform the following steps:

1. Configure a BSP. Since initramfs is designed for small file systems, use either glibc_small or uclibc_small to configure a BSP. Using glibc_std or glibc_cgl may increase the kernel size and possibly introduce boot issues as a result. Configure your project by specifying a board, kernel, and file system. For example, enter the following command to specify the ARM Versatile 926ejs board with a standard kernel and small file system:
$ installDir/wrlinux-3.0/wrlinux/configure \
  --enable-board=arm_versatile_926ejs --enable-kernel=standard \
  --enable-rootfs=glibc_small --enable-jobs=5

2. Build the kernel boot image with initramfs. From the project build directory, enter the following command on a single line:

$ make boot-image BOOTIMAGE_FSTYPE=initramfs BOOTIMAGE_TYPE=flash

This creates a bootable file system in the prjbuildDir/export/dist directory that includes initramfs in the kernel. The file system is in export/dist/, for example:
README* bin/ boot/ dev/ etc/ home/ lib/ media/ mnt/ opt/ proc/ root/ sbin/ selinux/ srv/ sys/ tmp/ usr var/

The prjbuildDir/export/arm_versatile_926ejs-initramfs file contains the initramfs-enabled kernel.

3. Run the initramfs-enabled kernel with QEMU. Since initramfs contains the file system, it is not necessary to identify a root file system for QEMU. Enter the following command from the project build directory, all on a single line, to boot the kernel using QEMU:
$ ./host-cross/bin/qemu-system-arm -nographic -k en-us \
  -kernel ./export/arm_versatile_926ejs-initramfs -net user \
  -net nic,macaddr=52:54:00:12:34:56 -M versatileab -nortclk \
  -append "console=ttyAMA0,115200 ip=dhcp rw highres=off UMA=1"

Once the kernel boots, a shell displays in initramfs. For a list of built-in commands, type help.

Switching the file system from initramfs

To aid in your development process, it may be necessary to switch from using an initramfs root file system to a hard disk root file system. The following procedure provides instructions for switching the root file system from initramfs to a hard disk root file system using QEMU. In this process, you create an ext2 file to emulate a hard disk for QEMU.

1. Configure and build a common_pc initramfs image using the following command, entered on a single line, from the project build directory:
$ installDir/wrlinux/configure --enable-board=common_pc \
  --enable-kernel=standard --enable-rootfs=glibc_small --enable-jobs=5

You should substitute the path to your Wind River Linux install directory for the installDir in the example.



2. Create the initramfs boot image, using the following command, entered from the project build directory:

$ make boot-image BOOTIMAGE_FSTYPE=initramfs BOOTIMAGE_TYPE=flash

Run the initramfs-enabled kernel with QEMU. Enter the following command from the project build directory, all on a single line, to boot the kernel using QEMU:

$ ./host-cross/bin/qemu- -nographic -k en-us \
  -kernel ./export/common_pc-initramfs -net user \
  -net nic,macaddr=52:54:00:12:34:56 -M versatileab -nortclk \
  -append "console=ttyAMA0,115200 ip=dhcp rw highres=off UMA=1"

Once the kernel boots, a shell displays in initramfs. For a list of built-in commands, type help. Execute the following command in the initramfs shell to see which root file system is mounted:
# mount

The following displays to indicate that the root file system resides in initramfs:
rootfs on / type rootfs (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,mode=600)
tmpfs on /dev/shm type tmpfs (rw)

Since the mount output does not show any reference to a hard disk (for example, /dev/sda), the file system resides in initramfs.

3. To transfer the initramfs session to a hard disk-emulated session, you must create an ext2 image file for emulating the hard disk in QEMU. To do this, perform the following steps:

a. Create the image file by entering the following command in the QEMU session terminal:
$ dd if=/dev/zero of=image.ext2 bs=20M count=1

b. Format the image file as ext2 by entering the following command:

$ mkfs.ext2 -F image.ext2

c. Copy the file system to this file using the following commands:

$ mkdir tmp_root
$ sudo mount -t ext2 -o loop image.ext2 tmp_root
$ sudo tar jxvf export/common_pc-glibc_small-standard-dist.tar.bz2 -C tmp_root/
$ sudo umount tmp_root

4. Rebuild the initramfs kernel to add the necessary programs to busybox which are required to switch from the initramfs QEMU session to the hard disk one. These programs include switch_root and mdev. To aid in this process, we provide a sample busybox config file: busybox.config. To add the programs to busybox, perform the following steps:

a. Copy installDir/wrlinux-3.0/samples/initramfs_busybox.config to the build/busybox-1.11.1/.config file using the following command:

$ cp busybox.config build/busybox-version/.config

b. Perform a make command using the original busybox configuration, then again to rebuild busybox with the new configuration file, using the following commands:

$ make -C build busybox.oldconfig
$ make -C build busybox.rebuild



c. Rebuild the kernel and file system using the following command:

$ make fs

This completes the kernel and file system rebuild necessary to add the required programs to busybox.

5. Create the init script to run the switch_root command from init. Note that init is the PID 1 process. Run the following command from the project build directory in a terminal, or use a text editor to create the init file and move it to the prjbuildDir/export/dist directory.
$ cat << EOF > export/dist/init
#!/bin/sh
mount -a
touch /etc/mdev.conf
mdev -s
mount /dev/sda /mnt
echo -e "\n switch initramfs to /dev/sda.........\n"
exec switch_root /mnt /sbin/init
EOF

6. Change permissions on the new init file:

$ chmod u+x export/dist/init

7. Rebuild an initramfs kernel using the following commands:

$ make -C build kprofile=+features/initramfs linux.reconfig
$ make -C build linux.initramfs

The result creates a new initramfs kernel in the export directory titled common_pc-bzImage-WR3.0zz_standard. You will use this kernel to run the new QEMU hard disk session.

8. Run the kernel to see how to switch from initramfs to the hard disk file system. We provide the ext2 file image.ext2 to QEMU to emulate a hard disk. Enter the following command from the project build directory to begin the QEMU session:
$ ./host-cross/bin/qemu -nographic -k en-us \
  -kernel ./export/common_pc-bzImage-WR3.0zz_standard \
  -net user -net nic,macaddr=52:54:00:12:34:56 \
  -nortclk -append "console=ttyS0,115200 ip=dhcp rw highres=off UMA=1" \
  -hda image.ext2

The QEMU session begins. Once the load process completes, the following message displays in the terminal:
------------------snip-------------------------------------------
Freeing unused kernel memory: 5820k freed
EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended

switch initramfs to /dev/sda.........

init started: BusyBox v1.11.1 (2009-01-05 14:51:26 CST)
starting pid 931, tty '': '/etc/init.d/rcS'
Welcome to Wind River Linux
Please press Enter to activate this console.
starting pid 935, tty '': '-/bin/sh'
#
------------------snip--------------------------------------------



9. Once the shell is up and running, you can execute a mount command to verify that the root file system is located on the hard disk, for example:

# mount
rootfs on / type rootfs (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,mode=600)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda on / type ext2 (rw,errors=continue)   <------ on hard disk
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,mode=600)
tmpfs on /dev/shm type tmpfs (rw)

The sixth line from the top indicates /dev/sda on / type ext2, verifying that we have indeed loaded the new hard disk-based QEMU session. You can stop the QEMU simulator by entering CTRL-A x in the terminal window.




15
Network Server Configuration
15.1 Introduction 199
15.2 Configuring DHCP 201
15.3 Configuring TFTP 202
15.4 Configuring NFS 203

15.1 Introduction
When you deploy Wind River Linux on a networked board, the boot loader on the board gets a kernel and file system from the network. This requires a properly configured boot loader and network server setup. This chapter describes how to configure your network server(s) to supply the kernel and file system to your board through its network connection. It assumes you have built a file system and have either built a kernel or are using the default kernel provided when you built your platform project. Refer to 16. Deploying Your Board from a Network for a discussion of board boot loader configuration for network deployment.

Boot Process Overview

If you are booting your target board over the network you will typically use the following resources in this order:

1. A bootloader - this is software on the board that you configure to access the network appropriately.

2. An IP configuration - you can configure an IP network address into your bootloader, or you may get your IP address from the network.

3. A kernel to boot - a network server provides a kernel for download.

4. A root file system to mount - the downloaded kernel mounts the root file system from the network.



See 16. Deploying Your Board from a Network for some examples of boot loader configuration for network deployments. Board-specific details for the boot loaders are provided in the README files in your prjbuildDir/READMES directory.
NOTE: Boot loader and network configuration are somewhat different for boards that use the PXE boot protocol. Refer to 17. Deploying Your Board with PXE for details on network booting with PXE.

Network Services During Boot

The typical network deployment boot process described in this chapter uses network servers as follows:

1. The boot loader on the board gets its IP address, either locally or from the network. If from the network, a DHCP server supplies the IP address.

2. With its IP address, the boot loader connects to a TFTP server and downloads a compressed kernel file.

3. The boot loader uncompresses and boots the kernel, which takes control and then mounts its root file system from an NFS server on the network.

This chapter provides details on how to configure the DHCP, TFTP, and NFS servers. These servers and your development host may be physically one machine, or may be different machines (see Figure 15-1).
Figure 15-1 Embedded Development in a Networked Environment (One or More Machines Provide Services)

[Figure: the development host and the DHCP, TFTP, and NFS servers connect over Ethernet to the target board; the development host also connects to the target board over a serial line.]

Network Configuration on Different Hosts

Different network servers provide different GUI and command-line tools for network service configuration. Configuration file specifics may also vary. This chapter can only make suggestions on how to configure the different services; refer to your server documentation for specifics on your host and services.
NOTE: You will typically need root (superuser) privileges when configuring network services.



Setting Target and Server Host Names

You may want to map your target and server IP addresses to host names for ease of reference. For example, you could configure your server's /etc/hosts file to include both the target's and server's host names and IP addresses. An example is:
192.168.10.1 server1.lab.org server1
192.168.10.2 target7.lab.org target7

To set the same information on the target, insert this information into prjbuildDir/filesystem/fs/etc/hosts before you build the file system. The resulting file system will include the hosts file when downloaded from the server.
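For example, from the project build directory you could append the same entries shown above to the target's hosts file before building the file system (a sketch):

$ cat >> filesystem/fs/etc/hosts << EOF
192.168.10.1 server1.lab.org server1
192.168.10.2 target7.lab.org target7
EOF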

15.2 Configuring DHCP


If you are configuring your board with a static IP address, you do not need to configure a DHCP server on your network to provide an address. If you want the network to supply your board with an IP address at boot time, follow the instructions in this section to configure a DHCP server. On the DHCP/BOOTP server, you must configure the dhcpd.conf configuration file, and you must create a dhcpd.leases file if one does not already exist, as described in this section.

The DHCP Configuration File

The DHCP configuration file is /etc/dhcpd.conf. A sample file is presented below. Example 15-1 is a basic example of this file for a DHCP server called server1.lab.org. The server's IP address is 192.168.10.1. The configuration file identifies server1.lab.org as the TFTP server and the target is assigned a static IP address. In this example the DHCP server is the Internet Software Consortium's (ISC) DHCP, version 3.0.1. Refer to the documentation for your DHCP server for specific configuration file settings.
Example 15-1 The dhcpd.conf File

Notice that the target's static IP address is within the DHCP server's subnet, but outside the range of the dynamic IPs.
# Sample /etc/dhcpd.conf file
authoritative;
ddns-update-style ad-hoc;
default-lease-time 21600;
max-lease-time 21600;

option routers 192.168.10.1;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.10.255;
option domain-name lab.org;



option domain-name-servers 192.168.10.1;

# Subnet and range of IP addresses for dynamic clients
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.3 192.168.10.40;
}

host server1.lab.org {
    hardware Ethernet XX:XX:XX:XX:XX:XX;
    fixed-address 192.168.10.1;
}
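The host entry above assigns a fixed address to the server itself; a host entry giving the target its static address would follow the same pattern. The following sketch is illustrative only (the host name and MAC address are hypothetical, and the address matches the target used elsewhere in this chapter):

host target7.lab.org {
    hardware Ethernet YY:YY:YY:YY:YY:YY;
    fixed-address 192.168.10.2;
}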

The DHCP Leases File

The DHCP server will not start without an empty leases file being created. If it has not been created already, enter the following within the /var/lib/dhcp directory:
# touch dhcpd.leases

This creates an empty file that can be used by the DHCP server. Alternatively, create an empty dhcpd.leases file with an editor.

Starting the DHCP Server

After configuring the /etc/dhcpd.conf file and after creating the leases file, start the server using a GUI or command-line tool. For example, for Red Hat Linux, you could enter (as root):
# service dhcpd start

You may want to configure the DHCP service to start when the server boots.

15.3 Configuring TFTP


You can provide your board with a kernel at boot time by configuring a TFTP server on your network. When the board boots, it contacts the TFTP server and the server and board negotiate the download of the kernel from the server to the board.

Making the Kernel Available for Download

The default TFTP download directory is typically tftpboot. If a download directory for TFTP is not already created, you must create it. Refer to your server documentation for the name of your TFTP download directory and for instructions if you want to change the default. For example, using the command line you could copy the kernel to the TFTP download directory as follows:
# cd prjbuildDir/export
# cp -L *uImage* /tftpboot/uImage



This copies the kernel from your export directory to the file with the shorter name (for convenience) of uImage in the TFTP download directory. The -L option handles both cases: a prebuilt kernel, or a symlink to a kernel you have explicitly built.

The TFTP Configuration File

For many Linux systems, the TFTP server is automatically started upon request with inetd or xinetd. The following provides some general instructions for enabling the TFTP server with xinetd. Refer to your system documentation for details on how to enable TFTP. With xinetd, the TFTP configuration file is /etc/xinetd.d/tftp. In Red Hat, TFTP is disabled by default. You can enable it by changing the:
disable = yes

line in its configuration file, to:


disable = no

Alternately, you can avoid a manual edit by using the setup program at the command line to enable the service. After enabling TFTP, remember to restart xinetd (for example, with the service command on Red Hat systems).
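For reference, a typical /etc/xinetd.d/tftp file looks roughly like the following. Treat this as a hedged sketch rather than an exact file; the server path and arguments vary by distribution:

service tftp
{
    socket_type    = dgram
    protocol       = udp
    wait           = yes
    user           = root
    server         = /usr/sbin/in.tftpd
    server_args    = -s /tftpboot
    disable        = no
}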

15.4 Configuring NFS


NFS can provide your board with a root file system when the target kernel boots. NFS exports file systems (directories) to other machines on the network. Refer to your host documentation for details on installing and enabling NFS if it is not already available.

Making the Root File System Available for Export

You can export any directory you choose to the network with NFS. This section assumes you have created an export directory in your home directory, for example /home/user/export. Copy and uncompress the compressed run-time file system file to the NFS export directory. For example, you could use the command line as follows:
# cd /home/user/export
# tar -xjvpf prjbuildDir/export/*dist.tar.bz2



Configuring /etc/exports

The NFS configuration file is a plain-text file, /etc/exports. You must configure it to export the run-time file system to the target. For example, if your target had the IP address of 192.168.10.2, the /etc/exports file might appear as shown in Example 15-2.
Example 15-2 An Example /etc/exports File

/home/user/export 192.168.10.2/255.255.255.0(rw,sync,no_subtree_check,no_root_squash)

This makes /home/user/export available for mounting to the machine with network address 192.168.10.2 only. After changing the /etc/exports file, reload the service with:
# exportfs -ra

Finally, restart NFS. On Red Hat Linux systems you may use the service command as follows:

# service nfs restart

Or use the appropriate GUI tool for your system.


16
Deploying Your Board from a Network
16.1 Introduction 205
16.2 Configuring a Serial Connection to the Board 206
16.3 Example Network Deployments with RedBoot 207
16.4 Example Ramdisk Deployment with U-Boot 213

16.1 Introduction
Wind River Linux supports network deployment with NFS, ramdisk, and three boot images suitable for flash RAM. Not all methods can be employed on all boards; refer to Wind River Online Support and your board's README files for specifics on your board. Refer to 15. Network Server Configuration for details on setting up NFS, DHCP, and TFTP network services. This chapter covers the following deployment methods:

JFFS2 - this is the Journaling Flash File System, version 2.

CRAMFS - this is the Compressed ROM File System.

YAFFS - this file system is designed specifically for NAND flash chips.

Ramdisk - the file system is downloaded to RAM and mounted as a ramdisk (/dev/ram0).

In addition, the platform supports stand-alone deployment with the kernel and a ramdisk, JFFS2, or CRAMFS image in flash memory. For details, see 18. Stand-Alone Deployment With Flash Devices.



This chapter assumes that RedBoot (or another suitable boot loader) has already been installed onto the target board.
NOTE: You must refer to the README for your target as the instructions are target-specific and this chapter can only provide examples. You can find the README file in installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/board/boardname, or in prjbuildDir/READMES after running configure.

This chapter continues directory conventions used in previous chapters: /home/user/WindRiver is referred to as installDir. The development environment consists primarily of the contents of installDir/wrlinux-3.0. The build environment is contained within the project build directory, which is under /home/user/workdir. As a board example, the chapter uses the Freescale I.MX31 ADS (fsl_imx31ads) built within the project build directory prjbuildDir.

16.2 Configuring a Serial Connection to the Board


Configure a terminal emulator to connect to the board over a serial connection so that you can set boot loader parameters. This section describes two such emulators: the Workbench Terminal view, and the command-line utility cu, which is configured with Unix to Unix Copy (UUCP).

Setting up the Workbench Terminal

Within the Workbench Terminal view, click the Settings icon, and set the port and baud rate.

Setting-up cu and UUCP

On the server, you must edit two configuration files within /etc/uucp/ to reflect your serial port and baud rate. Edit the port file to reflect your serial port's device name and baud rate, as in this example:
port serial0_38400
type direct
device /dev/ttyS0
speed 38400
hardflow false

Similarly, edit the sys file, for example:


system S0@38400
port serial0_38400
time any

You can find instructions on each board's serial port device name and baud rate in the board's README file.



You can now open the serial terminal at any console with cu, for example as follows:
# cu S0@38400

To disconnect, type the escape character (~), followed by a period.

16.3 Example Network Deployments with RedBoot


RedBoot is an open source boot loader designed for embedded systems. It is one of several boot loaders that are used on boards supported by Wind River. Consult the readme file associated with the board you are using to determine which bootloaders are supported for your board, and how to use them. This section only provides some examples and explanation and is not a replacement for the board readme file.

16.3.1 Deploying with Flash


The following sections show how to deploy JFFS2, CRAMFS, and YAFFS flash file systems using the example of the Freescale I.MX31 ADS board with the RedBoot boot loader. These instructions are specific to the Freescale I.MX31 ADS. Each target BSP README has unique instructions for using the flash file system(s) supported by the specific target. Refer to 16.4 Example Ramdisk Deployment with U-Boot, p.213 for an example flash deployment using a different board and boot loader. JFFS2 is a log-structured, journaling flash file system, operating directly on the flash chip, without a flash translation layer. It is well-suited for battery-driven consumer devices which may often be shut down uncleanly. CRAMFS is a compressed, read-only file system. Although it does not have many of the features of JFFS2, it is more energy efficient. CRAMFS consumes up to 1.7 times less flash energy consumption than JFFS2 on file sizes less than or equal to 100KB. YAFFS is a journaling file system with features such as error correction, verification, and garbage collection designed specifically for NAND flash chips.
NOTE: A necessary preliminary before building any flash file system is to use the configure option --enable-bootimage=flash when first building the run-time system.


Configuring RedBoot

Enter help at a Redboot prompt for a list of the commands available to you. Configure RedBoot using fconfig to set your default TFTP host and interface options. The boot instructions in the following examples assume that eth0 has a valid address and a default TFTP server has been configured as described in 15.3 Configuring TFTP, p.202.
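For example, fconfig -l lists the current settings, and fconfig with no arguments steps through the settings interactively; this is only a sketch, and the exact prompts vary by board:

RedBoot> fconfig -l
RedBoot> fconfig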



Deploying with JFFS2

JFFS2 capability is included in the product in support of specific board releases that have drivers supporting NOR or NAND flash devices. The following examples use the Freescale I.MX31 ADS (fsl_imx31ads) board; refer to the README file for your board for board-specific instructions.
Booting JFFS2 Root File System (NOR)

With the NOR flash enabled, the fsl_imx31ads target supports JFFS2 as a root file system.

1. Configure your project, for example:
$ configure --enable-board=fsl_imx31ads --enable-kernel=small \
  --enable-rootfs=glibc_small+debug --enable-bootimage=flash

2. Build the file system:

$ make fs

3. Rebuild the kernel configured with these options:

CONFIG_JFFS2_FS=y
CONFIG_JFFS2_FS_DEBUG=0
CONFIG_JFFS2_FS_WRITEBUFFER=y
CONFIG_JFFS2_ZLIB=y
CONFIG_JFFS2_RTIME=y

You could, for example, put the options in a filename.cfg file in your project build directory, reference it in an SCC file in your project build directory, and then enter make -C build linux.reconfig in your project build directory. (A sketch of such a configuration fragment appears after this procedure.)

4. Generate the boot image:

$ make boot-image BOOTIMAGE_FSTYPE=jffs2

5. From the RedBoot prompt on the target, enter the following:

RedBoot> load -r -b 0x01008000 -h 192.168.10.1 fsl_imx31ads-jffs2
RedBoot> fis create jffs2
RedBoot> fis list

The fis list command will show the list of RedBoot partitions, for example:

RedBoot> fis list
... Read from 0x07ee0000-0x07eff000 at 0xa1fe0000: .
Name              FLASH addr   Mem addr     Length       Entry point
RedBoot           0xA0000000   0xA0000000   0x00040000   0x00000000
kernel            0xA0100000   0x00100000   0x001A0000   0x00100000
root              0xA0300000   0x00100000   0x01220000   0x00100000
cramxipfs         0xA1520000   0x01008000   0x003A0000   0x01008000
jffs2             0xA18C0000   0x01008000   0x00700000   0x01008000
FIS directory     0xA1FE0000   0xA1FE0000   0x0001F000   0x00000000
RedBoot config    0xA1FFF000   0xA1FFF000   0x00001000   0x00000000

Counting from 0, the JFFS2 partition in this example is partition 4.

6. Load and execute the kernel that was configured with JFFS2 support:

RedBoot> load -r -b 0x01008000 -h 192.168.10.1 zImage
RedBoot> exec -c "console=ttymxc0,115200 root=/dev/mtdblock4 rootfstype=jffs2 rw ip=dhcp"
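As a hedged sketch of the reconfiguration mentioned in step 3 above, the kernel options could be captured in a small configuration fragment and referenced from a .scc file before running make -C build linux.reconfig. The file names below are hypothetical, and whether the fragment is classed as hardware or non-hardware is an assumption here:

# jffs2.cfg -- kernel options for the JFFS2 root file system
CONFIG_JFFS2_FS=y
CONFIG_JFFS2_FS_DEBUG=0
CONFIG_JFFS2_FS_WRITEBUFFER=y
CONFIG_JFFS2_ZLIB=y
CONFIG_JFFS2_RTIME=y

# jffs2.scc -- reference the fragment
kconf non-hardware jffs2.cfg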



Booting JFFS2 Root File System (NAND)

The following procedure shows how to use JFFS2 with NAND flash.

1. Configure your project, for example:
$ configure --enable-board=fsl_imx31ads --enable-kernel=small \
  --enable-rootfs=glibc_small+debug --enable-bootimage=flash

2. Build the file system:

$ make fs

3. Rebuild the kernel configured with these options:

CONFIG_JFFS2_FS=y
CONFIG_JFFS2_FS_DEBUG=0
CONFIG_JFFS2_FS_WRITEBUFFER=y
CONFIG_JFFS2_ZLIB=y
CONFIG_JFFS2_RTIME=y

You could, for example, put the options in a filename.cfg file in your project build directory, reference it in an SCC file in your project build directory, and then enter make -C build linux.reconfig in your project build directory.

4. Generate the boot image:

$ make boot-image BOOTIMAGE_FSTYPE=jffs2

then copy export/fsl_imx31ads-jffs2 to the /tmp directory of the NFS-exported root file system.

5. Enable the NAND flash from the RedBoot prompt:

RedBoot> factive nand

6. Boot to an NFS root file system that includes the mtd-utils. The NAND flash is statically defined for 4 partitions:

# cat /proc/mtd
dev:    size      erasesize  name
mtd0:   00040000  00020000   "RedBoot"
mtd1:   001a0000  00020000   "kernel"
mtd2:   01220000  00020000   "root"
mtd3:   003a0000  00020000   "cramxipfs"
mtd4:   0001f000  00008000   "FIS directory"
mtd5:   00001000  00008000   "RedBoot config"
mtd6:   00020000  00004000   "IPL-SPL"
mtd7:   00400000  00004000   "nand.kernel"
mtd8:   01600000  00004000   "nand.rootfs"
mtd9:   065e0000  00004000   "nand.userfs"

mtd0-5 are NOR flash partitions; mtd6-9 are the NAND partitions.

7. Erase the flash and then write the image to NAND partition 8:

# flash_eraseall /dev/mtd8
# nandwrite -p /dev/mtd8 /tmp/fsl_imx31ads-jffs2

8. Check that the file system is valid by mounting it:

# mkdir /mnt/jffs2
# mount -t jffs2 /dev/mtdblock8 /mnt/jffs2
# ls -l /mnt/jffs2
# umount /mnt/jffs2

9. Reboot to RedBoot, activate NAND, load and execute the kernel:

RedBoot> factive nand
RedBoot> load -r -b 0x01008000 -h 192.168.10.1 zImage
RedBoot> exec -c "console=ttymxc0,115200 root=/dev/mtdblock8 rootfstype=jffs2 rw ip=dhcp"



Deploying with CRAMFS

Linear and standard CRAMFS root file systems are supported for specific board releases that have drivers supporting NOR flash devices. The following examples use the Freescale I.MX31 ADS (fsl_imx31ads) board; refer to the README file for your board for board-specific instructions. (Board README files are located in installDir/wrlinux-3.0/layers/wrll-wrlinux/templates/board/boardname/. The appropriate READMEs for your project are copied into the READMES/ subdirectory when you configure a new project.)
Booting Linear CRAMFS Root File System

With the NOR flash enabled the fsl_imx31ads target supports Linear CRAMFS XIP, also referred to as Application XIP.
NOTE: The RedBoot bootloader does not support executing a kernel directly from flash (Kernel XIP).

1. Configure your project, for example:

$ configure --enable-board=fsl_imx31ads --enable-kernel=small \
  --enable-rootfs=glibc_small+debug --enable-bootimage=flash

2. Build the file system:

$ make fs

3. Rebuild the kernel configured with these options:

CONFIG_CRAMFS=y
CONFIG_CRAMFS_LINEAR=y
CONFIG_CRAMFS_LINEAR_XIP=y
CONFIG_ROOT_CRAMFS_LINEAR=y

You could, for example, put the options in a filename.cfg file in your project build directory, reference it in an SCC file there, and then enter make -C build linux.reconfig.
4. Generate the boot image:
$ make boot-image BOOTIMAGE_FSTYPE=cramxipfs

Copy the resulting zImage file (renamed simply zImage in this example) to your TFTP download directory.
5. From the RedBoot prompt on the target, enter the following:
RedBoot> load -r -b 0x01008000 -h 192.168.10.1 fsl_imx31ads-cramxipfs
RedBoot> fis create cramxipfs
The fis create command takes its arguments from the last loaded file.
RedBoot> fis list
The fis list command will show the list of RedBoot partitions. Note the FLASH addr of the cramxipfs partition.
6. Load the kernel that was configured with Linear CRAMFS support:
RedBoot> load -r -b 0x01008000 -h 192.168.10.1 zImage
7. Using the FLASH addr reported by fis list, exec the kernel. In this example the address is 0xA1520000; yours will likely be different.
RedBoot> exec -c "console=ttymxc0,115200 root=/dev/null rootfstype=cramfs rootflags=physaddr=0xA1520000 ip=dhcp"


Booting Cramfs Root File System (NOR)

With the NOR flash enabled, the fsl_imx31ads target supports standard CRAMFS as a root file system.
1. Configure your project, for example:
$ configure --enable-board=fsl_imx31ads --enable-kernel=small \
  --enable-rootfs=glibc_small+debug --enable-bootimage=flash
2. Build the file system:
$ make fs
3. Rebuild the kernel configured with these options:
CONFIG_CRAMFS=y
# CONFIG_CRAMFS_LINEAR is not set

You could, for example, put the options in a filename.cfg file in your project build directory, reference it in an SCC file there, and then enter make -C build linux.reconfig.
4. Generate the boot image:
$ make boot-image BOOTIMAGE_FSTYPE=cramfs

Copy the resulting zImage file (renamed simply zImage in this example) to your TFTP download directory.
5. From the RedBoot prompt on the target, enter the following:
RedBoot> load -r -b 0x01008000 -h 192.168.10.1 fsl_imx31ads-cramfs
RedBoot> fis create cramfs
RedBoot> fis list
The fis list command will show the list of RedBoot partitions. For example:
RedBoot> fis list
... Read from 0x07ee0000-0x07eff000 at 0xa1fe0000: .
Name              FLASH addr  Mem addr    Length      Entry point
RedBoot           0xA0000000  0xA0000000  0x00040000  0x00000000
kernel            0xA0100000  0x00100000  0x001A0000  0x00100000
root              0xA0300000  0x00100000  0x01220000  0x00100000
cramxipfs         0xA1520000  0x01008000  0x003A0000  0x01008000
cramfs            0xA18C0000  0x01008000  0x00700000  0x01008000
FIS directory     0xA1FE0000  0xA1FE0000  0x0001F000  0x00000000
RedBoot config    0xA1FFF000  0xA1FFF000  0x00001000  0x00000000
Counting from 0, the CRAMFS partition in this example is partition 4.
6. Load and execute the kernel that was configured with CRAMFS support:
RedBoot> load -r -b 0x01008000 -h 192.168.10.1 zImage
RedBoot> exec -c "console=ttymxc0,115200 root=/dev/mtdblock4 rootfstype=cramfs ip=dhcp"

Deploying with YAFFS

YAFFS capability is included in the product in support of specific board releases that have drivers supporting NAND flash. The following example uses the Freescale I.MX31 ADS (fsl_imx31ads) board and YAFFS2; refer to the README file for your board for board-specific instructions.


YAFFS Root File System (NAND)

The fsl_imx31ads has small block NAND and so will support YAFFS as shown in the following example.
1. Configure your project, for example:
$ configure --enable-board=fsl_imx31ads --enable-kernel=small \
  --enable-rootfs=glibc_small+debug --enable-bootimage=flash
2. Build the file system:
$ make fs
3. Rebuild the kernel configured with these options:
CONFIG_YAFFS_FS=y
CONFIG_YAFFS_YAFFS1=y
# CONFIG_YAFFS_DOES_ECC is not set
CONFIG_YAFFS_YAFFS2=y
CONFIG_YAFFS_AUTO_YAFFS2=y
# CONFIG_YAFFS_DISABLE_LAZY_LOAD is not set
CONFIG_YAFFS_CHECKPOINT_RESERVED_BLOCKS=10
# CONFIG_YAFFS_DISABLE_WIDE_TNODES is not set
# CONFIG_YAFFS_ALWAYS_CHECK_CHUNK_ERASED is not set
CONFIG_YAFFS_SHORT_NAMES_IN_RAM=y

You could, for example, put the options in a filename.cfg file in your project build directory, reference it in an SCC file there, and then enter make -C build linux.reconfig.
4. Generate the boot image:
$ make boot-image BOOTIMAGE_FSTYPE=yaffs

5. Copy export/fsl_imx31ads-glibc_small-small-dist.tar.bz2 to an accessible location on the target's NFS root file system and boot the target. Make sure the mtd-utils are included in the NFS file system and busybox includes the tar applet with -j support (see Configuring BusyBox, p.132).
6. On the booted target:
# flash_eraseall /dev/mtd9
# mkdir /mnt/yaffs
# mount -t yaffs /dev/mtdblock9 /mnt/yaffs
This will create an empty YAFFS file system.
7. Uncompress the file system onto the device:
# cd /mnt/yaffs
# tar jxvf /tmp/fsl_imx31ads-glibc_small-small-dist.tar.bz2
# cd /
# umount /mnt/yaffs
8. Load and execute the kernel that was configured with YAFFS support:
RedBoot> factive nand
RedBoot> load -r -b 0x01008000 -h 192.168.10.1 zImage
RedBoot> exec -c "console=ttymxc0,115200 root=/dev/mtdblock9 rootfstype=yaffs rw ip=dhcp"


16.4 Example Ramdisk Deployment with U-Boot


The following example uses U-Boot and a ti_omap2430sdp target to mount the file system in RAM on /dev/ram0. Refer to your board README file for information on the correct boot loader and arguments for your board.

Create the initrd Image

1. Configure your project, for example:
$ configure --enable-board=ti_omap2430sdp --enable-kernel=small \
  --enable-rootfs=glibc_small --enable-bootimage=flash
This configuration uses the glibc_small file system to make it small enough for a RAM disk image.
2. Build the file system:
$ make fs
3. Rebuild the kernel configured with these options:
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=16384
CONFIG_BLK_DEV_INITRD=y
You could, for example, put the options in a filename.cfg file in your project build directory, reference it in an SCC file there, and then enter make -C build linux.reconfig (see the sketch at the end of Booting JFFS2 Root File System (NAND)).
4. Create initrd, the ramdisk image. Within your project build directory, enter:
$ make boot-image BOOTIMAGE_TYPE=flash BOOTIMAGE_RAM0SIZE=8192
Note that the ramdisk size is dependent on the size of the underlying file system image as well as the available RAM. For glibc_small, 8MB should be more than enough. (In Workbench you could create a custom build target for your preferred ramdisk size.)
5. Copy the resulting images within export to the TFTP directory:
$ cd export
# cp *initrd.gz /tftpboot/initrd.gz
$ cp *uImage* /tftpboot/uImage

Configure U-Boot

Enter help at a U-Boot prompt to see the commands available to you. Use the setenv command to set environment variables, printenv to view them, and saveenv to save them. Set the U-Boot environment as follows:
bootdelay=5
baudrate=38400
bootfile=uImage
ipaddr=192.168.10.2
serverip=192.168.10.1
bootargs=root=/dev/ram0 rw console=ttyS0,115200n8 initrd=0x80600000,8M ramdisk_size=8192
stdin=serial
stdout=serial
stderr=serial
verify=n
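At the U-Boot prompt, these variables can be set interactively with setenv and stored with saveenv. A partial sketch follows; substitute the addresses, sizes, and serial settings appropriate to your board:
# setenv bootfile uImage
# setenv ipaddr 192.168.10.2
# setenv serverip 192.168.10.1
# setenv bootargs root=/dev/ram0 rw console=ttyS0,115200n8 initrd=0x80600000,8M ramdisk_size=8192
# saveenv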


Deployment

Perform the following steps at the U-Boot console:
1. Enter the following to load the initrd image into RAM:
# tftpboot 0x80600000 ti_omap2430sdp-initrd.gz
2. Enter the following to load the kernel into RAM:
# tftpboot 0x80000000 uImage
3. Enter the following to boot the kernel:
# bootm

Note that the kernel must be loaded after the initrd, otherwise U-Boot will assume the kernel start address is 0x80600000 instead of 0x80000000. Press ENTER to activate the console. There is no root password.


17
Deploying Your Board with PXE
17.1 Introduction 215
17.2 Preparing the Downloaded Files 216
17.3 Configuring DHCP for PXE 217
17.4 Setting up and Booting the Target 218

17.1 Introduction
You can configure the Pre-boot Execution Environment (PXE) boot loader on most IA32 boards with Wind River Linux board support packages (BSPs). This chapter describes a typical development example of bringing up a board using PXE, TFTP, and NFS. For DHCP and PXE boot, three separate servers are required:

DHCP
NFS
TFTP

The Syslinux package, which contains the PXELinux boot loader, is also required. The TFTP and PXELinux packages must be installed.

Process Overview

A PXE boot-enabled NIC supports the Bootstrap Protocol (BOOTP). This protocol, provided by a DHCP server, allows a diskless target to obtain its own IP address, the IP address and name of a server, and the name of the boot loader file on that server that it can download to boot. Booting the target follows these steps:
1. The target's PXE-enabled NIC broadcasts its MAC address, requesting an IP address from a BOOTP/DHCP server.
2. The DHCP/BOOTP server, configured with the MAC address of the target and other options, returns the target's IP address, along with the name of the TFTP server and the name of the PXELinux boot loader file, which resides on the TFTP server.
3. The target downloads, using TFTP, the PXELinux boot loader, which provides the name of the Linux kernel image to load. The PXELinux boot loader downloads the kernel.
4. The target runs the kernel, which rediscovers its IP address from the DHCP server. The DHCP server provides the location for the NFS root file system; the kernel mounts it and completes system initialization.

17.2 Preparing the Downloaded Files


As mentioned in previous chapters, you must copy the kernel and root file system to their download and export directories. The default TFTP download directory is /tftpboot. In this example, the NFS export directory for the root file system is /home/nfs/export. You may configure TFTP and NFS to use the same directory if you prefer.

The PXELinux Boot Loader File

The PXELinux boot loader file is pxelinux.0. This file is part of the Syslinux package. Installing Syslinux installs pxelinux.0 into the /usr/lib/syslinux directory; it must be copied to the TFTP download directory, by default /tftpboot.

The PXELinux Configuration File

The PXELinux configuration file resides in the /tftpboot/pxelinux.cfg directory. There can be separate configuration files for separate targets. To enable this, a filename convention is used that identifies a configuration file by its specific target's hardware type and MAC address, or its IP address. The following example demonstrates how the PXE bootloader searches for the correct configuration file. The example assumes that the PXE bootloader is looking for the configuration file for the scenario's target.lab.org, which has been assigned an IP address of 192.168.10.2, and which has an Ethernet card with a MAC address of 00-20-ED-6E-82-3D. First, the bootloader will look for a configuration file corresponding to its MAC address, with the first two digits representing its ARP code. This filename, all in lowercase, would be: 01-00-20-ed-6e-82-3d (Note the 01- preceding the MAC address.)


If that filename cannot be found in the /tftpboot/pxelinux.cfg directory, the bootloader will search for a file named after its IP address in hexadecimal. The filename for this example, all in uppercase, would be:
C0A80A02
Not finding that, the bootloader will search for files in the following order:
C0A80A0
C0A80A
C0A80
C0A8
C0A
C0
C
Finally, not finding any of these files, it will look for a file named default. In this scenario, the default filename is used. Both the file and its directory must be created:
# mkdir /tftpboot/pxelinux.cfg
# touch /tftpboot/pxelinux.cfg/default
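To work out the hexadecimal filename for a different target IP address, you can compute it on the development host; for example, using the standard printf utility:
$ printf '%02X%02X%02X%02X\n' 192 168 10 2
C0A80A02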

The configuration file is plain text. Example 17-1 is a configuration file for this scenario.
Example 17-1 The PXELinux Configuration File
default netboot
prompt 1
display pxeboot.msg
timeout 300
label netboot
kernel bzImage
append ip=dhcp root=/dev/nfs nfsroot=/home/nfs/export

As can be seen, PXELinux's configuration file is similar to the LILO configuration file. bzImage represents the kernel's actual filename. It has been given the label netboot, which is also the default kernel to load.

17.3 Configuring DHCP for PXE


An example dhcpd.conf file is shown below. Note that this is the same as the dhcpd.conf file in The DHCP Configuration File, p.201, but with PXE additions marked by comments and in bold.
# Sample /etc/dhcpd.conf file
authoritative;
ddns-update-style ad-hoc;
default-lease-time 21600;
max-lease-time 21600;
option routers 192.168.10.1;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.10.255;
option domain-name "lab.org";
option domain-name-servers 192.168.10.1;
# Next two lines PXE boot additions
allow booting;
allow bootp;
# Subnet and range of IP addresses for dynamic clients
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.3 192.168.10.40;
}
host server1.lab.org {
    hardware ethernet XX:XX:XX:XX:XX:XX;
    fixed-address 192.168.10.1;
}
# Next section PXE boot static IPs for the target; an example MAC address
# (Ethernet address) is provided.
host target.lab.org {
    hardware ethernet 00:20:ED:6E:82:3D;
    fixed-address 192.168.10.2;
    next-server 192.168.10.1;
    filename "pxelinux.0";
    option root-path "192.168.10.1:/home/nfs/export";
}

In this case, dhcpd.conf has been configured to support BOOTP, and the PXE target is configured with a static IP address and supplied the following:

fixed-address is the static IP address assigned to the target (192.168.10.2).
filename provides the file name of the PXE file in /tftpboot to download, in this case pxelinux.0.
next-server is the address of the TFTP server; in this example the same host also serves as the NFS server.
option root-path provides the path on the NFS server for the exported PXE files.

Restart your DHCP server after making the changes.
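For example, on a typical Linux host you can check the file for syntax errors and then restart the daemon. This is a sketch; the init script name and configuration file location vary by distribution:
# dhcpd -t -cf /etc/dhcpd.conf
# /etc/init.d/dhcpd restart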

17.4 Setting up and Booting the Target


The target must be configured to use PXE boot. You can then boot the target and troubleshoot any boot problems that appear.

Configuring PXE Boot On the Target

Setting up the target requires that network boot using PXE is enabled. This is generally done within the CMOS setup routine. Configure the boot parameters and sequence in your BIOS to enable the PXE boot loader and boot from it first (or only).


Booting the Target

When your target boots you should see the target go through the following sequence:
1. broadcast MAC address and receive IP address
2. download PXE boot loader and configuration file
3. download bzImage
4. boot bzImage
5. get IP address again
6. mount NFS file system
If you cannot get through the first two steps in the sequence above, verify your dhcpd.conf file settings. If you cannot download the bzImage file, verify that your TFTP server is enabled and xinetd has been restarted. If your bzImage boots but cannot mount the file system, verify that the NFS daemon (nfsd) is running and that the target's root file system exists in /home/nfs/export.
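A quick way to check the host-side services mentioned above is sketched here, assuming a typical xinetd-managed TFTP server and standard NFS utilities; service names and paths vary by distribution:
# /etc/init.d/xinetd restart
# rpcinfo -p | grep nfs
# showmount -e 192.168.10.1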


18
Stand-Alone Deployment With Flash Devices
18.1 Introduction 221
18.2 Process Overview 222
18.3 Preliminaries 222
18.4 Setting up Hosts 222
18.5 Stand-alone Deployment with a Ramdisk 223
18.6 Stand-alone Deployment with JFFS2 224
18.7 Stand-alone Deployment with CRAMFS 225

18.1 Introduction
You can use Wind River Linux for stand-alone deployment of supported target boards by loading the kernel and its ramdisk or flash image into flash memory. In other words, after initial setup, it is no longer necessary to download either the kernel or the file system from the network. This chapter covers the three methods supported:
1. Ramdisk: the file system is mounted as a ramdisk (/dev/ram0).
2. JFFS2: the Journaling Flash File System, version 2.
3. CRAMFS: the Compressed ROM File System.

This chapter builds on 15. Network Server Configuration and frequently references material in that chapter. This chapter assumes that the boot loader has already been installed on the target board.
CAUTION: The ARM Versatile AB-926EJS will not correctly flash Wind River flash file systems with the U-Boot supplied by the manufacturer. The U-Boot must be upgraded to version 1.1.3.


This chapter continues directory conventions used in previous chapters: /home/user/WindRiver is referred to as installDir. The development environment consists primarily of the contents of installDir/wrlinux-3.0. The build environment is contained within the project build directory, which is under /home/user/workdir. As a board example, the chapter uses the ARM Versatile AB-926EJS, built within the project build directory arm_versatile.

18.2 Process Overview


Stand-alone booting follows these steps on the target:
1. The target is available over a serial line and Ethernet.
2. The bootloader on the target copies the Linux kernel from flash memory into RAM.
3. The boot loader decompresses the kernel.
4. The target runs the kernel, which then mounts the file system and completes system initialization.

18.3 Preliminaries
In the deployment examples in this chapter, it is assumed that you have already done the following:

created the kernel image and file system you wish to use
set up the bootloader environment for your file system
configured networking as described in 15. Network Server Configuration

18.4 Setting up Hosts


You may boot the target without a DHCP server. In this case you rely on peer-to-peer networking to communicate with your target. To make this more robust, both the server's and the target's /etc/hosts files should include both the target's and server's hostname and IP address. An example is as follows:
192.168.10.1 server1.lab.org server1
192.168.10.2 arm_versatile.lab.org arm_versatile
This information can be inserted into the target's hosts file by editing the prjbuildDir/filesystem/fs/etc/hosts file, before building your file system.
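For example, from your project build directory you could append the entries shown above before building the file system. This is a sketch; substitute your own host names and addresses:
$ cat >> filesystem/fs/etc/hosts << EOF
192.168.10.1 server1.lab.org server1
192.168.10.2 arm_versatile.lab.org arm_versatile
EOF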


18.5 Stand-alone Deployment with a Ramdisk


To deploy a standalone ramdisk image, you must load the image into flash memory and then boot it as described in this section.

Loading the Ramdisk Image

First, load the ramdisk image (initrd) into flash using the following procedure.
1. Load the ramdisk into RAM. At the U-Boot console, enter:
# tftp 0 initrd.gz.uboot

NOTE: When tftp is done loading, it will give you the number of bytes transferred, and the hex equivalent. This is important information you will need in further steps, and later when booting the target. An example of the output is:
Bytes transferred = 4400918 (432716 hex)
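If you want to know the size and its hex equivalent before transferring the image, you can compute them on the development host. This is a sketch using standard tools; run it in the directory containing the image file you built:
$ printf '%x\n' $(stat -c %s initrd.gz.uboot)
432716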

2. Unprotect enough flash RAM for the initrd image:
# prot off 36000000 +432716
3. Erase the area of flash you have unprotected:
# erase 36000000 +432716
4. Perform a byte copy of the initrd image from RAM to flash:
# cp.b 0 36000000 432716

Next, load the kernel into flash, following these steps:
1. Load the kernel into RAM. At the U-Boot console, enter:
# tftp 0 uImage
Make a note of the hex number of bytes transferred, in this example 10F514.
2. Unprotect just enough flash for the kernel image. Make sure the address you use does not interfere with the initrd image you have already loaded into flash:
# prot off 34060000 +10F514
3. Erase the area of flash you have unprotected:
# erase 34060000 +10F514
4. Perform a byte copy of the kernel from RAM to flash:
# cp.b 0 34060000 10F514

Both the initrd image and the kernel are now loaded into non-volatile flash memory.

Booting the Target

Booting the target is a two-stage procedure.
1. First, copy the ramdisk image from flash into RAM. At this stage you will need the flash address you copied the initrd to, as well as its size in hex:
# cp.b 36000000 800000 432716
2. Next, use the bootm command to boot the kernel, with options indicating the kernel's location in flash and the initrd's location in RAM:
# bootm 34060000 800000

18.6 Stand-alone Deployment with JFFS2


First, copy the JFFS2 image into flash by following the steps in Deploying with JFFS2, p.208, for your boot loader.
U-Boot Procedure

For the U-Boot loader, load the kernel into flash following these steps:
1. Load the kernel into RAM. At the U-Boot console, enter:
# tftp 0 uImage
NOTE: When tftp is done loading, it will give you the number of bytes transferred, and the hex equivalent. This is important information which you will need in step 2. An example of the output is:
Bytes transferred = 1111316 (10F514 hex)
2. Unprotect just enough flash for the kernel image. Make sure the address you use does not interfere with the JFFS2 image you have already loaded into flash:
# prot off 34060000 +10F514
3. Erase the area of flash you have unprotected:
# erase 34060000 +10F514
4. Perform a byte copy of the kernel from RAM to flash:
# cp.b 0 34060000 10F514

Booting the Target

Both your kernel and JFFS2 file system are now loaded into non-volatile flash memory. Boot the target with U-Boot as follows:
# bootm 34060000

Simplifying Your Network and U-Boot Environment

It is not necessary to run a DHCP server with a target configured for stand-alone deployment with JFFS2. You may turn it off entirely, or just comment out the target's host declaration in the dhcpd.conf file. You may also simplify the U-Boot environment. The following is an example of one environment that would suffice:
baudrate=38400
bootfile=uImage
ethaddr=00:02:F7:00:10:39


bootargs=root=/dev/mtdblock1 rootfstype=jffs2 noinitrd mem=128M console=ttyAMA0 mtdparts=phys_mapped_flash:128K(u-boot),16M@0x2000000(jffs2),5012K@0x1980000(cramfs) ip=192.168.10.2:192.168.10.1:192.168.10.1:255.255.255.0
bootcmd=bootm 34060000
bootdelay=5
stdin=serial
stdout=serial
stderr=serial
verify=n
NOTE: The bootargs entry above is shown wrapped; enter it as a single line.

In this example, once the target is switched on it will bring up U-Boot, wait five seconds for your intervention, then automatically boot the Linux kernel and JFFS2 file system.

18.7 Stand-alone Deployment with CRAMFS


Copy the CRAMFS image into flash by following the steps in Deploying with CRAMFS, p.210 for your bootloader.
U-Boot Procedure

For U-Boot, load the kernel into flash following these steps:
1. Load the kernel into RAM. At the U-Boot console, enter:
# tftp 0 uImage
NOTE: When it is done loading, it will give you the number of bytes transferred, and the hex equivalent. This is important information which you will need in step 2. An example of the output is:
Bytes transferred = 1111316 (10F514 hex)
2. Unprotect just enough flash for the kernel image. Make sure the address you use does not interfere with the CRAMFS image you have already loaded into flash:
# prot off 34060000 +10F514
3. Erase the area of flash you have unprotected:
# erase 34060000 +10F514
4. Perform a byte copy of the kernel from RAM to flash:
# cp.b 0 34060000 10F514

Booting the Target

Both your kernel and CRAMFS file system are now loaded into non-volatile flash memory. Boot with U-Boot as follows:
# bootm 34060000


Simplifying Your Network and U-Boot Environment

It is not necessary to run a DHCP server at all with a target configured for stand-alone deployment with CRAMFS. You may turn it off entirely, or just comment out the target's host declaration in the dhcpd.conf file. The U-Boot environment may also be simplified. The following environment is one example of what would suffice:
baudrate=38400
bootfile=uImage
ethaddr=00:02:F7:00:10:39
bootargs=root=/dev/mtdblock2 noinitrd mem=128M console=ttyAMA0 mtdparts=phys_mapped_flash:128K(u-boot),16M@0x2000000(jffs2),5012K@0x1980000(cramfs) ip=192.168.10.2:192.168.10.1:192.168.10.1:255.255.255.0
bootcmd=bootm 34060000
bootdelay=5
stdin=serial
stdout=serial
stderr=serial
verify=n
NOTE: The bootargs entry above is shown wrapped; enter it as a single line.

In this example, once the target is switched on it will bring up U-Boot, wait five seconds for your intervention, then automatically boot the Linux kernel and CRAMFS file system.


19
Stand-Alone Deployment to Disk
19.1 Introduction 227
19.2 Server-Based Installation of Wind River Linux 227
19.3 Booting Standalone with LinuxLive 230
19.4 Creating ISO and USB Flash Drive Images 238

19.1 Introduction
This chapter describes two methods to install Wind River Linux on a server hard disk and then boot it. The first method, 19.2 Server-Based Installation of Wind River Linux, p.227, is based entirely on Wind River Linux and provides for a flexible configuration in which you can specify different Wind River Linux file systems for the installation and the boot. The second method, 19.3 Booting Standalone with LinuxLive, p.230, uses LinuxLive to boot the server which you configure and then install Wind River Linux. (See http://www.linux-live.org/ for details on LinuxLive.)

19.2 Server-Based Installation of Wind River Linux


NOTE: This feature is available with the x86 architectures only.

Using the Wind River Linux build system you can create an ISO image to burn to a CD or DVD, and then use that CD or DVD to boot up the target, format the local disk, and install the runtime on the disk. At that point, you can remove the CD or DVD and boot the target directly from the local disk. You can also test your build using QEMU as shown in the procedure in this section.


There are two ways to perform the configuration: either self-contained in a single build directory, or in two build directories, one for the runtime to install on the target and one for the install CD itself. Using two build directories allows you to boot the server with a different operating system than the one you will install on it. The related options for the configure command are:
--enable-bootimage=iso
--with-template=feature/installer
--with-installer-target-build=otherbuilddir

Use the first two options together to build the installer software and create an ISO image. Use the third option only if you are creating a separate build directory.
Using a Self-Contained Installation

In the self-contained installation, the build creates a /RPMS directory in the root file system, where it puts all the RPMs that will be used to install the runtime on the target. The difference between the build types is just a question of where those RPMs come from: either this build, or another build.
Using a Separate Installation Build Directory

The --with-installer-target-build option is how you specify where to pick up the RPMs to be used for the target. There is no building or even checking of the build directory; it simply picks up whatever RPMs are in otherbuilddir/export/RPMS. So, you first build everything in otherbuilddir, and then build things in your project build directory. If you don't specify the --with-installer-target-build option, the build system will use whatever RPMs are in export/RPMS in your project build directory.
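Combining the options listed above, a two-directory configuration might look like the following sketch; the otherbuilddir path /home/user/workdir/common_pc_target is hypothetical and stands for a separately built project directory:
$ configure --enable-board=install_x86 --enable-kernel=standard \
  --enable-rootfs=glibc_small --enable-bootimage=iso \
  --with-template=feature/installer \
  --with-installer-target-build=/home/user/workdir/common_pc_target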

An Example of a Self-Contained Server Installation

In the following procedure, you configure and build a self-contained server installation. To test the installation, you can use QEMU to create, configure, and boot the installation from a virtual disk as shown.

Configuring and Building the Server Install

Use the following commands to configure and build a .iso image of the server installation:
$ configure --enable-board=install_x86 --enable-kernel=standard \ --enable-rootfs=glibc_small --enable-bootimage=iso \ --with-template=feature/installer $ make boot-image

In this example, you will use QEMU on the host, so you will boot the .iso image directly from the export/ directory. If you wanted to burn the image to CD/DVD-ROM, you could insert a CD/DVD-ROM and enter make boot-image-burn.


Booting and Installing

After building the file system and boot image, you can test it with the procedure in this section which uses QEMU to create, install to, and then boot from a virtual disk.
Step 1: Create the virtual disk.

Use the qemu-img host tool to create and size the virtual disk, placing it in an accessible location such as /tmp:
$ host-cross/bin/qemu-img create -f qcow hd0.vdisk 1000M

Step 2: Boot the ISO image and install Wind River Linux.
1. Boot the .iso image you created in Configuring and Building the Server Install, p.228:
$ make start-target TOPTS=" -no-kernel -cd export/install_x86-boot.iso \
  -disk hd0.vdisk -gc"
NOTE: Press CTRL-ALT at any time to exit from the boot window. Click in the window to return control to it.
2. Press F1 when prompted and specify where you want to install the software. In this example, enter:
boot: linux-c
Press ENTER when prompted.
3. Choose the disk you want to format for the installation. In this example, accept the default hda by pressing ENTER.
4. Accept defaults by pressing ENTER or enter alternatives for the prompts that follow.
WARNING: You will be prompted when you are about to format the disk. If you enter Yes, you will lose any data on the disk. In this example you are just formatting the virtual disk you created, so it is not a concern, but care must be taken when installing to a server's hard disk. Press ENTER to continue.
5. You can modify the package selection offered, or select N for the pre-selected packages. Then enter y to install the selected packages.

When the installation is complete, you would remove the install media such as a CD-ROM from the server. Because you are using a virtual disk, just close the QEMU window.
Step 3: Boot the installed disk.

Now boot from the disk that you installed Wind River Linux on in the previous step. In this example, the installation was performed on the virtual disk you created in Step 1. To boot from that virtual disk, enter:
$ make start-target TOPTS="-no-kernel -disk hd0.vdisk -gc"

Press ENTER, select the operating system you want, and press ENTER to boot.


19.3 Booting Standalone with LinuxLive


Wind River Linux supports stand-alone deployment to hard disk for target boards that support hard disks. The following example shows how to configure a common PC to boot Wind River Linux from hard disk. You must first prepare the hard disk on the target and then install the Wind River Linux file system and kernel. The example shows how you can transfer the file system and kernel to the target on a CD-ROM, a USB disk device, or through a target connection. The following summarizes the basic steps required to boot Wind River Linux from the hard disk on a target. Each step is covered in more detail later in the example.
Step 1: Create a platform project for your file system and kernel.

Create a common PC platform project and build the file system, using Workbench or the command line. You can accept the default kernel or build a new one. This is the kernel and file system you will install on the hard disk of the target. You may optionally create a bootable CD-ROM as well.
NOTE: If you create a bootable CD-ROM in your platform project, you can also use it to transfer a file system and kernel to the target. If you use one of the other boot methods, you must transfer the file system and kernel to the target separately.
Step 2: Boot the target.

You can boot the target with the CD-ROM you created with your platform project, or you can use some other bootable CD-ROM such as the freely-available Gparted-LiveCD or Partition Magic. All of these allow you to boot the target and then partition and format the hard disk on the target.

Step 3: Prepare the hard drive on the target.
Use the bootable CD-ROM to partition, format, and mount the hard disk on the target.
WARNING: Any pre-existing data on the hard drive of the target will be lost when you perform this procedure.
Step 4: Copy the kernel and file system to the target.
To transfer the file system and kernel from the export/ directory in your platform project to the hard disk on your target, you can:
copy the kernel and file system from the Wind River CD-ROM
transfer files with a portable drive such as a USB keychain drive
make a network connection to the development host and download the kernel and file system

Each of these methods is described in separate sections of this example.


Step 5: Create the target file system.
Uncompress the file system to the hard disk's root, and place the kernel in the hard disk's boot directory.
Step 6: Configure your boot menu.

In this example the disk is formatted for a single operating system and you configure the boot menu to boot it.
Step 7: Boot the target.

Reboot the target (without the CD-ROM) to boot from hard disk.

Before You Begin

Before you build your project, consider how you plan to proceed:

Are you going to create a CD-ROM in your platform project? You can create a self-sufficient CD-ROM and typically do not need to perform any additional kernel configuration.

Are you going to use a USB portable drive to transfer files between the two machines? Be sure you have configured in support for the file system used by the USB device. For example, if the device is formatted for the VFAT file system, add that support to the kernel.

Are you going to connect your development host and target by Ethernet? You may need to add kernel options to support the kind of Ethernet device your target uses. For example, if your target hardware uses the Real Tek 8190 Ethernet device, enable that support in the kernel.

You can use the Workbench Kernel Configuration tool, or make menuconfig from the command line, to add kernel configuration options.
NOTE: Even if you use the command line to create and build your platform projects, you can still take advantage of Workbench tools. Import your existing platform project directory into Workbench (under File > Import > Wind River Linux). You can then, for example, double-click on the Kernel Configuration icon in your project, and use that tool to manipulate kernel options.

19.3.1 Creating a Platform Project


This section describes how to create a platform project and then build the file system and kernel for use on the target. As is noted in the procedure, some of the steps apply only if you want to create your own bootable CD-ROM. As explained in step 1 above, you can use various methods to transfer the file system and kernel to your target. The advantage of using the Wind River CD-ROM is that you can also place the file system and kernel intended for the target on it, so you will not have to transfer files using a USB drive or target network connection.


NOTE: If you want to use the CD-ROM to transfer the file system and kernel, you must place them in the Wind River Linux development environment. You may not wish to disturb a pristine development environment, or may not have permission to write to it. If that is the case, use the USB or network methods to transfer the file system and kernel to the target.

The following procedure creates the kernel and file system for the hard disk on the target and, optionally, a bootable CD image.
1. Set up the build environment and run the configure script.
a. If you are going to create a bootable CD-ROM, you must enable an ISO image, for example:
$ configure --enable-board=common_pc \
  --enable-kernel=standard+squashfs \
  --enable-rootfs=glibc_std \
  --enable-bootimage=iso
Note that you must add the +squashfs argument with the kernel specification, and include the --enable-bootimage=iso option.
NOTE: If you are using Workbench, configure your platform project with the KERNEL: squashfs template and add the option --enable-bootimage=iso. Note that adding the template feature/bootimage_iso does not do it.
b. If you are going to transfer the file system and kernel using a USB or network device, you do not need to build the ISO image. You could use the following configure command:
$ configure --enable-board=common_pc \
  --enable-kernel=standard \
  --enable-rootfs=glibc_std

2. Build the file system:
$ make fs
3. Rebuild the kernel:
$ make -C build linux.rebuild
If you are not building the bootable CD-ROM, skip to 19.3.2 Preparing the Target's Hard Drive, p.233.
4. Copy the *.dist.tar.bz2 and *bzImage* files from your prjbuildDir/export directory to installDir/wrlinux-3.0/layers/wrll-host-tools/host-tools/lib/linux-live/cd-root.
5. Create the ISO image by running:
$ make boot-image
This command creates an ISO image within the export directory.
6. Wind River supports QEMU on the common PC platform, so you can test that your .iso image is bootable by booting it with QEMU as follows:
$ make start-target TOPTS=-cd prjbuildDir/export/common_pc-boot.iso

Log in as user root, password root. Enter CTRL+A, X to exit.


7. Burn the ISO image onto a CD using a CD authoring tool available in your host environment. For example, in the GNOME environment, right-click on the .iso file and select Write to Disk.

You can now boot a stand-alone PC from the .iso file that is on the CD-ROM.

19.3.2 Preparing the Target's Hard Drive


The following procedure assumes that the target has one IDE hard drive, and that it is unpartitioned. The example used is a 30 GB hard drive, to be partitioned with one Linux partition and a swap partition. The particular values you see displayed will differ from those shown here depending on the size of the disk partitions you are creating.
NOTE: This procedure shows how to use the fdisk command on the Wind River CD-ROM to partition the drive; you may prefer to use another tool such as Gparted-LiveCD or Partition Magic.
Step 1: Use fdisk to partition the hard drive on the target.

1. Enter the fdisk command with the device name of the drive you are going to format. For example, at the console on the target, enter the following:
root@localhost:/root> fdisk /dev/hda
In this case, the hard drive on the target is device /dev/hda.
2. Examine your current partition table with the p command in fdisk:
Command (m for help): p

Disk /dev/hda: 30.0 GB, 30005821440 bytes
16 heads, 63 sectors/track, 58140 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot      Start         End      Blocks   Id  System

Command (m for help):

In this case, there are no existing partitions.


NOTE: If your disk is already partitioned, delete the existing partitions with the d command in fdisk.

WARNING: Any pre-existing data on the hard drive of the target will be lost when you perform this procedure.
3. Enter n to create a new partition:
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
4. Enter p to create a primary partition:
p
Partition number (1-4):
5. Enter 1 to create partition 1 (/dev/hda1):
Partition number (1-4): 1
First cylinder (1-58140, default 1):


6. Enter a number of cylinders for the size of your primary partition. Since you are only creating one partition, this is the majority of the disk space and the remainder is used for swap space.
First cylinder (1-58140, default 1): ENTER
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-58140, default 58140): 50000
Command (m for help):

7. Create a second partition (/dev/hda2) to use as swap space:
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (50001-58140, default 50001): ENTER
Using default value 50001
Last cylinder or +size or +sizeM or +sizeK (50001-58140, default 58140): ENTER
Using default value 58140
Command (m for help):

8. Change the type of the second partition to swap space (type 82):
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 82
Changed system type of partition 2 to 82 (Linux swap / Solaris)
Command (m for help):

9. Write your new partition table to disk:
w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

10. Leave the CD-ROM in the drive and reboot the target:
root@localhost:/root> reboot

(You can ignore the messages about the loopback device during reboot.)
Step 2: Format and mount the hard drive and swap.

1. Format the swap space as follows:
root@localhost:/root> mkswap /dev/hda2
2. After rebooting, the hard drive is automatically mounted on /mnt/hda1. Verify this with the df command as follows:
root@localhost:/root> df -h
Filesystem            Size  Used Avail Use% Mounted on
tmpfs                 363M     0  363M   0% /dev/shm
/dev/hda1              24G  679M   22G   3% /mnt/hda1
root@localhost:/root>


3. The swap space is also active. Check it with the free command as follows:
root@localhost:/root> free -m
             total       used       free     shared    buffers     cached
Mem:           724         72        652          0         10         41
-/+ buffers/cache:         20        704
Swap:         4006          0       4006
root@localhost:/root>
4. Format the main Linux partition, by first unmounting, then formatting it, as follows:
root@localhost:/root> umount /mnt/hda1
root@localhost:/root> mkfs -t ext3 /dev/hda1
5. Remount the main Linux partition as follows:
root@localhost:/root> mount /mnt/hda1

19.3.3 Placing the File System and Kernel on the Hard Disk
This section describes how you can use the Wind River CD-ROM, a USB disk, or a network connection to transfer the kernel and compressed file system to the target. Perform one of these procedures and then proceed to 19.3.4 Configuring Target System Files and Booting, p.237.

Copying from the Wind River CD-ROM

If you created a bootable CD-ROM that contained the file system and kernel for the target (see 19.3.1 Creating a Platform Project, p.231) you can now install the file system and place the kernel in the installed file system.
1. Change directory to the hard disk root (/mnt/hda1) and uncompress and extract the file system from the current RAM disk root directory:
root@localhost:/mnt/hda1> tar jxvpf /boot/*dist.tar.bz2
(Some permission or time stamp setting errors may cause a concluding error message that may be ignored.)
2. Copy the kernel from the current RAM disk to the hard disk's boot directory:
root@localhost:/mnt/hda1> cp /boot/*bzImage* /mnt/hda1/boot/bzImage
(Note that this shortens the name to bzImage for convenience.)
3. Configure system files as described in 19.3.4 Configuring Target System Files and Booting, p.237.

Copying from a USB Disk

In the following example, you use a USB keychain disk that has been formatted for the VFAT file system to transfer files from the development host to the target. Note that you may need to perform these commands as the root user.
1. Insert a formatted USB memory device into a USB port on the development host.
2. Verify the USB device is mounted. Many hosts will mount it automatically for you. If it is not mounted, mount it, for example:
# mount -t vfat /dev/sdc1 /media/KINGSTON
3. Copy the kernel and compressed file system to the USB device. Drag and drop them using a GUI, or use the command line, for example:
# cd prjbuildDir/export
# cp *bzImage* /media/KINGSTON
# cp *dist.tar.bz2 /media/KINGSTON
4. Unmount the USB device through a GUI menu choice or on the command line:
# umount /media/KINGSTON

5. Insert the USB device in the target that is running from CD-ROM. You may see a message such as the following:
scsi 3:0:0:0: Direct-Access     Kingston DataTraveler 2.0  1.00 PQ: 0 ANSI: 2
SCSI device sda: 8089600 512-byte hdwr sectors (4142 MB)
sda: Write Protect is off
SCSI device sda: 8089600 512-byte hdwr sectors (4142 MB)
sda: Write Protect is off
 sda: sda1
sd 3:0:0:0: Attached scsi removable disk sda
sd 3:0:0:0: Attached scsi generic sg0 type 0
This message provides the device name sda. If you booted with the device already inserted, look for similar lines in the dmesg output.
6. Make a mount point and mount the USB device on the target, for example:
root@localhost:/root> mkdir /mnt/usbdisk
root@localhost:/root> mount -t vfat /dev/sda1 /mnt/usbdisk
7. Be sure you are in the hard disk's root directory (for example /mnt/hda1) and then uncompress and extract the file system:
root@localhost:/root> cd /mnt/hda1
root@localhost:/mnt/hda1> tar jxvpf /mnt/usbdisk/*dist.tar.bz2

8. Copy the kernel to the boot directory:
root@localhost:/root> cd /mnt/hda1
root@localhost:/mnt/hda1> cp /mnt/usbdisk/*bzImage* boot/bzImage

(Note that this shortens the name to bzImage for convenience.) After you have copied the kernel and file system from the USB device to /dev/hda1 on the target, you can configure system files as described in 19.3.4 Configuring Target System Files and Booting, p.237.

Downloading from a Network Host

To make a network connection from the target to the host, you must use a boot device on the target that includes networking tools. The bootable CD-ROM you made in your platform project or other widely available bootable CD-ROMs or floppies offer networking support. This example assumes that the host and target are on the same subnet, and that the host will accept an sftp or ftp connection. First configure basic networking on the target, and then copy the necessary files from the host.


1. Configure your Ethernet connection with your target address, for example:
root@localhost:/root> ifconfig eth0 192.168.10.2
2. If you want to use host names instead of IP addresses, create a temporary /etc/hosts file for the target (temporary because it is in the RAM disk) with entries for the target and the host, for example:
127.0.0.1      localhost.localdomain   localhost
192.168.10.1   server1.lab.org         server1
192.168.10.2   target7.lab.org         target7
3. Change directory to the target's future root directory (currently /mnt/hda1) and use sftp or ftp to connect to the host, for example:
root@localhost:/root> cd /mnt/hda1
root@localhost:/mnt/hda1> sftp server1
4. Change to the export directory on the host where you created your target's ISO image. Download the compressed file system and the kernel file, for example:
sftp> mget *bzImage*
sftp> mget *dist.tar.bz2
sftp> quit

5. Unpack the file system in the mounted hard drive (/mnt/hda1), which will become the root directory on the target:
> tar -xvjpf *dist.tar.bz2
6. Move the kernel to the boot directory on the hard drive of the target:
root@localhost:/mnt/hda1> mv *bzImage* boot/bzImage

(Note that this shortens the name to bzImage for convenience.)

19.3.4 Configuring Target System Files and Booting


Set up your file system mount table (fstab) and your GRUB boot menu (menu.lst) as described in this section. You may also want to set up your hosts file if you are using a network connection.
1. If you want to use host names instead of IP addresses, edit the target drive's permanent /etc/hosts file (located in /mnt/hda1/etc/hosts) as shown with the RAM disk file system's /etc/hosts file in step 2 in Downloading from a Network Host, p.236.
2. Copy the RAM disk's fstab file to the new fstab file on the hard drive of the target:
root@localhost:/root> cp /etc/fstab /mnt/hda1/etc/fstab
If you are using a non-Wind River bootable CD-ROM it may not contain a suitable fstab file. If that is the case, edit /mnt/hda1/etc/fstab to look like this:
proc /proc proc defaults 0 0 # AutoUpdate
sysfs /sys sysfs defaults 0 0 # AutoUpdate
devpts /dev/pts devpts defaults 0 0 # AutoUpdate
relayfs /mnt/relay relayfs defaults 0 0 # AutoUpdate
tmpfs /dev/shm tmpfs defaults 0 0 # AutoUpdate
/dev/hdc /mnt/hdc_cdrom iso9660 noauto,users,exec 0 0 # AutoUpdate
/dev/hda1 /mnt/hda1 ext3 auto,users,suid,dev,exec 0 0 # AutoUpdate
/dev/hda2 swap swap defaults 0 0 # AutoUpdate
/dev/fd0 /mnt/floppy vfat,msdos noauto,users,suid,dev,exec 0 0 # AutoUpdate


3. Create a new boot menu as follows:
a. Backup the default boot/grub/menu.lst file:
root@localhost:/root> cd /mnt/hda1/boot/grub
root@localhost:/mnt/hda1/boot/grub> mv menu.lst orig_menu.lst
The orig_menu.lst file contains useful instructions that can help you understand the menu entries. It also shows you how to set up a system for dual- or multi-booting if you want to configure target disks that way in the future.
b. Using a text editor, create a new menu.lst file that contains the following:
default 0
timeout 5
title my Common PC
root (hd0,0)
kernel (hd0,0)/boot/bzImage root=/dev/hda1 fastboot
boot
Save your file.
4. Install GRUB to the Master Boot Record (MBR).
a. Start GRUB by entering grub at the command line:
root@localhost:/mnt/hda1/boot/grub> grub
grub>
b. Set the root device, and then install GRUB to the MBR, with the following three commands:
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Reboot, removing the CD-ROM so that the target reboots from hard disk. (The system must be rebooting before you can remove the CD-ROM.)

19.4 Creating ISO and USB Flash Drive Images


Use the following procedure to create and test ISO and USB images:
1. If a target can support booting an ISO or USB flash drive, add the standard+squashfs kernel and iso bootimage options to your configure command line, for example:
--enable-kernel=standard+squashfs --enable-bootimage=iso
The common_pc is an example of a target that supports both boot options.
2. Enable the CONFIG_VFAT kernel option for VFAT file system support. You can use the linux.menuconfig build target or the Workbench Kernel Configuration tool to set the option, for example:
$ make -C build linux.menuconfig
and change the VFAT option in File systems > DOS/FAT/NT Filesystems to y or * (not M). After building the target, create the boot image:
$ make boot-image


This will create two files in the build/export directory:
target-name-boot.iso
target-name-usb.img
3. Test the images with host-cross/bin/qemu, for example:
$ make start-target TOPTS=-cd export/common_pc-boot.iso
$ make start-target TOPTS=-disk export/common_pc-usb.img

Use a CD-writer to write the .iso to CD-ROM. If you have root permissions, you can use the dd command to write the USB image to a flash drive.
NOTE: The USB image defaults to a size of 256M.
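A typical dd invocation is sketched below. The device node /dev/sdX is a placeholder for your flash drive; writing to the wrong device destroys its contents, so verify the device name (for example, with dmesg after inserting the drive) before running it:
# dd if=export/common_pc-usb.img of=/dev/sdX bs=1M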


20
Deploying SELinux
20.1 Introduction
Configuring an SELinux platform project requires that you include the selinux feature template, but you must further configure the run-time system due to the nature of SELinux as described in this chapter. Not all configurations support SELinux. Consult your Wind River representative for more information.

20.1.1 Configuring an SELinux Platform Project


To configure a supported platform for SELinux, add the selinux feature template to your configure command line:
$ configure ... --with-template=feature/selinux ...
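For example, a complete command line might look like the following sketch; the board, kernel, and root file system shown are placeholders, and, as noted above, not every configuration supports SELinux:
$ configure --enable-board=common_pc --enable-kernel=standard \
  --enable-rootfs=glibc_std --with-template=feature/selinux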

Configuring SELinux on the Target

Due to the nature of the SELinux toolchain, a few manual bootstrap issues need to be addressed before you are able to use a fully functional installation. These steps are described in Booting the Target and Loading the Policy, p.241. In addition, if you want to perform policy management while on the target, you must build a policy store as described in Building the policy store, p.242.

Booting the Target and Loading the Policy

Because SELinux depends on every file in the root file system being set to a particular file context, this can only be done at runtime. Use the following procedure to make the modifications to the runtime.
1. Boot into a shell (supply the boot arguments root=/dev/sda rw init=/bin/bash selinux=1).
2. Mount essential file systems:
# mount -t proc none /proc && \
  mount -t sysfs none /sys && \
  mount -t selinuxfs none /selinux


3. Manually load the policy into memory:
# dd of=/selinux/load \
  bs=6000000 \
  if=/etc/selinux/wr-standard/policy/policy.23

NOTE: Use a block size bigger than the size of policy.23, because the policy must be loaded in a single write; otherwise the policy will not load.
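To confirm that the bs value you pass to dd is larger than the policy file, you can check the file size first; the policy version suffix may differ on your system:
# ls -l /etc/selinux/wr-standard/policy/policy.23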

4. Restore file security contexts and synchronize RAM with the disk:
restorecon -v -R /
sync ; sync ; sync

At this point, you will be able to reboot (set init=/sbin/init) into a functional SELinux system.
NOTE: You must reboot in order to build the policy store as described next.

If you are unable to boot your system, it is usually because /* is not coming up in the right security context. You must follow the procedure exactly.

Building the policy store

In order to manage the modules loaded in your policy, you need to create a policy store. Due to the nature of the SELinux toolchain, the policy store cannot be created during compile time because the libraries use /etc/selinux as the SELinux rootpath, which in essence would require doing the entire build in a chroot. Since a chroot is not used for building SELinux, you need to create the policy store manually if you wish to manage the policy in the target. If no policy management support is needed in the target, this step is not required for a functional SELinux system. In order to create a policy store, you must put all the module.pp files into a single policy.X file. Do this with semodule as follows: 1. Turn off enforce mode
# setenforce 0

Or
# echo 0 > /selinux/enforce

2. Create the policy store by regenerating the policy:
# cd /usr/share/selinux/wr-standard
# semodule -v -n \
  -s wr-standard \
  -b base.pp \
  $(for i in *.pp ; do if [ "$i" != "base.pp" ] ; \
  then echo -n "-i $i "; fi; done)

NOTE: The example uses bash-style command substitution and loop syntax. Substitute the equivalent commands if your shell (for example, csh) is different.

CAUTION: This can take time and space. As an example, on a 2.4 GHz Pentium 4 it may take the semodule command over 30 minutes, and use over 500 MB of RAM.


3. Relabel the file system (optional, if you have not changed the policy):
$ restorecon -R /
4. If all is correct, you should get the following message:
"Ok: transaction number 0."

Remember, building a policy store is only needed for policy management while on the target, for example when adding or removing a module using semodule. It may be more beneficial to do this sort of work at compile time from within the build/refpolicy/ directory instead.


PART IV

Use Cases
21 Building Run-times with RPM and Source ............................. 247
22 Examples of Adding Packages ............................................... 255
23 Using Custom Templates and Layers ..................................... 271
24 Kernel Use Cases ..................................................................... 283


21
Building Run-times with RPM and Source
21.1 Introduction 247
21.2 Tutorial One: RPM Build for Common PC 248
21.3 Tutorial Two: Source Build for Common PC 250
21.4 Tutorial Three: Building ISO Images and Partial Run-time Systems 251
21.5 Tutorial Four: RPM Build on ARM Versatile AB-926EJS 252
21.6 Tutorial Five: Source Build on ARM Versatile AB-926EJS 253
21.7 Tutorial Six: Building Ramdisk and Flash File Systems 254

21.1 Introduction
These are three step-by-step tutorials on building Wind River Linux run-time systems for the Common PC, a generic X86 board, and three step-by-step tutorials on building Wind River Linux run-time systems for the ARM Versatile AB-926EJS. Note that both of these boards can be simulated by QEMU. The tutorials for the Common PC cover:

Using the RPM method to build a complete run-time system. Using the source method to build a complete run-time system, with tests enabled. Using the source method to build a complete run-time system, including an ISO image for hard disk deployment. Using the source method to build a root file system only. Using the source method to build a kernel only.

The tutorials for the ARM Versatile AB-926EJS cover:

- Using the RPM method to build a complete run-time system, with flash images enabled.
- Using the source method to build a complete run-time system, with flash images enabled.


- Using the RPM method to build a complete run-time system as a ramdisk (initrd) image, a JFFS2 flash file system, or a CRAMFS flash file system.

installDir refers to /home/user/WindRiver.

21.2 Tutorial One: RPM Build for Common PC


The RPM build method uses a pre-built Wind River kernel, and builds a complete file system from RPMs. This is the fastest way of creating a run-time system. Some operations are performed as the root user; these are indicated by the # sign prompt.
Step 1: Create the NFS export and TFTP download directories for the run-time system.
# mkdir /home/user/export /tftpboot

Step 2:

Make the work directory.

Within /home/user, and as a regular user, make a work directory (in this example, workdir). This can hold any number of builds:
$ cd /home/user
$ mkdir workdir

Step 3:

Make the project build directory.

Change directory to workdir and make the project build directory. This will hold the build and source files, and the run-time system itself. In this example it is named after its board; you may name it as you like:
$ cd workdir
$ mkdir common_pc

Step 4:

Configure the project build directory.

Within workdir/common_pc/, run the configure script (configure) that resides in installDir/wrlinux-3.0/wrlinux/. The configure command options determine which kernel to use, and which root file system to build, for a specific board. In this case, a standard kernel and file system is configured, for the Common PC board:
$ cd common_pc
$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std

Once the configure command has completed, you can review its output at any time in the configure.log file in the project build directory.


Step 5:

Build the run-time file system within the project build directory.

Build the file system from RPMs and include the default kernel:
$ make fs

Within a few minutes the build system will create a compressed run-time file system image within export/. The kernel is prebuilt and resides in installDir/wrlinux-3.0/layers-wrll-linux-version/boards/common_pc/standard/. The build system automatically copies it to your export/ subdirectory.
Step 6: Copy the pre-built kernel to the TFTP download directory.

Note that the command below renames the kernel to bzImage.


$ cd export
$ su
Password: (root password)
# cp *bzImage* /tftpboot/bzImage

Step 7:

Uncompress the run-time file system to the NFS export directory.

Use the tar command from the NFS export directory to extract and uncompress the run-time file system. The tar command's -x option instructs tar to extract. The -j option instructs it to uncompress with bzip2. The -v option instructs it to be verbose (this is not a necessary option). The -p option instructs it to preserve permissions, and the -f option identifies the following file as the archive file to be uncompressed. To relieve the tedium of typing a long filename, the full name of the compressed run-time system file is abbreviated with a wildcard.

# cd /home/user/export
# tar -xjvpf /home/user/workdir/common_pc/export/*dist.tar.bz2

Your board can now use tftp to download the /tftpboot/bzImage kernel and NFS-mount the exported file system you have created.
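Setting up the host-side NFS and TFTP services is outside the scope of this tutorial, but as a rough sketch on a typical Linux host, the NFS export might be declared with an /etc/exports line like the following (the network address and options are only examples; adjust them for your environment):

/home/user/export 192.168.1.0/24(rw,no_root_squash,sync)

After adding the line to /etc/exports, re-export the file systems as root:

# exportfs -a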


21.3 Tutorial Two: Source Build for Common PC


This tutorial builds a complete Wind River run-time system for the Common PC board, including the kernel, from source. The example below includes an option that builds the test suite. The first build will take some time; subsequent builds will be faster.
Step 1: Make the necessary directories, install any RPM updates, and run configure.

Follow Step 1 to Step 3, in Tutorial One: RPM Build for Common PC, p.248, above.
Step 2: Configure the project build directory.

Within workdir/common_pc, run the configure script. The configure command options direct which kernel to build, and which root file system to build, for a specific board. In this case, a standard Linux kernel and file system is configured, for the Common PC board. The test suite option is included:
$ cd common_pc
$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std \
    --enable-test=yes

Step 3:

Build the complete run-time system.

At the prompt, run make build-all.


$ make build-all

Step 4:

Copy the kernel and the file system to their download and NFS export directories.

Both the kernel and a compressed run-time file system image are now in the workdir/common_pc/export directory. The kernel must be copied to the directory where TFTP is configured to download it to the target, and the file system image must be uncompressed to its NFS export directory. In this tutorial, the destination for the kernel is /tftpboot, and the destination for the file system is /home/user/export. In the command below, the kernel is both copied and renamed.
$ cd export
$ su
Password: (root password)
# cp *bzImage* /tftpboot/bzImage
# cd /home/user/export
# tar -xjvpf /home/user/workdir/common_pc/export/*dist.tar.bz2


21.4 Tutorial Three: Building ISO Images and Partial Run-time Systems
This tutorial builds an ISO bootable image, a kernel alone, and a file system alone (without kernel), for the Common PC board.

Building an ISO Image

Step 1: Make the necessary directories, install the RPM updates, and run configure.

Follow Step 1 to Step 3, in Tutorial One: RPM Build for Common PC, p.248, above.
Step 2: Configure the project build directory.

Within workdir/common_pc, run the configure script. In this example, the configure command options configure the build environment to enable the subsequent build of an ISO boot image:
$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std \
    --enable-bootimage=iso

Step 3:

Build the run-time system.

At the prompt, run make build-all.


$ make build-all

Step 4:

Create the ISO image by running:


$ make boot-image BOOTIMAGE_FSTYPE=iso

This command creates an ISO image within the export directory. Burn the ISO image onto a CD.
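Before burning, you can optionally sanity-check the ISO image by booting it under QEMU on the build host (a sketch only; the image file name below is a placeholder for the ISO created in export/, and any installed QEMU can be used):

$ qemu -cdrom export/<iso_image_name>.iso -boot d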

Building a File System Only

Step 1: Make the necessary directories, install the RPM updates, and run configure.

Follow Step 1 to Step 3, in Tutorial One: RPM Build for Common PC, p.248, above.
Step 2: Configure the project build directory.

Within workdir/common_pc, run the configure script. In this example, the configure command options configure the build environment to build a file system only:
$ configure \
    --enable-cpu=x86_64 \
    --enable-rootfs=glibc_cgl

Step 3:

Build the run-time file system.

At the prompt, run make build-all.


Building a Kernel Only

Step 1: Make the necessary directories, install the RPM updates, and run configure.

Follow Step 1 to Step 2, in Tutorial One: RPM Build for Common PC, p.248, above.
Step 2: Configure the project build directory.

Within workdir/common_pc, run the configure script. In this example, the configure command options configure the build environment to build a kernel only:
$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard

Step 3:

Build the run-time kernel.

At the prompt, run make build-all.

21.5 Tutorial Four: RPM Build on ARM Versatile AB-926EJS


Some operations are performed as the root user: these are indicated by the # sign.
Step 1: Create the NFS export and TFTP download directories for the run-time system.
# mkdir /home/user/export /tftpboot

Step 2:

Make the work directory.

Within /home/user, and as a regular user, make a work directory (in this example, workdir). This can hold any number of builds:
$ cd /home/user
$ mkdir workdir

Step 3:

Make the project build directory.

Change directory to workdir, and make the project build directory. This will hold the build and source files, and the run-time system itself. In this example it is named after the board; you may name it as you like:
$ cd workdir
$ mkdir arm_versatile

Configure the project build directory. Within workdir/arm_versatile, run the configure script (configure) that resides in installDir/wrlinux-3.0/wrlinux/.

In this example, the configure command enables the subsequent building of a variety of flash images.
$ cd arm_versatile
$ configure \
    --enable-board=arm_versatile_926ejs \
    --enable-kernel=small \
    --enable-rootfs=glibc_small \
    --enable-bootimage=flash


Step 4:

Build the run-time file system within the project build directory.

Note that once the project build directory has been configured, a message is displayed informing you that readme files are available in the local README directory. To perform the RPM build, enter:
$ make

Within a few minutes the build system will create a compressed run-time file system image within prjbuildDir/export. It also creates the file system for QEMU within prjbuildDir/export/dist/.
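Because the ARM Versatile board can be simulated by QEMU, you can optionally sanity-check the build at this point by booting it on the simulator before deploying to hardware (this uses the same start-target rule shown in the layer examples later in this guide; exit QEMU with CTRL+A x):

$ make start-target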
Step 5: Copy the kernel and the file system to their download and NFS export directories.

Both the kernel and a compressed run-time file system image are now in the workdir/arm_versatile/export directory. The kernel must be copied to the directory where TFTP is configured to download it to the target, and the file system image must be uncompressed to its NFS export directory. In this tutorial, the destination for the kernel is /tftpboot, and the destination for the file system is /home/user/export. In the command below, the kernel is both copied and renamed.
$ cd export
$ su
Password: (root password)
# cp *uImage* /tftpboot/uImage
# cd /home/user/export
# tar -xjvpf /home/user/workdir/arm_versatile/export/*dist.tar.bz2

21.6 Tutorial Five: Source Build on ARM Versatile AB-926EJS


This tutorial builds a complete run-time system for the ARM Versatile AB-926EJS board from source. The first build will take some time; subsequent builds will be faster.
Step 1: Make the necessary directories, install any RPM updates, and run configure.

Follow Step 1 to Step 3, Tutorial Four: RPM Build on ARM Versatile AB-926EJS, p.252, above.
Step 2: Build the complete run-time system.

At the prompt, run make build-all.


$ make build-all

Step 3:

Copy the kernel and the file system to their download and NFS export directories.

Both the kernel and a compressed run-time file system image are now in the workdir/arm_versatile/export directory. The kernel must be copied to the directory where TFTP is configured to download it to the target, and the file system image must be uncompressed to its NFS export directory.


In this tutorial, the destination for the kernel is /tftpboot, and the destination for the file system is /home/user/export. In the command below, the kernel is both copied and renamed.
$ cd export
$ su
Password: (root password)
# cp *uImage* /tftpboot/uImage
# cd /home/user/export
# tar -xjvpf /home/user/workdir/arm_versatile/export/*dist.tar.bz2

21.7 Tutorial Six: Building Ramdisk and Flash File Systems


This tutorial builds a ramdisk image, a JFFS2 image, and a CRAMFS image. It is not necessary to do a preliminary make build-all; only a make fs is needed. This tutorial assumes that Step 1 through Step 4 have been completed, in Tutorial Four: RPM Build on ARM Versatile AB-926EJS, p.252.

Building a Ramdisk Image

Within the project build directory (in this example, /home/user/workdir/arm_versatile), enter:
$ make boot-image BOOTIMAGE_FSTYPE=initrd BOOTIMAGE_RAM0SIZE=200000

The ramdisk image, arm_versatile_926ejs-initrd.gz.uboot, is within workdir/arm_versatile/export.

Building a JFFS2 Image

Within the project build directory, enter:


$ make boot-image BOOTIMAGE_FSTYPE=jffs2

The JFFS2 image, arm_versatile_926ejs-jffs2, is within workdir/arm_versatile/export.

Building a CRAMFS Image

Within the project build directory, enter:


$ make boot-image BOOTIMAGE_FSTYPE=cramfs

The CRAMFS image, arm_versatile_926ejs-cramfs, is within workdir/arm_versatile/export.
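A quick listing of the export directory should now show all three images named above (illustrative output; exact file names may vary slightly with your configuration):

$ ls -1 export/ | grep -E 'initrd|jffs2|cramfs'
arm_versatile_926ejs-cramfs
arm_versatile_926ejs-initrd.gz.uboot
arm_versatile_926ejs-jffs2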


22
Examples of Adding Packages
22.1 Introduction 255
22.2 Adding SRPM Packages 256
22.3 Adding Spec Packages 259
22.4 Adding Classic Packages 261
22.5 Adding Packages with a GUI Tool 268
22.6 Adding an RPM Package to a Running Target 270

22.1 Introduction
You may want to add one or more packages to the set of packages automatically included in your project. To add packages to your platform, you should first check your Wind River Linux installation to see if the package(s) you want to add are already provided.
NOTE: You can view a list of the file system packages in your current project in prjbuildDir/pkglist.

If you simply want to replace an existing package with a different version, you can make use of the infrastructure already provided for the package. Follow the procedure in Adding mm, p.259 to replace an existing package.

You can add three different types of packages to the Wind River Linux build system. Specify the type of package you want to add in the package_TYPE variable in the build system makefile for each package (dist/package/Makefile). Table 22-1 summarizes the three ways of adding packages.
Table 22-1  Three Ways to Add Packages

Package    package_TYPE            How to Add
SRPM       package_TYPE = SRPM     Patch the supplied spec file for rpmbuild.
Spec       package_TYPE = spec     Create a spec file for rpmbuild.
Classic    package_TYPE = (a)      Patch the supplied package makefile and build.

a. Do not specify a value for package_TYPE when adding packages with a classic makefile.

The following sections provide examples of how to add packages to your platform project for each of the three types of packages. The examples assume you have already configured a platform project. If you have created a platform project with a small file system, you can still follow the procedures but may have to add additional packages that are required by the packages added in the examples.

22.2 Adding SRPM Packages


Source RPM packages (*.src.rpm) provide a spec file that you must patch to integrate into the cross-build system. You can either patch the spec file with quilt or do it manually as shown in the following examples.

22.2.1 Adding the logwatch SRPM


The following procedure shows how to add an SRPM package from an external (non-Wind River) source to your build.
Step 1: Get the Source RPM Package

Acquire the package from its location on the Web, CD, or other computer. In this example, logwatch is available from http://download.fedora.redhat.com/pub/fedora/linux/releases/7/Fedora/source/SRPMS/. Place the package in the local custom layer's packages/ directory (prjbuildDir/packages/). Do not in any way uncompress or unpack the file.
Step 2: Create the Makefile and Patch Directories

Create a directory named after the package within the local custom layer's dist directory (prjbuildDir/dist) and create a patches subdirectory of the packageName directory. In this example, the package directory would be logwatch. The structure would be prjbuildDir/dist/logwatch/patches. A simple way to create this from prjbuildDir is:
$ mkdir -p dist/logwatch/patches


Step 3:

Create the Makefile and MD5 Checksum

Create the makefile within prjbuildDir/dist/logwatch. Refer to Necessary Makefile Contents, p.112 for details on the contents of the Makefile. Calculate the md5sum for logwatch and replace the logwatch_MD5SUM value with it. To calculate the md5sum, run md5sum on the package:
$ md5sum packages/logwatch-*

Replace the logwatch_VERSION value with the correct version number. This is the string in the package name between logwatch- and .src.rpm, for example 7.3.4-6.fc7. Your Makefile for logwatch will look something like this:
PACKAGES += logwatch

logwatch_TYPE        = SRPM
logwatch_RPM_DEFAULT = logwatch
logwatch_RPM_ALL     = logwatch logwatch-debuginfo
logwatch_MD5SUM      = f17c0a1722a590406ce7a30b5e9b2ccb
logwatch_VERSION     = 7.3.4-6.fc7
logwatch_ARCHIVE     = logwatch-$(logwatch_VERSION).src.rpm
logwatch_UPSTREAM    = http://download.fedora.redhat.com/pub/fedora/linux/releases/7/Fedora/source/SRPMS/$(logwatch_ARCHIVE)

Step 4:

Add the Package to the pkglist and Makefiles.

Use the pkgname.addpkg make target to add the package and any known dependencies to pkglist:
$ make -C build logwatch.addpkg

This adds the package name without version number or suffix to prjbuildDir/pkglist, and regenerates your makefiles to include the package. If you specified any dependencies in the makefile, they will be included in pkglist if they are not already in pkglist.
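You can confirm the addition with a quick check of pkglist (illustrative output):

$ grep logwatch pkglist
logwatch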
Step 5: Unpack the Package

Run the patch rule and unpack and patch the SRPM within prjbuildDir/build:
$ make logwatch.unpack

Running the patch rule for the package will create the main build directory, prjbuildDir/build/logwatch-7.3.4-6.fc7 and unpack the SRPM into several subdirectories. The tar archive file and all the patches are placed within the SOURCES subdirectory. The unpacked sources will go into the BUILD/logwatch-7.3.4-6.fc7 subdirectory, and be patched. The spec file goes into the SPECS subdirectory.
Step 6: Copy and Edit the spec File

Copy the spec file prjbuildDir/build/logwatch-version/SPECS/logwatch.spec to prjbuildDir/dist/logwatch/. Edit the copied version in dist/logwatch. You must make the first of the following changes:

Immediately after the %build and %install section headers, add the RPM macro, %configure_target (see the excerpt following this list).


If you desire, add a change indicator (such as -WR) to the Release line.
If you desire, add an entry to the changelog.

(Refer to Necessary spec File Changes, p.113 for additional information on spec files and Lua Scripting in Spec Files, p.114 for information on pre- and post-install scripts.)
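As an illustration, the required first change amounts to inserting the macro directly below each of the two section headers (excerpt only; the surrounding lines come from the original logwatch.spec):

%build
%configure_target
...

%install
%configure_target
...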
Step 7: Make the package.

Change directory to build and make the package as follows:


$ cd ..
$ make logwatch

If it does not compile correctly, examine the spec file changes and rebuild it until it does.
NOTE: If you are adding custom patches to the SRPM, place your patch(es) in prjbuildDir/dist/logwatch/patches/ and edit the prjbuildDir/dist/logwatch/logwatch.spec file to include them.

Step 8: Build the file system.

It is often the case that you are able to successfully include the added package into your file system at this time:
$ make fs

This particular example of logwatch, however, has been chosen because it will cause an error when building the file system:
../../wrlinux-2.0/wrlinux/scripts/rpmdeps.pl: Unresolved dependency mailx required by logwatch

This indicates that logwatch requires another package, mailx, for installation.
Step 9: Add mailx.

Check to see if mailx is available in Wind River Linux first:


$ ls -1 installDir/wrlinux-3.0/layers/wrll-wrlinux/packages | grep mailx
mailx-8.1.1-44.2.2.src.rpm

If mailx is already there, run make -C build mailx.addpkg to add it to pkglist and then make -C build mailx. Otherwise, you will need to find a mailx package and add it using the appropriate procedure for the type of package that it is. You can then perform the make, and logwatch will be installed in the file system.
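In the case where the SRPM is already provided, that sequence is simply (make fs is the same file system build used in Step 8):

$ make -C build mailx.addpkg
$ make -C build mailx
$ make fs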
Step 10: Create a layer to save your changes.

Create a layer that includes the changes you have made to your current project build directory:
$ cd prjbuildDir
$ make export-layer

Your layer will be created in prjbuildDir/export/export-layer/name.date. Your packages are included in a pkglist.add file in the new layer; in this example they are in templates/default/pkglist.add. You can then re-create your current configuration at any time with your original configuration command (which can be found in conf_cmd.ref in the layer) and the additional --with-layer=path_to_layer configuration option.
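For example, a later rebuild of this Common PC project might be configured as follows (the layer path is illustrative; substitute the location where your exported layer was saved):

$ configure \
    --enable-board=common_pc \
    --enable-kernel=standard \
    --enable-rootfs=glibc_std \
    --with-layer=/home/user/layers/common_pc_layer.2009-02-24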


22.3 Adding Spec Packages


Wind River supports building packages from spec files without requiring a src.rpm file. In the build system makefile for the package (dist/package/Makefile), packages of this type must have the entry package_TYPE = spec. These packages can be of various types (tar.gz, .bz2, and so on) but may not be SRPM files (src.rpm). The package will then be built in the same way as is done for packages of the SRPM type.

Adding mm

In the following procedure, you either add or update the mm package, depending on your installation. If you already have mm installed, you can copy existing infrastructure files and edit them as described in the procedure. If your installation does not include mm, you can create the files as shown. This procedure describes how to add a package with the spec method, and it also shows how you can update (override) an installed package with a newer version.
Step 1: Get the package.

Get the latest version of the package that you can find on the Web or another source. At the time of this writing, mm-1.4.2.tar.gz was available. Place the package in prjbuildDir/packages/.
Step 2: Create the infrastructure.

If the mm package exists in your installation you can copy the dist infrastructure and contents to your local project build directory:
$ cp -r installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/mm/ prjbuildDir/dist/

Otherwise, create the directories:


$ mkdir -p prjbuildDir/dist/mm/patches

Step 3:

Create or edit the makefile.

If you have copied the mm directories and files from your installation, edit dist/mm/Makefile for the correct MD5SUM, VERSION, ARCHIVE, and UPSTREAM settings. If you need to create the makefile from scratch, it should look something like this:
Example 22-1 Makefile for Spec File Package

PACKAGES       += mm
mm_TYPE         = spec
mm_NAME         = mm
mm_RPM_DEFAULT  = mm
mm_RPM_DEVEL    = mm-test mm-devel
mm_RPM_ALL      = mm mm-test mm-devel mm-debuginfo
mm_MD5SUM       = bdb34c6c14071364c8f69062d2e8c82b
mm_VERSION      = 1.4.2
mm_ARCHIVE      = mm-1.4.2.tar.gz
mm_UPSTREAM     = http://location/mm-1.4.2.tar.gz/$(mm_ARCHIVE)
mm_DEPENDS      = glibc


Step 4:

Create or edit the spec file.

If you have copied the mm directories and files from your installation, edit dist/mm/mm.spec for the correct version number and remove the following two lines which patch the 1.4.0 version:
Patch500: mm-1.4.0-add-libtool-tag.patch
...
%patch500 -p1 -b .add-libtool-tag

If you need to create the spec file from scratch, it should look something like the one shown in Example 22-2. Note that you can often find spec files for your package on the Web that you can use to start with.
Example 22-2 Spec File for Spec File Package

Name: mm
Version: 1.4.2
Summary: A shared memory library.
Release: 1_WR%{?_wr_rel}
Group: System Environment/Libraries
URL: http://www.engelschall.com/sw/mm/
Source0: http://www.engelschall.com/sw/mm/mm-%{version}.tar.gz
# WRLinux patches
License: Apache Software License
BuildRoot: %{_tmppath}/%{name}-%{version}-root

%description
The MM library provides an abstraction layer which allows related
processes to easily share data using shared memory.

%package devel
Summary: Files needed for developing applications which use the MM library.
Group: Development/Libraries
Requires: %{name} = %{version}-%{release}

%description devel
The MM library provides an abstraction layer which allows related
processes to easily share data using shared memory. The mm-devel package
contains header files and static libraries for use when developing
applications which will use the MM library.

%prep
%setup -q

%build
%configure_target
export LD=""
export ac_cv_maxsegsize=67108864
%configure --with-shm=MMFILE \
    --with-headers="%{_host_cross_include_dir}"
make CC_FOR_BUILD="%{_host_cc_wrapper}" CFLAGS_FOR_BUILD="%{_host_cflags}" CFLAGS="${CFLAGS}"

%install
%configure_target
rm -rf $RPM_BUILD_ROOT
%makeinstall

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root)
%doc LICENSE README PORTING THANKS
%attr(0755,root,root) %{_libdir}/*.so.*
%{_libdir}/*.so


%files devel
%defattr(-,root,root)
%{_bindir}/*
%{_includedir}/*
%{_libdir}/*.a
%{_libdir}/*.la
%{_mandir}/*/*

%changelog
* Comments here

Step 5:

Reconfigure your project.

Go to your project build directory and reconfigure your project so that it includes the new package:
$ cd prjbuildDir
$ make reconfig

Step 6:

Build the new package.

You can now build mm and it will build the new package:
$ make -C build mm.build

Note that when the build is finished, you have a build directory for the new version, not the old one, for example, prjbuildDir/build/mm-1.4.2/.
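A quick check of the build directory confirms which version was built (illustrative output):

$ ls -d build/mm-*
build/mm-1.4.2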

22.4 Adding Classic Packages


Classic packages are generated directly from source using the Makefile in prjbuildDir/build/package-version/. The way you integrate them into the Wind River Linux build environment differs depending on whether the package comes with a configure script in prjbuildDir/build/package-version/.

22.4.1 Adding Classic Packages with configure


The configure script in combination with the build system makefile prjbuildDir/dist/package/Makefile is able to determine the necessary variable settings for the cross-build environment. This makes it easier to add to your project than if you have a package that does not come with a configure script (as described in 22.4.2 Adding Classic Packages without configure, p.263). The following shows how to add the links package, a third-party package that comes with a configure script supplied.

Adding links

Step 1: Place the compressed source file in packages/.

Acquire the file from its location on the Web, CD, or other computer. At the time of this writing, a links-1.00pre20.tar.gz file is available from http://artax.karlin.mff.cuni.cz/~mikulas/links/download/.


Put the package in the local custom layer's packages directory (prjbuildDir/packages/). Do not in any way uncompress or unpack the file.
Step 2: Create the Makefile and Patch Directories

Create a directory named after the package within the local custom layer's prjbuildDir/dist directory and create a patches subdirectory of the package_name directory. In this example, the package directory would be links so you would have prjbuildDir/dist/links/patches/. A simple way to do this is:
$ cd prjbuildDir
$ mkdir -p dist/links/patches

Step 3:

Create the Makefile and MD5 Checksum

Create the makefile within prjbuildDir/dist/links. A simple way to do this is to copy an existing makefile from the Wind River Linux distribution for a classic file and modify it. For example, copy installDir/wrlinux-3.0/layers/wrll-wrlinux/dist/which/Makefile to prjbuildDir/dist/links. In the makefile, do the following:

1. Change all instances of which to links.

2. Replace the value of links_MD5SUM with the value you get from the following command:

$ md5sum prjbuildDir/packages/links*

3. Replace the value of links_VERSION with the string in the package name between the name of the package (links-) and .tar.gz. For a package named links-1.00pre20.tar.gz, this would be 1.00pre20.

4. Replace the following with appropriate values:

links_DESCRIPTION
links_SUMMARY
links_LICENSE
links_UPSTREAM
links_GROUP

If you can locate an RPM of the package at a site such as rpmseek.com, you may find all of the information you require there. The following is an example of a complete makefile for the links package:
PACKAGES += links

links_DESCRIPTION = Links is a text-based Web browser. Links does \
not display any images, but it does support tables and most other \
HTML tags. Links advantage over graphical browsers is its speed -- \
Links starts and exits quickly and swiftly displays webpages.
links_NAME = links
links_RPM_DEFAULT = links
links_RPM_ALL = links links-debuginfo
links_SUMMARY = A text-mode Web browser.
links_SUPPORTLVL = 3
links_GROUP = Applications/Networking/Internet
links_RUN_DEPS = glibc
links_MD5SUM = e05e4838920c14c9d683ff8b4730c164
links_VERSION = 1.00pre20
links_ARCHIVE = links-$(links_VERSION).tar.gz
links_UPSTREAM = http://artax.karlin.mff.cuni.cz/mikulas/$(links_ARCHIVE)
links_LICENSE = GPL
links_DEPENDS = glibc


NOTE: Do not use any single quotes (') or double quotes (") in your comments, for example in pkg_DESCRIPTION or pkg_SUMMARY.

Step 4: Add the package to pkglist.

Use the pkgname.addpkg make target to add the package and any known dependencies to pkglist and reconfigure your build/Makefiles.*:
$ make -C build links.addpkg

Step 5:

Test your work to be sure the new package builds properly before building the file system.

1. Unpack the links source archive:

$ make -C build links.unpack

The links source is now in build/links-version/.


NOTE: If you get the following error:
Would download links here, but configured to not do that.

it means that the build system cannot find your links-version.tar.gz file. Make sure that you have the correct version number and name specified in the makefile so that when the full name is expanded it matches the name of the tar.gz file in packages/.

2. You can now build the RPM package for installation:

$ make -C build links.rpm

Step 6:

Build the file system:


$ make fs

When you have successfully built the RPM, the links package will be installed from the RPM when you build the file system.
NOTE: You could have skipped step 5 and proceeded immediately to building the file system (make fs), and your package source code would be unpacked and built during the file system build procedure. The advantage of first unpacking your source archive and building the RPM is that you do not have to wait for other parts of the file system to build before you are able to determine if you have added the package correctly.

22.4.2 Adding Classic Packages without configure


If you are using a classic makefile to add a package that does not have a configure script, you must perform additional work on the build system makefile and patch the supplied makefile as shown in the following procedure.


Adding schedutils

The following example adds the schedutils package. The example requires several changes to the makefile it comes with because:

- The makefile variable CC must be changed to the appropriate toolchain.
- The package does not come with a configure script.
- The makefile installs under /usr while the Wind River build environment installs under wrlinux/usr.
- The list of binaries produced must be changed to those supported by the target architecture.

In the following example we build schedutils on the arm_versatile_926ejs. (Note that schedutils is now a part of util-linux and not usually installed separately any longer.) This example uses the importPackages.tcl script to set up the package build infrastructure as described in 22.5 Adding Packages with a GUI Tool, p.268. The following procedure assumes you have created a project directory and configured it for the arm_versatile_926ejs, for example:
$ configure --enable-board=arm_versatile_926ejs --enable-kernel=standard \
    --enable-rootfs=glibc_std

Step 1:

Import the Package

Initialize your environment and then start the importPackages.tcl script to download schedutils from the Web:
$ cd installDir
$ ./wrenv.sh -p wrlinux-3.0
$ cd prjbuildDir
$ wtxwish installDir/wrlinux-3.0/scripts/importPackages.tcl

NOTE: This should cause your path to include the installDir/workbench-version/foundation... path. Enter the following command to verify your path:

$ echo $PATH

If there is no foundation directory path in your path, you can do the following:

$ export PATH=$PATH:installDir/workbench-version/foundation/x86-linux2/bin

for bash, or

$ setenv PATH $PATH:installDir/workbench-version/foundation/x86-linux2/bin

for csh.

At the time of this writing, the package can be found at http://rlove.org/misc/schedutils-1.5.0.tar.gz. Select Wget, enter the URL, click Update and click Go. When the tool has completed the import, you will have the package in packages/, the Makefile and patches/ directory in dist/schedutils/, and the build directory build/schedutils/.
for csh. At the time of this writing, the package can be found at http://rlove.org/misc/schedutils-1.5.0.tar.gz. Select Wget, enter the URL, click Update and click Go. When the tool has completed the import, you will have the package in packages/, the Makefile and patches/ directory in dist/schedutils/, and the build directory build/schedutils/.


Step 2:

Edit the build system makefile.

The importPackages.tcl script creates a Makefile in dist/schedutils/, filling in the settings that it can and pointing out additional entries that you need to edit. In particular, search for angle brackets (< and >) which indicate where you must supply values. For the schedutils Makefile, you must supply values for the following:
schedutils_UPSTREAM = <pkg_URL>/$(schedutils_ARCHIVE)
schedutils_DESCRIPTION = <Description of the package>
schedutils_SUMMARY = <RPM Summary of the package>
schedutils_MD5SUM =

Refer to the comments in the makefile for instructions on filling in these fields. After making the edits, your makefile (minus the comments) will look something like this:
PACKAGES += schedutils
schedutils_VERSION = 1.5.0
schedutils_ARCHIVE = schedutils-1.5.0.tar.gz
schedutils_UPSTREAM = http://rlove.org/misc/$(schedutils_ARCHIVE)
schedutils_LICENSE = GPL
schedutils_DEPENDS = glibc
schedutils_DESCRIPTION = schedutils is a set of utilities for retrieving and \
manipulating process scheduler-related attributes, such as real-time \
parameters and CPU affinity.
schedutils_NAME = schedutils
schedutils_SUMMARY = Linux utilities for manipulating scheduler attributes.
schedutils_RPM_DEFAULT = schedutils
schedutils_RPM_DEVEL =
schedutils_RPM_ALL = schedutils
schedutils_SUPPORTLVL = 3
schedutils_GROUP = System Environment/Base
schedutils_RUN_DEPS = glibc
schedutils_MD5SUM = bb8dc76dd896bc190d4b5347db86e12a

Step 3:

Try to build the package.

At this point, you could try to build the package:


$ make -C build schedutils

The build fails with a message such as the following:


prjbuildDir/build/schedutils-1.5.0/configure: No such file or directory

As previously mentioned, schedutils does not come with configure. You will need to modify the makefile for this situation as shown in the next step.
Step 4: Add configure to the makefile.

Add the following lines to dist/schedutils/Makefile:


schedutils.config: schedutils.patch
	@$(MAKE_STAMP)

If you build schedutils now, it gets past the configure error and you come to the next errors:
install: cannot create regular file `/usr/local/bin/chrt': Permission denied
install: cannot create regular file `/usr/local/bin/ionice': Permission denied
install: cannot create regular file `/usr/local/bin/taskset': Permission denied


The reason for these errors is that the package you acquired from the Web, like most third-party packages you acquire, is not configured for building in a cross-development environment. It assumes you want to install the package on the host where you are building it. You must patch the supplied makefile (as described in Step 7) and edit dist/schedutils/Makefile to integrate the package build process into the Wind River Linux build environment as described in the next step.
Step 5: Edit the makefile for the build environment.

Add the following to dist/schedutils/Makefile:


schedutils_MAKE_OPT = \
    $(call configure_target,schedutils) \
    PROGS="$(schedutils_PROGS)" \
    CFLAGS="$(schedutils_TARGET_CFLAGS) -I$(HOST_CROSS_INCLUDE_DIR)"

schedutils_INSTALL_OPT = \
    PROGS="$(schedutils_PROGS)" \
    INSTALLBIN="$(INSTALL)" \
    MANPAGES="$(addsuffix .1,$(schedutils_PROGS))" \
    install PREFIX="$(schedutils_INSTALL_DIR)/usr"

These entries do the following:

schedutils_MAKE_OPT
Values listed under this variable are passed to the makefile on the command line as make $(schedutils_MAKE_OPT). As a result, values in the supplied makefile such as:

CFLAGS = -O2 -Wall -W -Wstrict-prototypes ${ANAL_WARN}

are replaced by:

CFLAGS=$(schedutils_TARGET_CFLAGS) -I$(HOST_CROSS_INCLUDE_DIR)

schedutils_INSTALL_OPT
Values listed under this variable are passed to make install as make $(schedutils_INSTALL_OPT). As a result, values in the supplied makefile such as:

PREFIX = /usr/local

are replaced by:

PREFIX=$(schedutils_INSTALL_DIR)/usr

Step 6: Build schedutils again.

In some cases, you would now be able to build your imported package without a problem:
$ make -C build schedutils.distclean
$ make -C build schedutils

In the case of the schedutils build for the arm_versatile_926ejs, however, you meet an additional error:
ionice.c:48:3: error: #error "Unsupported archiecture!"

The ionice program portion of schedutils is not supported for this architecture and must be removed from the build. To save time and space in this example, note that another schedutils program, taskset, is not needed and can also be removed as shown in the next step.


Step 7:

Create a patch.

Create a patch instead of editing the makefile each time you perform a make pkg.unpack. Create the schedutils-1.5.0-cross-compiler.patch patch shown in Example 22-3 and put it in dist/schedutils/patches/. Then create a patches.list file in the same directory which contains only the name of the patch, for example:
$ cat dist/schedutils/patches/patches.list
schedutils-1.5.0-cross-compiler.patch

Example 22-3 Commented schedutils-1.5.0-cross-compiler.patch Patch

--- schedutils-1.5.0/Makefile	2005-07-29 13:32:57.000000000 -0700
+++ schedutils-1.5.0.build/Makefile	2007-10-26 09:38:02.000000000 -0700
@@ -21,15 +21,20 @@
 CFLAGS = -O2 -Wall -W -Wstrict-prototypes ${ANAL_WARN}
 
 INSTALLBIN= install
-INSTALLMAN= install --mode a=r
-INSTALLDOC= install --mode a=r
-INSTALLDOCDIR= install --directory
+# Replace hard coded install with INSTALLBIN variable that is modified in
+# the make command line as one of the variables listed in
+# $(schedutils_INSTALL_OPT).
+INSTALLMAN= $(INSTALLBIN) --mode a=r
+INSTALLDOC= $(INSTALLBIN) --mode a=r
+INSTALLDOCDIR= $(INSTALLBIN) --directory
 
 PROGS = chrt ionice taskset
 MANPAGES= chrt.1 taskset.1
 DOCS = AUTHORS ChangeLog COPYING INSTALL README
 
-all: chrt ionice taskset
+# Replaces hard-coded targets list with PROGS variable that is modified
+# in the make command line as one of the variables listed in
+# $(schedutils_MAKE_OPT) and
+# $(schedutils_INSTALL_OPT)
+all: $(PROGS)
 
 chrt: chrt.c
 	$(CC) $(CFLAGS) -DVERSION=\"$(ver)\" -o chrt chrt.c
@@ -51,13 +56,23 @@
 	   -o -name '*.tmp' -o -size 0 \) \
 	   -type f -print | xargs rm -rf
 
+# Fixes the installation so that instead of:
+#   install file1 file2 file3 /destination/directory
+# it does:
+#   install file1 /destination/directory
+# per each file in 'for' loop
 install: ${PROGS}
 	@echo Install binaries to: ${BINDIR}
 	@echo Install manpage to: ${MAN1DIR}
-	@${INSTALLBIN} ${PROGS} ${BINDIR}
-	@cd man/ && ${INSTALLMAN} ${MANPAGES} ${MAN1DIR}
+	${INSTALLBIN} -d ${BINDIR}
+	for fl in $(PROGS) ; do \
+		${INSTALLBIN} $$fl ${BINDIR}; \
+	done
+	${INSTALLMAN} -d ${MAN1DIR}
+	for fl in ${MANPAGES} ; do \
+		(cd man/ && ${INSTALLMAN} $$fl ${MAN1DIR}); \
+	done
 	@echo Done! Do 'make installdoc' if you wish to install the docs.
 
 installdoc: ${PROGS}

Step 8:

Build the package.

The package should now build correctly.
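For example, repeating the commands used earlier now that the patch and patches.list are in place:

$ make -C build schedutils.distclean
$ make -C build schedutils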

22.5 Adding Packages with a GUI Tool


There is an importPackages.tcl script integrated into Workbench that you can also invoke directly from the command line. The importPackages.tcl script allows you to import source packages from your file system, from the network, and from the collection of packages supplied with Wind River Linux. The purpose of the script is to make the process of adding packages to your target file system easier. In the following example, a package is downloaded from the Web and preliminary work is performed that sets up the build system infrastructure automatically.
Step 9: Start the script.

Change directory to your project build directory and start the tool for adding packages:
$ cd installDir
$ ./wrenv.sh -p wrlinux-3.0
$ cd prjbuildDir
$ wtxwish $WIND_BASE/scripts/importPackages.tcl &

The import dialog opens. Select Import.


Step 10: Specify the location of the package.

Select Wget as shown if you are downloading the package from the Web.


Step 11:

Enter the URL of the package to download.

You may, for example, want to download the thttpd-2.25b-16.fc9.src.rpm package from http://download.fedora.redhat.com/pub/fedora/linux/releases/7/Fedora/source/SRPMS, so enter the full URL with the package name, in this case http://download.fedora.redhat.com/pub/fedora/linux/releases/7/Fedora/source/SRPMS/thttpd-2.25b-16.fc9.src.rpm. Click Update and note that the package name and version fields are filled in.


Step 12:

Download the package.

Click Go to download the package. If the Verbose box is checked, you will be prompted to press ENTER twice as the script interactively displays its progress. Uncheck the Verbose box to avoid the interactive prompting.
Step 13: Complete the process manually.

The Done message in the importPackages.tcl screen indicates the process of importing the package is complete. You can click Close to end the script. At this point, you can see that the package name has been added to pkglist, the package is in packages/, and the dist/ infrastructure is in place.

22.6 Adding an RPM Package to a Running Target


This tutorial installs the man package onto a running target (the SBC8560) that has already been configured and built to include a full suite of man pages with the --enable-doc-pages=target option.
Step 1: Copy and install the RPM dependencies for the man package.

From your preferred source for target RPMs, obtain the RPM packages that the man package depends on, and which are not already part of the standard Glibc run-time system. Copy them to the run-time system and install them on the running target with the rpm command:
> rpm -ivh info*rpm

Install in this order:

1. info
2. groff

Step 2: Copy and install the man package.

Similarly copy and install the man package:


> rpm -ivh man*rpm

All installed man pages can now be viewed with the man command.
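For example, assuming the corresponding manual page was included in the target file system:

> man ls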


23
Using Custom Templates and Layers
23.1 Introduction 271
23.2 Adding a Layer to a Platform Project 273
23.3 Adding Another Layer 274
23.4 Overriding Layer Contents with Another Layer 275
23.5 Patching a Host Tools Package 276
23.6 Configuring and Patching the Kernel 277
23.7 Using Feature Templates in Layers 279
23.8 Modifying a BSP 280

23.1 Introduction
Layers and templates are optional configuration techniques you may use with Wind River Linux projects. You might use templates, for example, to cause relatively small changes at the end of the configuration process. You would typically use layers to control larger configuration issues, perhaps reconfiguring and patching the kernel, modifying system files, and including one or more templates. Examples of cases where you may find that layers provide advantages are when:

- You plan to combine the work of different internal or external groups (many-to-one scenarios).
- You wish to share work with multiple projects or groups (one-to-many scenarios).
- You are making a step to a next kernel, release, or product version.

The following examples introduce some of the ways you may use layers and templates to do everything from adding a package to building a product in various feature configurations to modifying an existing board support package.


23.1.1 Examples in this Use Case


This use case contains the following examples:

- 23.2 Adding a Layer to a Platform Project, p.273
- 23.3 Adding Another Layer, p.274
- 23.4 Overriding Layer Contents with Another Layer, p.275
- 23.5 Patching a Host Tools Package, p.276
- 23.6 Configuring and Patching the Kernel, p.277
- 23.7 Using Feature Templates in Layers, p.279
- 23.8 Modifying a BSP, p.280

The initial example applies a layer that simply adds an application. The initial configuration is then updated with a series of layers to show how layers can be used in combination. Examples that follow this patch the kernel and illustrate how to use a layer with feature templates to configure different product features in a hypothetical phone product line. A final example creates a custom BSP by modifying an existing BSP without altering the original BSP's contents. The examples use a QEMU-supported target, the arm_versatile_926ejs, to verify results.

23.1.2 The Layers Used in the Example


Perform this use case in combination with a directory structure of example layers that is contained in installDir/wrlinux-3.0/samples/. It is in a zip archive named layers_and_templates.zip. Unzip the archive in a location outside of your build and development environments, for example, in a Layers subdirectory of your home directory. The archive contains this PDF file (layers_templates_use_case.pdf) and several layers in the layers_and_templates directory. The layers_and_templates directory contains the following subdirectories (layers):

- hello_layer - this is the first layer you add. It adds a new target application.
- firstmod - this layer modifies the first layer by patching the new target application.
- secondmod - this layer overrides the patch in the previous layer.
- qemumod - this layer modifies a host application.
- kernelmod - this layer modifies the kernel with new configuration settings and a patch.
- ft - this layer example uses feature templates to configure different feature sets of a phone.
- bspmod - this layer shows how to modify an existing BSP to create a new BSP without altering the contents of the original BSP.


23.2 Adding a Layer to a Platform Project


A layer can be used to add and remove packages. An example of how to add a package is shown in this first example layer which adds the hello application.
Step 1: Create a platform project.

Create a project with a small file system, for example:


$ configure --enable-board=arm_versatile_926ejs \
    --enable-kernel=standard \
    --enable-rootfs=glibc_small

You now have a standard glibc- and busybox-based file system configured as can be seen in the pkglist file:
$ cat pkglist
busybox
filesystem
glibc
libgcc
linux
setup
timezone
wrs_kernheaders

NOTE: If you use Workbench to configure your project, you will see many more packages when you view the pkglist file because Workbench includes the additional debug and demo templates by default. Step 2: Add a layer.

This time, create a platform project but add a layer to the existing, default configuration with the --with-layer argument. Include hello_layer from layers_and_templates:
$ configure --enable-board=arm_versatile_926ejs \
    --enable-kernel=standard \
    --enable-rootfs=glibc_small \
    --with-layer=/full_path/layers_and_templates/hello_layer

The layer, hello_layer, has now been configured into the project. The new application, hello, has been added to pkglist by the layer's pkglist.add file (hello_layer/templates/default/pkglist.add):
$ cat pkglist
busybox
filesystem
glibc
hello
libgcc
linux
setup
timezone
wrs_kernheaders

Step 3:

Verify the source will be built.

You can enter the following make command at this point to confirm that the source is copied into your build directory in preparation for the file system build:
$ make -C build hello.patch

The hello example source is copied into build/hello-WRS/.


Step 4:

Make the new file system.

Make the file system that now includes hello:


$ make fs

Step 5:

Test the new application.

Because this is a QEMU-supported target, you can run QEMU to quickly test that the new application is in place and works:
$ make start-target
...
# hello
hi there
# CTRL+A x
$

The hello application is there and working. You can see that it is in the root user's path. It is located in /bin, as specified in hello_layer/dist/hello/Makefile.
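If the which utility is present in your file system, you can confirm the location from the target shell (illustrative):

# which hello
/bin/hello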
NOTE: The hello application added in this example does not use the standard tar archive or SRPM packaging scheme; rather, the source is already unpacked. This approach can be very useful in development for making changes to source (for example in a source code control system) because those changes are applied immediately; no repackaging is required to make them available to the build system.

23.3 Adding Another Layer


Layers can also modify components in a platform project. Suppose you had forwarded your hello_layer to someone else, and they decided to make some changes and return them to you. Rather than modify your layer, they created another layer that patches the hello application in your layer.
Step 1: Create a project that includes two layers.

Add the new layer to the --with-layer argument, separating it with a comma as shown:
$ configure --enable-board=arm_versatile_926ejs \
    --enable-kernel=standard \
    --enable-rootfs=glibc_small \
    --with-layer=/full_path/layers_and_templates/firstmod,/full_path/layers_and_templates/hello_layer

Step 2:

Apply the patch.

You can now apply the patch provided by the new layer:
$ make -C build hello.distclean
$ make -C build hello.patch

If you examine build/hello-WRS/hello.c you can see that it is changed to print "bye there".


Step 3:

Build and test the new file system.

If you now build and test the new file system, you will see that the hello application prints the message as modified by the patch in the firstmod layer.
NOTE: It is easy to back out of changes made by layers; if you do not want to include the changes from firstmod, simply configure your project without it.

23.4 Overriding Layer Contents with Another Layer


Platform modifications can come from multiple layers. As an example, the secondmod layer contains a templates/default just like firstmod does. By proper ordering of the layers on the configure command line, you can determine which layer will override another.
Step 1: Add a third layer.

Configure a project with the hello_layer, firstmod, and secondmod layers as follows:
$ configure --enable-board=arm_versatile_926ejs \
    --enable-kernel=standard \
    --enable-rootfs=glibc_small \
    --with-layer=/full_path/layers_and_templates/secondmod,/full_path/layers_and_templates/firstmod,/full_path/layers_and_templates/hello_layer

Note the order in which the layers are specified: secondmod is listed first and then firstmod, so the default template in secondmod will override the default template in firstmod. In other words, layers are applied in reverse order, so that the first layers specified are the last layers applied. The last layers applied override layers applied earlier.
Step 2: View the order of layer processing.

You can see the order that layers are processed in the prjbuildDir/layers file, where the first layers listed overlay the layers listed later:
$ cat prjbuildDir/layers
/full_path/layers_and_templates/secondmod
/full_path/layers_and_templates/firstmod
/full_path/layers_and_templates/hello_layer
...

The secondmod layer is able to modify the code in firstmod because it is applied later. The secondmod layer has a higher priority than the firstmod layer.
Step 3: Apply the patches.

Patch the source:


$ make -C build hello.distclean
$ make -C build hello.patch

If you examine build/hello-WRS/hello.c you can see that it is changed to print "that's all folks".


Step 4:

View the order of patch processing.

quilt is used by the build system to manage the patches. You can use the quilt series command (or cat the contents of the prjbuildDir/build/hello-WRS/wrlinux_quilt_patches/series file) to see the order of patch processing:
$ alias quilt=$PWD/host-cross/bin/quilt
$ cd build/hello-WRS
$ quilt series
patches_links/full_path/layers_and_templates/firstmod/templates/default/hello/localchange.patch
patches_links/full_path/layers_and_templates/secondmod/templates/default/hello/localchange2.patch
$

(Note that to use quilt you must have prjbuildDir/host-cross/bin in your path, or specify the path to quilt on the command line.) If you had specified firstmod before secondmod on your configure command line, the order of patches in series would be reversed and the build system would attempt to apply the secondmod patch before the firstmod patch. This would fail when you entered the make -C build hello.patch command.
Step 5: Build and test the new file system.

If you now build and test the new file system, you will see that the hello application prints the message "that's all folks", which is contained in the secondmod layer.

23.5 Patching a Host Tools Package


The previous examples described how to use layers to add and patch target applications. You can also use a layer to patch a host tool as shown here. Some things to note about host tools:

- The host tools packages always use the classic Makefile. See 10. Adding Packages for more on classic packages.
- Place any patches for the package source tree in tools/pkg/patches/pkg-what_is_done.patch.
- List any patches in a patches.list file in tools/pkg/patches/patches.list.

See installDir/wrlinux-3.0/layers/wrll-host-tools/tools/ for examples using the host tool package infrastructure. This example uses the layer qemumod to provide a patch (templates/default/qemu/my_qemu.patch) for the qemu emulator host tool. To patch the tool, include the layer and rebuild the host tools as shown in the following example.


Step 1:

Configure the project.

The following configure command includes the layer with the patch and enables the host tools to be rebuilt:
$ configure --enable-board=arm_versatile_926ejs \
    --enable-kernel=standard --enable-rootfs=glibc_small \
    --with-layer=/full_path/layers_and_templates/qemumod \
    --enable-prebuilt-tools=no

Step 2:

Rebuild qemu.

You can now rebuild qemu so that it will include the patch from the layer:
$ cd build-tools/
$ make qemu.rebuild

Step 3:

Verify that the patch is applied.

You can use quilt or view the series file to see that the last patch applied was the one provided by the layer:
$ cd qemu-version/
$ quilt series
...
patches_links/full_path/layers_and_templates/qemumod/templates/default/qemu/my_qemu.patch

(Note that the patch itself is empty and does nothing; it is just used to demonstrate the way patches can be applied to update host tools with layers.)

23.6 Configuring and Patching the Kernel


In the following example you create a new platform project that uses a layer to modify a kernel configuration parameter and also patch the kernel. The included layer kernelmod has a default template that does two things: it enables a kernel configuration setting (CONFIG_BINFMT_AOUT) and patches the kernel as described next.

Enabling CONFIG_BINFMT_AOUT

The default template in the kernelmod layer includes a binfmt.cfg file (templates/default/binfmt.cfg) that enables the CONFIG_BINFMT_AOUT setting:
CONFIG_BINFMT_AOUT=y

It also contains an SCC file (templates/default/binfmt.scc) that refers to the fragment:


kconf non-hardware binfmt.cfg


Patching the Kernel

The following patch (from templates/default/linux/2.6.x/mykernelpatch.patch) is applied, which will output a message at boot time:
---
 init/calibrate.c | 1 +
 1 file changed, 1 insertion(+)

--- a/init/calibrate.c
+++ b/init/calibrate.c
@@ -117,6 +117,7 @@ void __devinit calibrate_delay(void)
 	unsigned long ticks, loopbit;
 	int lps_precision = LPS_PREC;
 
+	printk("La-la-la\n");
 	if (preset_lpj) {
 		loops_per_jiffy = preset_lpj;
 		printk("Calibrating delay loop (skipped)... "

Configuring and Building

Use the following procedure to modify the kernel with a layer.


Step 1: Configure the project.

Enter the following configure command to apply the kernelmod layer:


$ configure --enable-board=arm_versatile_926ejs \
    --enable-kernel=standard --enable-rootfs=glibc_small \
    --with-layer=/full_path/layers_and_templates/kernelmod

Step 2:

Rebuild the kernel.

When you have configured the project to include the layer, rebuild the default standard kernel to include your custom modifications:
$ make -C build linux.rebuild

Step 3:

Verify results.

The kernel .config file should now include the CONFIG_BINFMT_AOUT setting:
$ grep CONFIG_BINFMT_AOUT build/linux-*/.config
CONFIG_BINFMT_AOUT=y

You can see your patch is applied using git:


$ cd build/linux
$ git whatchanged
commit a47d0ffc98d13a7891766e62d3e7cbb684983c2b
Author: John Doe <john.doe@windriver.com>
Date:   Fri Feb 13 14:59:09 2009 -0800

    Adds an extra patch to the kernel

    Just showing a way to patch the kernel with a layer

    Signed-off-by: John Doe <john.doe@windriver.com>
...

Refer to 13. Patch Management for more on configuring and patching the kernel.


23.7 Using Feature Templates in Layers


Most changes introduced by a layer are contained in templates. In the examples so far, you have been using default templates. A default template is enabled whenever a layer is included on the configure command line; you do not have to explicitly tell configure to include the default template. You may also use feature templates, but you must explicitly enable them. They can be used, for example, to fine-tune the platform by selectively including or excluding features without modifying the layer.
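As a rough sketch (the layer and template names here are hypothetical, not part of the shipped examples), a layer that provides both a default template and feature templates might be laid out like this:

mylayer/
    templates/
        default/                 (applied automatically whenever the layer is included)
            pkglist.add
        feature/
            camera/              (applied only with --with-template=feature/camera)
                pkglist.add
            gprs/
                pkglist.add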
Combining Features

Another use for feature templates is aggregation. You can create "master" templates that include other templates. For a use case, imagine that you have a cell phone platform, and you'd like to be able to configure the system for different phones, ranging from a base phone to a full-fledged feature phone. To do this, you could create different feature templates, each one including a different set of sub-features. Master templates could then combine these templates to create different configurations. The ft layer included with this example contains multiple feature templates. Some of the templates contain include files; these are the master templates that include other templates. For example, basicphone and featurephone each contain include files that combine a different set of features:
$ cat ft/templates/feature/basicphone/include
feature/gprs
$ cat ft/templates/feature/featurephone/include
feature/edge
feature/camera

These templates each cause a different set of camera, edge, or gprs feature templates to be included in a configuration.
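You could add your own master template in the same way. For example, the following minimal sketch creates a hypothetical smartphone master template (not shipped with the example layer) whose include file simply lists the feature templates it aggregates:

$ mkdir -p ft/templates/feature/smartphone
$ cat > ft/templates/feature/smartphone/include << EOF
feature/edge
feature/camera
feature/gprs
EOF

Configuring with --with-template=feature/smartphone would then pull in all three features at once.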
Step 1: Configure a set of features.

For example, with the following command you would create a basicphone configuration:
$ configure --enable-board=arm_versatile_926ejs \
  --enable-kernel=standard --enable-rootfs=glibc_small \
  --with-layer=/full_path/layers_and_templates/ft \
  --with-template=feature/basicphone

Notice that you must specifically identify the template when it is not the default template in the layer.
Step 2: Verify your configuration.

The end of the template_paths file shows that you have included the new feature templates:
$ cat template_paths
...
/full_path/layers_and_templates/ft/templates/feature/gprs
/full_path/layers_and_templates/ft/templates/feature/basicphone

The feature/basicphone template uses an include file to include feature template gprs, so you see gprs listed here as well as basicphone.


Step 3: Reconfigure to create a featurephone.


$ configure --enable-board=arm_versatile_926ejs \
  --enable-kernel=standard --enable-rootfs=glibc_small \
  --with-layer=/full_path/layers_and_templates/ft \
  --with-template=feature/featurephone

Step 4: Verify your configuration.

The end of the template_paths file shows that you have included the new feature templates:
/full_path/layers_and_templates/ft/templates/feature/gprs
/full_path/layers_and_templates/ft/templates/feature/edge
/full_path/layers_and_templates/ft/templates/feature/camera
/full_path/layers_and_templates/ft/templates/feature/featurephone

The featurephone template include file includes the edge and camera templates, and the edge template has an include file that includes gprs, so you see them all listed in template_paths.
Step 5: Specify multiple templates.

Another way you could combine features is to specify multiple templates with the --with-template argument using a comma-separated list. For example, this configure command combines the basicphone and camera features:
$ configure --enable-board=arm_versatile_926ejs \
  --enable-kernel=standard --enable-rootfs=glibc_small \
  --with-layer=/full_path/layers_and_templates/ft \
  --with-template=feature/basicphone,feature/camera
...
$ cat template_paths
...
/full_path/layers_and_templates/ft/templates/feature/gprs
/full_path/layers_and_templates/ft/templates/feature/basicphone
/full_path/layers_and_templates/ft/templates/feature/camera

With more complicated scenarios, for example if the different camera features required kernel patches, additional files, and so on, you could create layers for each feature instead of just templates, combining them as described earlier to create the desired final configuration.

23.8 Modifying a BSP


In this example, you modify an existing board support package (BSP) for the arm_versatile_926ejs. One way to do this would be to duplicate the whole board/arm_versatile_926ejs template and make changes there. The main problem with this approach is maintenance: changes introduced at the master copy of the BSP do not propagate to the copy. A better approach is to use an identically named board template that refers to the original board template, and then make your additional changes in the template you created. You do this with an include directive in your custom template that causes the original board template to be included. In this way you don't touch the original template and you confine your changes to one location.


A layer that demonstrates this is the bspmod layer included in the example distribution. The include file in bspmod/templates/board/arm_versatile_926ejs contains an identically named board/arm_versatile_926ejs template that causes the build system to include the original board template from the base kernel layer. This ensures that the base support for the BSP is included. The audit.scc and audit.cfg files control one kernel config option for the purpose of demonstrating how the custom BSP template overrides the default BSP template. Follow this procedure to see how the base BSP gets configured:
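For orientation, the key piece of the bspmod layer is its include file. The following sketch of the layout and of the include file contents is reconstructed from the description above, so treat the exact file listing as illustrative rather than authoritative:

bspmod/
    templates/
        board/
            arm_versatile_926ejs/
                include        (pulls in the original board template)
                audit.scc
                audit.cfg

$ cat bspmod/templates/board/arm_versatile_926ejs/include
board/arm_versatile_926ejs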
Step 1: Configure the project.

Add the bspmod layer to your platform project:


$ configure --enable-rootfs=glibc_small --enable-board=arm_versatile_926ejs \
  --enable-kernel=standard \
  --with-layer=/home/user/Layers/layers_and_templates/bspmod

Step 2: Configure the kernel.

Enter the following to configure the kernel:


$ make -C build linux.reconfigure

Step 3: View the status of the kernel config option.

By default, for this BSP, the kernel config option CONFIG_AUDIT is not set. It is set, however, in the template just added. View the setting of the CONFIG_AUDIT option after adding the custom template and configuring the kernel:
$ grep CONFIG_AUDIT build/linux-*/.config
CONFIG_AUDIT=y
CONFIG_AUDIT_GENERIC=y

The BSP has been configured using the standard BSP, adding your custom BSP change.


24
Kernel Use Cases
24.1 Introduction 283
24.2 Adding a Feature to a Supported Kernel 283
24.3 Using KVM 285
24.4 Collecting Kernel Core Dumps with Kdump 290

24.1 Introduction
This chapter presents various examples of kernel development. Also see 9. Configuring the Kernel and 13. Patch Management for additional examples and explanations of kernel configuration and development.

24.2 Adding a Feature to a Supported Kernel


In the following procedure, you create a template that adds a kernel feature to the end of the kernel feature list. The patches that are applied are for demonstration purposes only, but illustrate the ease with which you can include or exclude your additional features with configure command options. You can also easily modify the template so that the kernel always includes the patches, as is demonstrated in the example. In the following use case you will:

1. Create a template or layer with a template
2. Add a kernel feature template
3. Add patches to the kernel feature template


4. Configure and build with and without the new kernel feature
5. Optionally modify the template to always include the feature

Step 1: Create a template or layer with a template

Following the standard conventions of the build system, create a template or a layer with a template that includes a linux subdirectory. For example, if your feature is called custom_log_lvl you would create a template such as:
templates/features/custom_log_lvl

which contains a linux subdirectory:


templates/features/custom_log_lvl/linux

Step 2: Add a kernel feature template file

Create a kernel feature template file in the linux subdirectory. The name of the file will be the name of the kernel feature. By convention, the directory and kernel template have the same name, but this is not a requirement. A kernel template file name follows the format: filename.scc. For this example, create templates/features/custom_log_lvl/linux/custom_log_lvl.scc. When properly configured, a feature called custom_log_lvl will be available to the kernel patching subsystem (see step 4).
Step 3: Add patches to the kernel feature template

The kernel patching subsystem offers a set of directives that are used in kernel features to control and manipulate which patches are applied to the kernel. In this example, the patch directive is used to add patches to the kernel patch queue. Put the following contents in templates/features/custom_log_lvl/linux/custom_log_lvl.scc:
patch add_new_log_lvl.patch
patch pr_debug_use_new_lvl.patch

Place the two patches shown in Example 24-1 and Example 24-2 in the template so that they are added to the kernel's patch queue when you want to configure the feature into the kernel.
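With the patches in place, the template directory would look roughly like this. This is a sketch based on the file names used in this example, and it assumes the patches are placed alongside the .scc file:

templates/features/custom_log_lvl/
    linux/
        custom_log_lvl.scc
        add_new_log_lvl.patch
        pr_debug_use_new_lvl.patch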
Example 24-1 add_new_log_lvl.patch

 b/include/linux/kernel.h | 1 +
 1 file changed, 1 insertion(+)

--- a/include/linux/kernel.h.orig
+++ b/include/linux/kernel.h
@@ -50,6 +50,7 @@ extern const char linux_proc_banner[];
 #define KERN_NOTICE "<5> "	/* normal but significant condition */
 #define KERN_INFO "<6> "	/* informational */
 #define KERN_DEBUG "<7> "	/* debug-level messages */
+#define KERN_CUSTOM "<7> CUSTOM: "	/* custom-level messages */
 
 extern int console_printk[];

Example 24-2	pr_debug_use_new_lvl.patch

 b/include/linux/kernel.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


--- a/include/linux/kernel.h.orig
+++ b/include/linux/kernel.h
@@ -207,7 +207,7 @@ extern void dump_stack(void);
 #ifdef DEBUG
 /* If you are writing a driver, please use dev_dbg instead */
 #define pr_debug(fmt,arg...) \
-	printk(KERN_DEBUG fmt,##arg)
+	printk(KERN_CUSTOM fmt,##arg)
 #else
 static inline int __attribute__ ((format (printf, 1, 2))) pr_debug(const char * fmt, ...)
 {

Step 4: Configure and build with and without the new kernel feature

To configure and build a kernel with the new feature applied, specify the new kernel feature template when you specify the kernel to the configure command with --enable-kernel=standard+custom_log_lvl. For example, a simple configure line for a common PC could be:
$ configure --enable-board=common_pc \
  --with-template-dir=PATH_TO/templates/features/custom_log_lvl \
  --enable-kernel=standard+custom_log_lvl \
  --enable-rootfs=glibc_std

When the configure command completes you can build the kernel and it will include the new feature. Simply remove the reference to the kernel feature template if you want to build the kernel without it:
$ configure --enable-board=common_pc \
  --with-template-dir=PATH_TO/templates/features/custom_log_lvl \
  --enable-kernel=standard \
  --enable-rootfs=glibc_std

24.3 Using KVM


At the time of this writing, only the standard kernel and glibc_std root file system on the common_pc and common_pc_64 BSP are supported. Contact your Wind River representative for information on support of other combinations. This section provides an example of how to run the KVM guest on a host with a Wind River Linux kernel and root file system.

Overview Of KVM

KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, which provides the core virtualization infrastructure, and a processor-specific kernel module, kvm-intel.ko or kvm-amd.ko.


The topology of KVM in Wind River Linux looks as follows:


wrlinux-3.0 (root)
    layers/
        wrll-linux-version/
            templates/feature/kvm/       (kvm building template)
                pkglist.add
                README                   (feature description)
            dist/kvm/                    (User space side)
                Makefile
                patches/*.diff           (user space patches)
            kernel-cache/features/kvm/   (Kernel side)
                kvm.scc
                kvm.cfg
                0001-kvm*.patch          (kernel patches)
    -----------------------------------------------------------------
    bsp/ (pseudo)
        common_pc_64_kvm_guest
        common_pc_kvm_guest

As you can see, KVM has two parts: the KVM host-side code shown on the top, and the KVM guest BSP below, which is used as a guest kernel to validate the KVM host.

KVM Host Requirements

The host you will use as the KVM host has certain requirements. The following assumes it is currently running Wind River Linux with the glibc_std file system. Before starting the following procedure, determine if virtualization (VT) is supported on your host. This requires two steps: 1. Enter the following command:
$ egrep '(vmx|svm)' --color=always /proc/cpuinfo

If your target has VT support, you'll see the following output:


flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr lahf_lm

2. Confirm that your BIOS has virtualization enabled (if required). This step simply requires you to confirm that the relevant options in your BIOS have the correct settings. Examples of options to investigate are:
POST Behavior -> Virtualization
Performance -> Virtualization

Change the status of virtualization from off to Enabled.

24.3.1 Configuring the KVM Host


This section describes how to configure and build a kernel and root file system with KVM support. KVM is not enabled by default on any BSP, so you have to enable it in the kernel and in userspace. A feature template and a kernel feature are provided to enable KVM:

--with-template=feature/kvm --enable-kernel=standard+features/kvm


To configure and build the example, use the following procedure: 1. Enter the following configure command:
$ configure \
  --enable-board=common_pc_64 \
  --enable-rootfs=glibc_std \
  --enable-jobs=4 \
  --enable-kernel=standard+features/kvm \
  --with-template=feature/kvm

2. Build KVM packages:


$ make -C build kvm

When complete, you will find the following KVM packages in prjbuildDir/export/RPMS/x86-64:

gnutls-version_WR3.0zz.x86_64.rpm
ncurses-version_WR3.0zz.x86_64.rpm
zlib-version_WR3.0zz.x86_64.rpm
libgpg-error-version_WR3.0zz.x86_64.rpm
libgcrypt-version_WR3.0zz.x86_64.rpm
kvm-version_WR3.0zz.x86_64.rpm

3. Assemble the root file system:


$ make fs

Boot KVM on common_pc_64 (with TAP)

Use the following procedure to launch a KVM guest from the KVM host using TAP network configuration.
KVM Host Preparation

In order to launch the KVM guest, you must first boot the KVM host. This section describes how to boot the board (common_pc_64) with the Wind River Linux standard kernel and glibc_std with KVM support. This example assumes a KVM host with a SATA hard disk used as the root device.
Deploy the Root File System

Deploy the root file system on a KVM host root device such as hard disk.
NOTE: You cannot use NFS as the KVM host root device when booting the KVM guest.

The following is a simple way to install the root file system contained in the tar archive, but it is based on two assumptions:

There is an idle partition on the KVM host HD, for example sda2.
The KVM host machine supports NIC boot.

1. Set the KVM host's boot sequence in the BIOS settings to the onboard NIC, enabled as Onboard Devices > Integrated NIC > Enabled w/PXE.


2. Configure a pxeboot server for the KVM host. This can be, for example, the development host. See 17. Deploying Your Board with PXE for information on how to configure a PXE server.

3. Boot your KVM host.

4. Create the file system on the KVM host on the idle partition with mkfs and then mount the partition (a command sketch for this and the following steps appears after this list).

5. Untar your root file system (common_pc_64-glibc-std.tar.bz2) on the idle partition.

6. Reboot the KVM host machine from the hard drive.

7. Untar common_pc_64-linux-modules-WR3.0zz_standard.tar.bz2 in "/".
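The following is a minimal sketch of steps 4 through 7. It assumes the idle partition is /dev/sda2, uses /mnt as the mount point, and assumes the exported tar archives have already been copied onto the KVM host; adjust the device, paths, and file names to match your setup.

# mkfs.ext3 /dev/sda2
# mount /dev/sda2 /mnt
# tar -C /mnt -xjf common_pc_64-glibc-std.tar.bz2
# reboot

(then, after rebooting the KVM host from the hard drive)

# tar -C / -xjf common_pc_64-linux-modules-WR3.0zz_standard.tar.bz2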

24.3.2 Configuring the KVM Guest


This section describes the steps required to prepare a KVM guest kernel and root file system to boot on the KVM host booted in Boot KVM on common_pc_64 (with TAP), p.287. 1. Configure and build a BSP named common_pc_64_kvm_guest on your development host:
$ configure \
  --enable-rootfs=glibc_small \
  --enable-kernel=standard \
  --enable-board=common_pc_64_kvm_guest

Note that if you want to use the apache server in 24.3.3 Run apache or boa, p.289, use glibc-std as your KVM guest root file system. 2. Build the file system:
$ make fs

When this completes, there is a KVM guest kernel image and root file system tar archive in your prjbuildDir/export directory.

3. Transfer the kernel and root file system to the proper path on the KVM host (they are used in the following steps).

4. Make an hda rootfs image (on the KVM host machine). In the directory of the KVM guest root file system deployed in Deploy the Root File System, p.287, run the following commands:
# modprobe loop
# make-kvm-guest-rootfs-img ./rootfs.img 768000 \
  prjbuildDir/export/common_pc_64-glibc_std-standard-dist.tar.bz2

NOTE: Run make-kvm-guest-rootfs-img without any parameters to get help.

rootfs.img is output as a glibc_small or glibc_std ext2 root file system image, either of which can be used as the KVM guest root file system.


Start the KVM guest (linux) from the KVM host (linux):

Use the following commands to launch a KVM guest kernel from the KVM host:
# modprobe kvm-intel    (or modprobe kvm-amd)
# qemu-system-x86_64 -nographic -net nic,model=i82557b \
  -net tap,script=/etc/qemu-ifup \
  -hda path_to/rootfs.img -kernel path_to/kernel \
  -append "root=/dev/hda rw console=ttyS0,115200 \
  ip=192.168.0.3::192.168.0.1:255.255.254.0"

Confirm your IP and gateway configuration is correct before starting. The IP address of the KVM guest should be in the same subnet segment as the KVM host. Set your netmask appropriately. After the KVM guest has booted, the virtual machine can be accessed from your local network through a command such as ssh root@192.168.0.3.
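The qemu command above hands the TAP interface to the /etc/qemu-ifup script on the KVM host. If your root file system does not already provide one, a minimal sketch (assuming an existing bridge named br0 on the host) is:

#!/bin/sh
# /etc/qemu-ifup (sketch): bring up the TAP interface passed in $1 and attach it to the bridge
/sbin/ifconfig $1 0.0.0.0 up
/usr/sbin/brctl addif br0 $1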

24.3.3 Run apache or boa


Running apache

To use the apache server, you should have glibc-std as your KVM guest root file system. After you boot the KVM guest from the KVM host, the boa server is started by default, so you should kill it first and then start httpd for apache, to avoid a conflict on port 80. Open the KVM guest IP address using a web browser from any local IP and you should see the following message:
It works

Running boa

You can also use boa, accessed at http://kvm-guest_ip_address/. The following message should display:
boa is running

Before quitting KVM with CTRL+A X, ensure that the KVM guest is cleanly shut down.


24.4 Collecting Kernel Core Dumps with Kdump


Not all boards and root file system/kernel combinations support kdump. Contact your Wind River representative for information on support of other combinations. Kdump is a reliable Linux kernel crash dumping mechanism that allows you to capture a crash dump from the context of a freshly booted kernel rather than the context of the crashed kernel. Kdump uses the kexec kernel feature of the first or primary kernel to boot into a second kernel whenever the system crashes. This second kernel, often called a capture kernel, boots with very little memory and captures the dump image. The first kernel reserves a section of memory that the second kernel uses to boot. kexec enables booting the capture kernel without going through the system firmware (for example, BIOS or OpenFirmware), allowing the contents of the first kernel's memory to be preserved, essentially constituting the kernel crash dump. Kexec and kdump are supported by CGL kernels and file systems for the x86 and PowerPC architectures. Parts of the procedures vary depending on architecture. The following sections present examples for the x86 and PowerPC architectures.

24.4.1 Kdump Example with x86


This use case demonstrates a generic x86 architecture kdump application. You can perform the following procedure using QEMU configured for a common_pc architecture with a glibc_cgl file system, which includes the necessary tools. If you are using, for example, a glibc_std file system, you must add the kexec_tools package (make -C build kexec_tools.addpkg).

1. Configure your platform project:
$ configure --enable-board=common_pc \
  --enable-kernel=standard \
  --enable-rootfs=glibc_cgl

2. Build your file system:

$ make fs

3. Configure a kernel with kexec and kdump support:

$ make -C build linux.menuconfig

Enable Processor types and features > kexec system call (this sets CONFIG_KEXEC=y).
Enable Processor types and features > kernel crash dumps (this sets CONFIG_CRASH_DUMP=y).
Enable Processor types and features > Build a relocatable kernel (this sets CONFIG_RELOCATABLE=y).
For CGL kernels only (so that crash dumps are interpretable), disable Security options > Grsecurity > Grsecurity (so that CONFIG_GRKERNSEC is not set).

Save and exit menuconfig.

4. Compile the kernel:


$ make -C build linux.rebuild


5. Copy the kernel images to the target. Two images are required: a vmlinux image with symbols, referenced by the crash analysis tool, and a bzImage, which serves as the capture dump kernel.
$ cp export/common_pc-vmlinux-symbols-WR3.0zz_standard export/dist/root/
$ cp export/common_pc-bzImage-WR3.0zz_standard export/dist/root/

6. Configure QEMU with a sufficient amount of RAM for this procedure, and also to pass a command-line argument to the kernel to reserve a buffer to hold the capture kernel:
$ make config-target

Set TARGET0_QEMU_MEM=256 for 256 MB. Add crashkernel=64M@16M to the end of TARGET0_QEMU_KERNEL_OPTS (for example, lock=pit oprofile.timer=1 crashkernel=64M@16M). This reserves 64 MB at the physical address 16 MB.

7. Start the QEMU target:

$ make start-target

8. Log in as the user root with password root and load the capture-kernel image into the reserved buffer for execution upon a kernel panic.
# kexec -p common_pc-bzImage-WR3.0zz_standard \
  --args-linux \
  --append="`cat /proc/cmdline | sed 's/ crashkernel=64M@16M//'` noacpi maxcpus=1"

-p means this kernel should be loaded on panic; --args-linux denotes that the image is a Linux kernel; --append= specifies the command line arguments to pass to the capture kernel.

In this case we pass the same arguments as for the primary kernel, but without reserving a window for another capture kernel. We also ensure the crash kernel boots with only one CPU and with ACPI disabled.

9. Trigger a kernel panic by loading a bad module, doing something nasty, or executing the following command:
# echo c > /proc/sysrq-trigger

Wait for the crash kernel to boot.

10. Log in again as the user root with password root and copy the core dump from the crashed kernel to permanent storage, for example:
# cp /proc/vmcore /root/vmcore.dump

11. Reboot the target using the standard kernel. The crash kernel boots with very little memory and so may not be capable of analyzing the crash dump without the kernel's out-of-memory (OOM) killer killing the process.

Target:
# shutdown -h now

Host:
$ make start-target

Note that, alternatively, you could create a boot script to automatically copy the core to storage and reboot back into service.
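A sketch of such a script follows. The file name, storage path, and init integration are hypothetical; it simply checks for a dump when the capture kernel boots, saves it, and reboots back into service:

#!/bin/sh
# save-vmcore (sketch): if the capture kernel exposes a crash dump, save it and reboot
if [ -e /proc/vmcore ]; then
    cp /proc/vmcore /root/vmcore-$(date +%Y%m%d-%H%M%S).dump
    sync
    reboot -f
fi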


Using kexec for Quick Reboot

Kexec may also be used to quickly reboot a target, bypassing the system firmware. To do this:

1. Boot the target as normal. You do not need to supply a crashkernel command line argument.

2. Load a kernel to reboot into:
root@localhost:/root> kexec -l common_pc-bzImage-WR3.0zz_standard \
  --args-linux \
  --append="`cat /proc/cmdline`"

3. Reboot into the new kernel. Note that this does nothing graceful to prepare userspace to go down.
root@localhost:/root> kexec -e

Issues and Limitations

A window of memory must be reserved to hold the capture kernel and a small amount of bookkeeping data (less than 1 MB). For most applications a 64 MB buffer is sufficient, as specified with crashkernel=64M@16M. Once booted with such a command line argument, that memory is no longer available for use by the system.

It is not possible to kexec on panic to a new kernel from the context of a capture kernel.

The capture kernel is potentially booted with very little memory and is not recommended for use in SMP mode. Therefore, Wind River recommends the system return to the standard kernel after the crash dump is collected.


PART V

Appendixes
A  Open Source Documentation ............................ 295
B  Common make Command Targets ................... 299
C  File System Layout Configuration .................... 303
D  KGDB Debugging and the Command Line ...... 309
E  Connecting with TIPC ........................................ 313
F  Control Groups (cgroups) .................................. 321
G  Build Variables .................................................... 325
H  Cavium Simple Executive Integration and Debugging ... 331
   Glossary .............................................................. 359


A
Open Source Documentation
A.1 Introduction 295
A.2 Carrier Grade Linux 295
A.3 Networking 296
A.4 Security 296
A.5 Linux Development 296

A.1 Introduction
This chapter includes URL links to open source networking, security, and Linux development documentation that is relevant to Wind River Linux. Such documentation is available from various sources. The main source used here is the Linux Documentation Project. Open source documentation, while valuable, must always be scrutinized for relevance. It is sometimes written specifically for a certain Linux distribution (which may not always be obvious), and sometimes even for a specific version. It is often out of date. It is a good idea to complement, where possible, the resources below with resources that may exist from vendors, mailing lists, and from the maintainers themselves.

A.2 Carrier Grade Linux


The Linux Foundation's Carrier Grade Linux page is a repository for articles, white papers, and projects devoted to developing Carrier Grade-compliant Linux distributions and applications.


A.3 Networking
Some of these documents are very general and others very specific. Note that the first two documents are very comprehensive, and include a good deal of information on specific protocols.

The Linux Networking Overview HOWTO. (www.tldp.org/HOWTO/Networking-Overview-HOWTO.html)
The Linux Networking HOWTO. Previously the Net-3 Howto. (www.tldp.org/HOWTO/NET3-4-HOWTO.html)
The PPP HOWTO. (www.tldp.org/HOWTO/PPP-HOWTO/index.html)
ADSL Bandwidth Management HOWTO. (www.tldp.org/HOWTO/ADSL-Bandwidth-Management-HOWTO/index.html)
Traffic Control HOWTO. (www.tldp.org/HOWTO/Traffic-Control-HOWTO/)
Netfilter/Iptables HOWTO. This includes a good deal of documentation on packet filtering, NAT, and tutorials. (www.netfilter.org/documentation)
VPN HOWTO. (www.tldp.org/HOWTO/VPN-HOWTO/index.html)

A.4 Security
This section includes documents on Netfilter, Iptables, SSL and SSH.

Netfilter/Iptables HOWTO. This includes a good deal of documentation on packet filtering, NAT, and tutorials. (www.netfilter.org/documentation)
SSL Certificates HOWTO. (http://www.tldp.org/HOWTO/SSL-Certificates-HOWTO/index.html)
OpenSSH. This is the home page for the OpenSSH project, with links to documentation and download sites for all the programs included in the OpenSSH suite. (www.openssh.com)

A.5 Linux Development


This section includes documents on Linux development.

Building and Installing Software Packages for Linux. (www.tldp.org/HOWTO/Software-Building-HOWTO.html)
Program Library HOWTO. (www.tldp.org/HOWTO/Program-Library-HOWTO/index.html)
Linux Loadable Kernel Module HOWTO. (www.tldp.org/HOWTO/Module-HOWTO/index.html)
Linux Parallel Processing HOWTO. (www.tldp.org/HOWTO/Parallel-Processing-HOWTO.html)
Secure Programming for Linux HOWTO. (www.tldp.org/HOWTO/Secure-Programs-HOWTO/index.html)


RPM HOWTO. (www.tldp.org/HOWTO/RPM-HOWTO/index.html)
http://fedora.redhat.com/docs/drafts/rpm-guide-en/index.html. An RPM guide from Red Hat.
http://www.rpm.org/max-rpm/. Additional useful information on RPM.


B
Common make Command Targets
B.1 Introduction
Table B-1 describes common make commands performed in the project build directory along with their Workbench equivalents.
Table B-1 Command Line and Workbench Build Options

make Command in prjbuildDir

Workbench Build Target

Description

make fs

fs

Build a new file system from RPMs where available, use source otherwise. No need to specify clean because export/dist and the file system image file are automatically cleaned. Force a build of everything (file system and kernel) from source. Remove the project_prj contents and folder. Run the clean rule for each package in the prjbuildDir/build directory. Re-process templates and layers. Recreates list files and makefiles but does not support changes to config.sh (which require a new configuration).

make build-all

build-all delete

make clean make reconfig

make -C build linux.clean make -C build clean make -C build linux

kernel-clean

Clean the kernel build. Same as make clean above.

kernel_build

Build Linux kernel.


make -C build linux.rebuild

kernel_rebuild

Clean, then build Linux kernel. The export/ directory is updated with the:

boot kernel kernel symbol file tar file that contains the kernel modules (which also include debug information).

linux.rebuild only rebuilds objects that are required by dependencies. make -C build linux.config make -C build linux.reconfig kernel_config Extract and patch kernel source for kernel configuration Regenerates the kernel configuration by reassembling the config fragments. kernel_menuconfig Extract and patch kernel source and launch menu-based tool for kernel configuration. Extract and patch kernel source and launch X Window tool for kernel configuration. Wind River Workbench tool for kernel configuration. Generates a board's DTB file needed to boot many PowerPC targets. Consult the BSP README for additional information and the proper DTS Base Name to use. Note that this command requires that you have already built the kernel with make -C build linux or make fs. Include the analysis tools (formerly called ScopeTools) in the file system. Build specific host tool tool.

make -C build linux.menuconfig

make -C build linux.xconfig

kernel_xconfig

Kernel Configuration make -C build linux.DTSbaseName.dtb

make -C build scopetools; make fs make -C build-tools tool.rebuild


make -C build pkg_name.unpack make -C build pkg_name.prepatch make -C build pkg_name.patch make -C build pkg_name.postpatch make -C build pkg_name.compile

For package build targets using Workbench, click the User Space Configuration tool, select the package you want to build, and select the Targets tab. You can then click the appropriate button for the package build target you want.

Unpack the packages source but stop before patching phases.

Copy package source into the build area and apply patches.

This will only do the compile. If you just specify pkg_name (with no .compile suffix), the toplevel dependency of .sysroot will trigger and the build system will compile the package, generate an RPM, and install it to the sysroot.

make -C build pkg_name.install make -C build pkg_name.clean make -C build pkg_name.distclean Clean the package pkg_name. Clean the package and the package patch list. This deletes the existing build directory of the package as well as .stamp files. Build the specific package pkg_name. Build the specific package pkg_name for the specified alternate CPU. if not recognized by the build system as a target, anything is passed into the package itself and run there. This would be like running make -C \ build/package-<version> <anything> Create the exportable prjbuildDir/export/host-tools.tar.bz2 archive.

make -C build pkg_name make -C build-cputype pkg_name make -C build pkg_name.anything

make host-tools

make -C build pkg_name.addpkg

Add a package and any packages it is known to require, and reconfigure the makefiles as appropriate. Add a package for the specified CPU and any packages that it requires. Clean, then build a package.

make -C build-cputype pkg_name.addpkg make -C build pkg_name.rebuild


User Space Configuration make -C build busybox.menuconfig make export-layer export-layer

Wind River Workbench tool to add, remove, patch packages. Menu-based tool to configure busybox. Extracts the changes to a project into a layer in the export/ directory, which can then be shared with other projects, and added to source control. Creates a sysroot/ directory in the export/ directory, which can be used for providing build specs in Workbench. Create an exportable toolchain that can be used in combination with an exported sysroot for a portable application environment. Starts a GUI applet that assists the developer in adding external packages to a project. Reboot target with latest kernel and file system. Start a QEMU simulation.

make export-sysroot

export-sysroot

make export-toolchain

export-toolchain

import-package

make deploy make start-target

deploy


C
File System Layout Configuration
C.1 Introduction 303
C.2 changelist.xml Commands 304
C.3 The fs_final.sh Script 308

C.1 Introduction
The file system layout feature has been designed to allow you to view the contents of the export/dist file system in Workbench as it will be generated by the development system. The following sections explain how to use scripts and XML to add custom files and directories to that file system or to RPMs. See Wind River Workbench User's Guide (Linux Version) for using the File System Configuration Layout tool in Workbench to do the following:

Examine file meta properties.
Add files and directories to the file system.
View parent packages and remove packages.
Add devices to /dev and change their ownership.

The filesystem/changelist.xml file is an XML file that is managed by Workbench but can be edited or modified by editors or command line tools. The script wrlinux/scripts/fs_changelist.lua processes this file immediately before the optional finalization script fs_final.sh (see C.3 The fs_final.sh Script, p.308). The result is the export/dist file system image, which is created as follows:

1. All packages are exported into export/dist.
2. fs_install.sh is processed into an RPM, and exported into export/dist.
3. The files in filesystem/fs are copied into export/dist.
4. The changelist.xml file is processed on top of export/dist.
5. Finally, your optional fs_final.sh is processed, as the last word.


C.2 changelist.xml Commands


The XML file filesystem/changelist.xml iterates the files that you wish to be added to or removed from the file system. Removed files are relative to export/dist, and the added files are absolute paths on the host. This list is not ordered, though it is preferably in sorted order for ease of browsing by command line users. All files and directories are added explicitly. Adding a directory does not automatically add its contents as well. This allows for a GUI or command tool to easily do the following:

Add a directory and then remove subsets.
Remove a directory and add back in subsets.
Apply unique attributes to any subset of an added directory tree.

All the listed fields in the following are required for their respective action, unless otherwise noted.

General Attributes

There is one general attribute to describe the version of the file.


Example
<layout_change_list version="1" >

Removing a File, Directory, Pipe, Symlink, or Device Required Fields

action=delfile name=name Name of the file, directory, pipe, symlink, or device to delete.
Example
<cl action="delfile" name="/usr/share/f_foo0" />

Adding a File Required Fields

action=addfile
name=filename
    Name of the file added to the target file system.
umode=permissions
    The permissions of the target file, in octal (as with chmod).


Optional Fields

source=full_path
    The name and path of the file on the host file system. If present, the source file is copied into the target file system. If not present, then this entry is used to modify the permissions of an existing file.
size=size
    The pre-calculated size of the file, used if the source field is present, saving a size lookup by the tools that process this file (Workbench or command line tools).
uid=username
    The user name, in text or in numeric form (as with chown).
gid=groupname
    The group name, in text or in numeric form (as with chgrp).
Examples
<cl action="addfile" name="/usr/share/f_foo1" <cl action="addfile" name="/usr/share/f_foo2" umode="777" /> <cl action="addfile" name="/usr/share/f_foo3" size="10" /> <cl action="addfile" name="/usr/share/f_foo4" uid="user1" gid="group1" /> source="/tmp/layout/f_foo1" /> source="/tmp/layout/f_foo1" source="/tmp/layout/f_foo1" source="/tmp/layout/f_foo1"

Adding a Directory Required Fields

action=adddir name=dirname Name of the directory added to the target file system.
Optional Fields

source=name
    Name and path of the directory on the host file system. If present, the permissions of the source directory are used to create the directory on the target file system. If not present, then this entry is used to create a new empty directory.
umode=permissions
    The user/group/other permissions of the directory, in octal, if the source field is not present, to modify or override the permissions of an existing directory.
uid=username
    The user name, in text or in numeric form (as with chown).
gid=groupname
    The group name, in text or in numeric form (as with chgrp).


Examples
<cl action="adddir" name="/usr/share/f_dir1" <cl action="adddir" name="/usr/share/f_dir2" umode="777" /> <cl action="adddir" name="/usr/share/f_dir3" size="10" /> <cl action="adddir" name="/usr/share/f_dir4" uid="user2" gid="group2" /> source="/tmp/layout/f_dir1" /> source="/tmp/layout/f_dir1" source="/tmp/layout/f_dir1" source="/tmp/layout/f_dir1"

Notes

If the source field is not present, then a new empty directory is created on the target file system. If the source field is present, then the source directory name and attributes are copied from that source location. This command will not copy the contents of the source directory. Each file or sub-directory is expected to be iterated explicitly with the respective file or directory add directive.

Adding a Symlink Required Fields

action=addsymlink
name=name
    Name of the symlink file added to the target file system.
target=target
    Name of the target within the target file system.
umode=permissions
    The user/group/other permissions of the target symlink, in octal.
Optional Fields

uid=username The user name, in text or in numeric form (as with chown). gid=groupname The group name, in text or in numeric form (as with chgrp).
Examples
<cl action="addsymlink" name="/usr/share/f_sym1" /> <cl action="addsymlink" name="/usr/share/f_sym2" umode="777" /> <cl action="addsymlink" name="/usr/share/f_sym3" size="10" /> <cl action="addsymlink" name="/usr/share/f_sym4" uid="user3" gid="group3" /> target="/usr/share/f_foo1" target="/usr/share/f_foo1" target="/usr/share/f_foo1" target="/usr/share/f_foo1"


Adding a Device Required Fields

action=addbdev or addcdev (block or char)
name=name
    Name of the device added to the target file system.
umode=permissions
    The user/group/other permissions of the target device, in octal.
major=major_number
    The major number for this device.
minor=minor_number
    The minor number for this device.
Optional Fields

uid=username The user name, in text or in numeric form (as with chown). gid=groupname The group name, in text or in numeric form (as with chgrp).
Examples
<cl action="addbdev" name="/usr/share/f_bdev1" <cl action="addbdev" name="/usr/share/f_bdev2" umode="777" /> <cl action="addbdev" name="/usr/share/f_bdev3" /> <cl action="addbdev" name="/usr/share/f_bdev4" uid="user4" gid="group4"/> <cl action="addcdev" name="/usr/share/f_cdev1" <cl action="addcdev" name="/usr/share/f_cdev2" umode="777" /> <cl action="addcdev" name="/usr/share/f_cdev3" /> <cl action="addcdev" name="/usr/share/f_cdev4" uid="user4" gid="group4"/> major="3" minor="4" /> major="3" minor="4" major="3" minor="4" size="10" major="3" minor="4" major="1" minor="2" /> major="1" minor="2" major="1" minor="2" size="10" major="1" minor="2"

Adding a Pipe Required Fields

action=addpipe
name=name
    Name of the pipe added to the target file system.
umode=permissions
    The user/group/other permissions of the target pipe, in octal.


Optional Fields

uid=username The user name, in text or in numeric form (as with chown). gid=groupname The group name, in text or in numeric form (as with chgrp).
Examples
<cl action="addpipe" name="/dev/f_pipe1" />
<cl action="addpipe" name="/dev/f_pipe2" umode="777" />
<cl action="addpipe" name="/dev/f_pipe3" size="10" />
<cl action="addpipe" name="/dev/f_pipe4" uid="user4" gid="group4" />
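Putting several of the actions above together, a complete changelist.xml might look like the following sketch. The file, device, and user names are hypothetical and only illustrate how the directives combine in one file:

<layout_change_list version="1" >
    <cl action="adddir"     name="/opt/myapp" umode="755" />
    <cl action="addfile"    name="/opt/myapp/myapp.conf" source="/tmp/layout/myapp.conf" umode="644" />
    <cl action="addsymlink" name="/etc/myapp.conf" target="/opt/myapp/myapp.conf" umode="777" />
    <cl action="addcdev"    name="/dev/mydev" major="240" minor="0" umode="660" />
    <cl action="delfile"    name="/usr/share/f_foo0" />
</layout_change_list>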

C.3 The fs_final.sh Script


The build system processes any script named fs_final.sh as the final step. This gives you the chance to script any arbitrary and final changes to the file system. Prior to the file system layout feature documented in this appendix, this was the only way to make file system changes and to trim the output of RPMs. It is expected that most of the functionality of fs_final.sh is met with the file system layout feature, and that this script should now only be needed for extreme or complicated changes.
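For reference, a minimal sketch of such a script follows. The trimmed paths are hypothetical, and the sketch assumes, as a simplification, that the script runs with the assembled file system available under export/dist in the project build directory:

#!/bin/sh
# fs_final.sh (sketch): last-chance tweaks to the assembled file system in export/dist
DIST=export/dist
rm -rf $DIST/usr/share/doc          # example: drop documentation to save space
chmod 600 $DIST/etc/securetty       # example: tighten permissions on a config file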


D
KGDB Debugging and the Command Line
D.1 Introduction 309
D.2 Debugging with KGDB from the Command Line 309
D.3 KGDB Debugging Using the Serial Console (KGDBOC) 312

D.1 Introduction
This appendix presents some notes on KGDB debugging using gdb from the command line. Refer to Wind River Workbench by Example, Linux Version for details on using the Workbench debugger with Wind River Linux. You may find it useful to make a KGDB connection from the command line using gdb for several reasons:

You are more familiar with gdb for particular types of debugging.
You wish to automate some KGDB tests.
You are having problems with your KGDB connection from Workbench.

D.2 Debugging with KGDB from the Command Line


Go to your project build directory (this makes it easier to provide the path to the vmlinux symbol file):
$ cd prjbuildDir

Locate your cross-compiled version of gdb. You can find one in the project's host-cross directory; for example, this one from an ARM project, where the gdb binary name has a prefix for its cross-compile architecture:
./host-cross/arm-wrs-linux-gnueabi/x86-linux2/arm-wrs-linux-gnueabi-gdb

NOTE: If the host and target architectures are the same, you can use the host's gdb.


Run the cross-compiled version of gdb on your vmlinux. You will see various banners when it starts.
$ ./host-cross/arm-wrs-linux-gnueabi/x86-linux2/arm-wrs-linux-gnueabi-gdb \
  export/*vmlinux-symbols*

For some boards, you need to assert the architecture for gdb. For the 8560, for example, it is necessary to specify:
(gdb) set architecture powerpc:common

For a MIPS-64 CPU board with a 32-bit kernel, it is necessary to specify:


(gdb) set architecture mips

NOTE: Without this setting, gdb will continually respond with errors such as Program received signal SIGTRAP, Trace/breakpoint trap. 0x00000000 in ?? () and other errors.

In the gdb session, connect to the target. Port 6443 is reserved for KGDB communication:
(gdb) target remote udp:target IP:6443

You will see various warnings and exceptions that you can ignore. If, however, gdb informs you that the connection was not made, review your configuration, command syntax, and the IP addresses used. Enter the where command, and note the output:
(gdb) where

You should see a backtrace stack of some depth. If you see only one or two entries, or a ??, then you are observing an error. Enter the info registers command, and note the output:
(gdb) info registers

You should see the list of registers. Examine this list. If, for example, the program counter (the last entry) is zero or otherwise unreasonable, then you are observing an error. Enter a breakpoint command for do_fork:
(gdb) break do_fork

On the target, enter an ls command and press RETURN once:


target_# ls

On the host, see that the breakpoint was hit:


Breakpoint 1 at 0xc011a863: file kernel/fork.c, line 1120.

Continue the target execution.


(gdb) c

Note that the target resumes normal operation. Now press CTRL+C to get the gdb prompt back, and have it wait for the next breakpoint.
CTRL+C (gdb)

From here, you can press CTRL+C to send a break, set breakpoints, view the stack, view variables, and so on. You may wish to build the kernel with CONFIG_DEBUG_INFO=y if you want more debugging info.
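If you want to automate this kind of smoke test (one of the reasons listed in D.1), you can drive the same sequence from a gdb command file. The following is a sketch; the file name and target IP address are placeholders, and the set architecture line is only needed for boards that require it:

# kgdb-check.gdb (sketch): connect, capture a backtrace and registers, then detach
# set architecture powerpc:common
target remote udp:192.168.0.2:6443
where
info registers
disconnect
quit

You could then run it in batch mode, for example:

$ ./host-cross/arm-wrs-linux-gnueabi/x86-linux2/arm-wrs-linux-gnueabi-gdb \
  -batch -x kgdb-check.gdb export/*vmlinux-symbols*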


Release the KGDB connection.


(gdb) disconnect
(gdb) quit

CAUTION: If you quit gdb without first disconnecting from the target, you may have to re-boot the target before you can reconnect. In fact, you may also lose Telnet and other communication, especially if the target was stopped at a breakpoint.

D.2.1 Enabling and Disabling KGDB in the Kernel


By default, KGDB is enabled in the pre-built and generated Wind River Linux kernels. Here are the steps to disable KGDB, used for example when transitioning to production builds.

Using the Command Line

Set up the kernel's configuration support files.


$ cd projectdir
$ make -C build linux.config

Run the command menuconfig to modify the kernel for debugging.


$ cd linux-version
$ make [ARCH=ppc|i386] menuconfig

The ARCH parameter is required when the host's architecture does not match the target's architecture; it is optional when they do match.

Go to the Kernel hacking menu item using the down-arrow key, and press ENTER. Then go to the Compile the kernel with debug info menu item using the down-arrow key, and type y to enable it. Press the TAB key to move the bottom menu to Exit and press RETURN, then press TAB again to select Exit and press RETURN again. You are then prompted Do you wish to save your new kernel configuration? The menu should be on the Yes selection; press ENTER to save the configuration.

Re-build the kernel. This resets the stamp files for the kernel and rebuilds the kernel so that the new configuration is applied.
$ cd ..
$ make linux.rebuild

NOTE: Using the build target make linux is not sufficient, because this command will not reset the stamp files and the configuration changes will not be applied.

You will have a new kernel and vmlinux symbol table file created in the export directory. Remember these files for Workbench and command line testing.
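If you want to confirm that the setting took effect before testing, you can check the regenerated kernel configuration, for example:

$ grep CONFIG_DEBUG_INFO build/linux-*/.config
CONFIG_DEBUG_INFO=y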


D.3 KGDB Debugging Using the Serial Console (KGDBOC)


KGDBOC permits KGDB debugging operations using the serial console. The serial console operates in two modes: the usual mode, in which you use the serial console to log in and so on, and a mode that allows you to enter the KGDB debugger. KGDBOC requires a serial polling driver, which is available with the following drivers on Wind River Linux Platform targets:

8250 (most common targets)
plb011 (ARM versatile)
CPM (various 82xx, 83xx, 85xx)
MPSC (ppmc280 ATCAf101)

Target Preparation

To use KGDBOC you must specify the device assigned to the console. You can find this in the console= argument in your target's boot line. You can also view the boot line at runtime with the command cat /proc/cmdline. For example, on an ARM Versatile 926EJS target, console=ttyAMA0. On a common PC target, console=ttyS0. Load the kernel module, supplying the appropriate port, for example:
# modprobe kgdboc kgdboc=ttyS0

Host Preparation

On your development host, run the agent-proxy from your project build directory:
$ ./host-cross/bin/agent-proxy arguments

For example, if you are using a terminal server (128.224.50.30 on port 2011), the command would be:
$ agent-proxy 2222^2223 128.224.50.30 2011

If you have the target directly connected to your Linux host:


$ agent-proxy 2222^2223 0 /dev/ttyS0,115200

Replace /dev/ttyS0,115200 with your serial port device and baud rate to the target.
NOTE: This program turns your host into a mini terminal server.

After agent-proxy has properly connected to the target, the console port is now multiplexed into a pass-through console and a debug port, which will automatically send the SYSRQ sequence. You can use the Workbench terminal view or a Telnet program to connect to the target console as follows:
$ telnet localhost 2222

For the KGDB connection with Workbench, specify a terminal server connection to TCP port 2223. If you use gdb to connect to KGDB, use the following command to connect:
$ target remote localhost:2223

NOTE: When the KGDB connection is active you will see the raw KGDB data appear on the pass-through console connection.


E
Connecting with TIPC
E.1 Introduction 313
E.2 Configuring TIPC Targets 314
E.3 Configuring a TIPC Proxy 315
E.4 Configuring Your Workbench Host 316
E.5 Using usermode-agent with TIPC 317

E.1 Introduction
This chapter describes how to configure Linux TIPC targets and your Workbench host to support debugging. For detailed information about TIPC, see the official TIPC project Web site at http://tipc.sourceforge.net/. The transparent inter-process communication (TIPC) infrastructure is designed for inter-node (cluster) communication. Targets located in a TIPC cluster may not have access to standard communication links or may not be able to communicate with hosts not located on the TIPC network. Because of this, host tools used for development may not be able to access those targets and debug them without special tools. To solve this communication problem between the TIPC target and TCP/IP hosts, Wind River provides the wrproxy process, which acts as a gateway between the host and the target. A basic diagram of a Workbench host configured to debug a TIPC target is shown in Figure E-1.
Figure E-1 Workbench Host, Proxy, and TIPC Target

Workbench Host with Target Server

TIPC Proxy with wrproxy

TIPC Target with usermode-agent

The Workbench host communicates using UDP, the TIPC target communicates using TIPC, and the proxy translates between them.


Note that the functions of the three network hosts shown in Figure E-1 may be combined in different ways, for example, the wrproxy and usermode-agent may both reside on a single target. You may even configure your Workbench host to support all functions if you want to test your debug capabilities in native mode before configuring external TIPC targets. The following sections describe how to configure TIPC targets, configure a proxy, and configure your Workbench host to support debugging over TIPC.

E.2 Configuring TIPC Targets


To configure TIPC targets, you must install the TIPC kernel module on them. To configure them to communicate with Workbench, you must also run the usermode agent on them. Note that TIPC communication between nodes in a cluster does not require UDP or TCP/IP networking services so those functions do not need to be included with the kernel, enabling a smaller kernel with fast, intra-node (TIPC) communication. For the TIPC-configured node to communicate with the Workbench host, however, a proxy must be provided that is capable of both TIPC and UDP communication capabilities. The proxy may be provided by one of the cluster nodes, a separate host, or the Workbench host itself as described in E.3 Configuring a TIPC Proxy, p.315.

E.2.1 Adding the TIPC Utilities


After configuring the target and before building the file system, add the TIPC utilities package (not necessary if you are configured for the glibc_cgl root file system):
$ make -C build tipc-utils.addpkg
$ make fs

E.2.2 Installing the TIPC Kernel Module and Utilities


If you are using a Wind River Linux platform, the tipc.ko kernel module is supplied with the standard kernel. Install the module and configure it on the target as follows.

1. Load the TIPC kernel module:
# modprobe tipc

2. Set the local TIPC address:


# tipc-config -a=1.1.1 -be=eth:eth0

(Your actual command will differ if your network device is not eth0 or if you chose an address different from 1.1.1.)


3. Check that everything is configured properly:


# tipc-config -a

The output should display your current TIPC address, for example, 1.1.1.

E.2.3 Running the usermode-agent


You must run usermode-agent on each target you want to reach. The target that runs usermode-agent must have TIPC capability. Enter the following command:
$ usermode-agent -comm tipc &

E.3 Configuring a TIPC Proxy


The proxy enables communication between the Workbench host and the TIPC target. The target server on the Workbench host (see E.4 Configuring Your Workbench Host, p.316) instructs the proxy agent to communicate using TIPC with a specified TIPC target address. The proxy agent is the wrproxy command (or wrproxy.exe with Windows). The host that runs wrproxy must have TIPC capability. To configure TIPC capability, install the TIPC kernel module (see E.2.2 Installing the TIPC Kernel Module and Utilities, p.314), or build TIPC into the kernel and reboot it. When you have TIPC support in the kernel, configure the host with a TIPC address that is different from target TIPC addresses using tipc-config (see E.2.2 Installing the TIPC Kernel Module and Utilities, p.314). Enter the following command on the TIPC-capable network host that is to serve as the proxy between Workbench and the TIPC target:
$ wrproxy &

You can also use the -p port option to specify a different TCP port number for wrproxy to listen to (default 0x4444), the -V option for verbose mode, or the -h option to get command help.
NOTE: If you specify a port other than the default port for the proxy, then you must specify the same port when configuring the target server as described in E.4 Configuring Your Workbench Host, p.316.

Figure E-2 illustrates a configuration in which the proxy agent runs on the same host as Workbench. Figure E-3 illustrates a configuration in which the proxy agent runs on one of the nodes in a cluster. Another example might be a separate host that runs wrproxy, between the targets in the cluster and the Workbench host.


Figure E-2 TIPC Configuration with Proxy Agent on Workbench Host

[Diagram: the Workbench host runs both tgtsvr and wrproxy; tgtsvr communicates with wrproxy over UDP, and wrproxy communicates with the TIPC target over TIPC.]

Figure E-3 TIPC Configuration with Proxy Agent on Cluster Target

[Diagram: the Workbench host (tgtsvr) connects over UDP to one TIPC target in the cluster that runs wrproxy; that node reaches the other TIPC targets over the cluster's TIPC interconnections.]

E.4 Configuring Your Workbench Host, p.316 describes how to configure the target server on the Workbench host to connect to the proxy agent and reach the TIPC target that you want to connect to.

E.4 Configuring Your Workbench Host


Use the tgtsvr command to connect to the proxy for communication with a TIPC target. The following command shows the TIPC options to use:
tgtsvr [-V] -B wdbproxy -tipc -tgt targetTipcAddress ProxyIpAddress

For example, to connect to a target with a TIPC address of 1.1.8 using a proxy with the IP address 192.168.1.5, use the following command:
$ tgtsvr -B wdbproxy -tipc -tgt 1.1.8 192.168.1.5

Additional Information

A fuller syntax for the tgtsvr command is:


tgtsvr [-V] -B wdbproxy -tipc -tgt targetTipcAddress [-tipcpt tipcPortType -tipcpi tipcPortInstance] wdbProxyIpAddress|name


Table E-1 explains the italicized parameter values in the command.


Table E-1    TIPC-Specific Parameter Values for Starting a Target Server

targetTipcAddress
    The TIPC address of the target with the TIPC network stack. For example: 1.1.8.

tipcPortType
    The TIPC port type to use in connecting to the WDB target agent. The default port type for the connection is 70. You should accept the default port unless it is already in use.

tipcPortInstance
    The TIPC port instance to use in connecting to the WDB target agent. The default port instance for the connection is 71. You should accept the default port instance unless it is already in use.

wdbProxyIpAddress|name
    The IP address or DNS name of the target with WDB Agent Proxy.
Note that if you change the default TIPC port configuration, you must also change the default TIPC port for the usermode-agent as described in E.5 Using usermode-agent with TIPC, p.317. Alternatively, you can use the Workbench GUI to configure the host. Select wdbproxy as the backend when you create a new connection in the Remote Systems view and then fill in the fields with the values you would supply as command line arguments. The command line that is created at the bottom of the GUI should be similar to the example shown in this section.
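For example, combining the elements above, a target server command that connects to the target at TIPC address 1.1.8 through the proxy at 192.168.1.5 using a non-default port type and instance (the values 123 and 456 are illustrative only) might look like this:

$ tgtsvr -B wdbproxy -tipc -tgt 1.1.8 -tipcpt 123 -tipcpi 456 192.168.1.5

If you use non-default values here, remember to start the usermode-agent with the same port type and instance, as described in E.5 Using usermode-agent with TIPC, p.317.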

E.5 Using usermode-agent with TIPC


This section explains the options available when launching the usermode agent. The listening port is the port used by the usermode agent to communicate with the target server on the host machine. If you change the listening port on the usermode agent side, you must specify the same port number to the target server.
Port option

The port option is:


-p or -port 0xpppp (UDP) | xxxx:yyyy (TIPC)

This option allows you to select an alternate listening port for the usermode agent. Two network connection types are supported:

UDP - the default connection type. If you do not specify a particular type of network connection, UDP is used.


If you do not want to use the default UDP port (0x4321), you can choose and set the one you want using this option. The port number can be entered in either decimal or hexadecimal format. To set the port number using the hexadecimal format, use the 0x%x format where %x represents the port number in hexadecimal base. For example, to launch the usermode agent using UDP and port 6677:
$ usermode-agent -p 6677

or
$ usermode-agent -p 0x1A15

TIPC - the TIPC network connection. If you do not want to use the default TIPC port type (70) and TIPC port instance (71), you can choose and set the ones you want using this option. The port numbers can be entered in either decimal or hexadecimal format. To set the port numbers using the hexadecimal format, use the 0x%x format where %x represents the port number in hexadecimal base. To launch the usermode agent using TIPC and port type 1234, port instance 55:
$ usermode-agent -p 1234:55

or
$ usermode-agent -p 0x4D2:0x37

Communication Option

The communication option allows you to specify the kind of connection used between the target server and the usermode agent. The comm option is:
-comm serial | tipc

If the serial option is set, you can also specify the serial link device to use rather than the default (/dev/ttyAMA1) and the baud speed for the serial link (the default is 115200). To set a different device for the serial link connection, use the -dev flag with the -comm serial option. To set the baud speed, use the -baud option combined with the -comm serial option.
Example

To launch the usermode agent using serial link connection and serial device /dev/ttyS0:
$ usermode-agent -comm serial -dev /dev/ttyS0

Example

To launch the usermode agent using serial link connection with default serial device and baud speed of 19200:
$ usermode-agent -comm serial -baud 19200

Example

To launch the usermode agent using serial link connection with serial device /dev/ttyS0 and baud speed of 19200:
$ usermode-agent -comm serial -dev /dev/ttyS0 -baud 19200


If the tipc option is set, you can also specify the port type (default is 70) and port instance (default is 71) of the TIPC connection. To set a different port type or instance, use the -tipcpt or -tipcpi flag, in either decimal or hexadecimal format.
Examples

To launch the usermode agent using TIPC network connection with default port type and default port instance:
$ usermode-agent -comm tipc

To launch the usermode agent using TIPC network connection with specific port type 123 and specific port instance 456:
$ usermode-agent -comm tipc -tipcpt 123 -tipcpi 456

Daemon mode

The -daemon option lets the usermode agent become a daemon after all initialization functions are completed. The output messages, if any, are still reported on the device where the process was started.
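For example, a sketch of launching the agent over TIPC and detaching it as a daemon, using only the options described in this section, might be:

$ usermode-agent -comm tipc -daemon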
Environment Inheritance

The -inherit-env option makes all child processes inherit the environment from the parent environment. Since the usermode agent is the parent of all the processes, the processes inherit the shell environment from which the usermode agent was launched.
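As an illustration (the variable name is hypothetical), exporting a variable in the shell before launching the agent with this option makes it visible to every process the agent spawns:

$ export MY_APP_SETTING=debug
$ usermode-agent -comm tipc -inherit-env &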
No Thread Support (Linux Thread Model Only)

The -no-threads option allows you to use the usermode agent on a kernel using the Linux threading model even if the libpthread library is stripped. The usermode agent uses the libpthread library to detect thread creation, destruction, and so on. On a kernel using the Linux threading model, multithread debugging is not reliable if the libpthread library is stripped, so by default the usermode agent exits unless this option is set, to ensure a reliable debug scenario. This option has no effect if your kernel uses the NPTL threading model.
Other Options

The -v option displays version information about the usermode agent, that is, build and release information. The -V option sets the usermode agent to run in verbose mode. This is useful for displaying the listening information: the port number, the listening connection type, and the target server connection to this usermode agent. The -help or -h option displays all the possible startup options for the usermode agent.


F
Control Groups (cgroups)
F.1 Introduction 321
F.2 CPUSETS 322
F.3 cgroups 323

F.1 Introduction
The basic functionality discussed here is based on CPUSETS, which allow you to restrict tasks to specific CPUs and specific memory nodes. The restrictive sets to which tasks are assigned are the CPUSETS. cgroups build on CPUSET functionality to provide generic cgroups, which are a means of grouping processes, and resource groups, to provide further control of generic cgroups. Wind River Linux supports the mainline cgroup controllers and adds four additional controllers.

dm-ioband - An I/O bandwidth controller implemented as a device-mapper driver. Several jobs using the same physical device have to share the bandwidth of the device. dm-ioband gives bandwidth to each job according to its weight, and each job can set its own value.

bio_tracking - Adds block I/O tracking to dm-ioband.

net_traffic_controller - A resource controller you can use to schedule and shape traffic belonging to the task(s) in a particular cgroup. The implementation consists of two parts:

- A resource controller (cgroup_tc) that is used to associate packets from a particular task belonging to a cgroup with a traffic control class ID (tc_classid). This tc_classid is propagated to all sockets created by tasks in the cgroup and will be used for classifying packets at the link layer.

- A new traffic control classifier (cls_cgroup) that can classify packets based on the tc_classid field in the socket to specific destination classes.


memrlimit - Implements a virtual address space controller using cgroups. Address space control is provided along the same lines as RLIMIT_AS control, which is available via getrlimit(2)/setrlimit(2). The interface for controlling address space is provided through rlimit.limit_in_bytes.

The following presents an overview and some simple examples of CPUSETS and cgroups. For detailed information, refer to the files cpusets.txt and cgroups.txt in prjbuildDir/build/linux/Documentation/. Additional discussions are available online, for example at http://kerneltrap.org/node/8059.

F.2 CPUSETS
CPUSETS provide the base-level infrastructure that enables the dynamic creation and destruction of resource partitions within a system. A given CPUSET may describe zero or more individual CPUs and zero or more memory nodes, and each set may contain zero or more tasks. All tasks within each set are treated according to the normal system resource control mechanisms but are subject to the limitations of the CPUSET, not the full system.

One of the key design goals of CPUSETS is for large systems running many processes to be able to adapt to varying job loads over time without impacting the responsiveness of applications running on the system. This allows key classes of jobs to be given preferential treatment while leaving lower priority tasks to share remaining resources, as well as dynamically adjusting the resources available to all classes.

The following example shows how to start a new job that is to be contained within a CPUSET. Perform the following sequence of commands to create a CPUSET named Charlie, containing CPUs 2 and 3 and Memory Node 1, and then start a subshell in that CPUSET.
1. Create a directory that will serve as the CPUSET:
# mkdir /dev/cpuset

2. Mount the CPUSET:


# mount -t cgroup -ocpuset cpuset /dev/cpuset

3. Create the new CPUSET with mkdir's and write's (or echo's as in this example) in the /dev/cpuset virtual file system:
# cd /dev/cpuset
# mkdir Charlie
# cd Charlie
# /bin/echo 2-3 > cpus
# /bin/echo 1 > mems
# /bin/echo $$ > tasks
# sh

The subshell sh is now running in CPUSET Charlie. The following command should display /Charlie:
# cat /proc/self/cpuset

4. Start a task that will be the "founding father" of the new job.


5. Attach that task to the new cpuset by writing its PID to the /dev/cpuset tasks file for that cpuset.
6. fork, exec or clone the job tasks from this founding father task.
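For example, if the founding father task has PID 1234 (a hypothetical value), step 5 would be performed like this:

# /bin/echo 1234 > /dev/cpuset/Charlie/tasks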

F.3 cgroups
To start a new job that is to be contained within a cgroup, using the cpuset cgroup subsystem, the steps are:
1. Create the cgroup:
# mkdir /dev/cgroup

2. Mount the cgroup:


# mount -t cgroup -ocpuset cpuset /dev/cgroup

3. Create the new cgroup with mkdir's and write's (or echo's) in the /dev/cgroup virtual file system.
4. Start a task that will be the "founding father" of the new job.
5. Attach that task to the new cgroup by writing its PID to the /dev/cgroup tasks file for that cgroup.
6. Fork, exec or clone the job tasks from this founding father task.

For example, the following sequence of commands will set up a cgroup named Charlie, containing just CPUs 2 and 3, and Memory Node 1, and then start a subshell sh in that cgroup:
# mount -t cgroup -ocpuset cpuset /dev/cgroup
# cd /dev/cgroup
# mkdir Charlie
# cd Charlie
# /bin/echo 2-3 > cpuset.cpus
# /bin/echo 1 > cpuset.mems
# /bin/echo $$ > tasks
# sh
# The subshell 'sh' is now running in cgroup Charlie
# The next line should display '/Charlie'
# cat /proc/self/cgroup


G
Build Variables
G.1 Introduction
The list and description of config.sh build variables shown in Table G-1 is provided for informational purposes only; you would not typically change config.sh files directly. These are constructed and inherited during the configure process from the templates. Note that many of the items are also copied into the config.properties file, which is used to initialize Workbench with its project information, and a few of the fields are also copied into the toolchain wrappers. Therefore, even if you modify config.sh, your modifications may not be carried forward to other components using the fields.
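As an illustration only (these specific assignments are hypothetical, showing the naming pattern rather than the contents of any real config.sh), a fragment of a generated config.sh for a 32-bit PowerPC configuration might contain entries such as:

TARGET_TOOLCHAIN_ARCH=powerpc
AVAILABLE_CPU_VARIANTS=ppc
ppc_TARGET_ARCH=powerpc
ppc_TARGET_ENDIAN=BIG
ppc_TARGET_LIB_DIR=lib
ppc_TARGET_USERSPACE_BITS=32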
Table G-1    Build Variables and Description

BANNER
    Informational message printed when configure completes. Can be used in any template.

TARGET_TOOLCHAIN_ARCH
    Specifies the generic toolchain architecture: arm, i586, mips, powerpc. Must match the toolchain. Generally specified in the templates/arch/... item. Only set in an arch template.

AVAILABLE_CPU_VARIANTS
    These are all of the available CPU variants for a configuration. For example, in a PowerPC 32-bit/64-bit install, both ppc and ppc64 would be listed. A value from this variable is substituted for the VARIANT prefix in the following variables.

The following items should be prefixed with the VARIANT name as specified in AVAILABLE_CPU_VARIANTS. VARIANT is replaced with the specific variant, for example VARIANT_TARGET_ARCH=powerpc becomes ppc_TARGET_ARCH=powerpc.

VARIANT_COMPATIBLE_CPU_VARIANT
    Specifies all of the CPU variants that are compatible with the specific variant. For example, ppc is compatible with ppc_750.

VARIANT_TARGET_ARCH
    The architecture used by GNU configure to specify that variant.

VARIANT_TARGET_COMMON_CFLAGS
    CFLAGS that are beneficial to pass to an application but not required to optimize for a multilib. Equivalent of CFLAGS=... in the environment or in a makefile.

VARIANT_TARGET_CPU_VARIANT
    Name of a variant. Also used as the RPM architecture.

VARIANT_TARGET_ENDIAN
    BIG or LITTLE.

VARIANT_TARGET_FUNDAMENTAL_ASFLAGS
    Flags to be passed to the assembler when using the toolchain wrapper to assemble with a given userspace. These are hidden from applications.

VARIANT_TARGET_FUNDAMENTAL_CFLAGS
    Flags to be passed to the compiler when using the toolchain wrapper to compile for a given userspace. These are hidden from applications.

VARIANT_TARGET_FUNDAMENTAL_LDFLAGS
    Flags to be passed to the linker when using the toolchain wrapper. These are hidden.

VARIANT_TARGET_LIB_DIR
    The name of the library directory for the ABI: lib, lib32, lib64.

VARIANT_TARGET_OS
    linux-gnu or linux-gnueabi.

VARIANT_TARGET_RPM_PREFER_COLOR
    The preferred color when installing RPM packages to the architecture:
    0 - No preference
    1 - ELF32
    2 - ELF64
    4 - MIPS ELF32_n32
    (Color is RPM terminology for a bitmask used in resolving conflicts. If RPM is going to install two files, and they have conflicting md5sum or sha1, it uses the color to decide if it can resolve the conflict. Two files of color 0 cause a conflict and the install fails. Otherwise, the system's "preferred" color takes precedence for the install. If the file is outside of the permitted colors, then again it is an error, if it causes a conflict.)

VARIANT_TARGET_RPM_TRANSACTION_COLOR
    The colors that are allowed when installing RPM packages to that architecture. A bitmask of the above. For example, on a 32-bit system, generally 1. On a 64/32-bit system, 3. On a mips64 system, 7.

VARIANT_TARGET_RPM_SYSROOT_DIR
    The internal gcc directory prefix to get to the sysroot information.

VARIANT_TARGET_USERSPACE_BITS
    Bitsize of a word, 32 or 64.
BSP-Specific Items


BOOTIMAGE_JFFS2_ARGS
    For targets that support JFFS2 booting, these values will be passed when creating the JFFS2 image. Endianess (-b/-l), erase block size (-e), and image padding (-p) are commonly passed.

KERNEL_FEATURES
    Features to be implicitly patched into the kernel independent of the configure line.

LINUX_BOOT_IMAGE
    Name of the image used to boot the board, used to create the export default image symlink.

TARGET_BOARD
    BSP name as recognized by the build system.

TARGET_LINUX_LINKS
    List of images created by the kernel build. Mainly used for compatibility reasons.

TARGET_PLATFORMS
    Indicates which platform(s) a particular board supports.

TARGET_PROCFAM
    Internal Wind River use only.

TARGET_SUPPORTED_KERNEL
    The list of kernels supported by a particular board.

TARGET_SUPPORTED_ROOTFS
    List of root file systems supported by a particular board.

TARGET_TOOLS_SUBDIRS
    Additional host tools that should be built to support this board.

QEMU-related variables. Refer to the release notes for details on QEMU-supported targets. Enter make config-target in prjbuildDir for additional information.
TARGET_QEMU_BIN
    The QEMU host tool binary to use, if this BSP can be simulated by QEMU.

TARGET_QEMU_BOOT_CONSOLE
    The console port the target uses. This is BSP specific. For example, for common_pc it is ttyS0, and for the arm_versatile_926ejs it is ttyAMA0.

TARGET_QEMU_ENET_MODEL
    Some BSPs such as the common_pc and common_pc_64 use a different Ethernet type. This parameter can be used to select a different Ethernet type to override the default that is hard coded in the QEMU host binary.


TARGET_QEMU_KERNEL
    The "short" name of the boot image to search for in the export directory inside the BUILD_DIR. For common_pc it would be set to bzImage or for the arm_versatile_926ejs it would be set to zImage. The specific image that is used is based on the boot loader that is hard-coded into the QEMU binary. This image is different than the boot image the real target might use in some cases. If you specify a full path to a binary kernel image it will not search the export directory and will instead use the image you specified.

TARGET_QEMU_KERNEL_OPTS
    These are any extra options you might want to pass to the kernel boot line to override the defaults.

TARGET_QEMU_OPTS
    These are any additional options you need to pass to the QEMU binary to get it to run correctly. In the case of the ARM Versatile and MTI Malta boards, the -M argument is passed so that the QEMU host binary will be configured with the correct simulation model since each host binary supports multiple simulation models within the same architecture.

Feature or Root File System Specific Items


TARGET_LIBC
    Value should be glibc or uclibc. No value means glibc is assumed.

TARGET_LIBC_CFLAGS
    Additional flag to add to the fundamental cflags (in the toolchain wrapper) for the libc being used. Normally this is blank except for the uclibc case where it is -muclibc. This is hidden from the application space.

TARGET_ROOTFS_CFLAGS
    An additional CFLAG that needs to be used when a feature or rootfs is specified. Again hidden from the application space.

TARGET_ROOTFS
    Name of the ROOTFS configured.

Generic Optimizations
TARGET_COPT_LEVEL, TARGET_COMMON_COPT, TARGET_COMMON_CXXOPT
    These are all optional optimizations that override defaults in configure. Generally you use these if you want to change the optimizations for -Os and not -O2. See the glibc_small rootfs for an example.


Additional Notes on Build Variables

multilib templates are designed to match the multilibs as defined by the compiler and libc's. The cpu templates are expected to include a multilib template and either use it "as-is" or augment it with additional optimizations. Only multilib templates are allowed to specify TARGET_FUNDAMENTAL_* flags. cpu templates can only specify:

TARGET_COMMON_CFLAGS
TARGET_CPU_VARIANT
AVAILABLE_CPU_VARIANTS
COMPATIBLE_CPU_VARIANTS

Everything else is expected to be inherited from multilib templates. All of the items in the multilib/cpu templates should be prefixed with the variant name. The following items are required to be prefixed with a variant:
TARGET_COMMON_CFLAGS
TARGET_CPU_VARIANT
TARGET_ARCH
TARGET_OS
TARGET_FUNDAMENTAL_CFLAGS
TARGET_FUNDAMENTAL_ASFLAGS
TARGET_FUNDAMENTAL_LDFLAGS
TARGET_SYSROOT_DIR
TARGET_LIB_DIR
TARGET_USERSPACE_BITS
TARGET_ENDIAN
TARGET_RPM_TRANSACTION_COLOR
TARGET_RPM_PREFER_COLOR
COMPATIBLE_CPU_VARIANTS
TARGET_ROOTFS - only specify in a ROOTFS template
TARGET_COPT_LEVEL, TARGET_COMMON_COPT, TARGET_COMMON_CXXOPT - specify in either a ROOTFS or board template, do not specify in a CPU or multilib template.

The best way to determine what to do in a custom template is to use wrll-wrlinux as an example, together with the information provided here, in order to create custom templates.


H
Cavium Simple Executive Integration and Debugging
H.1 Introduction 331
H.2 Preparing the Host 334
H.3 Configuring and Building from the Command Line 335
H.4 Running Simple Executive Applications 337
H.5 Simple Executive Layer Technical Notes 339
H.6 Configuring and Building with Workbench 341
H.7 Configuring the Kernel with Workbench 345
H.8 Debugging from the Command Line 347
H.9 Setting Up the Target 348
H.10 Setting up the Host 350
H.11 Debugging Caveats 351
H.12 Debugging with Workbench 353
H.13 Known Issues, Limitations, and Tips 357

H.1 Introduction
This document describes the release of the Simple Executive (Simple Exec) support for Wind River Linux.

Sections H.1 through H.7 address basic elements, installation, configuration, build, and integration with Wind River Workbench.
Sections H.8 through H.11 explain debugging using the command line interface.
Section H.12 describes debugging using Wind River Workbench.
Section H.13 discusses known issues, limitations, and tips.


H.1.1 Components of Wind River Simple Executive Support


Support for Simple Executive in WRLinux 3.0 requires additional functionality from Cavium's SDK and Linux reference sources for demo example application code and build scripts. It also requires added debug capability for Workbench itself.

Cavium Simple Executive SDK RPM

The Octeon SDK from Cavium Networks contains the Simple Executive library source. It also contains demo example source and Makefile fragments. The latter portion requires license agreements to redistribute, so it must be installed in order to build applications. Get the SDK directly from Cavium. In a directory where you have plenty of space, extract the Cavium 1.8.0 SDK archives, for example into /opt/octeon-sdk-1.8.0. Although the SDK archive is an .rpm file, there is no need to install the RPM using the rpm utility. Convert the .rpm to a cpio archive and extract the cpio archive as follows:
$ cd /opt/octeon-sdk-1.8.0
$ rpm2cpio /path/to/OCTEON_SDK-1.8.0-275.i386.rpm | cpio -div

Cavium Simple Executive Linux RPM

This is a reference kernel implementation from Cavium, containing Simple Executive kernel module source code compatible with the WRLinux Octeon kernel. Get the RPM directly from Cavium. In a directory where you have plenty of space, extract the Cavium Linux 1.8.0 RPM, for example into /opt/octeon-sdk-1.8.0. Typically the Linux RPM is installed in the same directory structure as the SDK RPM. Although the SDK archive is an .rpm file, there is no need to install the RPM using the rpm utility. Convert the .rpm to a cpio archive and extract the cpio archive as follows:
$ cd /opt/octeon-sdk-1.8.0
$ rpm2cpio /path/to/OCTEON_LINUX-1.8.0-275.i386.rpm | cpio -div

WRLinux Simple Executive Layer

The wrlinux-3.0 tree contains a layer incorporating support for building standalone and Linux usermode Simple Exec applications. This Simple Executive layer can be found here: $WIND_HOME/wrlinux-3.0/layers/wrll-cavium-simple_exec


Workbench 3.x Simple Executive Debug Integration Patch

The wrwb-3.0x_pp-cavium.zip patch, available from Wind River, extends Workbench with dialogue and debugger framework support for Cavium's extended GDB debug. Extract the zip archive in the $WIND_HOME directory, which is the installation directory for Workbench 3.x:
$ cd $WIND_HOME
$ unzip /path/to/wrwb-3.0x_pp-cavium.zip

This will place a .jar file (the implementation) into workbench-3.1/wrwb/wrworkbench/eclipse/plugins, and .properties and .xml files into /eclipse/features/com.windriver.ide.debug.octeon_1.0.0.

H.1.2 Provided "Feature Templates"


The wrll-cavium-simple_exec layer provides some prepared feature templates to aid in the formation of the project's package list. These feature templates can be found here: $WIND_HOME/wrlinux-3.0/layers/wrll-cavium-simple_exec/templates/ features
--with-template=feature/se_demo_basic

This adds the original basic Simple Executive applications, plus the mips64_octeon CPU_VARIANTS for the minimal package set, including only one example application package, crypto_proprietary.
libstdcxx simple_exec_open simple_exec_proprietary crypto_proprietary glibc.mips64_octeon libgcc.mips64_octeon libstdcxx.mips64_octeon wrs_kernheaders.mips64_octeon simple_exec_open.mips64_octeon simple_exec_proprietary.mips64_octeon crypto_proprietary.mips64_octeon

This template includes packages that are needed for both n32 and 64-bit usermode builds. If you are only interested in n32 builds, the rootfs can be made smaller by eliminating the .mips64_octeon packages. This could be done by either editing the template, or editing the pkglist file after the project is configured.
--with-template=feature/se_demo_all

This adds the full SE application list, including the mips64_octeon CPU_VARIANTS for the minimal package set.
libstdcxx simple_exec_open simple_exec_proprietary crypto_proprietary application_args_proprietary hello_proprietary linux_filter_proprietary low_latency_mem_proprietary mailbox_proprietary


named_block_proprietary queue_proprietary traffic_gen_proprietary uart_proprietary glibc.mips64_octeon libgcc.mips64_octeon libstdcxx.mips64_octeon wrs_kernheaders.mips64_octeon simple_exec_open.mips64_octeon simple_exec_proprietary.mips64_octeon crypto_proprietary.mips64_octeon application_args_proprietary.mips64_octeon hello_proprietary.mips64_octeon linux_filter_proprietary.mips64_octeon low_latency_mem_proprietary.mips64_octeon mailbox_proprietary.mips64_octeon named_block_proprietary.mips64_octeon queue_proprietary.mips64_octeon traffic_gen_proprietary.mips64_octeon uart_proprietary.mips64_octeon

This template includes packages that are needed for both n32 and 64-bit usermode builds. If you are only interested in n32 builds, the rootfs can be made smaller by eliminating the .mips64_octeon packages. This could be done by either editing the template, or editing the pkglist file after the project is configured.

H.2 Preparing the Host


This section provides instructions to install all Simple Executive development elements to your host.

H.2.1 Installing the Simple Executive Layer Prerequisites


To install the Cavium Simple Executive Layer: 1. Install the Simple Executive SDK and the Linux RPM from Cavium by performing the following commands in a terminal window:
$ mkdir /opt/octeon-sdk-1.8.0
$ cd /opt/octeon-sdk-1.8.0
$ rpm2cpio /path/to/OCTEON_SDK-1.8.0-275.i386.rpm | cpio -div
$ rpm2cpio /path/to/OCTEON-LINUX-1.8.0-275.i386.rpm | cpio -div

2. Install wrlinux-3.0 and Workbench 3.1, if you have not already done so. For information on installing the product, see the following documents:
   Wind River Product Installation and Licensing Administrator's Guide
   Wind River Product Installation and Licensing Developer's Guide
3. Install the available Workbench 3.x Simple Executive Debug Integration patch. See Workbench 3.x Simple Executive Debug Integration Patch, p.333.


H.2.2 Available Documentation


There are a number of sources of documentation available in the installation:

Cavium SDK html-based documentation can be found by opening your browser at this location: $SDK_ROOT/docs/html/index.html

The wrlinux package man and info pages from the file system packages can be found at this location: $WIND_HOME/wrlinux-3.0/docs

Workbench has online documentation. From Workbench, select Help > Help Contents to see the index. For example, information about Wind River Linux Platform Projects can be found under Wind River Documentation > Guides > Operating System > Wind River Linux Platforms User's Guide 3.0. Also, the Linux user's guide and online versions of the package man and info pages can be found under Wind River Documentation > References > Operating System > Wind River Linux Operating System Reference.

Details about the cav_ebt5800 BSP can be found in this file: <project_dir>/READMES/4-README-cav_ebt5800

H.3 Configuring and Building from the Command Line


H.3.1 Configuring your Project
A typical example configuration is as follows:
$ $WIND_HOME/wrlinux-3.0/wrlinux/configure \
    --enable-board=cav_ebt5800 \
    --enable-rootfs=glibc_std --enable-kernel=standard \
    --with-layer=wrll-cavium-simple_exec \
    --with-template=feature/se_demo_basic

This configuration example adds in the Simple Executive layer and arranges for the crypto_proprietary sample application to be built.

H.3.2 Customize your Package List


This section describes the ways you can customize your file system's package list, for both the basic file system packages, and the Simple Executive application packages.


Using the Feature Templates

In the project configuration example in H.3.1 Configuring your Project, p.335, above, we used the feature/se_demo_basic template to set up the package list. Here is the set of recommended layer and template selections.

Simple Executive layer not included. (No Simple Executive application support)

No --with-layer=wrll-cavium-simple_exec or --with-template= options

With Simple Executive layer included, but no Simple Executive applications

--with-layer=wrll-cavium-simple_exec
No --with-template= options

With Simple Executive layer included, with one Simple Executive sample application

--with-layer=wrll-cavium-simple_exec
--with-template=feature/se_demo_basic

With Simple Executive layer included, all Simple Executive applications

--with-layer=wrll-cavium-simple_exec
--with-template=feature/se_demo_all

There can be only one --with-template option on the configure command line. If you need to include another template option, add it to the end of the --with-template= parameter, separated with a comma.

Making Changes Manually

You can also manually edit the package list. This will allow you to select other Simple Executive applications than those provided by the se_demo_basic and se_demo_all templates. While initially developing Simple Executive applications, you may also wish to avoid building some packages, to reduce the build time. Note that the rootfs created in this way may or may not allow successful booting. You will also need to ensure that all inter-package dependencies are resolved by including all of the prerequisite packages. When preparing to actually boot a target, you should either use a prebuilt rootfs, or build with the full list of packages.

H.3.3 Building the Project


Issue the following command to start the build.
$ make SDK_ROOT=/opt/octeon-sdk-1.8.0 build-all

There are no package sources distributed with this layer at present (packages/*.tgz) since Cavium has closed licenses, so octeon-sdk-1.8.0 and octeon-linux-1.8.0 need to be present so the source can be extracted from them. The above SDK_ROOT= command line addition allows the build system to create packages in your <project_dir>/packages from the Octeon SDK.


Once the packages are extracted from the SDK into packages/*.tgz, then it is no longer necessary to identify SDK_ROOT on the make command line. In fact, you can copy the new tar archives back to the layer, so that other local projects also do not need the SDK_ROOT value. For example:
$ cd <project_dir>/packages
$ cp * <installDir>/wrlinux-3.0/layers/wrll-cavium-simple_exec/packages

H.3.4 Specifying Build Types


By default all of the variants of {cvmx,linux}_{n32,64} are built. The 32-bit executables may be found in <prj_dir>/build/<pkgname>-1.8.0 and 64-bit executables in <prj_dir>/build-mips64_octeon. If you want to hand-compile certain image types, you can do it by modifying the dist/<package>/Makefile, or by issuing the following command, for example:
$ OCTEON_TARGET=cvmx_n32 make -C build <packagename>.install

or
$ OCTEON_TARGET=linux_64 make -C build-mips64_octeon <packagename>.rpm

Some values for OCTEON_TARGET include:

linux_n32 - usermode n32 binary
linux_64 - usermode n64 binary
cvmx_n32 - standalone n32 binary
cvmx_64 - standalone n64 binary

Some values for OCTEON_MODEL include:

OCTEON_CN58XX
OCTEON_CN38XX

The complete list of valid OCTEON_MODEL values is available at <project_dir>/host-cross/mips-wrs-linux-gnu/sysroot/usr/include/simple_exec_open/octeon-models.txt. The default is to build for OCTEON_MODEL=OCTEON_CN58XX. Demo examples must be compiled for the right MODEL. This can be overridden on the command line, just like OCTEON_TARGET. For example:
$ make OCTEON_TARGET=linux_n32 OCTEON_MODEL=OCTEON_CN58XX

H.4 Running Simple Executive Applications


The method used to run Simple Executive applications depends on whether they are compiled as Linux usermode or standalone. In either case, the application(s) must be made accessible to the target.


H.4.1 Linux Usermode Applications


Applications compiled for Linux usermode can be loaded into the target's rootfs and executed directly from the shell prompt. For Linux usermode applications, the demo example application Makefiles (see Application Wrapper Makefiles, p.339) contain <pkgname>.install rules that add the usermode executables to the <pkgname>*.rpm INSTALL_STAGE, and hence they are composed into the target rootfs deployment image during the make rootfs target. Note that the use of usermode applications in a busybox build has not been tested.

H.4.2 Standalone Applications


Applications compiled for standalone operation must be loaded before booting Linux, if Linux is even going to be booted; there is no absolute requirement to boot Linux. For standalone applications, the demo example application Makefiles (see Application Wrapper Makefiles, p.339) contain <pkgname>.install rules that create symlinks to the standalone executables in the <project_dir>/export directory. Once the application(s) is (are) placed where the target can access them via tftp, they are loaded and started with the following commands:
Octeon EBT5800# dhcp; tftp $(loadaddr) /tftp/path/to/binary
Octeon EBT5800# bootoct $(loadaddr) coremask=0x????

The coremask identifies which processor cores to run the application on, for example: core 0 = 0x01, core 1 = 0x02, core 2 = 0x04, and so on. These can be combined to run the application on multiple cores by adding the masks for the needed cores. No application is started until an application is loaded on core 0 (coremask = 0x01).

You can use numcores and skipcores instead of coremask. Numcores specifies how many cores to use, and skipcores specifies which core to start on. For example, numcores=3 skipcores=4 is equivalent to coremask=0x0070.

It is normally acceptable, and often expected, to run the same application on multiple cores. The crypto_proprietary application demonstrates sharing a serial console through the use of a spinlock to prevent garbled output.
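As a short worked example based on the mask arithmetic above, running the application on cores 0, 1, and 2 means adding 0x01 + 0x02 + 0x04, so the boot command would be:

Octeon EBT5800# bootoct $(loadaddr) coremask=0x07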


H.5 Simple Executive Layer Technical Notes


H.5.1 Simple Executive Applications as wrlinux Packages
The wrlinux build system uses a packaging system to wrap applications into the project builds.

Application Wrapper Makefiles

The Simple Executive layer provides wrapper Makefiles that will prepare the SDK applications into the expected tarball format, plus provide the additional build rules to support the various OCTEON_TARGET and OCTEON_MODEL values. These provided wrapper Makefiles can be found here:
$ ls $WIND_HOME/wrlinux-3.0/layers/wrll-cavium-simple_exec/dist
application_args_proprietary  crypto_proprietary  hello_proprietary
intercept_proprietary  linux_filter_proprietary  low_latency_mem_proprietary
mailbox_proprietary  named_block_proprietary  queue_proprietary
simple_exec_open  simple_exec_proprietary  traffic_gen_proprietary
uart_proprietary

You can use these Makefiles as templates for including additional applications. With the provided Makefiles you can see the actions to support automatically extracting content from the SDK, as well as the build and dependency information required by the wrlinux build system. Here is a quick overview of the content of these Makefiles.

- PACKAGES+=<application>: this instructs the build system to add this application package to the file system's package list.
- <application>_SUMMARY and so forth: these values define the package for the wrlinux build system.
- <application>.check: this build rule will test to see if the package's tarball is already present, else it will call the rule to extract it from the SDK.
- ifndef OCTEON_TARGET and OCTEON_MODEL: these tests ensure that default values for these are present.
- <application>..compile: this build rule will compile that application for the set of expected OCTEON_TARGET values.
- <application>..install: this build rule will install the application's target files in the prjbuildDir/TARGET_INSTALL dir.
- <application>..extract: this custom build rule will extract the respective application from the SDK and form the tarball.


Application Wrapper Support Files

The following descriptions apply to some of the support files you will find in the respective application dist directories.

- wrlinux.mk: this is the support Makefile wrapper used to provide the needed environment for building Simple Executive applications.
- makelinks: makelinks is used once (when the layer is installed) to create the symbolic links in $OCTEON_ROOT/host/bin to provide access to the toolchain executables.
- mips-wrs-linux-gnu-wrapper.sh: provides a wrapper which translates the names and argument lists of toolchain executables from their Cavium prefixes (mips64-octeon-linux-gnu-) into their Wind River equivalents (mips-wrs-linux-gnu-).
- mips64octeon-wrs-elf-wrapper.sh: provides a wrapper which translates the names and argument lists of toolchain executables from their Cavium prefixes (mipsisa64-octeon-elf-) into their Wind River equivalents (mips64octeon-wrs-elf-).
- octeon-app-init.h: required (and #include'd) by most Simple Executive applications.
- application.mk, common.mk and common-config.mk: Makefile fragments extracted from Cavium's SDK that provide much of the necessary environment in which Simple Executive applications are built.

H.5.2 Miscellaneous Simple Executive Details


Here are some internal details about the Simple Executive layer, its contents, requirements and its actions.

- The kernel source tree includes a copy of the Simple Executive source files that are used when the wrll-cavium-simple_exec layer is not configured into the project. With the layer configured into the build, Simple Executive source files located in the sysroot are used instead. This allows the BSP to be built without the need to download the SDK.
- The ethernet driver requires the use of the Simple Executive library, whether wrll-cavium-simple_exec is included or not.
- When running Linux n32 usermode Simple Executive applications, it is necessary to configure the kernel with CONFIG_CAVIUM_RESERVE32=512 (or larger, in multiples of 512) to provide a shared memory communication region.
- The intercept_proprietary package is an example of a kernel loadable module built with the proprietary license version of Simple Executive; it stores its output in the target rootfs as /intercept-example.ko. Refer to the documentation in Cavium's SDK for details on the usage of this application.


- The crypto_proprietary and crypto_proprietary.mips64_octeon packages, along with the other *_proprietary packages, will build rpms for the linux_n32 and linux_64 types of crypto demo examples, installing the files in /bin/crypto-linux_n32* and ...-linux_64* in the target rootfs. They also compile the standalone images and place symlinks to them in export/<board>-crypto-cvmx_n32* and crypto* (by Cavium convention, there is no extension like -cvmx_n32 for 64-bit non-Linux load images).
- The *_proprietary packages are named so because they extract proprietary and include licensed files for the package source, and use the simple_exec_proprietary files instead of the _open ones. The _open packages extract open-licensed files.

H.6 Configuring and Building with Workbench


H.6.1 Adding the SDK Path
To build the sample Simple Executive applications, it is necessary to provide Workbench with the path to the Simple Executive directory, so that the layer can locate and make copies of the applications from the SDK. This is done by adding the path to the SDK in the environment before you start Workbench, so that the build rules can automatically locate your SDK installation. bash users:
$ export SDK_ROOT=/opt/octeon-sdk-1.8.0

csh users:
$ setenv SDK_ROOT /opt/octeon-sdk-1.8.0

H.6.2 Overriding the OCTEON_MODEL Value


To override the default OCTEON_MODEL from OCTEON_CN58XX to, for example, OCTEON_CN38XX, you can do this before you create your project. bash users:
$ export OCTEON_MODEL=OCTEON_CN38XX

csh users:
$ setenv OCTEON_MODEL OCTEON_CN38XX

H.6.3 Starting Workbench


When you start Workbench, it is recommended to add extra heap space for Workbench to handle the potentially large wrlinux projects. For example:
$ cd $WIND_HOME
$ ./startWorkbench.sh -vmargs -Xmx512m


H.6.4 Configure a Platform Project with Simple Executive Support


Perform the following steps to configure a platform project:
1. From Workbench, select File > New > Wind River Linux Platform Project.
2. Enter a project name, for example cavium_se_example1.
3. Add the paths to the provided toolchain layer and the Simple Executive layer, for example: <installDir>/wrlinux-3.0/layers/wrll-cavium-simple_exec
4. Select these basic options:

Board: cav_ebt5800
Rootfs: glibc_std
Kernel: standard

5. Add a feature template to set up your initial package list, for example: feature/se_demo_basic

6. Click Finish to configure the project.

Figure H-1    Configuring a Platform Project with Simple Executive support


H.6.5 Building the Platform Project


You can now build the project by right-clicking on the build-all target and selecting Build Target. This will rebuild all packages and the kernel from the source. Alternatively, you can right-click on the kernel_build target and select Build Target to build the kernel image, and then right-click on the fs target and select Build Target to build the file system. This will attempt to re-use pre-built package RPM files, and can save considerable time.

H.6.6 Working with the Package List


To view the package list and its metadata, double-click on the User Space Configuration icon in the project.
Figure H-2 Working with the package list


H.6.7 Changing the OCTEON_TARGET Value for a Package


You can override the OCTEON_TARGET value for any package by performing the following steps.
1. Select the package in the Installed Package list.
2. Click the Options tab.
3. Add the desired override flag from the list:

OCTEON_TARGET=linux_n32: usermode n32 binary
OCTEON_TARGET=linux_64: usermode n64 binary
OCTEON_TARGET=cvmx_n32: standalone n32 binary
OCTEON_TARGET=cvmx_64: standalone n64 binary

4. Save the changes with File > Save.
5. Click the Targets tab, and select the build (or rebuild) button.

NOTE: You can also use this feature to force a value for OCTEON_MODEL, as described in H.3.4 Specifying Build Types, p.337.
Figure H-3 Overriding the OCTEON_TARGET value for a package


H.7 Configuring the Kernel with Workbench


You can use the built-in Workbench kernel configurator to view and manage the kernel configuration options. In addition, Workbench supports menuconfig and xconfig. Right-click on the project in the Project Explorer pane and select Build Options to view.
1. Select the project, and double click on the Kernel Configuration icon. You will be prompted the first time to allow Workbench to unpack the kernel and generate the .config file.
2. Right-click in the view, and then select Find. Enter the pattern *CAVIUM*.
3. Observe that all of the Cavium-named options appear in the matching list. This is shown in Figure H-4.


Figure H-4    Finding kernel configure options

4. For this example, double-click on the item Memory to reserve for user process shared region (MB) [CAVIUM_RESERVE32]. Observe that the kernel option tree is automatically opened to this entry, at this location, as reflected in Figure H-5.
Machine selection > Allow User Space to access hardware IO directly [CAVIUM_OCTEON_USER] = Y > Memory to reserve for user process shared region (MB) [CAVIUM_RESERVE32] = 512

5. If you change any values, their icons will display an asterisk. Select File > Save to save your changes to the .config file and the asterisks will disappear.


Figure H-5    Kernel Configuration Options

H.8 Debugging from the Command Line


This section describes using the GNU Debugger (GDB) included with Wind River Linux to debug Simple Executive Library standalone applications. The command line debugger is also used by Workbench to debug standalone applications. See H.12 Debugging with Workbench, p.353, for additional information.

H.8.1 Overview
Cavium Networks' Octeon family of multi-core processors provides developers the option of developing multiprocessor code in either a symmetric or asymmetric manner. Often, a system will dedicate a few cores to run an SMP operating system, such as Wind River Linux, while using one or more additional cores to run other code. This other code can be another (perhaps even another SMP) operating system, but from a performance standpoint it is often beneficial to dedicate a core to a single user-defined function. This function can operate free from the overhead of an operating system, so it does not need to contend with other tasks for the use of the processor core. These functions may be referred to as standalone applications.

While sophisticated debugging capabilities are readily available for the Linux environment, the debugging facilities for non-Linux standalone applications are very minimal, and such applications have no built-in debugging hooks. Using GDB with these applications requires the presence of a stub program to implement the GDB remote packet protocol. This stub may be embedded in the standalone application during the development process and then removed for production.

Debugging standalone applications using this process raises two fundamental issues. First, including the debugging stub in the standalone application increases its size and complexity, and may lead to some uncertainty that the production code functions similarly to the debug code. Second, a debugging stub that is implemented in this way may need to be re-developed to suit the environment of each standalone application, requiring repetition of the expenditure of time and programming resources.

Cavium's approach is to localize the debugging stub and provide a minimal standardized interface to it, making availability of GDB for debugging standalone applications automatic. Cavium accomplished this by placing the debugging stub in the same monitor program that is used to load all programs onto the target system, u-boot.

The low-level communication between GDB and Simple Executive standalone applications is implemented using a customized version of the GDB packet protocol. The customization includes extensions for multicore debug control. On the host end is a version of GDB that understands this customized protocol. On the target end is a stub embedded into the u-boot bootloader. The connection between the two is an RS-232 serial link.

H.8.2 Prerequisites
First install, configure and build your Wind River Linux Simple Executive project using the instructions in H.3 Configuring and Building from the Command Line, p.335, and H.6 Configuring and Building with Workbench, p.341.

H.8.3 Available Documentation


Wind River Linux' GDB contains extensions originally provided in Cavium's Octeon SDK GNU debugger, so much of the information presented here is also discussed in Cavium's Simple Executive Debugger documentation. See H.2.2 Available Documentation, p.335, for additional information.

H.9 Setting Up the Target


H.9.1 Review: Starting a Standalone Application
This example assumes you have established serial communication with the target console port (first serial port) using a NULL modem cable to an unused serial port on your Wind River Linux development host, and have established a network connection over Ethernet. Review the general steps for starting a standalone application (see H.4.2 Standalone Applications, p.338). The following provides an example of running the crypto

348

H Cavium Simple Executive Integration and Debugging H.9 Setting Up the Target

standalone application without the debugger. Issue the following commands at the u-boot prompt:
Octeon EBT5800# dhcp
Octeon EBT5800# tftp $(loadaddr) /crypto-cvmx_n32
Octeon EBT5800# bootoct $(loadaddr) numcores=1

The first command configures the network interface. The second command downloads the standalone application into an address specified by the u-boot environment variable loadaddr. For recent versions of u-boot this is normally 0x20000000. By definition, tftp provides a view of a subtree of the file system of a remote machine. /crypto-cvmx_n32 is an application image prepared by building the WRLinux Simple Executive project. It must be copied from <project_dir>/export to the tftp server's tftproot directory. The third command actually runs the program that was loaded in the second command. To allow loading and simultaneous start of multiple standalone programs, u-boot prevents the code on any of the cores from running until the code running on core 0 is started. At that point, all loaded programs are simultaneously started.
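For example, assuming your tftp server's root directory is /tftpboot (an illustrative location; the actual directory depends on your tftp server configuration) and the image name shown above, the copy step on the host might look like this:

$ cp <project_dir>/export/crypto-cvmx_n32 /tftpboot/crypto-cvmx_n32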

H.9.2 Starting an Application for Debugging


Connect the second serial port (debug serial port) on the Cavium Octeon board to an unused serial port on the Wind River Linux development host, for example /dev/ttyS1, using a NULL modem cable.

All Simple Executive applications have hooks embedded within them to use GDB via the u-boot-resident debugging stub, where virtually all of the code for the stub resides. All that is necessary to activate this debugging mode is to add a single parameter to the command used to start the standalone application. For example, to debug the program, it is only necessary to add the parameter debug to the third command above, like this:
Octeon EBT5800# bootoct $(loadaddr) numcores=1 debug=1

Assigning the debug parameter a value is optional, and controls the serial port used for debugging:

debug: Debug (second) serial port
debug=0: Console (first) serial port
debug=1: Debug (second) serial port
debug=2: Third serial port (if available)

Once a bootoct command that includes core 0 has been issued, u-boot starts the standalone application, but halts it at a breakpoint, essentially at the entry point of the code. The debugging stub then waits for the remote GDB to connect.


H.10 Setting up the Host


H.10.1 Starting GDB
Before starting GDB, change to the directory (cd) where the binary of the standalone application to be debugged is located. This provides GDB access to the program source (including Simple Executive library source). In the Wind River Linux build system in the default configuration for the Octeon BSPs, 32-bit applications are in the build/<application-name>-<application-version> directory, and 64-bit applications are in the build-mips64_octeon/<application-name>-<application-version> directory. For the n32 and 64 ABI crypto example applications, these directories will be build/crypto_proprietary-1.8.0 and build-mips64_octeon/crypto_proprietary-1.8.0, respectively.

Next, the correct (non-Linux toolchain) GDB binary must be located. This may be accessible via several symbolic links, but the actual binary executable will be found in the wrlinux-3.0 tree at: layers/wrll-toolchain-4.3-85/mips64octeon/toolchain/x86-linux2/bin/mips64octeon-wrs-elf-gdb. Assuming that mips64octeon-wrs-elf-gdb is in the command search path, the following command will start the debugger:
$ mips64octeon-wrs-elf-gdb crypto-cvmx_n32

The correct binary executable of GDB can be confirmed by reference to the first line of the messages printed when GDB is started, currently:
GNU gdb (Wind River Linux Sourcery G++ 4.3-85) 6.8.50.20080821-cvs

GDB will issue a startup message and a command prompt:

(Core#0-gdb)
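If mips64octeon-wrs-elf-gdb is not already in your command search path, one way to add it (a sketch assuming the default installation layout described above, with wrlinux-3.0 installed under $WIND_HOME) is:

$ export PATH=$WIND_HOME/wrlinux-3.0/layers/wrll-toolchain-4.3-85/mips64octeon/toolchain/x86-linux2/bin:$PATH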

H.10.2 Connecting to a Target


To connect to the debug port on the Octeon using a terminal server, do the following:
(Core#0-gdb) target octeon tcp:<terminal-server-ip-address>:<terminal-server-tcp-port>

If you are connecting your debug host directly to the target's debug port using for example /dev/ttyS1, then it is sufficient to connect the debug session with:
(Core#0-gdb) target octeon /dev/ttyS1

Once connected, GDB should produce a response similar to:


Remote target octeon connected to /dev/ttyS1
(Core#0-gdb)

Where the target is stopped can be determined with the where command:
(Core#0-gdb) where
#0 0x10006b6c in __octeon_trigger_debug_exception ()
#1 0x10006cd8 in __octeon_app_init ()
#2 0x100001bc in __start ()
(Core#0-gdb)

At this point, GDB can be used normally. Typically there is no need to single step through the code from the initial breakpoint at __octeon_trigger_debug_exception. In most cases, one may insert a breakpoint at main and continue to that breakpoint.
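A minimal session following that advice (the prompt output shown is illustrative) might look like this:

(Core#0-gdb) b main
(Core#0-gdb) c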

H.11 Debugging Caveats


H.11.1 Single-step and Atomic Operations
As with most debugging of supervisor-mode code, there are certain code regions in which debugging is not allowed. Avoid single-stepping (particularly in assembly mode with the si instruction) into any code that deals with atomic operations on memory. The debug exception taken during single stepping disrupts the ll/sc protocol used for ensuring atomicity. Note that this precludes stepping into spinlocks.

H.11.2 Debugging Multiprocessor Applications


It is possible to debug a single application that is running on more than one core. This requires a specific sequence of operations. In the previous section we started the crypto application on a single core. Instead, start it on all cores, like this:
Octeon EBT5800# bootoct $(loadaddr) numcores=16 debug=1

This will result in all cores being stopped in __octeon_trigger_debug_exception. Next, enter the commands:
(Core#0-gdb) set step-all 1
(Core#0-gdb) b main
(Core#0-gdb) c
(Core#0-gdb) set step-all 0

At this point, all cores will be stopped at the entry point of the application main function. The GDB prompt identifies the core that is currently being debugged. To select any particular core, use the set focus <n> command, where <n> specifies the core of interest. While the focus is on one core, and with step-all turned off, each step taken in the debugger is isolated to the core that currently holds the focus. See Cavium's documentation for more information on the Cavium-specific multi-core debugger commands:

set/show active-cores
set/show focus
set/show step-all
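For example, to move the focus to core 2 and then examine its stack (the core number is a hypothetical choice and the prompt shown is illustrative), you might enter:

(Core#0-gdb) set focus 2
(Core#2-gdb) where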

While it is possible to debug a single application running across multiple cores, it is not possible to debug two applications, or an application and a Linux image, or any other combination that involves multiple address maps, namespaces, and multiple control models. The bootloader debug stubs and GDB are designed around a single application debug context.


H.11.3 Debugging Standalone Images with Linux Running


It is possible to debug a standalone image running on one or more cores, while Linux is running on other cores. However, this requires modifications to the configuration of the kernel:

- In the Machine Selection kernel configuration menu, there is a selection for the Octeon watchdog driver titled CONFIG_CAVIUM_WATCHDOG. Make sure that this selection is disabled.
- In the Kernel Hacking kernel configuration menu, there is a selection for Remote GDB debugging using the Cavium Networks Multicore GDB which must be selected (CONFIG_CAVIUM_GDB).

Loading and booting each image proceeds much like the examples above and as described in the Cavium Networks SDK documentation. Keep the following hints in mind:

- Specify a load address for the Simple Executive application that does not conflict with Linux's memory usage. For example, if your kernel's size is less than eight MB, then loading Linux at 0x2000000 would allow for loading the Simple Executive application at 0x20800000.
- Specify a coremask (or skipcores/numcores) when starting the first image with bootoct or bootoctlinux, so that the u-boot command line is still available to start the second one (see the sketch after this list).
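As a hypothetical sequence, assuming both images have already been loaded into memory and that the Simple Executive application is to be debugged on the upper cores (the load address and coremask values here are only illustrative; substitute values appropriate for your images), the two images might be started like this:

Octeon EBT5800# bootoct 0x20800000 coremask=0xfff0 debug=1
Octeon EBT5800# bootoctlinux $(loadaddr) coremask=0xf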

H.11.4 Debugging the Linux Kernel


It is possible to debug a Linux kernel image running on one or more cores. The kernel must be configured, rebuilt, and deployed as described in H.11.3 Debugging Standalone Images with Linux Running, p.352. Boot the kernel using bootoctlinux; note that it is not necessary to give it a debug argument. After it is running, continue with the host setup activities as described in H.10 Setting up the Host, p.350. The directory to cd to (see H.10.1 Starting GDB, p.350) is build/linux-cav_ebt5800-standard-build. When starting GDB, the vmlinux image may be specified on the command line to load kernel symbol information. Connect and begin debugging (see H.10.2 Connecting to a Target, p.350).

The advantages of this kernel debugging method include the ability to debug non-Linux applications or the Linux kernel using similar debug methods and physical connections (rebooting between debug sessions, of course), and minimal intrusion into the Linux kernel's scheduling behavior.

An alternative to bootloader-based debugging is the standard Wind River default KGDB kernel configuration. This supports debugging via the console or the second serial port, either from the standard command-line mips-wrs-linux-gnu-gdb (see footnote 3) or from the Wind River Workbench Integrated Development Environment. The advantages of this kernel debugging method include:

- No need to recompile the default kernel to debug it.
- The ability to load and unload KGDB as a module.
- Provides KGDB over console functionality.
- Provides a much faster single step when debugged using Workbench.



Disadvantages include:

- Scheduling is more disrupted, as KGDB serializes tasks onto one core.
- KGDB has some limitations with kernel tasklets.
- KGDB kernels take away control of the debug interrupts from the bootloader and debug stubs.
- KGDB is not able to debug standalone (non-Linux) Simple Executive applications.

H.12 Debugging with Workbench


This section contains instructions on how to use the Workbench integration of the GNU debugger for Cavium Simple Executive targets.

H.12.1 Prerequisites
A built Cavium Simple Executive application, available as a Workbench project, is required. See H.3 Configuring and Building from the Command Line, p.335, and H.6 Configuring and Building with Workbench, p.341, for additional information.

H.12.2 Importing the Application to a C/C++ Project (optional)


The debugger integration is based on the Eclipse CDT (C/C++ Development Tools) debugger front-end, which requires the executable to be part of a C/C++ project. If the application is already part of a Workbench project of any kind, this step is optional and may be skipped, although it is recommended to create a separate project dedicated to debugging the application. This is done using a simple wizard:
1. From Workbench, select File > Import > C/C++ > C/C++ Executable.
2. Select Search directory and navigate to the directory containing the executable(s) using the Browse button.
3. Select the executables you want to debug. Click Next.
4. On the second page, deselect the option Create a Launch Configuration. We will create a launch configuration later on.
5. Click Finish.



6. A new project has been created in your workspace. Select the project in the Project Explorer pane for the next step.

Figure H-6 Importing a C++ Executable

H.12.3 Creating a Launch Configuration


Create a launch configuration for debugging an application with the GNU debugger for the Cavium Simple Executive target:
1. Open the Workbench Launch Configuration dialog using Run > Debug Configurations.
2. Select the Launch Configuration type Cavium Simple Executive. Right-click and select New to create a new launch configuration pre-filled with the selected project and application.
3. Make sure the project and application on the Main tab are correct.
4. Select the Debugger tab. Verify the path to the correct GDB executable. The default path is parameterized using a Workbench substitution variable cavium_toolchain_dir that points to ${WIND_HOME}/wrlinux-3.0/layers/wrll-toolchain-4.3-85/mips64octeon/toolchain.
5. Configure the target connection for the debugger, either over a terminal server or a direct serial line.
6. Configure startup options. If the option 'Set all cores active' is enabled, the GDB command set active-cores is issued when the debugger is attached to the target.
7. If the option Advance to: is enabled, a breakpoint is set on the specified entry point and all cores are resumed. This is equivalent to the following command sequence:

set step-all on
break main
continue
set step-all off



8. Click Apply to save the settings.

Figure H-7 Creating a Launch Configuration

H.12.4 Debugging the Application


To be able to launch a debug session, the application must be started manually on the target using the u-boot command-line. For details on how to do this, see H.9.2 Starting an Application for Debugging, p.349. A Cavium Simple Executive Application debug session is launched by opening the launch configuration created in the previous section and pressing the Debug button. When the debugger is successfully connected to the target, the Workbench perspective should switch to the Debug perspective and the Debug view should show the new debug session.



Figure H-8 Debugging the Application

The currently focused core is displayed as Thread[core#] in the Debug view. For example, if core 0 is active, it is displayed as Thread[0]. In addition to the usual run-control commands, such as step over, step into, and so on, three additional context menu items are available specific to the Cavium Simple Executive Application debugger:
Select Active Cores

This menu item is used to select the set of active cores. This corresponds to the GDB command set active-cores.
Figure H-9 Select Active Cores



Select Focus Core

This menu item is used to select the currently focused core. It corresponds to the GDB command set focus.
Figure H-10 Select Focus Core

Step All Cores

This is a toggle item which controls the GDB step-all flag. When the item appears with a check-mark in the menu, the step-all flag is on, otherwise it is off.

H.12.5 Note(s) on Workflow


Debugger views, like Variables, Expressions, and Registers, are available as usual, but the workflow may differ from the Workbench debugger because, as mentioned above, the debugger front-end is based on CDT.

H.13 Known Issues, Limitations, and Tips


General release notes:

- The export-sysroot build target feature can be used to export the project's sysroot for use on other hosts.
- The export-toolchain build target feature can be used to export the sysroot's companion Simple Executive toolchain (see the invocation sketch after this list).
- There has not been any testing with busybox, which is the core part of the glibc_small and uclibc_small file systems.
- Due to an erratum in the CN58XX PASS 1 silicon, the low_latency_mem application fails on these parts. The application works correctly on CN38XX and CN58XX Pass 1.1 (or later) silicon.
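For example, assuming a configured project build directory, the export features above are normally invoked as make targets (a sketch; adjust to your project layout):

$ make export-sysroot
$ make export-toolchain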

Debug contexts, single stepping, signals, interrupts, and atomic operations:

- Avoid single-stepping (particularly in assembly mode with the si command) into any code that deals with atomic operations on memory. Note particularly that this precludes stepping into spinlocks. This is true of any debugged code, whether kernel or standalone images.
- When setting a breakpoint in the Linux kernel at do_fork, then continuing from the breakpoint, the debugger immediately hits the breakpoint again. This is because do_fork is reached via a system call, which is restarted as a result of a signal received during the processing of the breakpoint.



To work around this, use a temporary breakpoint, which is removed immediately once it is hit. This allows you to continue the kernel rather than stopping at the breakpoint again (see the example after this list).

- Single-stepping the Linux kernel does not work very well when interrupts are occurring, such as the clock, network, and serial port(s). A single step is likely to unexpectedly take you into the start of the interrupt service routine.
- When debugging the Linux kernel running on multiple cores (Symmetric Multiprocessing) as a standalone application, you need to set step-all 1. Otherwise, you may see BUG: soft lockup detected on CPU#0!....
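For the do_fork case above, the temporary breakpoint can be set with GDB's tbreak command, for example:

(Core#0-gdb) tbreak do_fork
(Core#0-gdb) continue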

Bootloader and debugger integration limitations:

- The bootloader has only one debug context. It is not possible to debug more than one Simple Executive application at a time, and it is not possible to debug both a Simple Executive application and the Linux kernel at the same time.
- The Simple Executive GDB (and, as a result, Workbench) is not able to load, run, start, or restart standalone applications or the Linux kernel. You must manually load everything to be debugged via the bootloader before connecting the debugger to the target.
- It is suggested to load and start applications from higher-numbered cores to lower-numbered cores. Once core 0 has been started, it is not possible to load and start additional applications.
- Specify a load address for the Simple Executive application that does not conflict with Linux's memory usage. For example, if your kernel's size is less than 8 MB, then loading Linux at 0x2000000 will allow for loading the Simple Executive application at 0x20800000.

Slow debugger connections:

If the network is congested and/or slow (for example, debugging over a WAN link), Workbench may raise a dialog box that displays: target is not responding (time out). However, it has connected, and the target may be debugged.


I
Glossary
board

A model of target hardware; see also target. Several different configurations of a board may each be considered a separate target.
board support package (BSP)

The files needed to allow a particular board to be used as a target by Wind River Linux. Within the context of the Wind River Linux build system, a BSP is a template which can be applied to a project.
BSP directory

The directory containing a particular BSP. This directory is found in the templates/board subdirectory of the layer containing the BSP.
config files

Also kernel config files or config fragments. The *.cfg files that are combined and audited to produce the final .config kernel configuration file.
build directory

The directory named build in a project, where build tasks such as patching and compilation are actually performed.
git

Revision control system used with the Wind River Linux kernel and, in general, by the Linux kernel community.
host toolchain

Used on the development host to build the host tools and other software. This toolchain is provided by the development host operating system, for example by Red Hat or Ubuntu. It is a different toolchain from the cross-development toolchain.
host tools

Used on the development host to perform functions that are part of the build process, but other than the toolchain compiling functions. They are built by Wind River or by the user.



kernel configuration fragment

A file containing kernel configuration instructions to be combined into a complete kernel configuration file. Each fragment generally controls a related set of features. For instance, the kernel configuration fragment for a BSP specifies CPU, architecture, and driver options needed to run on the target.
kernel directory

The directory containing a kernel source tree. Usually a subdirectory of the build directory.
kernel-cache

A Wind River-maintained repository that contains patches, kernel config files, and the information required to construct the kernel git repository.
kernel image

The file containing a compiled kernel, in the format used by a boot loader to load the kernel into memory.
kernel layer

The layer containing the standard Wind River Linux kernel tree and patches. Found in installDir/layers/wrll-linux-version, where version is the revision of the mainline kernel in use, such as 2.6.27.
layer

A collection of packages and templates for use with the Wind River Linux build system.
layer directory

The directory containing a particular layer.


meta-series

A sequence that contains the set of steps required to create a fully branched, tagged and history-clean git repository.
package

A collection of software and files for installation on a target's root file system. The term package is used generically for both source builds and binary distributions. Examples include the ncurses library, or the busybox shell and utilities.
patch file

A file containing modifications to make to source code, conventionally in a format understood by the historic patch utility. Patches for use with Wind River Linux should be in unified diff format.
patch list

A list of patch files to apply, usually stored in a file called patches.list.



project

A working directory containing configuration files used by the build system to produce runnable code for a particular target. A project may also be called a Workbench project or a build project. You may use Workbench, the command line, or a combination of the two when working on projects. A project is assembled by combining templates.
project directory

The directory containing a project. Created by the configure script, or the Wind River Workbench configuration tool.
pseudo

pseudo is Wind River's replacement for fakeroot that allows the Wind River build system to install files into the target root file system without having to actually set the UID to root. pseudo intercepts the system calls having to do with root privileges on file operations. It creates the regular files, special files, and directories, but maintains a small database of what the settings would be if you had actually been root; this includes setuid and setgid file permissions, device file class/major/minor numbers, and uid and gid ownership. This is how the build system can create a root file system tar file that has actual device files and root ownership; pseudo carries all the information from one program to the other.
readme file

A file, usually named README, describing a BSP or other files. Sometimes referred to as a readme, rather than a readme file.
smudge file

A file containing patch application instructions. Unlike a patch list, a smudge file can apply patches selectively. Each smudge file must have a unique name.
target

A piece of hardware or simulated hardware on which software needs to be run. Typically, software is built and configured for a particular target before being installed. A target generally refers to a specific configuration of a board.
template

A collection of configurations, settings, and patches used to modify the kernel or file system built for a target. Templates are combined to create a project.
toolchain

The compiler and other tools used on the development host to compile the software that will run on the target. Also called cross-development toolchain to distinguish it from the host toolchain.
unified diff format

The preferred format for patches used with Wind River Linux. Unified diff format is the format produced by diff -u. Unified diffs are easier to read than ed-style diffs, and more compact than context diffs.
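For example, a unified diff can be produced with a command like the following (the file names are placeholders):

$ diff -u original/hello.c modified/hello.c > hello.patch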



upstream

To, or in the direction of, the original developer or the maintainer of an open source project.


Index

Symbols
.cfg files 23, 104

system benefits 31 variables 325 build-all target 43, 46 building projects 43

A
adding packages general 107 makefile 121 spec file 120 SRPM 109, 256 with layers 273 analysis layer 21 application adding to platform project 14 developer 7 development with sysroots 14, 84 audit data directory 101 auditing 97

C
cavium simple executive 331 CGL 9 checksum meta data 45 checksums 45 classic packages 107 con figuration files, templates 23 conditional real-time 123 config file fragments 104 config.log 34 config.sh build variables 325 file 23 configuration 32 configure examples 37 options 35, 40 script 19, 33 template 71 with layers 78 configuring with profiles 36 consumer_premise_equipment profile 25 conventions in document text 6 core layer (wrll-wrlinux) 22 core layer templates 24 creation.log 34 CRITICAL_IRQSOFF_TIMING 128 CRITICAL_PREEMPT_TIMING 128 cross-development tools 10 custom layers 35, 73 custom templates 67, 71

B
board documentation 11 README files 11 supported 11 templates, installed 27 boot-time 138 boot-time, early 139 boot-time, late 141 BSP creation 177 modification example 280 templates 27 build environment 34 methods 43 subdirectories 34 system (LDAT) 19



D
debug file system 39 DEBUG_PREEMPT 128 debugging small file systems 39 default template 70 demo file system 39 deploying a project 16 design, build system 31 developer types 7 development environment 17, 18 development workflow 13 directory structure, installed 18 document conventions 6 documentation, Wind River Linux 4

tree 172 using 168 glibc_cgl file system 26 glibc_small file system 26, 39 glibc_std file system 26 guaranteed real-time 123 guilt 169

H
higher layer 49 host requirements 32 host tool patching 276 host tools layer 22

E
ECGL 9 --enable-ldat-checksum 45 epne profile 25 exporting layer 75 exporting sysroots 84 export-layer target 75 extra templates 28

I
importing packages 268 importPackages.tcl 268 include files 23, 56, 69 industrial_equipment profile 25 init boot 141 initramfs 193 installation 11 installed layers 21 installed software organization 17 iso images, creating 238

F
feature matrix (kernel) 8 feature templates 28 feature templates in layers 279 file system construction 62 layout 303 modification 91 types 26, 27, 39 file system/fs directory 62, 91 filesystem types 26 footprint 152 fs directory 62, 91 fs directory files 23 fs target 43, 44 ftrace 138

K
Kconfig files 98 kern_tools 170 kernel and file system components 8 config files 104 config fragments 98 config options 104 configuration 98 configuration (Workbench) 103 feature matrix 8 feature profiles 8 fragment audits 97 layer 22 layer templates 29 lifecycle 170 patching 174, 180 preemption 123 profiles 8 reconfiguration 103 source tree 167 tree 172 types 8 workflow 170 kernel-cache 166

G
gdb 309 git commands 168 general 169 leaf nodes 171 overview 165 repository 171



kernel-init boot 139 kgit 170

mobile_mulitmedia_device profile 25 modifying target file system 91 modlist files 23 multilibs 34, 86

L
layer contents 51 defined 20 local custom 74 search list 50 structure 74 layers 20 and templates 20 and templates, relationship 49 creating 74 custom 73 directory 20, 21 examples 272 exporting 75 file (in prjbuildDir) 51 higher and lower 49 in development environment 21 installed 21 manual creation 77 overview 50 processing order 79 LDAT 31 ldat directory 19 LDAT_FORCE_CLEAN environment variable 45 LDAT_LAYER_PATH environment variable 51 leaf nodes 171, 183 Linux Distribution Assembly Tool (LDAT) 19, 31 Linux Kernel Configurator (LKC) 98 linux.menuconfig target 103 LKC 98 local custom layer 74 local layer 35 login 188 lower layer 49 lpne profile 26

O
online support 11 optimizing boot time 138 optional inclusion of templates 59 options, kernel 104 overriding layers 275

P
package adding classic archive package 121 adding classic with rpmbuild 120 adding RPM 122, 270 adding SRPM 109 adding with importPackages.tcl 268 checksums 45 preparing to add 108 rebuilding and checksums 45 removing 121 password 188 patch management 176 merge 179 patching a host tool 276 SRPMs 160 the kernel 174 with quilt 160 pkglist files 23 platform developer 7 platform project configuration 32 pne profile 25 Pre-boot Execution Environment (PXE) boot loader 215 pre-defined profiles 25 PREEMPT_DESKTOP 125 PREEMPT_HARDIRQS 124 PREEMPT_NONE 125 PREEMPT_RCU 127 PREEMPT_RT 124, 126 preempt_rt 9, 123 PREEMPT_SOFTIRQS 124, 127 PREEMPT_VOLUNTARY 125 preempt-rt 123 prjbuildDir as layer 74 processing *list.* files 60 file fragments 60 include files 56 template components 60

M
make build-all 46 export-layer 74, 75 export-sysroot 84 fs 44 linux.menuconfig 103 menuconfig 103 targets 299 xconfig 103 man pages 270 menuconfig of kernel options 103 merge patches 179



templates 54 profiles configuration 36 custom 72 general 25 pre-defined 25 templates 25 project build directory 34 project deployment 16 PXE boot process overview 215 PXELinux boot loader file 216

starting Workbench 18 startWorkbensh.sh 18 supported boards 11 Syslinux 215 sysroots 83, 84, 86, 113 sysroots directory 19

T
target TIPC 314 target configuration files 91 target file system 62 template components 60 configuration files 23 defined 20 include files 56 names 68 processing 54 search list 53 search order 52 structure 69 templates 20 and layers 20 custom 67 in the development environment 23 installed 23, 24 kernel layer 29 of the same name 58 overview 52 processing order 71 toolchain 28 test templates 28 TFTP configuration file 203 TFTP download directory 207 tgtsvr command (TIPC) 316 TIPC kernel module 314 overview 313 proxy 315 targets 314 toolchain layer 22 layer templates 28 templates 28 toolslist files 23

Q
QEMU 193 description 187 IP addresses 187 KGDB debugging with Workbench 188 terminating 190 quilt 160

R
ram disk size, increasing 213 README file 11 readme files 23 real-time 123 real-time support 123 reference manual pages 270 required-*.txt files 32 requirements, host 32 rootfs templates, installed 26 RPM build 44 RPM build (fs) 43 rtcore kernel 9

S
scc 169, 174, 180 files 181 overview 181 searching layers 50 searching templates 52 selinux 241 server installation 227 size of runtime footprint 152 small kernel 9 source build (build-all) 43 source build method 46 spec file 107 SRPM package example 256 SRPM packages 107 standalone server installation 227 standard kernel 8

U
uclibc_small file system 27, 39 usb image creation 238 usermode-agent reference page 317



V
variables, build 325

W
WAKEUP_LATENCY_HIST 128 Wind River Linux, overview 6 Wind River Online Support 11 Wind River Workbench 18 --with-layer 78 --with-template 71 Workbench directories 18 workflow 13 workflow, kernel build 166 wrlinux directory 19 wrlinux-3.0 directory 19 wrll-analysis-version layer 21 wrll-host-tools layer 22 wrll-linux layer 22 wrll-linux-version layer 22 wrll-toolchain-version layer 22 wrll-wrlinux templates 24

X
xinetd 203

