Shake 4
User Manual
Apple Computer, Inc.
© 2005 Apple Computer, Inc. All rights reserved.
Under the copyright laws, this manual may not be
copied, in whole or in part, without the written consent
of Apple. Your rights to the software are governed by
the accompanying software license agreement.
The Apple logo is a trademark of Apple Computer, Inc.,
registered in the U.S. and other countries. Use of the
keyboard Apple logo (Option-Shift-K) for commercial
purposes without the prior written consent of Apple
may constitute trademark infringement and unfair
competition in violation of federal and state laws.
Every effort has been made to ensure that the
information in this manual is accurate. Apple Computer,
Inc. is not responsible for printing or clerical errors.
Apple Computer, Inc.
1 Infinite Loop
Cupertino, CA 95014-2084
408-996-1010
www.apple.com
Apple, the Apple logo, Final Cut, Final Cut Pro, FireWire,
Mac, Macintosh, Mac OS, Nothing Real, QuickTime,
Shake, and TrueType are trademarks of Apple Computer,
Inc., registered in the U.S. and other countries. Exposé
and Finder are trademarks of Apple Computer, Inc.
Adobe is a trademark of Adobe Systems Inc.
Cineon is a trademark of Eastman Kodak Company.
Maya, Alias, Alias/Wavefront, and O2 are trademarks of
SGI Inc.
3ds Max is a trademark of Autodesk Inc.
Softimage and Matador are registered trademarks of
Avid Technology, Inc.
Times is a registered trademark of Heidelberger
Druckmaschinen AG, available from Linotype Library
GmbH.
Other company and product names mentioned herein
are trademarks of their respective companies. Mention
of third-party products is for informational purposes
only and constitutes neither an endorsement nor a
recommendation. Apple assumes no responsibility with
regard to the performance or use of these products.
ACKNOWLEDGEMENTS
Portions of this Apple software may utilize the following
copyrighted material, the use of which is hereby
acknowledged.
Double Negative Visual Effects (OpenEXR): Portions of
the OpenEXR file translator plug-in are licensed from
Double Negative Visual Effects.
FilmLight Limited (Truelight): Portions of this software
are licensed from FilmLight Limited. © 2002-2005
FilmLight Limited. All rights reserved.
FLEXlm 9.2 © Globetrotter Software 2004. Globetrotter
and FLEXlm are registered trademarks of Macrovision
Corporation.
Framestore Limited (Keylight): FS-C Keylight v1.4 32 bit
version © Framestore Limited 1986-2002.
Industrial Light & Magic, a division of Lucas Digital Ltd.
LLC (OpenEXR): Copyright © 2002 All rights reserved.
Redistribution and use in source and binary forms, with
or without modification, are permitted provided that
the following conditions are met:
Redistributions of source code must retain the above
copyright notice, this list of conditions and the following
disclaimer.
Redistributions in binary form must reproduce the
above copyright notice, this list of conditions and the
following disclaimer in the documentation and/or other
materials provided with the distribution.
Neither the name of Industrial Light & Magic nor the
names of its contributors may be used to endorse or
promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS
AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
Oliver James (Keylight 32-bit support): © 2005 Apple
Computer, Inc. All rights reserved.
This new version has been updated by Oliver James, one
of Keylight's original authors, to provide full support for
floating point images.
Thomas G. Lane (JPEG library): © 1991-1998 Thomas G.
Lane. All rights reserved except as specified below.
The authors make NO WARRANTY or representation,
either express or implied, with respect to this software, its
quality, accuracy, merchantability, or fitness for a
particular purpose. This software is provided AS IS, and
you, its user, assume the entire risk as to its quality and
accuracy. Permission is hereby granted to use, copy,
modify, and distribute this software (or portions thereof)
for any purpose, without fee, subject to these conditions:
(1) If any part of the source code for this software is distributed, then this README file must be included, with
this copyright and no-warranty notice unaltered; and any
additions, deletions, or changes to the original files must
be clearly indicated in accompanying documentation.
(2) If only executable code is distributed, then the
accompanying documentation must state that this
software is based in part on the work of the Independent
JPEG Group. (3) Permission for use of this software is
granted only if the user accepts full responsibility for any
undesirable consequences; the authors accept NO
LIABILITY for damages of any kind. These conditions
apply to any software derived from or based on the IJG
code, not just to the unmodified library. If you use our
work, you ought to acknowledge us. Permission is NOT
granted for the use of any IJG author's name or company
name in advertising or publicity relating to this software
or products derived from it. This software may be
referred to only as the Independent JPEG Group's
software. We specifically permit and encourage the use of
this software as the basis of commercial products,
provided that all warranty or liability claims are assumed
by the product vendor.
Sam Leffler and Silicon Graphics, Inc. (TIFF library):
© 1988-1996 Sam Leffler. Copyright © 1991-1996 Silicon
Graphics, Inc.
Permission to use, copy, modify, distribute, and sell this
software and its documentation for any purpose is
hereby granted without fee, provided that (i) the above
copyright notices and this permission notice appear in
all copies of the software and related documentation,
and (ii) the names of Sam Leffler and Silicon Graphics
may not be used in any advertising or publicity relating
to the software without the specific, prior written
permission of Sam Leffler and Silicon Graphics.
THE SOFTWARE IS PROVIDED AS-IS AND WITHOUT
WARRANTY OF ANY KIND, EXPRESS, IMPLIED OR OTHERWISE,
INCLUDING WITHOUT LIMITATION, ANY WARRANTY OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
IN NO EVENT SHALL SAM LEFFLER OR SILICON GRAPHICS BE
LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT OR
CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
PROFITS, WHETHER OR NOT ADVISED OF THE POSSIBILITY OF
DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF
THIS SOFTWARE.
Photron USA, Inc. (Primatte Keyer): © 2004 Photron, USA
Glenn Randers-Pehrson, et al. (png): libpng version 1.0.8
- July 24, 2000. © 1998-2000 Glenn Randers-Pehrson,
© 1996, 1997 Andreas Dilger, © 1995, 1996 Guy Eric
Schalnat, Group 42, Inc.
COPYRIGHT NOTICE, DISCLAIMER, and LICENSE
For the purposes of this copyright and license,
Contributing Authors is defined as the following set of
individuals: Andreas Dilger, Dave Martindale, Guy Eric
Schalnat, Paul Schmidt, Tim Wegner.
The PNG Reference Library is supplied AS IS. The
Contributing Authors and Group 42, Inc. disclaim all
warranties, expressed or implied including, without
limitation, the warranties of merchantability and of
fitness for any purpose. The Contributing Authors and
Group 42, Inc. assume no liability for direct, indirect,
incidental, special, exemplary, or consequential
damages, which may result from the use of the PNG
Reference Library, even if advised of the possibility of
such damage.
Permission is hereby granted to use, copy, modify, and
distribute this source code, or portions hereof, for any
purpose, without fee, subject to the following restrictions:
1. The origin of this source code must not be
misrepresented. 2. Altered versions must be plainly
marked as such and must not be misrepresented as
being the original source. 3. This Copyright notice may
not be removed or altered from any source or altered
source distribution. The Contributing Authors and Group
42, Inc. specifically permit, without fee, and encourage
the use of this source code as a component to
supporting the PNG file format in commercial products.
If you use this source code in a product, acknowledgment
is not required but would be appreciated.
Julian R. Seward ( bzip2 ): © 1996-2002 Julian R Seward.
All rights reserved. Redistribution and use in source and
binary forms, with or without modification, are permitted
provided that the following conditions are met:
1. Redistributions of source code must retain the above
copyright notice, this list of conditions and the following
disclaimer. 2. The origin of this software must not be
misrepresented; you must not claim that you wrote the
original software. If you use this software in a product, an
acknowledgment in the product documentation would
be appreciated but is not required. 3. Altered source
versions must be plainly marked as such, and must not
be misrepresented as being the original software. 4. The
name of the author may not be used to endorse or
promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
OF SUCH DAMAGE.
Julian Seward, Cambridge, UK.
jseward@acm.org
bzip2/libbzip2 version 1.0.2 of 30 December 2001
Contents
Preface 15 Shake 4 Documentation and Resources
15 What Is Shake?
16 Using the Shake Documentation
16 Onscreen Help
17 Contextual Help
17 Apple Websites
18 Keyboard and Mouse Conventions on Different Platforms
19 Using a Stylus
20 Using Dual-Head Monitors
Chapter 1 23 An Overview of the Shake User Interface
23 Opening Shake
24 Overview of the Shake User Interface
27 Making Adjustments to the Shake Window
28 Navigating in the Viewer, Node View, and Curve Editor
30 Working With Tabs and the Tweaker
31 Menus and the Title Bar
35 Script Management
38 The File Browser
45 Using and Customizing Viewers
72 The Parameters Tabs
78 Using Expressions in Parameters
81 The Parameters Tab Shortcut Menu
82 The Domain of Definition (DOD)
88 The Time Bar
90 Previewing Your Script Using the Flipbook
Chapter 2 91 Setting a Script’s Global Parameters
91 About Global Parameters
92 The Main Global Parameters
98 guiControls
101 Monitor Controls
102 Colors
102 enhancedNodeView
104 Application Environmental Variables
104 Script Environmental Variables
Chapter 3 107 Adding Media, Retiming, and Remastering
107 About Image Input
110 Using the FileIn (SFileIn) Node
117 Retiming
123 The TimeX Node
125 Manual Manipulation of Time
126 Remastering Media
130 Working With Extremely High-Resolution Images
132 Using Shake With Final Cut Pro
Chapter 4 137 Using Proxies
137 Using Proxies
139 Using interactiveScale
141 Using Temporary Proxies
144 Permanently Customizing Shake’s Proxy Settings
148 Using Pre-Generated Proxy Files Created Outside of Shake
150 Pre-Generating Your Own Proxies
163 When Not to Use Proxies
164 Proxy Parameters
Chapter 5 167 Compatible File Formats and Image Resolutions
167 File Formats
170 Table of Supported File Formats
173 Format Descriptions
178 Support for Custom File Header Metadata
180 Table of File Sizes
180 Controlling Image Resolution
183 Nodes That Affect Image Resolution
186 Cropping Functions
Chapter 6 191 Importing Video and Anamorphic Film
191 The Basics of Processing Interlaced Video
196 Setting Up Your Script to Use Interlaced Images
200 Displaying Individual Fields in the Viewer
204 Integrating Interlaced and Non-Interlaced Footage
205 Video Functions
209 About Aspect Ratios and Nonsquare Pixels
Chapter 7 217 Using the Node View
217 About Node-Based Compositing
218 Where Do Nodes Come From?
219 Navigating in the Node View
221 Using the Enhanced Node View
224 Noodle Display Options
226 Creating Nodes
228 Selecting and Deselecting Nodes
231 Connecting Nodes Together
235 Breaking Node Connections
235 Inserting, Replacing, and Deleting Nodes
240 Moving Nodes
240 Loading a Node Into a Viewer
241 Loading Node Parameters
243 Ignoring Nodes
243 Renaming Nodes
244 Arranging Nodes
246 Groups and Clusters
251 Opening Macros
251 Cloning Nodes
253 Thumbnails
257 The Node View Shortcut Menu
Chapter 8 261 Using the Time View
261 About the Time View
262 Viewing Nodes in the Time View
263 Clip Durations in the Time View
263 Adjusting Image Nodes in the Time View
270 The Transition Node
Chapter 9 277 Using the Audio Panel
277 About Audio in Shake
278 Loading, Refreshing, and Removing Audio Files
280 Previewing and Looping Audio
282 Playing Audio With Your Footage
283 Viewing Audio
283 Slipping Audio Sync in Your Script
285 Extracting Curves From Sound Files
288 Exporting an Audio Mix
Chapter 10 291 Parameter Animation and the Curve Editor
291 Animating Parameters With Keyframes
294 Using the Curve Editor
298 Navigating the Curve Editor
300 Working With Keyframes
316 More About Splines
Chapter 11 323 The Flipbook, Monitor Previews, and Color Calibration
323 Cached Playback From the Viewer
323 Launching the Flipbook
324 Flipbook Controls
325 Viewing, Zooming, and Panning Controls
325 Memory Requirements
326 Creating a Disk-Based Flipbook
330 Viewing on an External Monitor
331 Monitor Calibration With Truelight
Chapter 12 333 Rendering With the FileOut Node
333 Attaching FileOut Nodes Prior to Rendering
336 Rendering From the Command Line
337 Using the Render Parameters Window
339 The Render Menu
339 Support for Apple Qmaster
Chapter 13 343 Image Caching
343 About Caching in Shake
343 Cache Parameters in the Globals Tab
344 Using the Cache Node
349 Commands to Clear the Cache
349 Memory and the Cache in Detail
352 Customizing Image Caching Behavior
Chapter 14 355 Customizing Shake
355 Setting Preferences and Customizing Shake
355 Creating and Saving .h Preference Files
359 Customizing Interface Controls in Shake
371 Customizing File Path and Browser Controls
375 Tool Tabs
378 Customizing the Node View
379 Using Parameters Controls Within Macros
386 Viewer Controls
392 Template Preference Files
392 Changing the Default QuickTime Configuration
393 Environment Variables for Shake
400 Interface Devices and Styles
401 Customizing the Flipbook
401 Configuring Additional Support for Apple Qmaster
Chapter 15 405 Image Processing Basics
405 About This Chapter
405 Taking Advantage of the Infinite Workspace
408 Bit Depth
414 Channels Explained
417 Compositing Basics and the Alpha Channel
421 About Premultiplication and Compositing
437 The Logarithmic Cineon File
Chapter 16 451 Compositing With Layer Nodes
451 Layering Node Essentials
452 Compositing Math Overview
453 The Layer Nodes
470 Other Compositing Functions
Chapter 17 473 Layered Photoshop Files and the MultiLayer Node
473 About the MultiLayer Node
473 Importing Photoshop Files
477 Importing a Photoshop File Using the FileIn Node
478 Using the MultiLayer Node
Chapter 18 485 Compositing With the MultiPlane Node
485 An Overview of the MultiPlane Node
487 Using the Multi-Pane Viewer Display
493 Connecting Inputs to a MultiPlane Node
494 Using Camera and Tracking Data From .ma Files
500 Transforming Individual Layers
506 Attaching Layers to the Camera and to Locator Points
512 Parameters in the Images Tab
517 Manipulating the Camera
Chapter 19 527 Using Masks
527 About Masks
528 Using Side Input Masks to Limit Effects
530 Using Masks to Limit Color Nodes
533 Masking Concatenating Nodes
534 Masking Transform Nodes
536 Masking Layers
539 Masking Filters
540 The -mask/Mask Node
542 Masking Using the Constraint Node
Chapter 20 545 Rotoscoping
545 Options to Customize Shape Drawing
546 Using the RotoShape Node
548 Drawing New Shapes With the RotoShape Node
550 Editing Shapes
556 Copying and Pasting Shapes Between Nodes
557 Animating Shapes
562 Attaching Trackers to Shapes and Points
564 Adjusting Shape Feathering Using the Point Modes
566 Linking Shapes Together
567 Importing and Exporting Shape Data
567 Right-Click Menu on Transform Control
568 Right-Click Menu on Point
568 Viewer Shelf Controls
572 Using the QuickShape Node
Chapter 21 579 Paint
579 About the QuickPaint Node
580 Toggling Between Paint and Edit Mode
580 Paint Tools and Brush Controls
583 Modifying Paint Strokes
585 Animating Strokes
587 Modifying Paint Stroke Parameters
591 QuickPaint Hot Keys
591 QuickPaint Parameters
594 StrokeData Synopsis
Chapter 22 597 Shake-Generated Images
597 Generating Images With Shake
597 Checker
598 Color
599 ColorWheel
600 Grad
601 Ramp
602 Rand
603 RGrad
604 Text
609 Tile
Chapter 23 611 Color Correction
611 Bit Depth, Color Space, and Color Correction
612 Concatenation of Color-Correction Nodes
615 Premultiplied Elements and CG Element Correction
617 Color Correction and the Infinite Workspace
620 Using the Color Picker
625 Using a Color Control Within the Parameters Tab
627 Customizing the Palette and Color Picker Interface
627 Using the Pixel Analyzer
631 The PixelAnalyzer Node
635 Color-Correction Nodes
637 Atomic-Level Functions
646 Utility Correctors
659 Consolidated Color Correctors
674 Other Nodes for Image Analysis
Chapter 24 681 Keying
681 About Keying and Spill Suppression
682 Pulling a Bluescreen or Greenscreen
683 Combining Keyers
687 Blue and Green Spill Suppression
691 Edge Treatment
696 Keying DV Video
702 Keying Functions
Chapter 25 717 Image Tracking, Stabilization, and SmoothCam
717 About Image Tracking Nodes
720 Image Tracking Workflow
728 Strategies for Better Tracking
733 Modifying the Results of a Track
739 Saving Tracks
740 Tracking Nodes
754 The SmoothCam Node
Chapter 26 763 Transformations, Motion Blur, and AutoAlign
763 About Transformations
764 Concatenation of Transformations
766 Inverting Transformations
766 Onscreen Controls
775 Scaling Images and Changing Resolution
778 Creating Motion Blur in Shake
783 The AutoAlign Node
794 The Transform Nodes
Chapter 27 807 Warping and Morphing Images
807 About Warps
807 The Basic Warp Nodes
821 The Warper and Morpher Nodes
830 Creating and Modifying Shapes
845 Using the Warper Node
854 Using the Morpher Node
Chapter 28 861 Filters
861 About Filters
861 Masking Filters
864 The Filter Nodes
Chapter 29 895 Optimizing and Troubleshooting Your Scripts
895 Optimization
899 Problems With Premultiplication
900 Unwanted Gamma Shifts During FileIn and FileOut
902 Avoiding Bad Habits
Chapter 30 905 Installing and Creating Macros
905 How to Install Macros
907 Creating Macros—The Basics
914 Creating Macros—In Depth
Chapter 31 935 Expressions and Scripting
935 What’s in This Chapter
935 Linking Parameters
937 Variables
939 Expressions
941 Reference Tables for Functions, Variables, and Expressions
947 Using Signal Generators Within Expressions
951 Script Manual
Chapter 32 963 The Cookbook
963 Cookbook Summary
963 Coloring Tips
967 Filtering Tips
968 Keying Tips
974 Layering Tips
977 Transform Tips
979 Creating Depth With Fog
980 Text Treatments
984 Installing and Using Cookbook Macros
985 Command-Line Macros
986 Image Macros
989 Color Macros
993 Relief Macro
993 Key Macros
994 Transform Macros
996 Warping With the SpeedBump Macro
996 Utility Macros
1001 Using Environment Variables for Projects
Appendix A 1005 Keyboard Shortcuts and Hot Keys
1005 Keyboard Shortcuts in Shake
Appendix B 1015 The Shake Command-Line Manual
1015 Viewing, Converting, and Writing Images
Index 1031
Preface
Shake 4 Documentation and
Resources
Welcome to the world of Shake 4 compositing. This
chapter covers where to find help, how the keyboard and
mouse work on different platforms, and how to set up
Shake for use with a stylus.
What Is Shake?
Shake is a high-quality, node-based compositing and visual effects application for film
and video. Shake supports most industry-standard graphics formats, and easily
accommodates high-resolution and high bit depth image sequences and QuickTime
files (Mac OS X only).
Among Shake’s many built-in tools are industry-standard keyers for pulling bluescreens
and greenscreens, a complete suite of color-correction tools, features for high-quality
motion retiming and format remastering, motion tracking, smoothing, and stabilization
capabilities, integrated procedural paint tools, and a rotoscoping and masking
environment that provides complete control over animated and still mattes. Shake also
supports an extensive list of third-party plug-ins, and is compatible across both the
Mac OS X and Linux platforms.
Shake is also an image-processing tool that can be used as a utility for media being
passed along a pipeline of many different graphics applications. Large facilities can use
Shake to process and combine image data from several different departments—for
example, taking a project from initial film recording; providing processed images and
tools for use by the 3D animation, digital matte, and roto departments; recombining
the output from all these groups with the original plates for compositing; and
ultimately sending the final result back out for film recording.
Shake’s tools can be accessed in several different ways. While most artists work within
the graphical interface, advanced users can access a command-line tool running from
the Terminal. Likewise, more technically oriented users can perform complex image
processing by creating scripts (the Shake scripting language is similar to C), thereby
using Shake as an extensive image-manipulation library.
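Because a Shake script is plain text with C-like syntax, the contents read like a small program. The following is only an illustrative sketch; the node names (FileIn, Blur, FileOut) are standard Shake nodes, but the exact argument lists shown here are assumptions, not taken from this manual:

```
// Illustrative sketch of a Shake script; argument lists are approximate.
// Read an image sequence, soften it, and write the result.
clip = FileIn("plate.#.cin");
soft = Blur(clip, 15, 15);
FileOut(soft, "out.#.cin");
```

A script like this could be rendered from the Terminal without opening the interface, which is what makes Shake usable as a batch image-processing utility in a pipeline.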
Using the Shake Documentation
There are several components to the documentation accompanying Shake, including
printed user manuals and tutorials, onscreen documentation in PDF and HTML formats,
and contextual help available directly from within the Shake interface.
User Manual
The Shake 4 User Manual is divided into two volumes:
• Volume I—The Interface: Explains the basics of the Shake interface and provides
instructions for working with media, file formats, nodes, and so on.
• Volume II—Compositing: Discusses the specific features Shake provides for image
compositing. Part I of this volume covers such topics as image processing,
rotoscoping, color correction, and so on. Part II delves into Shake’s advanced
functionality, including optimizing, creating macros, and using expressions. This
section also includes “The Cookbook,” a repository of useful Shake tips and
techniques.
Tutorials
If you are new to Shake, you are encouraged to work through the Shake 4 Tutorials.
These interactive lessons provide you with a solid introduction to Shake’s functionality
and workflow.
Onscreen Help
Onscreen help (available to Mac OS X users in the Help menu) provides easy access to
information while you’re working in Shake. Onscreen versions of the Shake 4 User
Manual and Shake 4 Tutorials are available here, along with other documents in PDF
format and links to websites.
To access onscreen help in Mac OS X:
• In Shake, choose an option from the Help menu.
Note: You can also open PDF versions of the user manual and tutorials from the
Shake/doc folder.
Viewing Shake Onscreen Documentation on Linux Systems
To view Shake onscreen documentation on a Linux system, you’ll need to download
and install Adobe Acrobat Reader, then configure the PDF browser path in the Shake
application.
To configure the PDF browser path in Shake:
1 Open the Globals tab.
2 Open the guiControl subtree (click the “+” sign).
The subtree expands.
3 Click the folder icon next to the pdfBrowser Path parameter.
The Choose Application window appears.
4 In the Choose Application window, browse to and select the Adobe Acrobat Reader
application.
To save your PDF browser settings in Shake:
1 Choose File > Save Interface Settings.
The “Save preferences to” window appears.
2 In the “Save preferences to” window, save your settings to a defaultui.h file.
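The saved defaultui.h file is an ordinary text file of preference assignments. As a rough, hypothetical sketch (the exact parameter name for the PDF browser path is assumed here from the guiControls subtree, not confirmed by this manual), a saved setting might look like:

```
// Hypothetical excerpt from $HOME/nreal/settings/defaultui.h
// The "gui.pdfBrowser" name is an assumption, not a documented identifier.
gui.pdfBrowser = "/usr/bin/acroread";
```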
Contextual Help
In addition to the onscreen help, the Shake interface provides immediate contextual
help from within the application. Moving the pointer over most controls in Shake
displays a description of that control’s function in the Info field, located at the
bottom-right side of the Shake interface. For example, moving the pointer over the
Warp tool tab displays the following information in the Info field.
In addition to the information available from the Info field, each node in Shake has a
corresponding HTML-based contextual help page, available via a special control in the
Parameters tab.
To display a node’s contextual help page:
• Load a node’s parameters into the Parameters tab, then click the Help button to the
right of the node name field.
Note: Contextual help pages are opened using your system’s currently configured
default web browser.
Apple Websites
There are a variety of discussion boards, forums, and educational resources related to
Shake on the web.
Shake Websites
The following websites provide general information, updates, and support information
about Shake, as well as the latest news, resources, and training materials.
For more information about Shake, go to:
• http://www.apple.com/shake
To get more information on third-party resources, such as third-party tools and user
groups, go to:
• http://www.apple.com/software/pro/resources/shakeresources.html
A useful listserver, archive, and extensive macro collection are accessible at the
unofficial Shake user community site, HighEnd2D.com:
• http://www.highend2d.com/shake
For more information on the Apple Pro Training Program, go to:
• http://www.apple.com/software/pro/training
Keyboard and Mouse Conventions on Different Platforms
Shake can be used on the Mac OS X and Linux platforms. Functions or commands that
are platform-specific have been documented whenever possible. This section
summarizes the main differences.
• Keyboard: Hot keys or keyboard commands that vary between the Macintosh and
Linux platforms are documented when possible. In most cases, the Command and
Control keys are interchangeable. The Macintosh Delete key located below the F12
key is the equivalent of the Linux Backspace key; the Macintosh Delete key grouped
with the Help, Home, and End keys is the equivalent of the Linux Delete key.
Important: Macintosh users should remember that the Delete key used in Shake is
not the key located below the F12 key but, rather, the one grouped with the Help,
Home, and End keys.
• Mouse: Shake requires the use of a three-button mouse. A three-button mouse
provides quick access to shortcut menus and navigational shortcuts. Shake also
supports the middle scroll wheel of a three-button mouse.
Shake documentation refers to the three mouse buttons as follows:
• Left mouse button: Click
• Middle mouse button: Middle mouse button or middle-click
• Right mouse button: Right-click
Note: This manual uses the term “right-click” to describe how to access shortcut menu
commands.
The following table lists the user manual notation system.
• Hot keys/keyboard commands: To break a tangent handle in the Curve Editor, Control-click the handle.
• Some hot keys/keyboard commands vary depending on the platform. The Mac OS X command appears first, followed by the Linux command, and the two are separated by a forward slash. In general, the Command and Control keys are interchangeable: In the Node View, you can press Control-Option-click / Control-Alt-click to zoom in and out.
• Menu selections are indicated by angle brackets: To open a script, choose File > Open Script.
• File paths and file names appear in italics, and directories and file paths are divided by forward slashes: Temp files are saved in the /var/tmp/ directory.
• Node groups (Tool tabs) appear in the default font, followed by the name of the node in italics, with a dash between the tab and node names: In the Node View, select the Cloud node, and insert a Transform–CornerPin node.
• Command-line functions appear in italics: shake -exec my_script -t 1-240
• Modifications to preferences files appear in italics: Add the following lines to a .h file in your startup directory:
script.cineonTopDown = 1;
script.tiffTopDown = 1;
Using a Stylus
Shake is designed to be used with a graphics tablet and stylus.
To optimize the Shake interface for use with a tablet and stylus:
1 In the guiControls subtree of the Globals tab, enable virtualSliderMode.
2 Set the virtualSliderSpeed parameter to 0.
When virtualSliderMode is enabled, the left mouse button always uses the virtual
sliders when you click a value field (normally, you have to press Control and drag).
With virtualSliderMode on, dragging left or right in a value field adjusts the value
beyond normal slider limits.
Note: The stylus does not allow you to use your desk space the same way as a mouse
does; consequently, you have to enable virtualSliderMode.
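If you prefer to set these options in a startup .h file rather than through the Globals tab, the equivalent preference lines might look like the following sketch. The parameter names are taken from the guiControls subtree described above, but the gui. prefix is an assumption, not confirmed by this manual:

```
// Hypothetical startup .h fragment; the "gui." prefix is assumed.
gui.virtualSliderMode = 1;   // use virtual sliders on left-click in value fields
gui.virtualSliderSpeed = 0;  // recommended value when working with a stylus
```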
Window Navigation Using a Stylus
Shake makes extensive use of the middle-mouse button to facilitate navigation within
each tab of the interface. To navigate and zoom within Shake easily using a stylus, you
should map the middle mouse button to one of the stylus buttons. Once mapped, you
can use that button to pan around within any section of the Shake interface, or
Control-click and drag with that button to zoom into and out of a section of the
interface.
Using Dual-Head Monitors
You can choose View > Spawn Viewer Desktop to create a new Viewer window that
floats above the normal Shake interface. You can then move this Viewer to a second
monitor, clearing up space on the first for node editing operations.
Important: This technique only works when both monitors are driven by the same
graphics card.
Part I: Interface, Setup, and Input
Part I presents information about the Shake graphical
user interface as a whole, with detailed information
about all the major interface components.
Chapter 1 An Overview of the Shake User Interface
Chapter 2 Setting a Script’s Global Parameters
Chapter 3 Adding Media, Retiming, and Remastering
Chapter 4 Using Proxies
Chapter 5 Compatible File Formats and Image Resolutions
Chapter 6 Importing Video and Anamorphic Film
Chapter 7 Using the Node View
Chapter 8 Using the Time View
Chapter 9 Using the Audio Panel
Chapter 10 Parameter Animation and the Curve Editor
Chapter 11 The Flipbook, Monitor Previews, and Color Calibration
Chapter 12 Rendering With the FileOut Node
Chapter 13 Image Caching
Chapter 14 Customizing Shake
1 An Overview of the Shake User Interface
This chapter provides a fast introduction to all aspects of
the Shake graphical user interface. It also provides in-depth information about
navigating the interface and customizing it to suit your needs.
Opening Shake
When you open the Shake interface, a blank Shake script appears. Shake scripts
(otherwise known as project files) are unique in that they’re actually text documents
containing the command-line script representation of the node tree that you assemble
in the interface. You can open Shake scripts in any text editor to examine their
contents, and if you’re a power user, you can make modifications to your composite
right within the text of the script itself (this is only recommended if you’re conversant
with Shake’s scripting language, covered in more detail in Part III of this book).
Most of the time, however, you’ll likely stay within Shake’s graphical interface, which
provides specialized controls for performing a wide variety of compositing tasks (many
of which would be far too unwieldy to manipulate from the command line).
Opening Two Scripts at Once
Shake is designed to have only one script open at a time. Typically, each script is used to
create a single compositing project, with a single frame range and a single node tree.
Although Shake supports multiple independent node trees within the same script, all
trees share the same duration, defined by the timeRange parameter in the Globals tab.
If necessary, it is possible to open two scripts simultaneously into interface windows. In
this case, what you’re really doing is launching two instances of Shake at once. This is
primarily useful if you need to copy information from one script to another.
Important: When you open Shake twice, the first instance of Shake is the only one
that’s able to write to and read from the cache. (For more information on caching in
Shake, see Chapter 13, “Image Caching,” on page 343.)
Overview of the Shake User Interface
The Shake user interface is divided into five main areas: the Viewer, the Tool tabs, the
Parameters/Globals tabs, the Node View/Curve Editor/Color Picker/Audio Panel/Pixel
Analyzer tabs, and the Time Bar at the bottom.
Node View
The Node View is the heart of Shake, and displays the tree of connected nodes that
modify the flow of image data from the top of the tree down to the bottom. Every
function in Shake is represented as a separate node that can be inserted into the node
tree. You use the Node View to modify, select, view, navigate, and organize your
composite.
For more information on the Node View, see Chapter 7, “Using the Node View,” on
page 217.
Viewer Area
The Viewer area is capable of containing one or more Viewers, which display the image
of the currently selected node. You have explicit control over which part of the node
tree is displayed in the Viewer—in fact, the ability to separate the node that’s displayed
in the Viewer from the node being edited in the Parameters tabs is central to working
with Shake. Each Viewer allows you to isolate specific channels from each image. For
example, you can choose to view only the red channel of an image while you make a
color correction, or only the alpha channel when you’re adjusting a key.
The figure callouts identify the main interface areas:
• Viewer area: Displays the image at the selected node in the node tree.
• Node View: One of many tabs that can be displayed here. The Node View displays
the node tree, which defines the flow and processing of image data in your project.
• Time Bar area: Lets you navigate among the frames of your project using the
playback buttons and the playhead.
• Tool tabs: All of the available nodes in Shake are organized into eight tabs. Click a
node’s icon to add that node to the node tree.
• Parameters tabs: The parameters of selected nodes appear in the Parameters1 and 2
tabs. The global parameters of your project appear in the Globals tab.
Images from The Saint provided courtesy of
Framestore CFC and Paramount British Pictures Ltd.
Tool Tabs
The Tool tabs contain groups of nodes, organized by function. Nodes you click in these
tabs are added to the node tree. For example, to add a Keylight node, click the Key tool
tab, and click the Keylight node. The Keylight node then appears in the node tree. If you
right-click a node in any of the Tool tabs, you can choose to insert that node into the
node tree in a variety of different ways, using the shortcut menu.
The Tool tabs area can also display the Curve Editor, Node View, or Time View.
The Time Bar Area
The Time Bar area, at the bottom of the Shake window, displays the currently defined
range of frames. Three fields to the right of the Time Bar show the displayed number of
frames in the Time Bar (not the time range), the current position of the playhead, and
the Increments (Inc) in which the playhead moves. To the right of these fields, the
Viewer playback controls let you step through your composite in different ways.
Command and Help Lines
Underneath the Time Bar area are two additional fields. The Command Line field lets
you enter Shake script commands directly, effectively bypassing the graphical interface.
The Info field provides immediate information about interface controls that you roll the
pointer over.
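For example, a command typed into the Command Line field can set a node parameter directly, using the same C-like assignment syntax as a saved script. The node name and parameter below are hypothetical, chosen only to illustrate the form:

```
// Hypothetical Command Line entry; assumes a node named Blur1
// with an xPixels parameter exists in the current script.
Blur1.xPixels = 20;
```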
Parameters Tabs
The two Parameters tabs can be set to display the parameters within a selected node.
You can load two different sets of parameters into each of the two Parameters tabs. The
Globals tab to the right contains the parameters that affect the behavior of the entire
script (such as proxy use, motionBlur, and various interface controls).
Curve Editor
The Curve Editor is a graph on which you view, create, and modify the animation and
Lookup curves that are associated with parameters in the nodes of your script. In
addition to adding and editing the control points defining a curve’s shape, you can
change a curve’s type, as well as its cycling mode.
For more information on using the Curve Editor, see Chapter 10, “Parameter Animation
and the Curve Editor.”
Color Picker
The Color Picker is a centralized interface that lets you assign colors to node color
parameters by clicking the ColorWheel and luminance bar, clicking swatches from a
color palette, or by defining colors numerically using a variety of color models. You can
also store your own frequently-used color swatches for future use in the Palette.
For more information on how to use the Color Picker, see “Using the Color Picker” on
page 620.
Audio Panel
The Audio Panel lets you load AIFF and WAV audio files for use by your project. Several
different files can be mixed down to create a single file. The audio waveforms can be
displayed inside the Curve Editor. Sound playback can be activated in the Time Bar
playback controls (Mac OS X only).
Note: Because audio playback is handled through the use of Macintosh-specific
QuickTime libraries, you can only hear audio playback on Mac OS X systems. You can
still analyze and visualize audio in Linux.
For more information on the Audio Panel, see Chapter 9, “Using the Audio Panel,” on
page 277.
Pixel Analyzer
The Pixel Analyzer is a tool to find and compare different color values on an image. You
can examine minimum, average, current, or maximum pixel values on a selection (that
you make), or across an entire image.
For more information on how to use the Pixel Analyzer, see “Using the Pixel Analyzer”
on page 627.
Console
The Console tab displays the data that Shake sends to the OS while in operation. It’s a
display-only tab. Two controls at the top of the Console tab let you change the color of
the text, and erase the current contents of the console. The maximum width of
displayed text can be set via the consoleLineLength parameter, in the guiControls
subtree of the Globals tab.
Getting Help in Shake
There are three ways you can get more information about the Shake interface:
• As you pass the pointer (no need to click) over a node or Viewer, information for
the node appears either in the title bar of the Viewer, or in the bottom-right Info
field. The displayed information includes node name, type, resolution, bit depth,
and channels.
• You can also right-click most buttons to display a pop-up menu listing that button’s
options. You can use this to select a function or to find out what a button does.
• The Help menu contains detailed information on how to use Shake, including the
full contents of this user manual, specifics on new features introduced with the
current release, and late-breaking news about last-minute changes and additions
made to Shake.
Making Adjustments to the Shake Window
As you work with Shake, there are several methods for resizing and customizing the
various areas of the Shake interface.
To resize any area of the interface:
• Position the pointer at any border between interface areas and drag to increase or
decrease the size of that area. If you drag an intersection, you can resize multiple areas
at once.
To expand any one area to take up the full screen:
• Position the pointer in the area you want to expand, and press the Space bar.
• Press the Space bar again to shrink the area back to its original size.
Note: Use of the Space bar is especially helpful in the Curve Editor, when you are
working with high-resolution elements or large scripts.
To temporarily hide an area, do one of the following:
• Drag the top border of the Tool tab or Parameters tab areas down to the bottom.
• Drag the bottom border of the Viewer, Node View, or any other area up to the top.
That area remains hidden until you drag its top or bottom border back out again.
Before collapsing Tool tabs / After collapsing Tool tabs
Navigating in the Viewer, Node View, and Curve Editor
The Viewer, Node View, and Curve Editor are all capable of containing much more
information than can be displayed at one time. You can pan and zoom around within
each of these areas in order to focus on the elements you want to adjust in greater
detail.
Important: Shake requires the use of a three-button mouse—the middle mouse
button is key to navigating the Shake interface. If, in Mac OS X, you map Exposé
functionality to the middle mouse button, this will interfere with navigation in Shake,
and you should disable this functionality.
To pan across the contents of an area, do one of the following:
• Press the middle mouse button and drag.
• Option-click (Mac OS X) or Alt-click (Linux) and drag.
To zoom into or out of an area, do one of the following:
• Hold down the Control key and drag while holding down the middle mouse button.
• Control-Option-drag or Control-Alt-drag.
• Use the + or - key to zoom in or out based on the position of the pointer.
To reset an area to 1:1 viewing, do one of the following:
• In the Viewer, click the Home button in the Viewer shelf.
• Move the pointer to an interface area, then press Home.
To fit the contents to the available space within an area:
• In the Viewer, click the Fit Image to Viewer button in the Viewer shelf.
• Move the pointer to an interface area, and press F.
Saving Favorite Views
If you find yourself panning back and forth within a particular area to the same regions,
it might be time to create a Favorite View within that area.
• In the Node View, you could save several views in your node tree where you’ll be
making frequent adjustments.
• If you’re doing paint work on a zoomed-in image in the Viewer, you can save the
position and zoom level of several different regions of the image.
• In the Curve Editor, you can save several different pan, zoom-level, and
displayed-curve collections that you need to switch among as you adjust the animation
of different nodes in your project.
• In the Parameters tab, you can save the parameters being tweaked, as well as the
node being displayed in the Viewer.
Once you’ve saved one or more Favorite Views in each interface area, you can instantly
recall the position, zoom level, and state of that area by recalling the Favorite View that
you saved. You can save up to five Favorite Views.
To define a Favorite View:
1 Pan to a position in an area that contains the region you want to save as a Favorite
View. If necessary, adjust the zoom level to encompass the area that you want to
include.
2 Depending on the area you’re adjusting, you can save additional state information
particular to that area. Make additional adjustments as necessary so that you can recall
the desired project elements:
• In the Node View, you can save the state of the nodes that are currently loaded into
the Viewer and Parameters tabs.
• In the Viewer, you can save the node that’s currently being viewed.
• In the Curve Editor, you can save the curves that are currently loaded and displayed.
• In the Parameters tab, you can save the parameters that are being tweaked, as well
as the node displayed in the Viewer.
3 To save a Favorite View, move the pointer into that area and do one of the following:
• Right-click anywhere within the area, then choose Favorite Views > View N > Save
from the shortcut menu (where N is one of the five Favorite Views you can save).
• Press Shift-F1-5, where F1, F2, F3, F4, and F5 correspond to each of the Favorite
Views.
Restoring Favorite Views
Once you’ve defined one or more Favorite Views, you can restore them in one of two
ways. Simply restoring the framing results in the current contents of that area being
panned and zoomed to the saved position. Restoring the framing and state, on the
other hand, results in the restoration of additional state information that was adjusted
in step 2.
To restore the framing of a Favorite View, do one of the following:
• Right-click in the Viewer, Node View, or Curve Editor, then choose Favorite Views >
View N > Restore Framing from the shortcut menu (where N is one of the five
Favorite Views you can save).
• Press F1-5, where F1, F2, F3, F4, and F5 correspond to each of the Favorite Views.
That area is set to the originally saved position and zoom level.
To restore the framing and state of a Favorite View, do one of the following:
• Right-click in the Viewer, Node View, or Curve Editor, then choose Favorite Views >
View N > Restore Framing & State from the shortcut menu (where N is one of the five
Favorite Views you can save).
• Press Option-F1-5 or Alt-F1-5, where F1, F2, F3, F4, and F5 correspond to each of the
Favorite Views.
Depending on the area, the originally saved position and zoom level are recalled, as
well as the following state information:
• In the Node View, the node or nodes that were loaded into the Viewer and
Parameters tabs when you saved the Favorite View
• In the Viewer, the node that was viewed when you saved the Favorite View
• In the Curve Editor, the curves that were loaded and displayed when you saved the
Favorite View
• In the Parameters tab, the parameters that were being tweaked, as well as the node
that was displayed, when you saved the Favorite View
Working With Tabs and the Tweaker
Each area of the Shake window has several tabs that reveal more of the interface. These
tabs can also be customized. For example:
To move a tab to another area:
• Select a tab using the middle mouse button or Option-click / Alt-click, and then drag
the tab into a new window pane.
To detach a tab and use it as a floating window:
• Shift-middle-click or Shift-Option-click / Shift-Alt-click the tab.
A good example of this last operation is to detach a Parameters tab, then press the
Space bar while the pointer is positioned over the Viewer. You can then tune your
image in full-screen mode.
Using the Tweaker
The parameters of individual nodes can be opened into a floating window, called the
Tweaker.
To open a floating Tweaker window:
• Select the node you want to tune and press Control-T. A movable, floating Tweaker
window for the node appears.
Note: To save your window settings for later use, choose File > Save Interface Settings.
Menus and the Title Bar
This section discusses the Shake title bar and the Shake, File, Edit, Tools, Viewers,
Render, and Help menus.
Title Bar Information
The title bar of the full Shake window displays the current version of Shake, the name
of the currently open script, and the current proxy resolution in use.
OS Window Functions
Shake responds to OS windowing, so you can resize the entire window, expand it to
full screen, or stow it as an icon by clicking the standard buttons in the upper-right
corner of the Shake Viewer title bar.
Shake Menu (Mac OS X Only)
The following table shows the Shake menu options. The Shake menu appears only in
the Macintosh version of Shake.
File Menu
The following table shows the File menu options.
Menu Option Description
About Shake Displays the Shake version number and copyright information.
Services Services provide a quick way to perform tasks with several
applications.
Hide Shake (Command-H) Hides Shake. To show Shake again, click the Shake icon in the Dock.
Hide Others (Option-Command-H)
Hides all running applications other than Shake. To show the
applications again, choose Shake > Show All.
Quit Shake Quits the Shake application.
Menu Option Description
New Script
(Command-N or Control-N)
Deletes all nodes currently in the Node View. (You can also press
Command-A or Control-A in the Node View to select all nodes and
then press Del.)
Open Script
(Command-O or Control-O)
Opens the Load Script window. The script selected in the Browser
replaces what is already in the Node View. You can also use the
Load button in the title bar.
Import Photoshop File Imports a Photoshop file. If the Photoshop file contains multiple
layers, you can import the layers as separate FileIn nodes that are
fed into a MultiLayer node, or as a single, composited image by
using a normal FileIn node.
Reload Script Reloads the script listed in the title bar.
Add Script Opens the Load Script window. Adds a second set of nodes to
those currently in the Node View. The added nodes are renamed if
a naming conflict arises. (For example, FileIn1 becomes FileIn2 if
FileIn1 already exists.) Global settings are taken from the added
script, as is the new script name.
Save Script
(Command-S or Control-S)
Saves the script without prompting you for a script name (if you
have already saved). You can also use the Save button in the title
bar.
Save Script As (Shift-Command-S or Shift-Control-S)
Opens the Save Script window. Enter the new script name, and
click OK to save the script.
Save Selection As Script Saves the currently selected nodes in the Node View as a separate
script.
Recover Script (Shift-Command-O or Shift-Control-O)
Loads the last autoSave script and is usually done when the user
has forgotten to save a script and quits Shake, or when Shake has
unexpectedly quit. The script is found under $HOME/nreal/
autoSave. (The $HOME directory is your personal Home directory,
for example, the /Users/john directory.)
If you have environment variables set, you can launch Shake on the
command line with the same option using -recover:
shake -recover
For more information on environment variables, see Chapter 14,
“Customizing Shake.”
Load Interface Settings Opens the Load Preferences From window. Select an interface
settings file from disk, and click OK to load the file.
Save Interface Settings Opens the Save Preferences To window. This lets you save the
various default Shake settings, including your window layout to a
file in your $HOME/nreal/settings file.
If you call it defaultui.h, it is automatically read next time you
launch Shake. You can save the settings file anywhere, but it is not
read automatically unless the file is in the settings directory.
Flush Cache When you choose Flush Cache, all appropriate images are copied
from the memory cache to the disk cache (depending on how the
cacheMode parameter is set), but the memory cache is not cleared.
This command is similar to what Shake does when you quit (the
delay that occurs when you quit is Shake flushing the memory
cache to disk).
Purge Memory Cache Similar to the Flush Cache command, but the memory cache is
cleared afterwards. This is useful if most of your RAM is filled with
cache data, and you want to free it up to create and play a Flipbook
without needing to exit Shake first in order to clear the memory
cache.
The cacheMode parameter in the Globals tab controls whether or
not images in the cache are used (regardless of whether they are
coming from the disk or memory).
Recent Scripts Lists the last five scripts you worked on. Choosing a script from this
list opens it within Shake.
Exit (Linux only) Exits the program. You can also use the standard OS exit buttons in
the upper corner of the interface.
Edit Menu
The following table shows the Edit menu options.
Tools Menu
The Tools menu provides a menu listing for each of the nodes in the Tool tabs (for
example, Image, Color, Filter, and so on). You can also right-click a tab to display the
tools list. More information about each of these nodes is available in Part II of this
manual.
Menu Option Description
Undo (Command-Z or Control-Z) Undoes previous commands, with up to 100 levels of undo. Layout,
viewing, and parameter changes are saved in the Undo list. You can
also click the Undo/Redo button. You can change the number of
undo/redo levels in your ui.h file; see “Menus and the Title Bar” on
page 31 for more information.
Redo (Command-Y or Control-Y) Redoes an undone command. If you do an Undo and you have not
changed anything since, click Redo to go back to your previous
settings.
Find Nodes (Command-F or
Control-F)
Opens the Select Nodes by Name window that allows you to
dynamically select nodes that match your criteria in the search
string.
• Select by name. Nodes that match the search string are
immediately activated. For example, if you enter f, FileIn1 and
Fade1 are selected. If you enter fi, just FileIn1 is selected.
• Select by type. Select nodes by type. For example, enter Move,
and all Move2D and Move3D nodes are selected.
• Select by expression. Allows you to enter an expression. For
example, to find all nodes with an angle parameter greater than
180, enter:
angle>180
• Match case. Sets case sensitivity.
Viewers Menu
The following table shows the Viewers menu options.
Render Menu
The following table shows the Render menu options. For more information on
rendering, see Chapter 12, “Rendering With the FileOut Node,” on page 333.
Script Management
The following section discusses the buttons in the upper-right corner of the Shake
interface, which let you Load and Save scripts, undo and redo changes you’ve made,
and control when and how the Viewer updates the images generated by your script.
Menu Option Description
New Viewer Creates a new Viewer in the Viewer area, and automatically
stretches it to fill the Viewer area. While in a Viewer, you can also
right-click and select New Viewer, or press N.
Spawn Viewer Desktop Launches a floating Viewer window that can be moved
independently of the interface. The Viewer Desktop is ideal for
dual-monitor setups.
Menu Option Description
Render Flipbook Renders a Flipbook of the current Viewer. Opens the Flipbook
Render Parameters window, which allows you to override the
Global parameters (if necessary). To cancel the render, press Esc
(Escape) when in the Flipbook window.
Render Disk Flipbook Mac OS X only. Launches a disk-based Flipbook using QuickTime.
This has several advantages over normal Flipbooks. It allows for
extremely long clips, allows you to attach audio (loaded with the
Audio Panel in the main interface), and lets you write out the
sequence as a QuickTime file after viewing, bypassing the need to
render the sequence again. For more information, see “Creating a
Disk-Based Flipbook” on page 326.
Render FileOut Nodes Renders FileOut nodes in the Node View. In the Node View, press F
to frame all active nodes. You have the option to render only the
active FileOut nodes, or all FileOut nodes.
Render Cache Nodes Immediately caches sections of the node tree where Cache nodes
have been inserted. This command lets you cache all Cache nodes
in the Node View over a specific duration. For more information on
using Cache nodes, see Chapter 13, “Image Caching,” on page 343.
Render Proxies Renders your proxy files for your FileIn nodes, and leaves your
FileOuts untouched. For more information on proxies, see “Using
Proxies” on page 137.
To load or save a Shake script:
m Click Load or Save to open the Load Script window, or to save the current script with
the same name.
You can also press Command-S or Control-S to save the script quickly. If the script is not
yet named, the Save Script window opens.
To save a script with a new name:
m Choose File > Save Script As, and enter a new file name in the Save Script window.
To reload the same script:
m Choose File > Reload Script.
The script that appears in the Shake title bar is reloaded.
To undo or redo, do one of the following:
m Click the Undo or Redo button.
m Press Command-Z or Control-Z to undo, or press Command-Y or Control-Y to redo.
Customizing AutoSave
A backup script is stored automatically every 60 seconds in your $HOME/nreal/
autoSave directory. The last saved script can be accessed with the File > Recover
menu command (shake -recover in the Terminal), or browsed to under the Directories
pull-down menu in the File Browser.
The backup time interval can be changed in your ui.h files in include/startup/ui/
myPreferenceFile.h. Enter the following line (with the desired time interval, in seconds,
in place of the “60”):
script.autoSaveDelay = 60;
Four other autosave behaviors can be customized within a .h preference file in the
include/startup directory:
• script.autoSaveDirectory: Setting a directory with this declaration overrides the
default behavior of placing autosave scripts in ~/nreal/autosave/.
• script.autoSavePrefix: Defines text to be prepended to autosave script names.
• script.autoSaveNumSaves: Sets the total number of autosave scripts to be saved.
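Taken together, these declarations could live in a single startup preference file, for example include/startup/ui/myPreferenceFile.h. The directory and values below are only illustrative assumptions, not Shake defaults:

```
// Hypothetical autosave preferences (example values only)
script.autoSaveDelay = 120;                      // back up every 120 seconds
script.autoSaveDirectory = "/shake/autosaves/";  // overrides ~/nreal/autoSave/
script.autoSavePrefix = "backup_";               // prepended to autosave names
script.autoSaveNumSaves = 10;                    // keep the last 10 autosaves
```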
By default, there are 100 steps of undo and redo in Shake.
Update
The Update button controls what is updated in the Viewer, and when. The Update
button has three modes:
• Always: Updates the Viewer with every change that’s made to a parameter, and
every time you move the playhead in the Time Bar.
• Manual: The scene is not updated until you do one of the following:
• Click Update.
• Click the left side of a node in the Node View.
• Press the U key.
• Release: Waits until you finish adjusting a parameter, or moving the playhead in the
Time Bar, before updating the image in the Viewer.
By default, clicking Manual once toggles this setting to Always. Click and hold this
control to see the pop-up menu, from which you can choose Always, Manual, or Release.
Proxy
The proxy button, labeled “Base,” allows you to quickly get to one of your four proxy
settings.
Click Base once to toggle to P1. Click and hold the Base button for other proxy options.
For more information on proxies, see Chapter 4, “Using Proxies,” on page 137.
Changing the Possible Levels of Undo
To change the level of undos, enter the following line (with your desired number of
undos in place of the “100”) in one of your ui.h files:
gui.numUndoLevels = 100;
For more information, see Chapter 30, “Installing and Creating Macros,” on page 905.
The File Browser
The File Browser is an interactive browser that serves many purposes. It lets you
navigate the local volumes (both fixed and removable media) on your computer, or
remote volumes over your network. You use it to open or save scripts, load images via a
FileIn node, and to load and save lookup files and expressions.
Using the File Browser, image sequences can be listed either as a long list of individual
files or as a single object. You can bookmark favorite directories. You can also use it to
create and delete directories, and delete files directly in the Browser.
Opening the File Browser
There are several operations that open the File Browser.
To open the File Browser, do one of the following:
m Create a FileIn or a FileOut node.
m Click the Load Script button, located in the upper-right section of the Shake interface.
m Click the Save Script button, located in the upper-right section of the Shake interface.
m To open the File Browser from an existing FileIn or FileOut node (for example, if the
source media becomes disconnected), click the folder icon next to the file path in the
Parameters tab.
The Browser opens. If you’re using Mac OS X, this window appears very different from
the standard file navigation sheet, but it has much of the same functionality, and
includes additional options that are particularly useful to Shake projects.
Navigating in the File Browser
There are several ways you can navigate to the directory you need using the File
Browser:
Using the File List
A list of directories and files appears in the center of the File Browser. You can
double-click any directory in this list to make it the current directory. The following list
of controls lets you navigate the volumes accessible to your computer.
Icon, Button, or Key Description
Indicates a folder.
Indicates a drive.
Takes you to the last viewed directory.
Takes you up one directory. You can also press Delete (Macintosh)
or Backspace (Linux).
Using the Pull-Down Menu at the Top
The pull-down menu reveals the entire directory tree, including your root directory, the
directory you launched Shake from, the $HOME directory, the Shake installation, and
any favorite directories you have entered. This menu also automatically lists recently
visited directories.
Using a File Path
You can also type an entire path, with or without the file name itself, into the File Name
field at the bottom of the Browser.
You can format absolute file paths in any of several styles:
• /my_proj/my_directory/my_file.iff
• /d4/my_proj/my_directory/my_file.iff
• //MyMachine/D/my_proj/my_directory/my_file.iff
Enable/Disable Local File Paths
The Relative Path control, to the left of the Add to Favorites control in the File Browser,
gives you the option to enter a relative file path into the File Name field.
Relative file paths can take one of two forms:
• ./myDirectory/myFile/
• ../myDirectory/myFile/
Adding Directories to the Favorites List
If there are one or more directories with content you frequently need to access, you can
add them to the Favorites list. The Favorites list is a customizable list of directories that
you can add to at any time. You can explicitly add directories to the list in two ways:
Note: As of Shake 4, entries you add to the Favorites list are permanent.
To add an entry to the Favorites list:
1 Open the File Browser.
Up Arrow/Down Arrow key Moves up and down in the list.
Any letter key Once you have clicked in the file listings, press a letter key on the
keyboard to jump to the next occurrence of a file or directory that
starts with that letter.
2 Click the Bookmark button.
The currently open directory is added to the Favorites list. All favorite directory paths
you add are saved in the favoritePaths.h file, located in the username/nreal/settings/
directory. By default, the favoritePaths.h file contains:
• Your home directory
• The nreal directory
• The Shake application directory
When you add more directories to the Favorites, they’re automatically appended to the
code in the favoritePaths.h file. For example, if you add the following directory to the
Favorites:
/Users/MyAccount/Media/
The resulting favoritePaths.h file looks like this:
// User Interface settings
SetKey(
    "globals.fileBrowser.favorites",
    "/;$HOME;/Users/MyAccount/nreal/;/Applications/shake-v4.00.0201/;/Applications/shake-v4.00.0201/doc/pix;/Users/MyAccount/Media/;"
);
Note that each directory path is separated by a semicolon. MyAccount is the name of
the user directory.
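Since each favorite is one semicolon-delimited entry in that single string, a few lines of Python show how the list decomposes (the paths below are examples; this is purely illustrative, as Shake parses the setting internally):

```python
# Split the semicolon-separated favorites value into individual directories.
# Empty strings produced by the trailing semicolon are discarded.
favorites = "/;$HOME;/Applications/shake-v4.00.0201/;/Users/MyAccount/Media/;"
paths = [p for p in favorites.split(";") if p]
print(paths)  # ['/', '$HOME', '/Applications/shake-v4.00.0201/', '/Users/MyAccount/Media/']
```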
To remove directories from the Favorites list:
1 Open the favoritePaths.h file (located in the /nreal/settings/ directory).
2 Delete the paths you want to remove from the Favorites list, and save the file.
You can also instruct Shake to look in certain directories when you start the software,
using the following ui.h settings. Each listing is for a type of file—images, scripts,
expressions, and so on. Note the slash at the end of the path:
gui.fileBrowser.lastImageDir = "/Documents/my_directory/";
gui.fileBrowser.lastScriptDir = "$MYPROJ/shakeScripts/";
gui.fileBrowser.lastExprDir = "//Server/shakeExpressions/";
gui.fileBrowser.lastTrackerDir = "$MYPROJ/tracks/";
gui.fileBrowser.lastAnyDir = "/";
For more information on a ui.h file, see Chapter 14, “Customizing Shake,” on page 355.
Selecting Files
If you’re selecting one or more files for a FileIn operation, you can select them in
several ways.
To select single files, do one of the following:
m Double-click the file.
m Press the Up Arrow or Down Arrow, then click OK (or press Return).
m Press the first letter of the file you want. Press it again to jump to the next file that
starts with that letter. Click OK (or press Return).
To select multiple files in the same directory, do one of the following:
m Drag to select a range of files, then click OK.
m To select multiple individual files, press Shift and select the files.
To select multiple images in different directories, do one of the following:
m Click Next in the Browser to load the current image or images and keep the Browser
open so you can continue to add files. When you have reached the last file, click OK. At
any time during this process, you can check the Node View to examine the FileIn nodes.
m Select one or more files, and press the Space bar to add them to an invisible queue of
files to be added to your script, without closing the File Browser. Once you click OK,
every file in this invisible queue is added to the currently open script.
Viewing Controls
There are several tools to help you identify files in the Browser.
The File List
Click the title of a column to arrange the list according to that type of information. For
example, click Modified to list files by creation date. Click Modified again to reverse the
order of information.
Toggle Buttons
The following buttons also change what is listed in the Browser:
Button Description
Short Listing Lists only file names, type, and size.
Sequence Listing Toggles between listing an image sequence as a single item and
listing its frames individually. To read in an entire sequence, ensure
that Sequence Listing is enabled. In the file list, one icon indicates a
single file, and another indicates an image sequence.
Images Only Lists only recognized image types.
Show Exact Sizes Shows the exact file size in kilobytes, rather than rounded off in
megabytes.
Show Full Path Lists the entire path of the selected file.
Filter Filters the file list. Use * and ? as your wildcards:
• * matches any set of characters, of any length:
*.cin lists a.cin, image.cin, image.0001.cin, ...
*.cin.* lists a.cin.0001, image.cin.hr
*.cin* lists a.cin, image.0001.cin, image.cin.0001
image.*.tga lists image.1.tga, image.10.tga, image.0100.tga
• ? matches any single character in that position:
?.cin lists a.cin, 1.cin
??.iff lists ab.iff, 01.iff
a.??? lists a.cin, a.iff, a.tga
image.????.iff lists image.0001.iff, image.9999.iff
image.???1.iff lists image.0001.iff, image.0011.iff, image.1111.iff
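These wildcard rules follow standard shell-style globbing, so you can sanity-check a Filter pattern with Python's fnmatch module (the file names below are invented for illustration):

```python
from fnmatch import fnmatch

# Apply two of the patterns from the wildcard table to a sample file list.
files = ["a.cin", "image.cin", "image.0001.cin", "image.0001.iff", "image.1.tga"]
star_matches = [f for f in files if fnmatch(f, "*.cin")]
qmark_matches = [f for f in files if fnmatch(f, "image.????.iff")]
print(star_matches)   # ['a.cin', 'image.cin', 'image.0001.cin']
print(qmark_matches)  # ['image.0001.iff']
```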
Updating the File Browser
Click the Update button to refresh the listing of the current directory in case files have
been added or deleted while the File Browser has been open.
Specifying Media Placement
Three buttons let you set the first frame at which new media is placed when it’s read
into your Shake script. This affects the timing of the media inside of your script, and can
be seen in the Time View tab.
• frame 1: The first frame of media is placed at the first frame of your script.
• seq. start: The first frame of media is placed at the corresponding frame of the script,
based on its frame number. If you import frames 9-50 of an image sequence,
the first frame of media appears at frame 9 of the Time View.
• current frame: The first frame of media is placed at the current position of the
playhead.
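The three placement choices can be sketched as a small function; the mode strings mirror the button labels, and everything here is illustrative rather than a Shake API:

```python
def placement_frame(mode, media_first_frame, script_first_frame, playhead):
    """Return the script frame where the first frame of new media lands."""
    if mode == "frame 1":
        return script_first_frame   # media starts at the script's first frame
    if mode == "seq. start":
        return media_first_frame    # frames 9-50 appear starting at frame 9
    if mode == "current frame":
        return playhead             # media starts under the playhead
    raise ValueError("unknown placement mode: " + mode)

print(placement_frame("seq. start", 9, 1, 25))  # 9
```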
Additional Controls for Image Output
When you’re writing out a file using the FileOut node, you also use the File Browser to
select a directory and enter the file name for the rendered output. For more
information about exporting media from Shake, see Chapter 12, “Rendering With the
FileOut Node,” on page 333.
File Management Controls
There are additional file management controls that are primarily useful when exporting
(“writing out”) media.
Naming Files for Output
If you are writing out an image sequence, be sure to insert a # or an @ sign where you
want the frame number to go in the name. When you are finished, click OK to validate.
Button Description
Creates a new directory using the current path.
Deletes currently selected files and directories.
The following is a table of examples.
Using and Customizing Viewers
Shake displays the currently selected image for your project in the Viewer, located in
the Viewer workspace at the upper-left quadrant of the interface. Additional controls in
the Viewer shelf at the bottom of the Viewer workspace let you customize how the
image is displayed. For example, you can view each channel of an image individually.
You can also change the Gamma of the Viewer to simulate how the image might look
on other output devices. Other tools such as the Histogram and Plot Scanline Viewer
scripts can help you analyze your image.
Some nodes feature onscreen controls (which also appear in the Viewer) that let you
make adjustments to an image. For example, the Move2D and Rotate nodes display
transformation controls you can drag to manipulate the image directly in the Viewer.
Other nodes make additional tools available in the Viewer shelf. For example, the
RotoShape node places drawing and editing tools in the Viewer shelf that let you create
and manipulate shapes directly in the Viewer.
Files Shake Notation
image.0001.cin, image.0002.cin image.#.cin
image.1.tif, image.2.tif image.@.tif
image.iff.0001, image.iff.0002 image.iff.#
image1.tga, image2.tga image@.tga
image.001.tga, image.002.tga image.@@@.tga
image.01, image.02 image.@@
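The # and @ notation can be mimicked with a short helper: # pads the frame number to four digits, while a run of @ signs pads to the length of the run (a single @ is unpadded). This is an illustrative sketch, not Shake code:

```python
import re

def frame_name(pattern, frame):
    """Expand Shake-style frame notation in a file name pattern."""
    def repl(match):
        token = match.group(0)
        width = 4 if token == "#" else len(token)  # '#' means 4-digit padding
        return str(frame).zfill(width)
    return re.sub(r"#|@+", repl, pattern, count=1)

print(frame_name("image.#.cin", 1))    # image.0001.cin
print(frame_name("image.@.tif", 2))    # image.2.tif
print(frame_name("image.@@@.tga", 1))  # image.001.tga
```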
Image from The Saint provided courtesy of Framestore CFC and
Paramount British Pictures Ltd.
Using Multiple Viewers
You can create as many Viewers within the Viewer workspace as you need. Each
additional Viewer you create appears within the Viewer workspace area, and each
Viewer can be set to independently display any image channel from any node in the
current node tree. Each Viewer has its own Viewer shelf with its own controls, and each
Viewer you create has two buffers that you can use to compare images.
Note: Because each Viewer has two buffers, using multiple Viewers at once can
consume a lot of RAM, depending on the resolution of the images you’re working with.
When additional Viewers are displayed, each Viewer is updated dynamically. You can
use this capability to simultaneously view the results of a downstream node in one
Viewer, while also seeing the image from an upstream node in another Viewer. A good
example of this is when you’re refining a key in one Viewer, while watching the effect
this has on the finished composite in another.
To create additional Viewers:
1 Position the pointer anywhere within the Viewer area.
2 Do one of the following:
• Press N.
• Right-click, then choose New Viewer from the shortcut menu.
• Choose Viewers > New Viewer to create a new Viewer.
A new Viewer named “Viewer2” is created above Viewer1 in the Viewer workspace.
Additional Viewers are numbered in the order in which they’re created.
Note: Each Viewer you create uses additional memory, so you may want to close
higher-resolution Viewers when rendering. Also, more open Viewers can slow the
display rate due to the increased processing demands of updating each Viewer
simultaneously.
To close a Viewer:
m Click the Close Window button in the selected Viewer’s title bar.
When you close a Viewer, you release whatever RAM that Viewer was utilizing.
To bring a Viewer to the foreground:
m Click anywhere within that Viewer.
When a Viewer is selected, its title bar is highlighted.
To display the image from another node in a Viewer:
1 Double-click anywhere within the Viewer you want to use.
Its title bar becomes highlighted to show that it is selected.
2 Do one of the following:
• Double-click a node in the Node View to display its image in the currently selected
Viewer and to display its parameters in the Parameters1 tab.
• Click the left side of a node to display its image in the currently selected Viewer
without loading its parameters into the Parameters tab.
When a node is loaded into the Viewer, an indicator appears on the left side of the
node. Additionally, a number and a letter appear below it, specifying which Viewer and
compare buffer that node’s image occupies.
To collapse a Viewer to a minimized state:
m Click the Iconify Viewer button in the Viewer title bar to minimize its size.
To expand a Viewer from a minimized state:
m Click the Iconify Viewer button of the minimized Viewer to restore it to its original size.
Minimized Viewers are only as large as the title and the upper-right window controls.
To resize a Viewer, do one of the following:
m Drag a Viewer’s left, right, or bottom side.
m Drag a Viewer’s bottom-right corner to resize its width and height simultaneously.
To resize a Viewer to fit the image within it:
m Click the Fit Viewer to Image button in the Viewer title bar.
To lock a Viewer to the full size of the Viewer workspace:
m Click the Grip to Desktop button in the Viewer title bar.
The full-size Viewer now obscures any other Viewers underneath it, and resizes itself to
match the total size of the Viewer workspace. To see other Viewers underneath, click
the Grip to Desktop button again to release the Viewer, then resize it or move it to
make room in the Viewer workspace for the other Viewers.
Note: By default, the first Viewer that was created along with your project is locked to
the full size of the Viewer workspace.
To expand the Viewer workspace to the full size of the Shake window:
m Press the Space bar.
Afterwards, you can press the Space bar again to reset the Viewer workspace to its
original size.
To reposition all Viewers within the Viewer workspace at once:
m Click the middle mouse button anywhere in the Viewer workspace outside of any of
the Viewers, then drag to move all of the Viewers around the workspace.
To create a separate Viewer workspace for use on a second monitor:
m Choose Viewers > Spawn Viewer Desktop.
The second monitor must be run from the same graphics card as the primary monitor.
To help prevent accidentally rendering an enormous image (for example, if you enter
200 into your zoom parameter instead of 20), the Viewer’s resolution is limited to 4K.
This limit is configurable. The Viewer resolution has no effect on rendered images—it
only crops the view in the Shake interface to the set resolution.
Looking at Images in a Viewer
To load a node into the current Viewer, click the left side of the node. The green Viewer
indicator appears. Double-click the node to simultaneously load the node’s parameters
in the Parameters tab.
In the following image, the Grad node is loaded into the Viewer.
Warning: If you get strange Viewer behavior, delete the Viewer (right-click, then
choose Delete Viewer from the shortcut menu), and create a new Viewer.
Viewer indicator: Click once to display the node in the Viewer; double-click to also load
the node’s parameters.
The information (node name, channels, bit depth, size, and so on) for the selected node
appears in the Viewer title bar. When the Grad node is loaded into the Viewer, the
following appears in the Viewer title bar.
Note: The channels displayed in the Viewer are the non-zero channels. Non-zero
channels are not the same as active channels. For example, a Color node set to black
(black and white values are zero) displays the alpha channel (A) in the Viewer title bar:
When the Color node is adjusted toward gray (the black and white values are no longer
zero), the black, white, and alpha channels (BWA) are displayed:
Every Viewer has the built-in capability to analyze color.
To quickly analyze colors in the Viewer:
m Click and scrub with the mouse in the Viewer to display the X, Y, R, G, B, and alpha
values in the Viewer title bar.
These values are also displayed in the Info field in the bottom-right corner of the
interface. You can also use the Pixel Analyzer and Color Picker windows to analyze this
data with more extensive options.
Note: To display the values in the Terminal window that launched Shake, press Control
and scrub in the Viewer.
Suspending Rendering and Viewer Redraw
There are two ways you can suspend rendering of the node tree. This can help if you’re
making adjustments to a render-intensive node and you don’t want to wait for the
image to render and display in the Viewer every time you make an adjustment.
To immediately stop rendering at any time:
m Press Esc.
Rendering is suspended until another operation occurs that requires rendering.
To suspend rendering and redraw in the Viewer altogether:
m Change the Viewer’s update mode to Update Off via the Update Mode control in the
Viewer shelf.
Rendering is suspended until the Viewer’s update mode is changed to Update On or
Progress.
Controls in the Viewer Shelf
The Viewer shelf has many controls that let you customize how images are displayed in
the Viewer. These controls can be used directly, but many Viewer controls also
correspond to shortcut menu items and keyboard shortcuts with the same function.
To use the two different click-and-hold button behaviors:
m Click the View Channel button in the Viewer to toggle between the RGB and the alpha
views.
m Click and hold the View Channel button to open a pop-up menu from which you can
choose a specific channel view.
You can override the default channel display progression when the View Channel
button is clicked. For example, clicking the button in RGB view displays the alpha view.
When you click again, the RGB view is displayed.
To go from RGB to red to alpha channels:
1 Command-click or Control-click and hold the View Channel button, then select the Red
Channel button.
2 Command-click or Control-click and hold the View Channel button, then select the
Alpha Channel button.
When you click the View Channel button, you now toggle through RGB, red, alpha, and
back to RGB.
3 To save this behavior, choose File > Save Interface Settings.
Note: You can cycle through some Viewer functions using number keys. Press 2 to
cycle forward through the channel views, and press Shift-2 to toggle backward through
the channel views. Press 1 to toggle between the A and B compare buffers.
The following table shows the Viewer buttons, the keyboard or hot key shortcuts, and
describes the button functions.
Button Shortcut Description
Pointer Drag the pointer in the Viewer to display the X
and Y position values, and the RGBA color
values in the Viewer title bar. The values are
also displayed in the Info field.
Iconify Viewer Stows the current Viewer.
Fit Viewer to
Image
Control-F Fits the Viewer to the image.
Grip to
Desktop
Shift-F Fits the frame to the Viewer workspace. When
enabled, the Viewer “sticks” to the workspace.
You can then resize the workspace and the
Viewer expands to match.
Close Window Right-click menu
> Delete
Deletes the Viewer. (A good strategy if a Viewer
is misbehaving. Press N to create a new
Viewer.)
Buffer Tabs 1 You can have two different buffers in a Viewer
to compare images. See “Using the Compare
Buffers” on page 57.
View Channel R, G, B, A, C;
2/Shift-2 cycle;
right-click menu
Toggles through the color channels: RGB
(color), red, green, blue, alpha.
Update Mode–
On
Right-click menu;
3/Shift-3
Update mode that displays a rendered image
only after it is finished rendering. This is for
relatively fast renders.
Update Mode–
Progress
Right-click menu;
3/Shift-3
Scrolling update mode that displays each line
(starting from the bottom) as the image
renders. Used for slower renders.
Update Mode–
Off
Right-click menu;
3/Shift-3
The Viewer does not update. Use this to load
an image into a Viewer, then switch to the
second buffer (see below) and do some
changes. You can then compare it with the
original.
Incremental
Update
Updates only the changing portion of the image.
For example, if Incremental Update is disabled
and you composite a 10 x 10-pixel element on a
6K plate and pan the element, the entire 6K
plate redraws; when enabled, only the 10 x
10-pixel area is updated. If incremental updates
leave glitches onscreen, turn off Incremental
Update and adjust again; the glitches are
corrected. This button has no effect on the
output file or batch rendering speed, only on
the image in the Viewer.
VLUT Off Right-click menu VLUTs (Viewer lookup tables) differ from
Viewer scripts in that you can scrub from the
unmodified plate. See “Viewer Lookups, Viewer
Scripts, and the Viewer DOD” on page 61.
Truelight VLUT Right-click menu The Truelight VLUT combines monitor
calibration with the previsualization of film
recorders and other output devices. See
“Viewer Lookups, Viewer Scripts, and the
Viewer DOD” on page 61.
Gamma/Offset/
LogLin VLUT
Right-click menu The Gamma/Offset/LogLin VLUT allows you to
apply different quick lookups to your image.
See “Viewer Lookups, Viewer Scripts, and the
Viewer DOD” on page 61.
Viewer DOD Right-click menu Turns on Region of Interest (ROI) rendering
(limits your rendering area). See “Viewer
Lookups, Viewer Scripts, and the Viewer DOD”
on page 61.
Viewer Script–
Off
Right-click menu;
4/Shift-4
See “Viewer Lookups, Viewer Scripts, and the
Viewer DOD” on page 61.
Viewer Script–
Aperture
Markings
Right-click menu Applies aperture markings. You can also right-click the Viewer Script button, then choose
Aperture Markings. See “Viewer Lookups,
Viewer Scripts, and the Viewer DOD” on
page 61.
Viewer Script–
Plot Scanline
Right-click menu Applies a plot scanline. You can also right-click
the Viewer Script button, then choose Plot
Scanline. See “Viewer Lookups, Viewer Scripts,
and the Viewer DOD” on page 61.
Viewer Script–
Histogram
Right-click menu Applies a Histogram. You can also right-click
the Viewer Script button, then choose
Histogram. See “Viewer Lookups, Viewer
Scripts, and the Viewer DOD” on page 61.
Viewer Script–
Z Channel
Right-click menu Views the Z channel. You can also right-click
the Viewer Script button, then choose ViewZ.
See “Viewer Lookups, Viewer Scripts, and the
Viewer DOD” on page 61.
Viewer Script–
Superwhite/
Subzero
Right-click menu Displays superwhite and subzero pixels. You
can also right-click the Viewer Script button,
then choose Float View. See “Viewer Lookups,
Viewer Scripts, and the Viewer DOD” on
page 61.
Viewer Script–
Frames/
Timecode
Right-click menu Displays frames or timecode in the active
Viewer. See “Viewer Lookups, Viewer Scripts,
and the Viewer DOD” on page 61.
Compare
Mode–No
Compare
5/Shift-5 Only one buffer is displayed. See “Using the
Compare Buffers” on page 57.
Compare
Mode–
Horizontal (Y
Wipe)
5/Shift-5 You can also right-click the Compare Mode
button, then choose Y Wipe. See “Using the
Compare Buffers” on page 57.
Compare
Mode–Vertical
(X Wipe)
5/Shift-5 You can also right-click the Compare Mode
button, then choose X Wipe. See “Using the
Compare Buffers” on page 57.
Compare
Mode–Blend
(Fade)
5/Shift-5 You can also right-click the Compare Mode
button, then choose Blend. See “Using the
Compare Buffers” on page 57.
Show/Hide
DOD Border
Right-click menu Displays the green DOD (Domain of Definition)
border and the red frame border. It has no
effect on processing or the rendered image.
Reset Viewer Home key Centers the image and sets the zoom level to
1:1.
Fit Image to
Viewer
F Fits the image to the frame. Be careful, since
you may get a non-integer zoom (for example,
instead of 2:1, you get 2.355:1), which may
result in display artifacts. Do not use this
option when “massaging” pixels.
Launch Flipbook
Renders a RAM-based image player.
Left-mouse click: Renders with the current settings.
Right-mouse click: Displays the Render Parameters window.
Broadcast Monitor
Mirrors the selected node in the Viewer on a video broadcast monitor. The broadcast monitor option is only available in the Mac OS X version of Shake. For more information, see “Viewing on an External Monitor” on page 330.
For a table of additional common buttons related to onscreen controls, see “Node-Specific Viewer Shelf Controls” on page 70.
Using the Compare Buffers
You can use the A and B buffers in the Viewer to load two images at once. The following example uses two images from Tutorial 5, “Using Keylight,” in the Shake 4 Tutorials book.
To load two images at once into the A and B compare buffers:
1 Using two FileIn nodes, read in two images from the “Using Keylight” tutorial.
The images are located in the $HOME/nreal/Tutorial_Media/Tutorial_05/images directory.
2 In the Viewer, ensure that buffer A is open.
3 Load one of the tutorial images into the Viewer by clicking the left side of the node.
The Viewer indicator appears on the left side of the node, and the node’s image is loaded into the Viewer.
4 To switch to buffer B, click the A tab, or press 1 (above the Tab key, not on the numeric keypad).
The A tab switches to B when clicked.
5 Load the second image into buffer B by clicking the right side of the image’s node.
6 Press 1 to toggle between buffers. You can also click the A and B tabs.
You can also put the Viewer into split-screen mode to more directly compare two images.
To create a vertical split screen in the Viewer:
m Drag the Compare control (the small gray “C” in the lower-right corner of the Viewer) to the left.
Images from The Saint provided courtesy of Framestore CFC and Paramount British Pictures Ltd.
The Compare Mode button in the Viewer shelf indicates that you are in vertical
compare mode.
To create a horizontal split screen:
m Drag the Compare control up on the right highlighted edge.
The Compare Mode button in the Viewer shelf indicates that you are in horizontal
compare mode.
Alternatively, you can toggle between vertical and horizontal split screens by using the
Compare Mode button.
Note: A common mistake is to slide the Compare control all the way to the left or right
(or top or bottom)—one image disappears and only the second image is revealed. The
result is that changes to a node’s parameters don’t update the Viewer. To avoid this,
turn off the Compare Mode to ensure you are looking at the current image.
To turn off split-screen viewing:
m Click the Compare Mode button in the Viewer shelf.
The split screen is removed and the button is no longer highlighted.
If the Compare Mode button is set to No Compare and the Viewer is still not updating,
make sure that the Update Mode is set to On.
If the Update Mode is not the problem, check to make sure that the Update button at the top of the interface is not set to “manual.”
Viewer Lookups, Viewer Scripts, and the Viewer DOD
There are three similar controls that affect how your images are viewed:
• Viewer lookup tables (VLUTs)
• The Viewer DOD
• Viewer scripts
These functions modify the image for efficiency or previsualization purposes, and do
not affect the output image. If necessary, it’s possible to apply these settings to a
render that is launched from the interface.
The following is an example of using a VLUT with a log image.
When LogLin conversion is enabled in VLUT 2, you still work on the log image in the
process tree, but you see the linearized plate. (For more information about
logarithmic-to-linear conversion, see “The Logarithmic Cineon File” on page 437.)
To activate the VLUT or Viewer Script controls:
1 Apply your VLUT or script.
2 Right-click the VLUT (Viewer Lookup Table) button and select one of the three Load
Viewer Lookup options.
3 In the designated window or tab (selected in step 2), adjust any necessary parameters.
VLUTs have an additional right-click function to specify whether pixel values are
scrubbed from before or after the VLUT. Right-click and hold the VLUT button, and
enable (or disable) “Scrub before lookup.”
The following table includes the current default scripts and VLUTs.
Button Description
VLUT: Three options let you turn on or off a VLUT for the Viewer.
• By default, the VLUT is turned off.
• A second option lets you use the Truelight VLUT control, combining monitor
calibration with the previsualization of film recorders and other output devices.
• A third option, VLUT 2, allows Gamma, Add, and LogLin operators to be applied to
the Viewer.
Viewer Script–Aperture: Displays a field chart with safe zones. To load script controls
into the Parameters tab, right-click the button, then choose Load Viewer Script
Controls from the shortcut menu.
Viewer Script–Plot Scanline: Displays a plot scanline of your image. For more
information, see “Using the PlotScanline to Understand Color-Correction Functions”
on page 674.
Displays pixel values along the horizontal axis (where the light gray line is). The bluescreen in the sample image is evenly lit. You can view RGB, A, or RGBA, and calculate according to color, luminance, or value.
Viewer Script–Histogram: Displays a Histogram of your image.
Viewer Script controls (right-click the Viewer Script button to select Load options):
• ignore: Ignores pixels with a 0 or 1 value.
• maxPerChannel: Pushes the values up on a per-channel basis.
• fade: Fades the display of the Histogram.
The colors are squeezed into a limited range, an indication that this is probably a logarithmic image.
Notice the big healthy chunk of blue near the high end; that is good.
Viewer Script–Z Channel: Displays the Z depth of an image either normalized or
between a set range. A very important note: Closer pixels are white, so the image
can fade to infinity (black) without a visual discontinuity.
Viewer Script controls (right-click Viewer Script button to select Load options):
• floatZinA: Puts the Z values in the alpha channel to scrub and retrieve these
values. The values are either Off, the Original values, or Distance (normalized
between 0 and 1). If you have an object that moves from far away toward the
screen over several frames, Original returns your Z values relative to each other;
Normalized indicates only the Z values within that frame.
• zNormalize: Indicates whether the render came from Maya or 3ds max. The
subparameter zInfinity sets the limit at which point pixels are considered infinite,
and are therefore clipped.
• zRangeSource: Evaluates the original values, or the near/far Input values.
Viewer Script–Superwhite/Subzero: Displays pixel values above 1 or below 0 for float
images. The alpha channel is also tested.
Viewer Script controls (right-click the Viewer Script button to select Load options):
• view: This parameter controls how the pixels are displayed:
per channel: Sets subzero pixels to 0, sets pixels between 0 and 1 to 0.5, and pixels
above 1 to 1. This is applied on a per-channel basis.
per image: Turns subzero pixels black, pixels between 0 and 1 gray, and pixels
above 1 white. This is applied across the entire image, so if any channel is beyond 0
or 1, it is indicated.
on Image: Mixes the subzero and superwhite pixels back onto the image. The
colors are controlled with the two color controls.
• SubZero Color: Only active when view is set to “on Image,” it indicates the subzero
pixels.
• SuperWhite Color: Only active when view is set to “on Image,” it indicates the
superwhite pixels.
In this example, do the following:
• Read in the saint_bg.@ sequence from the $HOME/nreal/Tutorial_Media/Tutorial_05/images directory.
• Apply an Other–Bytes node and a Color–LogLin node.
• Toggle Bytes from 8 to float.
• Apply the Float View Viewer Script. Since LogLin pushes values above 1, the sky
loses its punch when you go back out to Log if you process the image in only
16 bits.
The per-channel view indicates that most of the superwhite values are in the blue
channel. The per-image view indicates the dark areas more clearly. The on-image
view codes the highlights yellow and the darks blue.
[Figure: the process tree input, the log image, and the LogLin (linear) image, shown in the per-channel, per-image, and on-image float views.]
More About Using VLUTs
The VLUTs and the Viewer scripts are similar in that they apply an arbitrary set of
functions that modify the image. A typical example is a color lookup table to
compensate for the display properties of your computer’s monitor.
The key difference is that VLUTs allow you to scrub pixel values from the unmodified
image (this feature can be disabled) whereas you always scrub the modified pixel
values when using Viewer scripts. For example, you may want to work on Cineon plates
in logarithmic space without converting the plates to linear space. However, you want a
rough idea of what the images look like in linear color space. To do this, apply a VLUT
to convert the images to linear space. The results when scrubbing colors in the Viewer
are still derived from the original, unmodified input logarithmic plates, which ensures
accurate processing for your output images.
VLUTs are typically used during color correction, and Viewer scripts are typically used
for unusual operations—for example, when creating an image for stereoscopic viewing.
Both methods let you use any series of pre-made functions.
Shake includes two VLUTs: the Truelight VLUT, and VLUT 2, which can be customized
any way you need. You can also create as many additional VLUTs as you need for
different situations. Only one VLUT and one Viewer script can be turned on at a
time, but a VLUT and a Viewer script can be active simultaneously. To apply multiple
color corrections, build your VLUTs and scripts to have multiple controls.
Viewer Script–Frames/Timecode: Displays frames or timecode in the active Viewer.
To show and modify the frames/timecode display:
• Right-click the Viewer Script button and select timecode, or click and hold the
Viewer Script button and select the Timecode button. By default, timecode is
displayed in the Viewer.
• Right-click the Viewer Script button and select Load Viewer Script into Parameters2
tab. The timecode parameters are loaded into the tab.
• Click the mode pop-up menu to choose Frames, Padded Frames, Timecode, or
Timecode Dropped Frame.
• Use the Time Offset subtree to offset by hours, minutes, seconds, or frames.
• Color: Click the color control to change the color of the text display.
• BgColor: Click the color control to change the color of the timecode display
background box.
• BgOpacity: Controls the opacity of the timecode display background box.
• size: Controls the size of the frames/timecode display.
• xPos: Controls the X position of the frames/timecode display. You can also use the
onscreen controls to reposition the display.
• yPos: Controls the Y position of the frames/timecode display. You can also use the
onscreen controls to reposition the display.
Note: The Truelight VLUT control in the Viewer shelf lets you set the Viewer’s lookup
table to use calibration profiles that you can create with the TLCalibrate node, or that
are created using Truelight’s monitor probe. Use the Load Viewer Lookup Controls into
Parameters1 Tab command to make adjustments to the Truelight VLUT parameters. For
more information on using Truelight, see the Truelight documentation, located in the
Documentation folder on the Shake installation disc.
Important: Currently there is no version control of the Viewer script. If you extend the
functionality of existing Viewer scripts that may have been saved in existing Shake
scripts, you should rename the new version of the Viewer script to something other
than the original name.
Using the Viewer’s Domain of Definition (DOD)
The Viewer Domain of Definition (DOD) limits the area of the image that is rendered to
the interior of a user-definable rectangle, in order to reduce the amount of unnecessary
processing. For example, if you are doing a head replacement, you may want to
activate the Viewer DOD and limit the Domain of Definition to a box surrounding just
the head, eliminating the need for your computer to render the rest of the image.
When using the Viewer DOD, keep the following in mind:
• The Viewer DOD limits the rendering area on the Viewer, but does not affect the
output image.
• Display DOD displays the green internal DOD box associated with each node and the
red frame boundary (does not affect processing).
The following image has both the VLUT 2 and the Viewer DOD applied.
Right-click the Viewer DOD button to access the DOD control options. For example,
using Frame DOD to Viewer (sets the DOD to the Viewer frame), you can zoom in on an
area you want to focus on and limit your DOD to that area. Note that the DOD is not
dynamic, as it would need to constantly recalculate as you pan. For more information,
see “The Domain of Definition (DOD)” on page 82.
Creating Your Own VLUTs and Viewer Scripts
The preset examples are stored at the end of the nreal.h file. To create your own, first
declare them in a startup directory, following the same guidelines as for macros. The
following functions do absolutely nothing:
image ViewerLookup1_(image img)
{
return img;
}
image ViewerScript1_(image img)
{
return img;
}
Next, also in a startup file, hook them into the Viewers:
nfxDefViewerLookup("Lookup1", "ViewerLookup1_()", "default");
nfxDefViewerScript("Script1", "ViewerScript1_()", "default");
The first argument (“Lookup1”, “Script1”) is the name of the VLUT/Script as it appears in
the list in the interface. The second arguments (“ViewerLookup1_()”, “ViewerScript1_()”)
are the actual functions they call when activated. These must be declared in a startup .h
file. The third arguments are the optional icon files, relative to the icons/viewer
directory. It is assumed there is an .nri extension and that you also have a focused
version called [icon].focus.nri.
Therefore, if you want to load a button called icons/viewer/vluts/dufus.nri, you also
create a focused version called icons/viewer/vluts/dufus.focus.nri. You then use “vluts/myVLUT” as your icon name. “default” means it is looking for vlut.@.nri, vlut.@.focus.nri,
vscript.@.nri, and vscript.@.focus.nri (@ = 1, 2, 3, etc.). All paths are relative to icons/
viewer. The icons for Viewer scripts are 30 x 30 pixels, no alpha. The standard VLUT
buttons are 51 x 30 pixels, no alpha. See “Other Macros–VLUT Button” in Chapter 32,
“The Cookbook.” Other macros required to run the VLUT Button macro can be found in
doc/html/cook/macros.
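Putting the pieces together, a minimal functional example might look like the following. This is a sketch only: the function name myGammaLookup_, the button name “GammaPreview,” and the default value are invented for illustration; Gamma is the standard Shake color function, applied here with per-channel gamma arguments.

```
// Hypothetical startup .h declaration: a VLUT that applies a
// monitor gamma to the displayed image only (as with all
// VLUTs, the rendered output is unaffected).
image myGammaLookup_(
    image img,
    float viewGamma = 2.2
)
{
    // Gamma is a standard Shake color function; here it is
    // applied equally to the R, G, and B channels.
    return Gamma(img, viewGamma, viewGamma, viewGamma);
}

// Then, also in a startup file, hook it into the Viewers:
nfxDefViewerLookup("GammaPreview", "myGammaLookup_()", "default");
```

If this VLUT follows the same pattern as VLUT 2, the viewGamma parameter should then become adjustable after choosing one of the Load Viewer Lookup options described earlier.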
Viewer Keyboard Shortcuts
The following table contains additional Viewer hot keys.
Keyboard Function
N Create/Copy New Viewer.
F Fit Image to Viewer.
Control-F Fit Viewer to Image.
Shift-F Fit Viewer to Desktop.
Alt-drag Pan image.
+ or - Zoom image in Viewer.
Home Reset view.
R, G, B, A, C Toggle Red, Green, Blue, alpha, and Color views.
Also, see the table on page 54 for keyboard equivalents to Viewer buttons.
The Viewer Shortcut Menu
Shortcut menus differ depending on the location of the pointer in the interface, or what function/button the pointer is on. The following table shows the shortcut menu commands available in the Viewer.
Menu Option Keyboard Description
Edit Undo Command-Z or
Control-Z
Undo the last operation. Does not work with
RotoShape or QuickPaint.
Redo Command-Y or
Control-Y
Redo the last undo command.
View Zoom In/Out + or - (next to the Delete (Mac) or Backspace (Linux) key)
Zooms in and out by increments. You can also Control-middle drag or Control-Alt-drag to zoom in or out with non-integer increments.
Reset View Home Sets the Viewer ratio to 1:1. The Viewer ratio is
listed in the upper-left corner of the Viewer title
bar.
Fit Image
to Viewer
F Resizes the image to the Viewer boundaries.
Fit Viewer
to Desktop
Shift-F Fits the Viewer window to the larger desktop
window. Does not change the Viewer zoom; it
just helps you when resizing the larger Desktop
pane.
Fit Viewer
to Image
Control-F Snaps the Viewer to the image size.
Render Render
Flipbook
. (period) Renders a non-permanent Flipbook.
Render Disk
Flipbook
Renders a disk-based Flipbook.
Render FileOut
Nodes
Renders FileOut nodes to disk.
Render Proxies Renders proxy images.
Clear Buffer A/B Clears buffer A or B.
New Viewer N Creates a new Viewer. If the mouse is over a Viewer, it clones that Viewer.
Delete Viewer Deletes that Viewer. Helps to clear up graphic/refresh problems.
Minimize or Restore Viewer Stores the Viewer as a small bar.
Viewer Lookups Lets you load Viewer lookup controls into the Parameters1 or Parameters2 tab, or into a floating window.
Viewer Scripts Lets you load Viewer script controls into the Parameters1 or Parameters2 tab, or into a floating window.
Viewer DOD Lets you load Viewer DOD controls into the Parameters1 or Parameters2 tab, or into a floating window.
View Channel Like the View Channel button, views the channel you select.
Node-Specific Viewer Shelf Controls
Some nodes, mainly transformations, have onscreen controls to help you interactively control your images in the Viewer. These controls appear whenever the node’s parameters are loaded.
When you adjust an active node with onscreen controls, a second row of controls appears in that Viewer, at the top of the Viewer shelf. These controls disappear when you load a different node’s parameters.
The following table shows the common onscreen control buttons.
Button Description
Onscreen Controls–
Show
Displays the onscreen controls. Click to toggle between
Show and Hide mode.
Onscreen Controls–
Show on Release
Hides onscreen controls while you modify an image. To
access this mode, click and hold the Onscreen Controls
button, then choose this button from the pop-up menu,
or right-click the Onscreen Controls button, then choose
this option from the shortcut menu.
Onscreen Controls–Hide Turns off the onscreen controls. To access this mode, click
and hold the Onscreen Controls button, then choose this
button from the pop-up menu, or right-click the
Onscreen Controls button, then choose this option from
the shortcut menu.
Autokey Auto keyframing is on. A keyframe is automatically
created each time an onscreen control is moved. To
enable, you can also right-click, then choose Onscreen
Control Auto Key On.
To manually add a keyframe without moving an onscreen
control, click the Autokey button off and on.
Delete Keyframe Deletes the keyframe at the current frame. This is useful
because, for functions such as Move2D, keyframes for xPan,
yPan, xScale, yScale, and angle are created simultaneously.
Delete Keyframe deletes the keyframes from all the
associated parameters at the current frame.
To delete all keyframes for a parameter, such as Move2D
on all frames, right-click the Delete Keyframe button and
select Delete All Keys.
Lock Direction–Off Allows dragging of onscreen controls in both the X and Y
directions.
Lock Direction to X Allows dragging of onscreen controls in the X direction
only. To enable, click and hold the Lock Direction button,
then choose this button from the pop-up menu.
Lock Direction to Y Allows dragging of onscreen controls in the Y direction
only. To enable, click and hold the Lock Direction button,
then choose this button from the pop-up menu.
Onscreen Color Control Click this swatch to change the color of onscreen
controls.
Path Display–Path and
Keyframe
Displays motion path and keyframe positions in the
Viewer. You can select and move the keyframes onscreen.
Path Display–Keyframe Displays only the keyframe positions in the Viewer. To access this mode, click and hold the Path Display button, then choose this button from the pop-up menu.
Path Display–Hide The motion path and keyframes are not displayed in the Viewer. To access this mode, click and hold the Path Display button, then choose this button from the pop-up menu.
The Parameters Tabs
The controls that let you adjust the parameters for each of the nodes in the node tree, as well as the global parameters of your script, are located in the Parameters tabs. Two Parameters tabs let you load parameters and make adjustments for two nodes at once.
Accessing a Node’s Controls Using the Parameters Tabs
You must first load a node’s parameters into the Parameters1 or Parameters2 tab in order to make changes to them. The Parameters tabs are empty until you load a node’s parameters into them.
To load a node’s parameters into the Parameters1 tab:
m Click the right side of the node.
The parameters indicator appears on the right side of the node, and the node’s parameters are loaded into the Parameters1 tab. The node does not have to be selected in order to load its parameters into the Parameters1 tab.
Double-click anywhere on the node to load its parameters into the Parameters1 tab and its image into the Viewer.
To load a node’s parameters into the Parameters2 tab:
m Shift-click the right side of the node.
The parameters indicator appears on the right side of the node, and the node’s
parameters are loaded into the Parameters2 tab. The node does not have to be
selected in order to load its parameters into the Parameters2 tab.
Loading a node’s parameters into a tab automatically clears out whatever previous
parameters were loaded. If necessary, you can clear a Parameters tab at any time.
To clear a tab so that no parameters are loaded into it:
m Right-click the Parameters1 or Parameters2 tab, then choose Clear Tab from the
shortcut menu.
It’s important to bear in mind that you can load the image from one node into the
Viewer, while loading another node’s parameters into the Parameters tab.
[Figure callouts: Click to load node parameters. Click once to display node in Viewer. Double-click to load node parameters.]
For example, you can view the resulting image from the bottommost node in a tree,
while adjusting the parameters of a node that’s farther up in that tree.
The indicator on the left shows which nodes are loaded into Viewers, and the indicator
on the right shows which nodes have been loaded into one of the Parameters tabs.
Using Tweaker Windows
You can also open a node’s parameters in a floating “Tweaker” window.
To open a Tweaker window:
m Select a node and press Control-T.
The Tweaker window appears, floating above the Shake window.
Adjusting Parameter Controls
The Shake interface incorporates a variety of parameter controls. Many parameters
found in the Parameters tab and Globals tab have subparameters. A plus sign (+)
beside a parameter indicates that there are related subparameters. Click the plus sign
to open the parameter subtree and access the subparameters.
Global Parameters
You can double-click an empty area in the Node View to open the Globals tab, or click
the Globals tab itself. For more information on the global parameters, see Chapter 2,
“Setting a Script’s Global Parameters,” on page 91.
Each parameter has several types of controls that you can use to change that
parameter’s numerical value.
• Sliders: Move the slider (if available) to modify the parameter’s value.
• Virtual sliders: These sliders—controlled by dragging in a value field—allow you to
increase or decrease a parameter’s value beyond the limits of a standard slider. Drag
left or right in a value field to decrease or increase a parameter’s numeric value.
Note: If you are using a Wacom tablet, open the Globals tab, open the guiControls
subtree, enable virtualSliderMode, then set virtualSliderSpeed to 0.
When a parameter has an expression attached to it, a plus sign (+) appears beside the
parameter name. (The plus sign is also used to indicate a parameter that has hidden
subparameters.) An expression can be an animation curve, a link to a different
parameter, or a function. Clicking the plus sign beside a parameter that’s linked to an
expression reveals the expression field. To clear a non-curve expression, move the
slider, enter a new value in the value field, or right-click and select Clear Expression. For
an introduction to expressions, see “Using Expressions in Parameters” on page 78.
• Some parameters have associated toggle buttons. You can enter a value in the value
field, or click the toggle button. You can also enter expressions in the value field.
• Press Tab or Shift-Tab to advance or retreat into adjacent value fields.
Keyframing and Curve Editor Controls
Two controls let you load a parameter into the Curve Editor, and enable it to be
animated using keyframes.
The Load Curves button (the clock-shaped button to the immediate right of a
parameter name) loads parameters into the Curve Editor.
When the Load Curves button is enabled (checked), the parameter is displayed in the
Curve Editor. When disabled, the parameter does not appear in the Curve Editor.
The Autokey button enables keyframing for that parameter.
For more information on animating parameters, see Chapter 10, “Parameter Animation
and the Curve Editor,” on page 291.
Locking a Parameter
Most parameters have a lock button next to the Autokey button. This control lets you
lock that parameter so that it can’t be modified.
When you lock a parameter, its value field turns red to indicate that it’s locked.
Locked parameters cannot be edited, but if they contain keyframes, an expression, or a
link to another parameter, these values continue to animate that parameter.
Using Color Controls
Some parameters have associated Color controls.
To use a Color control, do one of the following:
m Click the Color control (the color swatch); the Color Picker opens, and you select your color from the Color Picker or Viewer.
m Click the plus (+) sign to the left of the Color control to access color subparameters. The first row in the subtree contains a slider to modify one channel at a time. Select the button that corresponds to the desired channel: (R)ed, (G)reen, (B)lue, (O)ffset, (H)ue, (S)aturation, (V)alue, (T)emperature, (M)agenta-Cyan, or (L)uminance. Moving the slider calculates according to the selected channel, but the numbers are converted back to RGB.
m Edit the individual channels or add expressions in the subtree.
m You can also keep the subtree closed, and use the Color control (the color swatch) itself as a virtual slider. Using the channel buttons as the keyboard guide, press and hold the desired key (R, G, B, H, S, V, and so on) and drag left or right in the Color control. In the following illustration, when G is pressed and the pointer is dragged, the green channel increases or decreases.
Note: To adjust the red, green, and blue color channels at the same time, press O and
drag in the Color control (O represents Offset).
Using Pop-Up Menus
Some parameters have associated pop-up menus, such as the font parameter in the
Text node.
There are two ways to choose items from a pop-up menu:
m Click a menu item to choose that item and close the menu.
m Right-click a menu item to choose it and remain in the menu. When you first click the menu item, hold down the left mouse button and move the pointer off of the menu. Then return the pointer to the menu and right-click. This allows you to quickly test different parameters.
Using Expressions in Parameters
An expression is any non-numeric entry, such as a variable or a mathematical
calculation. Any parameter can use an expression. Some expressions, such as time, are
extremely simple. When you type the expression variable “time” into a value field,
Shake returns the numeric value of the current playhead position. For example, if the
playhead (in the Time Bar) is parked at frame 1, typing “time” into a value field returns a
value of 1 in that field.
Entering an expression consisting of any letter (whether valid or not) activates a plus
sign (+) to the left of the parameter name. To edit a parameter’s expression, click the
plus sign to open an expression field underneath.
If the parameter has an expression and you make an adjustment to that parameter’s
slider, the expression is removed in favor of your numerical change. If the parameter is
animated, however, these special expressions are recognized by Shake and are not
removed when the slider is adjusted. For more information on animating parameters,
see Chapter 10, “Parameter Animation and the Curve Editor,” on page 291.
Note: You can also remove an expression by right-clicking the field and selecting Clear
Expression from the shortcut menu.
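To make this concrete, here are a few expressions of the kind you might type into a value field. The parameters they are applied to are purely illustrative; the full set of available math functions and variables is the one listed in Chapter 30.

```
time            // the current frame; at frame 12 this evaluates to 12
time / 2        // half the current frame number
time * 2 + 10   // simple arithmetic on the time variable
```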
You can modify expressions in various ways:
• To load or save an expression, use the right-click menu.
• To create extra sliders to build complex expressions (and still allow interactive input), right-click the field and select Create Local Variable. To remove a local variable, right-click and select Delete Local Variable.
For a lesson on using local variables and expressions, see Tutorial 4, “Working With
Expressions,” in the Shake 4 Tutorials.
For a list of mathematical expressions and variables, see Chapter 30, “Installing and Creating Macros,” on page 905.
Linking One Parameter to Another
You can link any parameter to any other parameter.
To link parameter A to parameter B within the same node:
• Enter the name of parameter B into the value field of parameter A, then press Return. A plus sign appears to indicate that the parameter now contains an expression.
For example, in a Move2D node, you would link yPan to xPan by typing the following
into the yPan parameter:
xPan
Note: The default state of many parameters is an expression which links them to an
accompanying parameter. This is most common for pairs of parameters that define X
and Y values. For example, the default argument for yScale is a link to xScale.
To link to a parameter in a different node, preface the link with the following syntax:
nodeName.parameter
For example, to link the red channel parameter of the Add1 node to the red channel
parameter of the Add2 node, enter the following expression in the Add1 red channel:
Add2.red
To interactively copy a parameter from one field to another in the same node,
do one of the following:
• Click the parameter name you want to copy, and drag it to the parameter name (a value or expression) that you want to copy the value to. This copies the value from the first field to the second.
Note: This drag and drop behavior also works when you drag color from one Color
control to another.
• Select text in a value field and press Command-C or Control-C to copy the information. Go to the second value field and press Command-V or Control-V to paste.
• To interactively link two parameters together, Shift-drag a parameter name and drop it onto the parameter you want to link to. This creates an expression linking back to the first parameter by listing its name.
Combining Links With Expressions
Parameter links can be used in conjunction with mathematical expressions as well. For
example, to double the value of a link, enter:
Add2.red*2
To link to a parameter at a different frame than the current one, use the @@ signs. For
example, to link to Mult1’s red parameter from two frames earlier, use the following
expression:
Mult1.red@@(time-2)
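To illustrate the idea of sampling a linked parameter at an offset time, here is a small Python analogy. The animation curve and its values are invented for the example; Shake evaluates such links internally:

```python
# Hypothetical animated parameter: Mult1.red ramping up over time.
def mult1_red(frame):
    return 0.1 * frame

# Analogy for the expression Mult1.red@@(time-2): sample the linked
# parameter two frames before the current frame.
def linked_red(time):
    return mult1_red(time - 2)

print(linked_red(10))  # samples the curve at frame 8
```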
Displaying Parameter Values in the Viewer
You can dynamically display the values of parameters using the Text and AddText nodes.
To differentiate a parameter name from regular text in the value field, surround it with
a pair of braces. For example:
The current frame is: {time}
displays the following in the Viewer:
The current frame is: x
where x automatically updates as each frame progresses.
In another example, if there is a node called Gamma1, and its rGamma value is 1.7,
entering the following expression into the text parameter of a Text node:
My red value = {Gamma1.rGamma}
displays the following in the Viewer:
My red value = 1.7
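The brace substitution can be pictured with a small Python sketch. This is a stand-in for Shake’s own token expansion, and the lookup table of values is invented:

```python
import re

# Replace {name} tokens with looked-up parameter values, in the spirit
# of Shake's curly-brace substitution in Text node strings.
def expand_tokens(text, values):
    return re.sub(r"\{([^}]+)\}", lambda m: str(values[m.group(1)]), text)

params = {"time": 12, "Gamma1.rGamma": 1.7}  # invented example values
print(expand_tokens("The current frame is: {time}", params))
print(expand_tokens("My red value = {Gamma1.rGamma}", params))
```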
Note: There is a macro called Wedge in the “Cookbook” section of this manual that can
be used to print out wedging values for color timing Cineon files.
For a lesson on linking parameters, see Tutorial 4, “Working With Expressions,” in the
Shake 4 Tutorials.
Copying and Pasting Script Code in Shake
If you copy a group of nodes from the node tree, open a text editor, and paste the
result, you’ll see the actual code that Shake is using to perform the operations those
nodes represent. The following screenshots show a simple composite in the Shake
interface, and those same three nodes copied and pasted into a text editor.
At times, various script code and expressions are featured in the Shake documentation.
Many times, examples and expressions you see presented in a coded format can be
copied from the onscreen documentation and pasted into Shake for immediate use.
The Parameters Tab Shortcut Menu
The following table lists the options that appear when you right-click the top portion
of the Parameters tab.
Option                 Description
Clear Tab              Unloads the current parameters from the tab.
Create Local Variable  Allows you to create a variable specific to a given node. Use this option when you want to link one or more parameters to other parameters. See Tutorial 4, “Working With Expressions,” in the Shake 4 Tutorials.
Delete Local Variable  Deletes the local variable for the selected parameter.
Add Notes              A dedicated local variable in string format. Allows you to add notes to any node (to help you remember what you were thinking at the time).
Reset All Values       Resets all values in the node to their default state.

The following table lists the options that are available when you right-click a parameter.

Option                 Keyboard                Description
Copy                   Command-C or Control-C  Copies the selected nodes onto the Clipboard.
Paste                  Command-V or Control-V  Pastes the Clipboard contents into the Node View. You can also copy nodes from the Node View and paste the nodes into a text document, and copy the text and paste it into the Node View.
Load Expression                                Loads an expression from disk. The expression should be in Shake format. You can use this if you have a translator for another package’s curve types.
Save Expression                                Saves the current expression as a text file to disk.
Clear Expression                               Clears the current expression.
Clear Tab                                      Clears the current parameters from the tab.
Create Local Variable                          Allows you to create a variable specific to that node. Use this when you want to drive one or more parameters off of other parameters. See Tutorial 4, “Working With Expressions,” in the Shake 4 Tutorials.
Delete Local Variable                          Deletes the local variable for the selected parameter.
Add Notes                                      A dedicated local variable in string format. Allows you to add notes to any node.
The Domain of Definition (DOD)
The Domain of Definition (DOD) is a rectangular zone that Shake uses to bind the
significant pixels in an image in order to optimize rendering speed. Everything outside
of the DOD is considered as background (black by default), and is therefore ignored in
most computations. Proper handling of the DOD is an extremely powerful way to
speed your render times.
To examine the efficiency of the DOD:
1 Create an Image–Text node.
2 Ensure that the Display DOD button in the Viewer is on.
The green box that you see around the text in the Viewer is the DOD.
Because of the DOD, nodes attached to this image process in a fraction of the time it
takes to calculate the same nodes with an image that fills the frame.
To test rendering times with DOD:
1 Attach a Filter–RBlur node to the Text node, and set the RBlur oRadius parameter to 360.
2 In a separate branch, create an Image–Rand node (to create an entire frame of pixels).
3 Attach a Color–Monochrome node to the Rand node to turn it into a two-channel
image. The Text node creates a BWA (black and white, with alpha) image by default, so
you must match the channels to compare rendering speeds.
4 Copy and paste the RBlur1 node into the Node View, then attach the copied node (RBlur2) to the Monochrome1 node.
There is a significant difference in rendering speed, even though both images are the
same resolution.
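The speed difference comes from the filter visiting only the pixels inside the DOD rectangle. A rough cost model in Python (the frame and DOD sizes here are invented for illustration, not measured from Shake):

```python
# Rough cost model: a filter touches only the pixels inside the DOD;
# with no DOD, it must visit the entire frame.
def filter_cost(width, height, dod=None):
    if dod is None:
        return width * height
    x1, y1, x2, y2 = dod
    return (x2 - x1) * (y2 - y1)

full_frame = filter_cost(2048, 1556)                        # Rand/Monochrome branch
text_only = filter_cost(2048, 1556, (700, 700, 1350, 860))  # Text branch
print(full_frame // text_only)  # the Text branch is many times cheaper
```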
Assigning a DOD
All images from disk are automatically assigned a DOD that is equal to the resolution of
the image. There are five ways to alter the DOD:
• Images generated in Shake have a DOD. For example, nodes from the Image tab such
as RGrad, Text, and RotoShape automatically have an assigned DOD.
• The DOD of an image from disk that is transformed or filtered is automatically
recalculated. For example, the following image is read in (imported) and scaled down
and/or rotated with a Move2D.
Also, if an image is blurred, the DOD expands accordingly.
• A rendered .iff file from Shake is embedded with a DOD. When Shake writes an .iff
file, it automatically saves the DOD information. Only the .iff format embeds the DOD.
In the following example, the image that was written out in the previous example is read back into Shake.
• The SetDOD node, located in the Transform tab, allows you to manually assign a DOD
to an image. In the following illustration, a SetDOD node is attached to the building
image to limit the effects to the tower.
• You can combine multiple images using a DOD. When you combine two images, the
DODs combine to form a larger rectangle. If, however, you use a node like Inside or
IMult, that node takes the DOD of the second node. If the building image is placed
Inside of the QuickShape image from above, it inherits the DOD of the QuickShape
node.
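These combination rules can be sketched as rectangle operations in Python. This is an illustration of the behavior described above, not Shake’s internal code; DODs are written as (x1, y1, x2, y2) rectangles with invented sizes:

```python
# DODs as (x1, y1, x2, y2) rectangles.
def combined_dod(a, b):
    # Most layering combinations merge the two DODs into the
    # enclosing rectangle.
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

def inside_dod(first, second):
    # Nodes like Inside or IMult take the DOD of the second input.
    return second

building = (0, 0, 720, 576)        # invented sizes
quickshape = (100, 200, 400, 500)
print(combined_dod(building, quickshape))  # (0, 0, 720, 576)
print(inside_dod(building, quickshape))    # (100, 200, 400, 500)
```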
Note: When using onscreen controls to edit a shape (for example, a Rotoshape or
QuickPaint object) that has control points within the boundaries of the DOD, move the
pointer over the shape inside of the DOD, and then drag to select the points.
Combining images with a DOD is an excellent way to optimize greenscreen or
bluescreen images that need to be cropped via a garbage matte anyway, because it
simultaneously removes the garbage areas and assigns an efficient DOD to the image.
The node tree above produces the following effect:
With a good understanding of the role of the DOD, you can optimize the tree before
and after the node in question. The above example not only optimizes any nodes you
attach to Inside1, but executes the Primatte and reads in the part of the image that is
inside of the DOD, reducing processing and I/O activity.
Keying, Color Correcting, and the Background Color
This section discusses the area outside of the DOD, which is called the Background
Color (BGColor).
The two main keyers in Shake, Keylight and Primatte, recognize the background color,
and have a toggle to key the background color in or out. By default, the keyer leaves
the background area black in the alpha channel. To turn the background completely
white, toggle BGColor on.
Shake processes color correction of the BGColor very quickly, as it recognizes there is a
pure correction applied to previously black pixels. If the color correction does not
change black, such as Gamma or Mult, it is ignored. If it does affect the black areas, as
does Add or Compress, it processes these areas, but understands that they are still the
result of a lookup process. Therefore, the DOD does not get reasserted to the resolution
frame. This is the same process that is used when the Infinite Workspace kicks in. So,
even though the pixels outside of the DOD are not visibly different from the pixels
inside, the DOD remains in place. (For more information, see Chapter 7, “Using the Node View,” on page 217.)
There may be cases, however, where you want to take advantage of the DOD for
masking purposes. In this tree, an image is scaled down, and the brightness increased
with an Add node. This, however, turns the area outside of the image a medium gray.
Since this area is recognized as outside of the DOD, it can be returned to black with a
Color–SetBGColor node, which sets the color for the area outside of the DOD.
The Layer–Constraint node also limits a process. For more information on masking
using the Constraint node, see Chapter 19, “Using Masks,” on page 527.
The Time Bar
The Time Bar, at the bottom of the Shake window, displays the currently defined range
of frames, the playback buttons, and the Info field, which provides a brief description of
each control you move the pointer over.
The Time Bar is a display of the currently defined time range. It neither limits nor
controls the actual parameters that are saved into the script. To set the frame range
that renders via a FileOut or Flipbook operation, go to the Globals tab and enter the
frame range in the timeRange parameter.
Setting a Script’s Frame Range
The number in the field to the left of the bar is the start frame of the Time Bar, and the
number in the field to the right is the end frame. In the above example, frames 1 to 21
are highlighted. This corresponds to an entry in the timeRange parameter of the
globals of 1-21.
Time Bar Navigation Values
The Current Frame field indicates the position of the playhead, which is 49. The
Increment parameter controls how far the playhead advances when you press the Left
Arrow or Right Arrow key.
• The default value of 1 means that every frame is played back.
• A value of 0.5 enables you to see each field when a video clip is loaded into the Viewer.
• A value of 2 or higher means that Shake skips frames. At 2, every other frame is
skipped.
When you move the pointer within the Time Bar, the frame number that you’ll jump to
when you click to reposition the playhead is displayed over the pointer.
If you have already set the timeRange parameter in the Globals tab, click Home in the
Time Bar controls to use the timeRange as the Time Bar frame range.
To change the current time and display that frame in the Viewer, do one of the
following:
• Click or drag the playhead to interactively scrub across the current time range.
• To jump to a specific frame, type the frame number into the Current Frame field, and press Return. As with any value field, you can use the virtual sliders—press Control and drag the pointer left and right in the value field.
• To pan across the Time Bar, press the middle mouse button and drag; or Option-click or Alt-click and drag.
• To zoom into or out of the frame range displayed by the Time Bar, press Control and the middle mouse button; or Control-Option-click or Control-Alt-click, then drag.
Playback Controls
The controls illustrated below play through the script according to the Time Bar frame
range, not the global timeRange.
• To play forward, click the forward arrow button.
• To play backward, click the backward arrow button.
Note: Regardless of the speed of your computer, Viewer playback is now limited to
the frame rate specified in the framesPerSecond parameter in the Format section of
the Globals tab.
Assuming your composition is cached so that real-time playback is possible, this
playback rate is not exact, but may vary by around 10 percent.
• To stop playback, click the stop button, or click the left mouse button anywhere on
the screen.
• Shift-click a playback button to render all frames in the current frame range and store
the frames in memory. The sequence will play back much faster next time.
• Click the keyframe buttons to jump to the previous or next keyframe.
The following table lists additional keyboard shortcuts.

Keyboard                           Description
Left Arrow key or Right Arrow key  Retreat/advance a frame based on the frame Increment setting (works in any window).
Up Arrow key or Down Arrow key     Jump to next/previous keyframe.
. (period)                         Play forward.
Shift-.                            Begin cached playback.
Home                               Fit the current time range into the Time Bar.
T                                  Toggle timecode/frame display.

For more information, see Chapter 8, “Using the Time View,” on page 261.
Previewing Your Script Using the Flipbook
You can render a temporary Flipbook to preview your work. Once the Flipbook has
rendered into RAM, use the playback buttons (described below) to play back the
Flipbook. The Flipbook is available on Mac OS X and Linux systems.
To launch the Flipbook from the interface:
1 In the Globals tab, set the timeRange, for example, 1-50 or 1-50x2.
2 Load the node that contains the image you want to preview into the main Viewer.
3 Click the Flipbook button in the Viewer shelf.
A Flipbook window appears, and the specified timeRange is rendered into RAM for
playback.
4 When the render is finished, press the period or > key to play the result. When you’re
finished viewing the Flipbook, close the window and it disappears from memory.
On a Mac OS X system, you also have the option to create a disk-based QuickTime
Flipbook. For more information on using both RAM and disk-based Flipbooks, see
Chapter 11, “The Flipbook, Monitor Previews, and Color Calibration,” on page 323.

2 Setting a Script’s Global Parameters

This chapter covers how to set the global parameters within each script, tailoring your script’s properties to fit your needs.
About Global Parameters
When you create a new script, you should customize its global parameters before
starting work on your composite. The Globals tab contains parameters that are
commonly found in the Project Properties window of other applications. These
parameters include a script’s time range, default frame width and height, aspect ratio,
proxy settings, global motion-blur controls, bit depth, field rendering settings, and
various ways to customize Shake’s controls.
The global parameters also contain a group of guiControl parameters that let you
customize how Shake works. Using the guiControl parameters, you can specify whether
thumbnails are exposed, how many threads Shake uses on multi-processing
computers, the colors used by shapes and noodles, and the sensitivity of shape
controls in the Viewer.
To access the global parameters, do one of the following:
m
Click the Globals tab.
m Double-click an empty area in the Node View.
Setting Global Variables From the Command Line
Many of the parameters described in this chapter can be set in the command line at
the time you launch Shake, so you don’t have to reset them each time you write out a
script. For example, your timeRange may be 1-10, but you can modify that when you
render on the command line with the -t option:
shake -exec my_script -t 1-240
Note: The global controls also appear in the Parameters1 tab when Shake is first
started, or whenever you create a new script.
The global parameters that can be seen in the Globals tab of the Shake interface are
divided into several groups.
The Main Global Parameters
These parameters control the duration and format of the output from your script. While
these parameters can be changed at any time, it’s a good idea to set them to the
proper values before you begin any serious compositing.
timeRange
This parameter defines the number of frames in your project. This parameter can be
changed at any time. The timeRange is generally represented by a starting value and
an ending value, separated by a dash. For example, a 10-second clip in a project that’s
set to a frame rate of 24 fps would have a timeRange parameter set to “1-240.”
The fastest way to set the timeRange is to open the Globals tab and click Auto, which is
located to the right of the timeRange field.
Clicking Auto automatically populates the timeRange parameter by calculating the
duration from the earliest frame in any FileIn node to the last frame in any FileIn node.
The starting frame does not always have to be set to 1. For example, to quickly trim off
the first 20 frames of your project, change the timeRange to “21-240.” Doing this
restricts the frame range displayed in the Time Bar and the processing and rendering of
your script to only the frames you need.
Here are some more examples of frame ranges you can define in Shake.
Several parameters and controls in Shake either inherit the timeRange parameter
directly, or allow you to assign its value:
• The Home button in the playback controls
• The Render Parameters window
• The Time Range parameter in the Mixdown Options of the Audio Panel
• The Flipbook Render Parameters window
useProxy
A proxy is a lower-resolution image that can be temporarily substituted for the high-resolution plates in your script, allowing you to work and see rendered tests faster.
Because the images are smaller, you drastically decrease the processing time, memory
requirements, and the amount of time spent on reading and writing files as you work.
Naturally, the trade-off is that the quality of the image displayed in the Viewer suffers as
well, which is why proxies are generally used only when creating low-resolution comps
or creating test previews. After assembling a script using proxies, you can return your
script to the original, full resolution in order to render the final output.
These controls are linked to the proxy buttons found at the upper-right corner of the
Shake interface, and allow you to switch among different resolutions to reap the
aforementioned benefits. For more information on using proxies, see Chapter 4, “Using
Proxies,” on page 137.
Time Range             Number of Frames  Frames Rendered
1-100                  100               1, 2, 3... 100
1-100x2                50                1, 3, 5... 99
1-100x20               5                 1, 21, 41... 81
1-20, 30-40            31                1, 2, 3... 20, and 30, 31, 32... 40
1-10x2, 15, 18, 20-25  13                1, 3, 5... 9, 15, 18, 20, 21, 22... 25
100-1                  100               100, 99, 98... 1
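The frame-range grammar shown in the timeRange examples ("1-100x2", "1-20, 30-40", and so on) can be sketched as a small parser. This is an illustration of the syntax, not Shake’s own implementation:

```python
# Illustrative parser for Shake-style frame-range strings such as
# "1-100x2" or "1-10x2, 15, 18, 20-25" (not Shake's own code; negative
# frame numbers are not handled).
def parse_time_range(spec):
    frames = []
    for part in spec.replace(" ", "").split(","):
        if "x" in part:
            rng, step_text = part.split("x")
            step = int(step_text)
        else:
            rng, step = part, 1
        if "-" in rng:
            start_text, end_text = rng.split("-")
            start, end = int(start_text), int(end_text)
            if start <= end:
                frames.extend(range(start, end + 1, step))
            else:  # reversed ranges such as "100-1" render backward
                frames.extend(range(start, end - 1, -step))
        else:
            frames.append(int(rng))
    return frames

print(len(parse_time_range("1-100x2")))          # 50 frames
print(parse_time_range("1-10x2, 15, 18, 20-25")) # the 13 frames from the table
```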
interactiveScale
If the general processing speed for your operations is fine, but the interactivity of
processor-intensive operations is slowing you down, you can turn on the
InteractiveScale option in the Globals tab to use a proxy resolution only while you’re
adjusting parameters. This option does not affect your Flipbooks or FileOut renders. For
more information, see “Using interactiveScale” on page 139.
motionBlur
In Shake, motion blur can be applied to animated transformation parameters. Each
transform node has its own motion blur settings, so you can tune each one individually.
The motionBlur parameters in the Globals tab either adjust or replace the existing
values within each node in the script, depending on the parameter. You can also set
the global motionBlur value to 0 to turn all motion blur within your project off. For
more information on using motion blur, see “Creating Motion Blur in Shake” on
page 778.
The Format Pop-Up Menu
The format pop-up menu provides a fast way of simultaneously setting all the format
subparameters found within the format parameter subtree. The format pop-up menu
contains many of the most popular media formats.
Name                       defaultWidth  defaultHeight  defaultAspect  defaultViewerAspect  framesPerSecond
Academy                    1828          1332           1              1                    24
CinemaScope                1828          1556           .5             2                    24
Full                       2048          1556           1              1                    24
1.85                       1828          1332           1              1                    24
HDTV 1080i/p 30 FPS        1920          1080           1              1                    30
HDTV 1080i/p 29.97 FPS ND  1920          1080           1              1                    29.97
HDTV 1080i/p 29.97 FPS DF  1920          1080           1              1                    29.97
HDTV 1080i/p 25 FPS        1920          1080           1              1                    25
HDTV 1080i/p 24 FPS        1920          1080           1              1                    24
HDTV 1080i/p 23.98 FPS     1920          1080           1              1                    23.98
NTSC ND (D1 4:3)           720           486            1.1111         .9                   29.97
NTSC DF (D1 4:3)           720           486            1.1111         .9                   29.97
NTSC ND (16:9)             720           486            .83333         1.2                  29.97
NTSC DF (16:9)             720           486            .83333         1.2                  29.97
If the format you need is not in this list, you can always open up the format parameter
subtree—by clicking the “+” (plus) icon to the left of the parameter name—and create
your own custom format.
These settings apply only to Shake-generated image nodes—they have no effect on the resolution or frame rate of media referenced by FileIn nodes. Shake-generated nodes, such as RotoShape, QuickPaint, Ramp, and Grad, inherit the global resolution.
Click the “+” (plus) icon to reveal the format parameter subtree, which contains the
following subparameters:
framesPerSecond
This parameter limits the speed of playback from the Time Bar, and also sets the default
playback rate of launched Flipbooks. Three buttons provide the three most common
frame rates, Film at 24 fps, NTSC video at 29.97 fps, and PAL video at 25 fps. A value
field allows you to enter a custom frame rate to accommodate any other format.
Note: To change the playback rate within the Flipbook, press + and – (on the numeric
keypad). The current frame rate is displayed at the top of the Flipbook.
timecodeMode
Sets how timecode is calculated within your script, as 24 fps, 25 fps, 30 fps drop frame,
or 30 fps non-drop frame. This parameter is unrelated to timecode that might be
present in a QuickTime movie.
Note: Shake does not import timecode associated with QuickTime movies.
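As a sketch of the relationship between frame counts and timecode, the following Python function converts a frame number to non-drop-frame timecode. The frame-1 origin is an assumption, and drop-frame counting (used by the 30 fps drop-frame mode) skips frame numbers in a way not shown here:

```python
# Non-drop-frame conversion only; drop-frame timecode needs extra
# frame-number-skipping logic that is omitted here.
def frame_to_timecode(frame, fps):
    total = frame - 1  # assume frame 1 corresponds to 00:00:00:00
    ff = total % fps
    ss = (total // fps) % 60
    mm = (total // (fps * 60)) % 60
    hh = total // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(frame_to_timecode(1, 24))   # 00:00:00:00
print(frame_to_timecode(25, 24))  # 00:00:01:00
```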
defaultWidth, defaultHeight
The width and height of the frame for Shake-generated images. See the above table for
standard frame sizes.
defaultAspect
The pixel aspect ratio used for Shake-generated images. This should be set to match
the format of the images you’re reading into your script. For example, since most
standard-definition video formats have nonsquare pixels, the aspect ratio of NTSC
video is 1.1111, while that of PAL video is .9380. Academy ratio film, which has square
pixels, is simply 1. For more information on pixel aspect ratios, see “About Aspect Ratios
and Nonsquare Pixels” on page 209.
The format table continues with the PAL formats:

Name          defaultWidth  defaultHeight  defaultAspect  defaultViewerAspect  framesPerSecond
PAL (D1 4:3)  720           576            .9380          1.066                25
PAL (16:9)    720           576            .7032          1.422                25
PAL (square)  768           576            1              1                    25
defaultViewerAspectRatio
This value corrects the aspect ratio of the image displayed by the Viewer to account for
images using nonsquare pixels. The defaultViewerAspectRatio parameter is for display
only, and has no effect on rendered output.
Changing any format subparameter sets the format pop-up menu to Custom. If there’s a particular custom format that you use frequently, you can add it to the format pop-up menu. For more information on adding entries to the format pop-up menu, see “Customizing the Format Pop-Up Menu” on page 96.
defaultBytes
Sets the default bit depth for Shake-generated images. The defaultBytes parameter has
no effect on images that are read in via FileIn nodes, nor does it affect the rendered
output from your script.
viewerZoom
The zoom level applied to the Viewer. This value has no effect on the output resolution
of your script.
viewerAspectRatio
When set to formatDefault, this parameter scales the X axis of the Viewer by the
defaultViewerAspectRatio parameter, located within the format subtree. When this
parameter is set to custom, you can change it to whatever value you want. This is
usually used to compensate for the nonsquare pixel ratios of video. For anamorphic
film frames, you typically use the proxyRatio to scale down the Viewer’s Y axis.
renderControls
These parameters affect how Shake renders material processed by the currently open
script.
fieldRendering
When fieldRendering is set to 0, progressive scan/full frames are rendered. When set to
1, the odd field takes precedence, meaning it is the first line at the top. For more
information on setting up your script to render fields properly, see “The Basics of
Processing Interlaced Video” on page 191.
Customizing the Format Pop-Up Menu
You can create your own formats in a startup.h file. In $HOME/nreal/include/startup,
add a line in the following format:
DefFormatType("Name", defaultWidth, defaultHeight, defaultAspect,
  defaultViewerAspectRatio, framesPerSecond, fieldRendering)
For example:
DefFormatType("NTSC (D1 4:3)", 720, 486, 1/.9f, 0.9f, 29.97, 0);
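As another example, a custom square-pixel 2K entry might look like this. The name and values here are invented; only the argument order follows the DefFormatType signature shown above:

```
// Assumed startup.h fragment: a custom full-aperture 2K, square-pixel,
// 24 fps format with field rendering off.
DefFormatType("2K Full Square", 2048, 1556, 1.0f, 1.0f, 24, 0);
```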
quality
When this parameter is set to lo (0), anti-aliasing is disabled. This results in poorer
image quality, but improved render speed.
maxThread
Set the maxThread to the number of available processors you want to use for rendering
by Shake.
cacheMode
The cache is a directory of precalculated images with script information attached.
When Shake evaluates a node tree at a given frame, it compares the tree to the cache
to see if it has rendered that frame before. If it has, it calls up the cached image rather
than recalculate the entire tree in order to save time. Shake keeps track of how many
times each cached frame has been viewed, eliminating the least viewed frames first
when the cache runs out of room.
You can set the cacheMode to one of four states:
• none: Cache data is neither read from nor written to.
• read-only: Preexisting cache data is read, but no new cache data is generated.
• regular: The cache is both read from and written to, but only nodes with non-animated values are cached.
• aggressive: The cache is both read from and written to, and nodes with animated and non-animated parameters are cached.
When setting the cacheMode, consider the following guidelines:
• In most circumstances, the regular cacheMode setting should be used.
• Consider setting the cacheMode to aggressive when you are constantly scrubbing
back and forth between two or three frames (for example, when tweaking tracking
or shape control points).
• You should only set cacheMode to none if you are using Shake on a system with
extremely limited RAM and disk space. By setting the cacheMode to none, Shake is
forced to re-compute each image that you select to view, which is the least efficient
way to run.
For more information on Shake’s caching system, see Chapter 13, “Image Caching,” on
page 343.
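The least-viewed-first eviction described above can be sketched as follows. This is an illustration of the policy, not Shake’s implementation; the class and its names are invented:

```python
# Illustrative frame cache that evicts the least-viewed frame first,
# mirroring the policy described for Shake's image cache.
class FrameCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = {}  # key -> cached image data
        self.views = {}   # key -> how often the frame has been viewed

    def get(self, key):
        if key in self.frames:
            self.views[key] += 1  # viewing a frame raises its priority
            return self.frames[key]
        return None  # not cached: the tree must be recomputed

    def put(self, key, image):
        if key not in self.frames and len(self.frames) >= self.capacity:
            victim = min(self.views, key=self.views.get)  # least viewed
            del self.frames[victim], self.views[victim]
        self.frames[key] = image
        self.views[key] = self.views.get(key, 0) + 1
```

Viewing a frame repeatedly (for example, scrubbing between two frames) keeps it cached, while rarely viewed frames are discarded first when the cache fills.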
macroCheck
If you open or load a script on your system, and the script does not appear, the script
may contain macros that are not on your system. If this is the problem, a message
similar to the following appears in the Console tab:
line 43: unknown function MissingMacro
MissingMacro1=MissingMacro(Keylight_1v41);
To open or load a script that contains a missing macro:
1 Click the Globals tab.
2 Expand the renderControls subtree.
3 Set macroCheck to one of the following options:
• abort load: does not load the script
• sub. with text: substitutes a Text node in place of the missing macro
• sub. no text: substitutes a MissingMacro node
4 Open/load the script.
To set the default macroCheck behavior to substitute a MissingMacro node, include the
following in a .h file:
sys.useAltOnMissingFunc = 2
For more information on .h files, see “Creating and Saving .h Preference Files” on
page 355.
guiControls
The parameters within the guiControls subtree allow you to customize the functionality
and display of controls in Shake’s graphical user interface. These settings are
individually saved by each script you create.
displayThumbnails
Turns all thumbnails in the Node View on and off. For more information on customizing
the thumbnail display, see “Customizing Thumbnail Display” on page 253.
displayThumbnails has three subparameters—thumbSizeRelative, thumbSize, and
thumbAlphaBlend.
thumbSizeRelative
Scales all thumbnails to the same size, or leaves them at different sizes relative to the
original sizes of the images. By default, all thumbnails are displayed at the same width.
To display thumbnails at their relative sizes, turn on thumbSizeRelative.
thumbSize
Lets you adjust the size of thumbnails in the Node View. If thumbSizeRelative is turned
on, all nodes are resized relative to one another.
thumbAlphaBlend
Turns thumbnail transparency on and off. When thumbAlphaBlend is on, moving one
thumbnail over another results in a fast look at how the nodes might appear when
composited together in the Viewer. More usefully, it gives you an instant view of which
images have transparency in them.
virtualSliderMode
When this parameter is turned off, dragging within any parameter’s value field in Shake
results in an edit bar appearing and the contents of that field being selected. When this
parameter is turned on, dragging within a parameter’s value field results in that
parameter being modified as if you were dragging a slider. This mode is very useful
when using Shake with a graphics tablet. You can also use these virtual sliders in the
value fields simply by dragging with the mouse.
virtualSliderSpeed
Adjusts the speed of the virtual slider. When using a stylus, it is recommended you set
this parameter to 0.
noodleTension
Lets you adjust how much “slack” there is in the way noodles are drawn from knot to
knot. Higher values introduce more slack, and noodles are more curved. Lower values
reduce the slack, and noodles are drawn in more of a straight line.
shapeControls
These subparameters allow you to customize the spline-based shape drawing and
editing behaviors and transform controls in the Viewer. You can change these
parameters to make it easier to use Shake’s controls for your individual needs.
rotoAutoControlScale
An option which, when enabled, increases the size of the transform controls of shapes
based on the vertical resolution of the image to which the shape is assigned. This
makes it easier to manipulate a shape’s transform control even when the image is
scaled down by a large ratio.
rotoControlScale
A slider which allows you to change the default size of all transform controls in the
Viewer when rotoAutoControlScale is turned on.
Note: You can also resize every transform control appearing in the Viewer by holding
the Command key down while dragging the handles of any transform control in the
Viewer.
rotoTransformIncrement
This parameter allows you to adjust the sensitivity of shape transform controls. When
this parameter is set to lower values, transform handles move more slowly when
dragged, allowing more detailed control. At higher values, transform handles move
more quickly when dragged. A slider lets you choose from a range of 1-6. The default
value is 5, which matches the transform control sensitivity of previous versions of
Shake.
rotoPickRadius
This parameter provides the ability to select individual points on a shape that fall
within a user-definable region around the pointer. This allows you to easily select
points that are near the pointer which may be hard to select by clicking them directly.
A slider allows you to define how far, in pixels, the pointer may be from a point to
select it.
rotoTangentCreationRadius
This parameter lets you define the distance you must drag the pointer when drawing a
shape point to turn it into a Bezier curve. Using this control, you can make it easier to
create curves when drawing shapes of different sizes. For example, you could increase
the distance you must drag, to avoid accidentally creating Bezier curves, or you can
decrease the distance you must drag, to make it easier to create Bezier curves when
drawing short shape segments.
gridWidth, gridHeight
Specifies, in pixels, how wide and tall each rectangle of the grid is. The gridHeight is
locked to the gridWidth by default, although this expression can be changed. This
default is 40 x 40 pixels.
gridEnabled
Lets you control the grid’s effect on the nodes that you create. There are two settings:
on and off. This parameter also toggles the background grid pattern in the Node View if
gridVisible is turned on.
gridVisible
Displays the grid as a graphical background in the Node View. The grid background is
only displayed when gridEnabled is turned on.
layoutTightness
This parameter affects the Layout Arrangement commands described in “Arranging
Nodes” on page 244. It lets you specify how closely nodes should be positioned to one
another when they’re newly created, or whenever you use one of the arrangement
commands. This parameter’s default is 40 pixels.
consoleLineLength
The maximum line length of information displayed in the Console tab. This defaults to
120 characters.
multiPlaneLocatorScale
Affects all MultiPlane nodes within the script. This parameter scales the depth of the
virtual space used to distribute the locator points that are displayed in the Viewer
(which represent 3D-tracking data clouds that are imported from .ma files). This
parameter allows you to expand or compress the relative distance from the camera to
the tracked background plate. Adjusting this parameter lets you more easily position
layers in space when camera tracking data comes from a subject that’s either very far
away, or very close. This parameter is for reference only, and has no effect on the data
itself. The default multiPlaneLocatorScale value is 50.
Monitor Controls
The monitorControls parameters affect the image that is output to a video monitor
using a supported video output device. For more information about outputting to a
second display, see “Viewing on an External Monitor” on page 330.
broadcastViewerAspectRatio
By default, this parameter is a link to script.defaultViewerAspectRatio, which
mirrors the setting in the format subtree of the Globals tab. When first launched, Shake
looks at the system’s monitor card and outputs the proper aspect ratio based on the
format you select in the Globals tab. For example, if you have a D1 card and you select
NTSC D1 from the format parameter, Shake displays nonsquare pixels in the Viewer and
sends square pixels to the video monitor.
Note: If you change the value of the broadcastViewerAspectRatio using the slider or
the value field, the link to defaultViewerAspectRatio is removed. As with all Shake
parameters, you can enter another expression in the broadcastViewerAspectRatio
parameter to reset it.
broadcastHighQuality
When broadcastHighQuality parameter is turned on, the image is fit to the size of the
broadcast monitor in software mode (rather than hardware mode). The
broadcastHighQuality parameter applies a scale node and a resize node, instead of
using OpenGL. The broadcastHighQuality parameter is enabled by default.
broadcastGammaAdjust
Lets you adjust the gamma of your broadcast monitor to ensure proper viewing (for
example, if you are sending an SD NTSC signal to an HD monitor).
broadcastMonitorNumber
By default, Shake looks for the first available monitor with an SD or HD resolution to
use as the external monitor. If you have more than one monitor card installed on your
computer, this parameter lets you choose which monitor to use.
Note: The external display monitor doesn’t have to be a broadcast display. If you have
more than one computer display connected to your computer, the second one can be
used as the external preview display.
Colors
These parameters allow you to customize the colors Shake uses for different Shake
interface controls.
fontColor
This parameter lets you customize the color used by the text within the Shake interface.
It simultaneously affects the text of settings in the Parameters tabs, the tab names, node
names, and other text in the Node View, Curve Editor, and other Shake tabs.
shapeColors
These subparameters let you customize the colors used by splines generated by various
nodes in the Viewer. A large collection of these parameters is used by the Warper and
Morpher nodes to help you distinguish among the specific uses of each type of spline.
noodleColor
Lets you change the base color of noodles in the Node View. Noodles are white by
default, but you can use the Color control to change this to anything you like.
noodleColorA, -BW, -BWA, -RGB, -RGBA, -Z, -AZ, -BWZ, -BWAZ, -RGBZ, -RGBAZ
When noodleColorCoding is turned on (in the enhancedNodeView subtree), noodles
are color coded and stippled (see “enhancedNodeView” below), based on the bit depth
and number of color channels being propagated by each noodle in your node tree.
When turned off, noodles appear using the default noodleColor.
Different combinations of color channels are represented by different colors, and this
group of parameters lets you customize the color used for each representation. For
more information on noodle color coding, see “Noodle Display Options” on page 224.
gridColor
When gridVisible is on, the grid in the Node View is drawn using this color.
enhancedNodeView
Unlike most other guiControls parameters, which have two toggle states (off and on),
each of the parameters in the enhancedNodeView subtree—showTimeDependency,
showExpressionLinks, showConcatenationLinks, and noodleColorCoding—has three
states. They can be always off, always on, or set to follow the state of the
enhancedNodeView parameter.
enhancedNodeView
This parameter allows you to toggle all four enhanced Node View parameters using a
single button. This parameter can also be toggled using the Enhanced Node View
command from the Node View shortcut menu (Control-E or Command-E). For more
information on the enhancedNodeView parameters, see “Using the Enhanced Node
View” on page 221.
showTimeDependency
This parameter, when turned on, draws a bluish glow around nodes that are animated.
This includes FileIn nodes that reference a QuickTime movie or multiple-image
sequence, nodes containing keyframed parameters, and nodes using expressions that
change a parameter’s value over time.
showExpressionLinks
Turning this parameter on draws a light purple line connecting a node that uses an
expression to reference another node, and the node it references. An arrow pointing
toward the referenced node indicates the relationship.
showConcatenationLinks
When this parameter is turned on, a green line connects a series of nodes that
concatenate together. For example, three transform nodes that have been added to a
node tree in sequence so that they concatenate appear linked with a green line
connecting the left edge of each node. As a result, nodes that break concatenation are
instantly noticeable.
Note: As is often repeated, node concatenation is a very good thing, and you are
encouraged to group nodes that will concatenate together whenever possible to
improve the performance and visual quality of your scripts.
noodleColorCoding
When noodleColorCoding is turned on, noodles are color coded and stippled based on
the bit depth and number of color channels being propagated by each noodle in your
node tree. When this parameter is turned off, noodles appear in the default
noodleColor.
stipple8Bit, stipple16Bit, stipple32Bit
Bit depths are represented by varying dotted (stippled) lines. These parameters let you
customize the stippling used for each bit depth supported by Shake, choosing from
five stippling patterns.
Application Environmental Variables
The default values of many of the global parameters can be customized via .h
preference files. For example, if you consistently set one or more global parameters to a
custom state whenever you create a new script, you can set custom defaults so that
new scripts are created with your preferred settings.
Custom global values you set with .h files are only applied to newly created scripts.
Once a script is saved, these values are saved within that script, and opening that script
recalls the global settings saved within.
Customizing Shake’s Default Settings
Unlike many applications that control user customizable settings with a preferences
window, Shake provides access to a wide variety of functionality using a system of
user-created preference files. For more information on creating and using custom
preference files, see Chapter 14, “Customizing Shake.”
Script Environmental Variables
The following are variables with states that are not customized via .h files, and are only
saved within Shake scripts.
Custom Variable Loading Order
When you create .h files to customize Shake’s functionality, there is an order of
precedence used to load them.
• The default Shake settings found in the nreal.h and nrui.h files are loaded first.
• Settings in custom .h files load second, according to their own load order.
• Settings in .user files are loaded third.
• Finally, any variables found in a Shake script itself are loaded last, overwriting all
previous settings.
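The precedence above amounts to successive overwrites, with later layers winning. The following Python sketch models that order only; the dictionaries, parameter names, and values are illustrative stand-ins, not actual Shake settings files:

```python
# Model of the .h / .user / script load order: later layers overwrite
# earlier ones. Values here are hypothetical, for illustration only.

defaults  = {"gridWidth": 40, "displayThumbnails": 1}   # nreal.h / nrui.h
custom_h  = {"gridWidth": 60}                           # custom .h files
user_file = {"displayThumbnails": 0}                    # .user files
script    = {"gridWidth": 80}                           # the script itself

settings = {}
for layer in (defaults, custom_h, user_file, script):
    settings.update(layer)

print(settings)  # {'gridWidth': 80, 'displayThumbnails': 0}
```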
Global Parameter | Type | Purpose
SetTimeRange(const char *range) | char | Range of frames displayed in the Time Bar.
SetTime(float /*frameNumber*/) | float | The frame number of the current position of the playhead.
SetFieldRendering(const char *mode) | char | Whether or not field rendering is turned on for rendered output.
SetFps(const char *fps) | char | The frame rate of the script.
SetMotionBlur(const char *mb, const char *st, const char *so) | char | Whether or not motion blur is turned on.
SetQuality(const char *quality) | char | The quality of the script; enables antialiasing for certain functions.
SetProxyScale(const char *proxyScale, const char *proxyRatio) | char | The default proxy scale of the script.
SetUseProxy(const char *useProxy) | char | The default proxy setting.
SetProxyFilter(const char *proxyFilter) | char | The default filter used to scale proxies.
SetPixelScale(const char *pixelScale, const char *pixelRatio) | char | A temporary setting for the proxy resolution that is overwritten when useProxy is set.
SetUseProxyOnMissing(const char *useProxyOnMissing) | char | When active, substitutes a proxy generated from an associated proxy file when a missing image is encountered.
SetFormat(const char *s) | char | The format that’s selected by default.
SetDefaultWidth(int width) | int | The width of Shake-generated images.
SetDefaultHeight(int height) | int | The height of Shake-generated images.
SetDefaultBytes(int bytes) | int | The bit depth of Shake-generated images.
SetDefaultAspect(float aspect) | float | The default aspect ratio of Shake-generated images.
SetDefaultViewerAspect(float aspect) | float | The value used to correct the image displayed in the Viewer to display nonsquare images.
SetTimecodeMode(const char *timecodeMode) | char | How timecode is calculated in your script.
SetMacroCheck(int mc) | int | Sets macroCheck. 1 = abort load, 2 = substitute with text, and 3 = substitute no text.
SetDisplayThumbnails(const char *displayThumbnails) | char | Whether or not to display thumbnails in the Node View.
SetThumbSize(const char *thumbSize) | char | Sets the size of thumbnails in the Node View.
SetThumbSizeRelative(const char *thumbSizeRelative) | char | Turns thumbSizeRelative off and on.
SetThumbAlphaBlend(const char *thumbAlphaBlend) | char | Turns thumbAlphaBlend off and on.
3 Adding Media, Retiming, and Remastering
This chapter covers adding media to your script using
FileIn nodes, either as individual files, or as media from
Final Cut Pro. Also discussed are the retiming and
remastering functions available from within the FileIn
node itself.
About Image Input
This section discusses importing images into a Shake script using the FileIn node. It also
presents other procedures associated with the FileIn node, including associated file
paths, temporary files and disk space, basic time shifting, and retiming footage. For
more information on supported file formats and other issues regarding media, see
Chapter 5, “Compatible File Formats and Image Resolutions,” on page 167.
Adding Media to a Script
Usually, the first step in any composite is to add one or more FileIn nodes, which import
or “read in” source media files on disk into Shake’s node tree. Although there are many
other image nodes that can be used to generate images directly within Shake (the
Checker, ColorWheel, Ramp, and RGrad nodes are four examples), image sequences and
QuickTime media files must be added to your script using the FileIn node. Each media
element read into the node tree requires a separate FileIn node.
To add media to your script:
1 Create a FileIn node by doing one of the following:
• Click the FileIn node in the Image tab.
• Right-click in the Node View, and then choose FileIn from the Nodes > Image
shortcut submenu.
• On Mac OS X, choose Tools > Image > FileIn from the menu bar.
2 When the File Browser appears, select one or more files, then click OK.
The selected media appears in the Node View, represented by one or more FileIn nodes.
For more information about finding and selecting files, see “The File Browser” on
page 38.
By default, FileIn nodes appear with a thumbnail of the first frame of the media they
represent. In this example, the two highlighted FileIn nodes at the top of the node tree
provide the source images that are modified and combined further down in the tree.
After you’ve finished creating the necessary effect in Shake, you export your finished
shot by attaching a FileOut node to the section of the node tree that you want to write
to disk. For more information on outputting images using the FileOut node, see
Chapter 12, “Rendering With the FileOut Node.”
Image Sequence Numbering
When referring to an image sequence, you can specify frame padding by adding
special characters to the file name you enter:
• The # sign signifies a four-place padded number.
• The @ sign signifies an unpadded number.
• The %d characters signify either a padded or unpadded number, depending on the
numbers placed between the two characters.
You can also use several @ signs to indicate padding to a different number. (For
example, @@@ signifies 001.)
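As an illustration of the padding rules above, the following Python sketch (not Shake code; `expand_name` is a hypothetical helper) expands each pattern style for a given frame number:

```python
# Illustrative expansion of Shake-style sequence patterns to file names.

def expand_name(pattern, frame):
    """Expand one sequence pattern for a given frame number."""
    if "#" in pattern:                       # '#' = 4-digit zero padding
        return pattern.replace("#", f"{frame:04d}")
    if "@" in pattern:                       # n '@' signs = n-digit padding
        n = pattern.count("@")
        return pattern.replace("@" * n, f"{frame:0{n}d}")
    return pattern % frame                   # '%d' / '%04d' printf-style

print(expand_name("image.#.iff", 1))     # image.0001.iff
print(expand_name("image.@.iff", 1))     # image.1.iff
print(expand_name("image.@@@.iff", 1))   # image.001.iff
print(expand_name("image.%04d.iff", 2))  # image.0002.iff
```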
Dragging and Dropping Media Into Your Script
If you’re running Shake on Mac OS X, you can drag supported media types from the
Finder directly into the Node View tab. This results in the creation of a FileIn node
corresponding to each file you dragged in.
The tables at the end of this section list some formatting examples.
The examples in the first table assume an exact relation between the current frame
processed in Shake and the frame read in; for example, at frame 1, image.1 is read in. If
you read the images in from the interface with Sequence Listing enabled in the File
Browser, you see the actual sequence in the “File name” field.
The examples in the second table instead offset the clip timing, placing image.4.iff at
frame 1 and image.5.iff at frame 2. In the first example, image.10.iff is placed at frame 7.
In the second example, image.10.iff is placed at frame 4. All sequence gaps are ignored.
To offset or retime a clip, use the Timing subtree in the FileIn parameters or the Time
View tab.
When reading in an image, the File Browser allows you to specify whether the first
image of the sequence (suppose the sequence starts at frame 20) is placed at frame 1,
at the start frame (in this example, 20), or at the current frame.
Referring to Media Using File Paths
Shake can read local or absolute file paths. For example, with a machine named
“myMachine,” suppose you had a directory structure like the following:
/shots/my_directory/my_image.iff
/shots/scr/my_script.shk
The script can access my_image.iff in the following ways:
FileIn1 = FileIn("../my_directory/my_image.iff");
FileIn2 = FileIn("/shots/my_directory/my_image.iff");
FileIn3 = FileIn("//myMachine/shots/my_directory/my_image.iff");
Note: Local file paths in a script are local to where the script is located, not from where
Shake is started.
Shake Format | Reads
image.#.iff | image.0001.iff, image.0002.iff
image.%04d.iff | image.0001.iff, image.0002.iff
image.@.iff | image.1.iff, image.2.iff
image.%d.iff | image.1.iff, image.2.iff
image.@@@.iff | image.001.iff, image.002.iff
image.%03d.iff | image.001.iff, image.002.iff
Image | Shake Syntax With Sequence Listing
image.4.iff, image.5.iff ... image.10.iff | image.4-10@.iff
image.4.iff, image.5.iff, image.6.iff, image.10.iff | image.4-6,10@.iff
When Shake reads in an image, it converts the file path of the image to the UNC naming
convention. This labels the machine name first, and then the file path. The third listing
above is an example of this convention. This behavior can be turned off in a preferences
file. For more information, see “Customizing File Path and Browser Controls” on
page 371.
Shake looks for its files in the User directory ($HOME) when launched from the
application icon, or the current directory if launched from the Terminal. This affects
how manually entered paths for FileIns are read.
• If the image paths are local (for example, "ImagesDirectory/image.#.iff"), images are
read relative to where the script is saved.
• If paths are global (for example, "//MyMachine/MyBigHardDisk/ImagesDirectory/
image.#.iff"), then images have no relation to where the script is saved, and thus the
script may be easily moved into different directories.
If the script and the image are on different disks, you must specify the proper disk—
local file paths do not work.
• For a URL address, place a // in front of the path. To read from another computer,
write //myMachine/Drive/Directory/etc.
Using the FileIn (SFileIn) Node
The FileIn node is used to read in media from disk for processing by your script.
Note: Although labeled “FileIn” in the Tool tab, this node actually represents the more
advanced SFileIn function. The SFileIn node includes internal features not available in the
older FileIn node used in previous versions of Shake. The enhanced functions include
FIBlend, FINearest, FIPullUpDown, and IRetime. These functions are “internal” because they do
not appear in the Shake interface, but are saved inside of each script. For the purposes of
this manual, unless otherwise stated, “FileIn” refers to the enhanced “SFileIn” functionality.
The FileIn and SFileIn functions include:
• FileIn: A pre-v2.5 function that is convenient for scripting given its brevity; it can only
be shifted in time with IRetime.
• SFileIn: From v2.5 and later, this node can be hooked up to have preset proxies,
shifted with IRetime, or modified by FIBlend, FINearest, or FIPullUpDown.
• FIBlend: This invisible function does non-linear retiming of a sequence, blending
frames together. It modifies SFileIn only.
• FINearest: This invisible function does non-linear retiming of a sequence with no
frame blending. It modifies SFileIn only.
• FIPullUpDown: Does pullup or pulldown operations on an SFileIn node.
• FISpeed: Similar to FIBlend, except that instead of a curve, you control the speed with
a slider (for example, 2x, .5x, and so on).
• IRetime: Sets the start/stop frame of a clip, can slip sync, and controls how the clip
behaves for frames outside of the frame range. Works for FileIn and SFileIn.
FileIn Source Parameters
The parameters for each FileIn node are divided into subtabs: the Source tab and the
Timing tab.
The Source tab, located on the left side of the Parameters tab, contains the following
controls:
ImageName
Specifies the local or absolute path to the image or media file on disk.
Note: You can get away with supplying only the imageName and omitting the other
path information within a script.
The ImageName subtree contains the following subparameters:
• BaseFile: Specifies the original, high-resolution image sequence or movie file
associated with this node. Opening this subtree reveals one additional parameter:
• baseFileType: A subparameter of BaseFile that tells Shake what the format is if the
file suffix is ambiguous. Generally, you do not need to set this, as “Auto”
automatically detects the format. If you have a problem reading an image, try
setting the format to correct this.
• ProxyXFile: Additional parameters appear if you have created proxies to use in this
script. For more information on using proxies, see Chapter 4, “Using Proxies,” on
page 137.
firstFrame
Lets you trim the beginning of a clip.
lastFrame
Lets you trim the end of a clip.
Note: QuickTime clips do not display either the firstFrame or lastFrame parameters.
increment
This parameter controls how frames in the referenced image sequence are advanced,
providing an unsophisticated method for retiming the clip by either skipping or
multiplying frames being read in from an image sequence.
• The default value of 1 means that every frame plays back, and the clip’s duration in
the script is identical to its duration on disk.
• A value of 2 or higher means that frames are skipped. At 2, every other frame is
skipped, and the clip duration is halved.
Note: QuickTime clips do not display this parameter.
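The effect of increment can be modeled as a simple frame mapping. This Python sketch is illustrative only (`source_frame` is a hypothetical helper, not a Shake function):

```python
# Map a script frame to the source frame of an image sequence, per the
# increment behavior described above (illustration, not Shake API).

def source_frame(script_frame, first_frame=1, increment=1):
    return first_frame + (script_frame - 1) * increment

# increment = 2: every other source frame is skipped, so the clip's
# duration in the script is effectively halved.
print([source_frame(t, 1, 2) for t in range(1, 5)])  # [1, 3, 5, 7]
```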
autoAlpha
If this parameter is on (1), then Shake creates a solid alpha channel for images not
containing an alpha channel. If the image already has an alpha channel, this parameter
is ignored.
deInterlacing
This parameter is to be used when importing interlaced images that you intend to
render with fieldRendering on. When deInterlacing is on, Shake takes the odd or even
field of the image (counted from the top) and copies it over the other remaining field.
It then does the same thing half a frame forward. You are therefore left with two
images the same height as your input image, but squeezed into the same time frame.
For more information, see Chapter 5, “Compatible File Formats and Image Resolutions,”
on page 167.
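The field-copy operation described above can be sketched in Python (illustrative only; lists of rows stand in for scanlines, and an even row count is assumed, as with interlaced frames):

```python
# Keep one field's scanlines and copy them over the other field's rows,
# modeling the deInterlacing behavior described above (not Shake code).

def deinterlace(rows, keep="even"):
    if keep == "even":
        # even rows (0, 2, ...) kept; each odd row becomes a copy
        return [rows[i if i % 2 == 0 else i - 1] for i in range(len(rows))]
    # odd rows (1, 3, ...) kept; each even row becomes a copy
    return [rows[i if i % 2 == 1 else i + 1] for i in range(len(rows))]

print(deinterlace(["r0", "r1", "r2", "r3"]))  # ['r0', 'r0', 'r2', 'r2']
```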
force8Bit
This parameter appears for FileIn nodes reading in QuickTime media. When force8Bit is
turned on, 10- and 16-bit media is downsampled to 8-bits per channel.
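Downsampling to 8 bits per channel amounts to reducing each channel's numeric range. One common way to do this, assumed here for illustration (not necessarily Shake's exact method), is to keep the high byte of each 16-bit value:

```python
# Reduce a 16-bit channel value (0-65535) to 8 bits (0-255) by keeping
# the high byte. Illustrative sketch, not Shake's implementation.

def to_8bit(v16):
    return v16 >> 8

print(to_8bit(65535), to_8bit(32768), to_8bit(0))  # 255 128 0
```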
Paths
Both FileIn and SFileIn recognize local paths, absolute paths, environment variables, and
URL paths:
• Absolute Path: /usr/people/myDirectory/myimage.iff
• Local Path: ../myimage.iff
• Environment Variables: $myimages/myimage.iff
• URL Address: //wherever/usr/people/myDirectory/myimage.iff
Note: For more information on variables, see “Environment Variables for Shake” on
page 393.
Missing Frames in an Image Sequence
If one or more frames from an image sequence is missing, Shake handles this in one of
two ways, depending on the file name format specified in the imageName parameter
of the FileIn node.
• If the file name format is filename.1-30#.tiff, Shake expects an uninterrupted sequence
of frames to exist on disk. If individual frames are accidentally deleted or moved from
the specified path, each missing frame results in a gap in the image sequence in
Shake. Each gap results in a black frame being displayed in the Viewer. For example,
if frame 17 goes missing after a tiff sequence has been imported, moving the
playhead to frame 17 in the Time Bar displays a black frame in the Viewer.
• If frame 17 is already missing on disk when you first select the image sequence, the
File Browser will show a segmented frame range, and the resulting imported image
sequence will appear as a continuous, unbroken frame sequence. For example, if
frame 17 is missing in a selected tiff sequence, its name appears as
filename.1-16,18-30#.tiff. Scrubbing to frame 17 in the Time Bar displays frame 18
instead, because the sequence name lets Shake know to skip that frame and close the gap.
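The segmented range syntax can be modeled with a small parser. This Python sketch is illustrative (`parse_frame_ranges` is a hypothetical helper, not part of Shake):

```python
# Parse a segmented frame-range string such as "1-16,18-30" into the
# list of frames actually on disk (illustration, not Shake code).

def parse_frame_ranges(spec):
    frames = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            frames.extend(range(lo, hi + 1))
        else:
            frames.append(int(part))
    return frames

print(parse_frame_ranges("1-3,5"))  # [1, 2, 3, 5]
# "1-16,18-30" yields 29 frames; frame 17 is simply absent.
```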
Unlinked Files
If, for any reason, a FileIn node cannot find any of the media it was originally linked to,
that node becomes unlinked. This happens with both image sequences and QuickTime
files. Unlinked FileIn nodes are red in the Node View.
In the Source tab of the Parameters tab, the imageName parameter field of an unlinked
FileIn node also turns red. The original path name still appears.
FileIn nodes can become unlinked if the media they originally referenced has been
moved to another directory or volume, renamed, or deleted. FileIn nodes can also
become unlinked if the Shake script has been moved to another machine. In any case,
FileIn nodes can be easily relinked to the original source media at any time.
To relink a FileIn node to the original files:
1 Load the FileIn node’s parameters into the Parameters tab by double-clicking the node,
or clicking on the right side of the node once.
2 Click the File Browser icon in the ImageName parameter.
3 Use the File Browser to find the original media files, then click OK.
Note: The File Name field displays the name of the file that was originally linked to that
FileIn node.
FileIn Timing Parameters
The second subtab under the FileIn Parameters tab is the Timing tab. The parameters in
this tab control the timing of image sequences and movie files used in your Shake
script. Many of these timing parameters, including timeShift, inPoint, and outPoint, can
also be manipulated directly in the Time View.
The Timing tab also has options for retiming images, creating fast, slow, or
variable-speed motion effects. These are covered in more detail in “Retiming” on page 117.
To access the timing parameters of a FileIn node:
1 Load the selected FileIn node’s parameters into the Parameters tab by double-clicking
it, or clicking the icon on the right side of the node.
2 Click the Parameters1 tab, and then click the Timing tab to reveal the timing
parameters.
Parameters in the Timing Tab
The Timing tab has the following parameters:
timeShift
Slides the entire duration of the image sequence or movie forward or backward in
time.
inPoint
Lets you extend the duration of a clip at its beginning, to loop or freeze the sequence.
outPoint
Lets you extend the duration of a clip at its end, to loop or freeze the sequence.
inMode
If media has been time-shifted or the In point changes so that there are blank frames
prior to the first frame of the media in the Time View, this parameter lets you set how
those empty frames should be filled.
outMode
This parameter lets you set how the empty frames after the last frame of the media in
this FileIn node are filled.
Both the inMode and outMode parameters have the options listed in the table at the
end of this section.
pulldown
Lets you introduce or remove 3:2 pulldown from the media referenced by this FileIn
node.
reTiming
The reTiming parameter provides options for changing the speed of clips. By default,
this is set to none, and the media remains at its original speed. For more information
on retiming, see “Retiming” on page 117.
For more information on shifting clips, see “Adjusting Image Nodes in the Time View”
on page 263.
Name | Notes | Example (assumes a 5-frame sequence)
Black | All frames before or after the image frames are black. | 1, 2, 3, 4, 5, black, black, black...
Freeze | The first and last frames are repeated before and after the clip. | 1, 2, 3, 4, 5, 5, 5, 5...
Repeat | The sequence is repeated, looping the image sequence from the first frame. | 1, 2, 3, 4, 5, 1, 2, 3...
Mirror | The sequence is repeated, looping the image sequence by flipping the order each time. The first and last frames are not doubled up. | 1, 2, 3, 4, 5, 4, 3, 2...
InclusiveMirror | The sequence is repeated, looping the image sequence by flipping the order each time. The first and last frames are shown twice in a row. | 1, 2, 3, 4, 5, 5, 4, 3...
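The out-of-range behaviors in the table above can be sketched as a frame-lookup function. This is a Python model of the described behavior, not Shake code (`out_of_range_frame` is a hypothetical helper; None stands for a black frame):

```python
# Given a clip of `length` frames (1-based) and a requested frame `t`,
# return the source frame each inMode/outMode option would display.

def out_of_range_frame(t, length, mode):
    if 1 <= t <= length:
        return t
    if mode == "black":
        return None                      # black frame
    if mode == "freeze":
        return 1 if t < 1 else length    # hold first/last frame
    if mode == "repeat":
        return (t - 1) % length + 1      # loop from the first frame
    if mode == "mirror":
        period = 2 * (length - 1)        # endpoints not doubled
        i = (t - 1) % period
        return i + 1 if i < length else period - i + 1
    if mode == "inclusiveMirror":
        period = 2 * length              # endpoints shown twice
        i = (t - 1) % period
        return i + 1 if i < length else period - i
    raise ValueError(mode)

# Frames 6-8 of a 5-frame clip, e.g. mirror -> [4, 3, 2]
for mode in ("black", "freeze", "repeat", "mirror", "inclusiveMirror"):
    print(mode, [out_of_range_frame(t, 5, mode) for t in (6, 7, 8)])
```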
Pulldown and Pullup
3:2 Pulldown is a technique to temporally convert the frame rate of noninterlaced film
footage to that of video, and back again. The pulldown parameter in the Timing tab of
SFileIn allows you to manage your pulldown/pullup of a sequence. There are two
options:
30 to 24
This option removes pulldown from a media file that has been telecined to 30 fps. Use
this setting to return it to 24 fps for compositing in Shake.
24 to 30
This option converts 24 fps film footage to 30 fps—adding 3:2 pulldown.
dominance
When you select either option, an additional dominance parameter appears which lets
you select the field dominance of the output.
More About 3:2 Pulldown
Film uses frames, running at 24 frames per second (fps). Video uses interlaced fields,
with the frame rate of NTSC video running at 29.97 fps, and the frame rate of PAL video
running at 25 fps. To convert a film sequence into a video sequence, you need to split
the film frames into fields, and double up two out of every five frames in order to make
24 film frames fill the space of 30 video frames per second. To use the classic graph:
The third and fourth frames have fields that blend to stretch time. It’s called 3:2
because you have three solid frames and two mixed frames.
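The cadence above can be sketched programmatically. This Python model (not Shake code; `pulldown_32` is a hypothetical helper) maps groups of four film frames to five video frames of two fields each:

```python
# Model of the 3:2 pulldown cadence: four film frames (A B C D) become
# five video frames, each written as a (field1, field2) pair.

def pulldown_32(film_frames):
    video = []
    for i in range(0, len(film_frames), 4):
        a, b, c, d = film_frames[i:i + 4]
        video += [(a, a), (b, b), (b, c), (c, d), (d, d)]
    return video

print(pulldown_32(["A", "B", "C", "D"]))
# [('A', 'A'), ('B', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'D')]
```

Twenty-four film frames expand to thirty video frames, which is exactly the 24 fps to 30 fps conversion the pulldown parameter performs.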
To fully reconstruct the original four film frames (in time, not resolution—the original
resolution is already lost), you must extract the field data from the five video frames.
But there is usually a complication—when you receive your footage, it has probably
already been edited. As a result, there is no guarantee that frames 3 and 4 are the
mixed frames because all of the clips have been shifted in the edit. As a result, you
need to determine what the first frame is.
To determine the first frame of an image sequence with 3:2 pulldown:
1 Double-click the FileIn node to load its image in the Viewer and its parameters into the
Parameters1 tab.
2 In the Time Bar, move the playhead to the first frame in the sequence, and scrub
through the first five frames while looking at the Viewer to determine the first frame
that shows two fields from different frames that are blended together.
[Figure: 1/6 of a second equals 4 film frames (A B C D) or 5 video frames (AA BB BC CD DD)]
3 Choose the firstFrame value that corresponds to this frame number in the following
chart:

First Frame With Field Blending    firstFrame Setting
1                                  BC
2                                  BB
3                                  AA
4                                  DD
5                                  CD

4 Click the Timing tab in the Parameters1 tab, and set the firstFrame parameter according
to the table in step 3.
Note: If the first several frames in a shot are a solid color and you are unable to
determine the first mixed frame, jump forward to a range of frames that displays the
blending, and try firstFrame values until the field artifacts disappear.
Reintroducing 3:2 Pulldown After Transforming Media
Removing pulldown from an image prior to transforming it within a Shake script is
simple. If you need to reintroduce it after the transformation, however, this must be
performed in a second step with another script.
After removing the original 3:2 pulldown and performing all necessary transformations
and other effects in the initial script, render out the result as a 24 fps media file.
Afterwards, read the resulting media file into a second script using a FileIn node, adding
3:2 pulldown back to the shot by clicking the 24 to 30 option in the pulldown
parameter of the FileIn Timing tab.
You can also simply process the output via a command-line render, using the following
commands:
• -pulldown [offset]
• -pullup [offset]
For more information on processing media from the command line, see Chapter 3,
“Adding Media, Retiming, and Remastering,” on page 107.
Retiming
You can also squeeze, stretch, or nonlinearly retime your clip when you activate
reTiming in SFileIn parameters. The reTiming Parameter can be set to Speed or Remap.
The reTiming parameter has four options:
• None: No retiming is applied, and the clip plays at its original speed.
• Speed: Lets you change the speed of a clip using a simple multiplier. For example, a
value of .5 returns a clip twice the length of the original, playing in slow motion.
• Remap: Allows you to change the speed of a clip with a curve, with the X axis
representing the input frame number and Y representing the frame it’s remapped to.
• Convert: The Convert option of the reTiming parameter provides an advanced
processing method for converting media to other formats. For more information on
using the Convert option for format conversion, see “Remastering Media” on
page 126.
For example, a Remap curve can create an ease-in effect, slowing the first part of a clip
and accelerating it as the clip nears the end.
Both the Speed and Remap options can smooth the effect of slow-motion strobing by
blending frames using either the Blend or Nearest option in the retimeMode pop-up
menu. Blend averages frames together and Nearest takes the frame right below the
specified frame. For example, if frame 5.7 is needed, frame 5 is used.
The Convert option provides a high-quality method of processing speed changes.
Speed Parameters
If you select the Speed option, the following parameters appear:
Speed
The speed of the clip is multiplied by this value. For example, a value of .5 slows the clip
to 50 percent of its original speed. A value of 2 doubles the speed of the clip.
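Conceptually, a Speed retime maps each output frame back to a fractional source frame, which the retimeMode then resolves. A minimal Python sketch of that mapping (illustrative only, not Shake's code):

```python
import math

def retime_speed(num_output_frames, speed, mode="nearest"):
    """Map output frames to source frames for a simple Speed retime.
    Returns a source frame number per output frame (nearest), or a
    (low frame, high frame, blend weight) triple (blend)."""
    result = []
    for out_frame in range(1, num_output_frames + 1):
        src = 1 + (out_frame - 1) * speed   # fractional source frame
        if mode == "nearest":
            result.append(math.floor(src))  # e.g. frame 5.7 -> frame 5
        else:  # "blend": mix the two neighboring frames
            lo, hi = math.floor(src), math.ceil(src)
            result.append((lo, hi, src - lo))
    return result

# speed 0.5 -> twice as long; with "nearest", source frames repeat
print(retime_speed(6, 0.5))  # [1, 1, 2, 2, 3, 3]
```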
retimeMode
By default, you are given three options for frame blending:
Nearest
No frame blending is applied, and Shake simply uses a copy of the nearest available
frame to fill the new in-between frames.
Blend
Averages neighboring frames together to create in-between frames that are a
combination of both to soften the strobing effect that can result from slow motion
effects. If the retimeMode parameter has been set to Blend, three additional
parameters appear underneath.
• retimeBytes: Affects frame blending. When multiple frames are blended together,
setting retimeBytes to a higher bit depth results in a more accurately calculated
image, because the math is done at that higher bit depth. For example, if a blending
mode blends many frames together and two of them happen to be weighted at 50
percent, in 8 bit you get .504 or .496; in a 16-bit calculation, you can get much closer
to exactly .5.
• weight: A gamma curve is applied to source frames that are blended together in
order to create the in-between frame. A weight of 0 means each source frame
needed to create the in-between frame contributes equally, while a higher gain, such
as 2, causes the center frame to give the greatest contribution and frames farther
away proportionately less.
• range: Controls how many frames are blended together to create the final result. For
example, if you want to extend a source clip of 20 frames to 40 frames, each source
frame is applied to two output frames. With a range of 2, it is applied in four output
frames, resulting in more blending. If you apply only this value with no other
modifications, Shake inserts repetitions of neighboring frames to help you with
degraining.
Note: The FileIn node has been written so that it’s possible to create a custom frame-blending algorithm, if you happen to have a spare developer.
Adaptive
This option in the retimeMode pop-up menu uses advanced image analysis to
generate new in-between frames, creating seamless slow and fast-motion effects.
Selecting Adaptive reveals additional parameters.
• Motion: Two options determine the trade-off between image quality and processing
time—Fast and Best. Fast makes motion interpolations using a mesh warp, and works
well in most situations. Best, the most processor-intensive mode, uses motion vectors
to track each pixel from frame to frame, interpolating new frames.
• DeInterlacing: Similar to the Motion parameter, two options let you determine the
trade-off between image quality and processing time—Fast and Good. These
settings represent two levels of mesh warping used to interpolate data from both
fields of each frame. Here are some tips for using the DeInterlacing parameter:
• Good is actually a better setting for frames without any moving subjects.
• If you’re working with standard definition video, there is no difference between
using Fast or Good.
• The DeInterlacing operation can also be used to improve clips that were poorly
deinterlaced in other applications prior to importing into Shake.
Fast versus Best Settings for Adaptive Retiming
When setting up an adaptive timing operation, you might be tempted to simply
choose Best across the board for every parameter. This would probably be a
mistake—producing dramatically longer render times in exchange for a potentially
undetectable increase in quality, especially at higher resolutions.
That said, every clip's requirements are different. One group of settings is unlikely to
produce equal results for shots with widely different subjects and movement. You’re
encouraged to do some limited tests prior to committing yourself to a particular
group of retiming settings. As you experiment with different settings, be sure you
always compare output from the Fast settings to that from the Better or Best settings,
to make sure that it’s worth committing yourself to the more computationally
intensive Best settings.
• AlwaysInterpolate: With AlwaysInterpolate turned off, the final result of a retiming
operation is a mix of original, unprocessed frames, and interpolated frames. For
example, setting the speed to 0.5 to slow an image sequence down by 50 percent
results in an alternating series of original frames and interpolated frames. In cases
where there is a significant visual difference (softening, for example) between the
unprocessed and reprocessed frames resulting from a retiming operation, the image
sequence may appear to flicker. Turning on AlwaysInterpolate forces Shake to
process every single frame in a retimed sequence, resulting in a series of frames with
consistent visual quality.
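Why the 0.5-speed example alternates can be seen by checking which output frames land exactly on a source frame (illustrative Python, not Shake's code):

```python
# With AlwaysInterpolate off, only output frames that land exactly on a
# source frame stay unprocessed; the rest must be interpolated.
def frame_kinds(num_out, speed):
    kinds = []
    for out in range(1, num_out + 1):
        src = 1 + (out - 1) * speed
        kinds.append("original" if src == int(src) else "interpolated")
    return kinds

print(frame_kinds(6, 0.5))
# ['original', 'interpolated', 'original', 'interpolated', 'original', 'interpolated']
```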
If the Motion parameter is set to Best, three additional parameters become available:
• BackwardFlow: Turning on BackwardFlow evaluates the flow of images in both
directions in order to generate interpolated in-between frames. This mode is usually
visually superior, but significantly slower.
• FlowSmoothness: Higher or lower values may improve the quality of interpolated
frames, depending on the shape of the subject in the image.
• Use low values to improve the quality of subjects in the frame that change shape,
like a person or animal that’s running or jumping.
• Use high values to improve the quality of static objects that don’t change shape,
such as trees, buildings, or cars.
• FlowPrecision: This is the last parameter you should adjust, after obtaining as much
quality as possible with all of the above settings. Increasing the value of this
parameter increases the overall precision of the Adaptive retiming operation by
increasing the resolution at which the optical flow is estimated. A value of 0 is fine for
most situations.
Remap Parameters
If you select the Remap button in the reTiming parameter, the following additional
parameters appear:
• retimeMode: By default, you are given two options for frame blending:
• Nearest: No frame blending is applied, and Shake simply uses a copy of the nearest
available frame to fill the new in-between frames.
• Blend: Averages neighboring frames together to create in-between frames that are
a combination of both to soften the strobing effect that can result from slow
motion effects.
• Adaptive: An interpolation method for retiming that uses advanced image
processing to generate new in-between frames. This is the highest-quality method
for many images, and is the most processor-intensive. With the retimeMode
parameter set to Adaptive, two additional parameters appear to let you adjust the
trade-off between quality and processing time. For more information, see the
section on Adaptive parameters on page 120.
If the retimeMode parameter has been set to Blend, two additional parameters
appear underneath.
• weight: A gamma curve is applied to the mixture of source frames that are
blended together in order to create the in-between frame. A weight of 0 means
that each source frame needed to create the destination contributes equally, while
a higher gain, such as 2, causes the center frame to give the greatest contribution
and frames farther away proportionately less.
• range: Controls how many frames should be blended together to create the final
result. For example, if you want to extend a source clip of 20 frames to 40 frames,
each source frame contributes to two output frames. With a range of 2, each
source frame contributes to four output frames, resulting in more blending. If you
only apply this value with no other modifications, you get repetitions of
neighboring frames to help you with degraining.
Note: The FileIn node has been written so that it’s possible to create a custom frame-blending algorithm, if you happen to have a spare developer sitting around.
• retimeBytes: This parameter affects frame blending. When multiple frames are
blended together, setting retimeBytes to a higher bit depth will result in a more
accurately calculated image by doing the math at a higher bit depth. For example, if
you have a blending mode that is blending together many frames, and two of them
happen to be at 50 percent, in 8-bit, you get .504 or .496. In a 16-bit calculation, you
can get much closer to exactly .5.
• startFrame, endFrame: Specifies the frame range used for retiming calculations.
• Curve Graph: The retime parameter appears in a graph within the Parameters tab. The
X axis represents the input frame number, and the Y axis represents the frame it is
remapped to.
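The remap curve's role can be sketched as a piecewise-linear lookup from input frame to source frame (illustrative Python; Shake uses its own spline types):

```python
def remap_frame(frame, keyframes):
    """Piecewise-linear remap curve: keyframes are (input_frame, source_frame)
    pairs; returns the source frame used at the given input frame."""
    keyframes = sorted(keyframes)
    # clamp outside the keyframed range
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (x0, y0), (x1, y1) in zip(keyframes, keyframes[1:]):
        if x0 <= frame <= x1:
            t = (frame - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# an ease-in style curve: slow at the start, faster toward the end
keys = [(1, 1), (50, 20), (100, 100)]
print(remap_frame(75, keys))  # 60.0
```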
The TimeX Node
You can also retime a clip using the TimeX node, located in the Other tab. This node lets
you use mathematical rules to change timing on the input clip.
By default, the value of the newTime parameter is the expression time—which is the
frame at the current location of the playhead in the Time Bar. Using the time expression
makes no change. Typically, you’ll modify this expression in order to remap frames from
their original position in the clip, to create new timings.
Understanding the Retiming Parameters
If you are having difficulty understanding the multitude of Retiming parameters, copy
this string, paste it into the Node View, and then render out 100 frames.
Text1 = Text(300, 100, 1, "%f", "Courier", 44.3, xFontScale, 1,
    Hermite(0,[10,76.08,76.08]@1,[261,76.08,76.08]@100),
    Hermite(0,[90,-34.69,-34.69]@1,[30,-34.69,-34.69]@100),
    0, 2, 2, 1, 1, 1, 1, 0, 0, 0, 45, 0, 1);

Then, read in the rendered clip and test the retiming.
Parameters
The TimeX node has one parameter in the Parameters tab:
newTime
This parameter defaults to a spline that maps every input frame to a corresponding
frame in time, such that the clip plays forward normally at 100 percent speed. Typically,
you’ll enter a new expression into this field, using the expression time, to remap the
frames of the image sequence or movie file to create new timing effects.
Similar to Lookup and ColorX, you can duplicate most other Time functions with TimeX.
These other functions are simply macros that include TimeX. They are included because
TimeX can be counter-intuitive.
A more complex example is to animate 360 3D frames with an animated light. The light
is from the right at frame 0, from the top at frame 90, from the left at frame 180, and so
on. You then position a fake light source in the composite. By figuring out the angle of
the light to the 3D element (using trigonometry), you can pick one of the 360 input
frames to fake the lighting change.
Timing Expression      Explanation
time-10                Shifts the clip 10 frames forward. While processing frame 50, it
                       reads input frame 40.
101-time               Reverses the clip, assuming frame 100 is the last frame. At
                       frame 1, frame 100 is used.
time%10+1              Loops every 10 frames. Takes the remainder of time/10, and adds 1
                       (otherwise frame 10 = 0).
time>10?10:time        A conditional expression. Freezes the clip at frame 10. Any frame
                       before that is processed normally.
time                   Does nothing.
100 (or any integer)   Picks one frame. In this example, at every frame, the node returns
                       100, so only input frame 100 is used.
time*2                 Doubles the rate. In this expression, at frame 10, frame 20 is used.
"CSpline(0, 1@1, 30@25, 40@50, 90@75, 100@100)"
                       Uses a curve to speed the clip up and down. The arbitrary curve
                       shown here returns different frame values. You can use any spline
                       type, with as many keyframes as you want.
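The table's expressions translate directly into ordinary arithmetic on the frame number; an illustrative Python rendering (not Shake syntax):

```python
# Python equivalents of the newTime expressions above: each maps the
# current output frame ("time") to the input frame that gets read.
expressions = {
    "time-10":         lambda time: time - 10,                   # shift 10 frames
    "101-time":        lambda time: 101 - time,                  # reverse a 100-frame clip
    "time%10+1":       lambda time: time % 10 + 1,               # loop every 10 frames
    "time>10?10:time": lambda time: 10 if time > 10 else time,   # freeze at frame 10
    "time*2":          lambda time: time * 2,                    # double the rate
}
for name, fn in expressions.items():
    print(name, "at frame 12 reads frame", fn(12))
```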
Multiple Branches
You can only have one branch traced up to the FileIn with a TimeX in it. To get
multiple time shifts on the same clip, copy the FileIn. Note also that FileIn has timing
controls of its own that you may find easier to use.
Manual Manipulation of Time
This section explains the notation Shake uses for a FileIn node, and the available FileIn
options. It also discusses the notation for the timeRange parameter in the Globals tab,
or the -t option on the command line. For a discussion of the interactive controls of
time, see Chapter 8, “Using the Time View,” on page 261, and “Using the FileIn (SFileIn)
Node” on page 110.
Time Notation for a FileIn
This section focuses on manual manipulation of time. For most interactive time
manipulation, Shake relies on the Time View and its associated timing subtree in the FileIn
parameters. You can manipulate time in other ways, specifically on the command line.
When Shake reads in a clip, it inserts the start and end frame of the clip in the clip
name, and gives an indication of the padding style, denoted here with the number
sign #:
image.1-50#.iff
In the above example, this notation indicates that only frames 1 through 50 are loaded,
even though there may be more files. The other frames are black when read in with the
default settings.
Shake puts the start of the range at frame 1. If you have:
image.20-50#.iff
at frame 1, image.0020.iff is read.
You can also shift the clip to frame 20 in the Time View of the interface.
Shake can recognize a series of frames when reading in a file without using the clip
range. When looking at a sequential series of files, use a placeholder in the file name to
represent the frame number. This placeholder is typically either a # sign (padded
images, image.0001.iff, image.0002.iff, and so on.) or an @ sign (unpadded images,
image.1.iff, image.2.iff, and so on). If your numbers are padded to a width other than four
digits, you can substitute a matching number of @ signs.
You can also use the %d placeholder, which specifies the number of digits used for
padded frame numbers. For example, image.%03d.iff produces padding of three
digits (image.001.iff, image.002.iff, and so on).
The following are some examples of frame number placeholders:
Shake Format      Reads/Writes
image.#.iff       image.0001.iff, image.0002.iff
image.%04d.iff    image.0001.iff, image.0002.iff
image.@.iff       image.1.iff, image.2.iff
image.%d.iff      image.1.iff, image.2.iff
image.@@@.iff     image.001.iff, image.002.iff
image.%03d.iff    image.001.iff, image.002.iff
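The placeholder styles can be sketched as a small expansion function (illustrative Python; Shake's own parser handles more cases):

```python
import re

def pattern_to_filename(pattern, frame):
    """Expand a Shake-style frame placeholder into a concrete filename.
    '#' pads to 4 digits, a run of '@' pads to its own length, and
    printf-style '%d'/'%0Nd' are handled by Python's % formatting."""
    if "#" in pattern:
        return pattern.replace("#", f"{frame:04d}")
    match = re.search(r"@+", pattern)
    if match:
        width = len(match.group())
        return pattern.replace(match.group(), str(frame).zfill(width))
    return pattern % frame  # e.g. image.%04d.iff

print(pattern_to_filename("image.#.iff", 2))    # image.0002.iff
print(pattern_to_filename("image.@@@.iff", 2))  # image.002.iff
print(pattern_to_filename("image.%d.iff", 2))   # image.2.iff
```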
Time Notation: Setting the Script Range
The script range can be set in the timeRange field of the Globals tab, or on the batch
command line with the -t option, which overrides the script.
The range description is extremely flexible. The following are some examples:

Time Range             Number of Frames   Frames Rendered
1-100                  100                1, 2, 3... 100
1-100x2                50                 1, 3, 5... 99
1-100x20               5                  1, 21, 41... 81
1-20, 30-40            31                 1, 2, 3... 20, and 30, 31, 32... 40
1-10x2, 15, 18, 20-25  13                 1, 3, 5... 9, 15, 18, 20, 21, 22... 25
100-1                  100                100, 99, 98... 1
To set time range in the command line when rendering a script, use the -t option:
shake -exec my_script.shk -t 50-60 -v
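The range notation itself can be sketched as a small parser (illustrative Python; it handles forward ranges, step values, and comma lists, but not reversed ranges like 100-1):

```python
def expand_time_range(spec):
    """Expand a Shake-style time range such as '1-10x2, 15, 18'
    into the list of frames rendered."""
    frames = []
    for part in spec.replace(" ", "").split(","):
        if "-" in part:
            rng, _, step = part.partition("x")        # '1-10x2' -> '1-10', '2'
            start, end = (int(n) for n in rng.split("-"))
            frames.extend(range(start, end + 1, int(step) if step else 1))
        else:
            frames.append(int(part))                  # single frame
    return frames

print(expand_time_range("1-10x2, 15, 18, 20-25"))
# [1, 3, 5, 7, 9, 15, 18, 20, 21, 22, 23, 24, 25]
```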
For command line examples of time manipulation, see “Frequently Used Functions” on
page 1023 of Appendix B, “The Shake Command-Line Manual.”
For more information on using the Time View, see Chapter 8, “Using the Time View.”
Remastering Media
The Convert option of the reTiming parameter provides a method for converting media
from one format to another using advanced image processing to rescale and retime
the incoming media. For example, if you have a high definition image sequence that
you want to convert into a standard definition image sequence, or a PAL clip that you
need to change to NTSC, the Convert option provides the tools to do so.
Choosing Convert reveals a series of parameters within the FileIn node that allow you
to change the frame rate, resize the output resolution, anti-alias and sharpen the
resulting image, and deinterlace the media being referenced by that FileIn. These
options provide the highest-quality means of resizing and deinterlacing available in
Shake, with results that are superior to the transform nodes that are available from the
Tool tabs. These options are only available within the FileIn node.
You can use these options to convert individual shots that you’re compositing within
Shake, or you can read in an edited sequence from an application like Final Cut Pro for
format conversion using Shake.
Important: If you’re converting a clip from a video frame rate to that of film with the
intention of adding 3:2 pulldown back to the video (to achieve a film look for video),
render the 24 fps conversion first. Add 3:2 pulldown to the shot in another operation
by processing it in a second script, or by adding 3:2 pulldown with a command-line
render. For more information, see “Reintroducing 3:2 Pulldown After Transforming
Media” on page 117.
Automatic Scene Detection for Multiple Shots
If you’re reading in a sequence of pre-assembled shots, the remastering operators in
Shake use automatic scene detection to eliminate artifacts at the frame boundaries
between shots. This edge detection works well for cuts and dissolves, but other types
of transitions may produce unwanted artifacts.
Fast versus Best In Remastering Parameters
When setting up a remastering operation, you might be tempted to simply choose
Best across the board for every parameter. This would probably be a mistake—
producing dramatically longer render times in exchange for a visually undetectable
increase in quality, especially at higher resolutions.
That said, every clip's requirements are different. One group of settings is unlikely to
produce equal results for shots with widely different exposures, grain, and camera
movement. You’re encouraged to do some limited tests prior to committing yourself
to a particular group of mastering settings. As you experiment with different settings,
be sure you always compare the output from the Fast settings to that of the Better or
Best settings, to make sure that it’s worth committing yourself to the more
computationally intensive Best settings.
Convert Parameters
The Convert mode has the following parameters:
InputFrameRate
Specify the original frame rate of the input media here. This parameter is also a subtree
with two additional subparameters.
InputFrameInterlaced
If the input media is video, enable this parameter if it’s interlaced.
InputFrameDominance
If the input media is interlaced, specify the field dominance here.
OutputFrameRate
Specify the output frame rate here for format conversion. This parameter is a subtree
with two additional subparameters. For advanced retiming options to produce slow
and fast motion, see “Retiming” on page 117.
OutputFrameInterlaced
If you want interlaced video output, turn this parameter on. Leaving it off results in
Shake outputting progressive-scan (non-interlaced) media.
OutputFrameDominance
If OutputFrameInterlaced is turned on, specify the field dominance of the output image
here.
OutputRes
Two fields where you enter the horizontal and vertical output resolution you want to
convert the media to, to scale it up or down. Scaling an image sequence using the
OutputRes parameter of the Convert options results in higher-quality output than
using Shake’s Transform nodes.
Recursive
Turning this parameter on enables a different resizing method, which can be sharper
when enlarging some kinds of images. Try it on one or more representative frames to
see if it helps.
Note: The recursive setting may also enhance unwanted noise, depending on the
image.
AntiAlias
Turning this parameter on improves the quality of conversions when you’re scaling
media up. For example, when converting standard definition video to high definition,
turning on AntiAlias smooths out jagged edges that might appear in the image.
Details
A built-in sharpening control that lets you add detail back to an image being enlarged.
Unlike other sharpening operations, the details setting is able to distinguish between
noise and feature details, and generally doesn’t increase unwanted grain. Increasing
this parameter may introduce jagged edges, however, which can be eliminated by
turning on the AntiAlias parameter.
Motion
Two options determine the trade-off between image quality and processing time. Fast
makes motion interpolations using a mesh warp, and is generally the only setting
necessary for purposes of remastering.
Note: Best, the most processor-intensive mode, uses motion vectors to track each pixel
from frame to frame to interpolate new frames, and should only be necessary when
doing retiming. For more information on the parameters available for the Best setting,
see page 120.
DeInterlacing
Similarly to the Motion parameter, two options let you determine the trade-off
between image quality and processing time—Fast and Good. These settings represent
two levels of mesh warping used to interpolate data from both fields of each frame.
AspectRatio
This parameter is a multiplier that allows you to convert pixels of one aspect ratio into
another aspect ratio—for example, from NTSC to PAL, or from high definition (square)
to NTSC. The default value of 1 makes no change to the output. The following table
contains common conversion values:

Operation                   Conversion Value
Square to NTSC (4:3)        0.9140, or 1 if Fit is set to Resize
Square to PAL (4:3)         1.1111, or 1 if Fit is set to Resize
NTSC (4:3) to Square (4:3)  1.1111, or 1 if Fit is set to Resize
NTSC (4:3) to PAL (4:3)     1, set Fit to Resize
PAL (4:3) to Square         0.9375, or 1 if Fit is set to Resize
PAL (4:3) to NTSC (4:3)     1, set Fit to Resize
Note: In some conversion cases, AspectRatio may be left at 1 if the Fit parameter is set
to Resize, which rescales the image horizontally and vertically to match the new
OutputRes.
Fit
The Fit parameter determines how an image fits into the new frame that’s determined
by the OutputRes parameter if the aspect ratio of the FileIn image is different than that
of the final resolution set by the OutputRes parameter. There are two options:
• Fit: Enlarges the image until either the vertical or horizontal resolution of the image
matches that of the outputRes, depending on which is greater. This option maintains
the original aspect ratio of the image, enlarging it as much as possible and leaving
black to fill in the resulting gaps at the sides, or the top and bottom.
• Resize: Rescales the vertical and horizontal resolution of the image so that it fits
within the entire frame of the OutputRes. The image is stretched as necessary to fit
the new resolution.
Working With Extremely High-Resolution Images
These guidelines are specifically for high-resolution images of 4K and 6K. The following
discussion is based on the premise that you have a massive amount of RAM for your
interactive workstation (at least 1 GB).
Although Shake works with any resolution, there is a default crop on Viewers in the
interface of 4096 x 4096 pixels. This protects the user in case a zoom of 1000 is applied
to a 2K plate. Instead of trying to render an enormous image, only the lower-left corner
up to 4096 pixels is rendered in the interface. This is fine for normal HD or film
production, but the cropping takes effect if you read in 6K IMAX plates. This limitation
is only in the interface—images rendered to disk are at the uncropped full resolution.
There are two ways you can get around this safety feature.
Using Proxies
The first is to use proxies with a proxyScale of less than 1. For example, at a proxyScale
of .5, you can potentially look at images up to 8K x 8K resolution.
Changing the Viewer Limits
The other workaround is to change the default Viewer limits by customizing a ui
preference file. Add the following lines:
gui.viewer.maxWidth = 4096;
gui.viewer.maxHeight = 4096;
These lines set the maximum resolution to 4K. If you want a larger resolution, enter it
here.
For more information on creating and editing these preference files, see Chapter 14,
“Customizing Shake.”
Adjusting the Cache for High-Resolution Images
When working with high-resolution images, it’s also necessary to adjust the cache
settings. By default, only images under 2K resolution are cached. By not automatically
caching large files, Shake conserves cache capacity, enabling you to add more files. You
can override this default with the following two lines, which adjust the default values.
The first line sets the maximum size by listing the X resolution, Y resolution, number of
channels, and amount of bytes. The second line sets the maximum amount of disk
space for the cache directory. You can assume that if you are working on 6K plates, you
can allow for more than 512 MB of disk space for your cache. These lines go in your
startup preference files. You modify the numbers to suit your production situation:
diskCache.cacheMaxFileSize = 2048*2048*4*2;
diskCache.cacheSize = 512;
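The first line's value is just the product of the listed factors; a quick check of how the per-frame cache size scales with resolution (plain Python arithmetic, not a Shake API):

```python
def cache_max_file_size(width, height, channels, bytes_per_channel):
    """Size in bytes of the largest frame Shake will cache, following the
    diskCache.cacheMaxFileSize formula: X res * Y res * channels * bytes."""
    return width * height * channels * bytes_per_channel

# the default shown above: 2K square, 4 channels, 16 bit (2 bytes)
print(cache_max_file_size(2048, 2048, 4, 2))                 # 33554432 (32 MB)
# a 6K float plate (4 bytes per channel) needs far larger entries:
print(cache_max_file_size(6144, 6144, 4, 4) // 2**20, "MB")  # 576 MB
```

This is why the manual recommends proxies for 4K and 6K work: a single cached float frame at 6K is over half a gigabyte.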
Keep in mind that if you set your maximum file size to 6K x 6K x 4 channels in float, you
are saving massive files. The return you have on swapping this in and out of cache is
extremely limited, at best. It is recommended you use proxies when interactively
working with 4K and 6K images.
If you need to work at full resolution, try putting a Crop at the end of the chain to focus
on an area of interest, or using the Viewer DOD. This retains full pixel resolution, but
keeps the image size you are processing within the limits of your computer.
Tuning the Amount of RAM Shake Uses
Finally, you need to tune the amount of RAM used by Shake. By default, 96 MB are
assigned to the nodes and 64 MB to the images themselves. You need to increase the
second setting. It is recommended that you allocate one-third of your memory to each
of the two following settings, to reserve memory for other applications and Flipbooks.
However, the first setting rarely needs to exceed 96 MB. For example, if you have 1 GB
of RAM, you might want to have memory settings like the following:
cache.cacheMemory = 96;
diskCache.cacheMemory = 500;
The first line is associated with nodes, and does not affect image resolution. The second
setting is associated with the images themselves, so you want to increase it as your
images get larger. The default setting is 64 MB—not useful for large resolutions. These
settings also go in your startup preference file.
For more information about resolution, see Chapter 5, “Compatible File Formats and
Image Resolutions.” For more information about caching, see Chapter 13, “Image
Caching,” on page 343.
Using Shake With Final Cut Pro
A new command in Final Cut Pro, Send to Shake, provides an automated way to move
media back and forth between both applications. Using the Send to Shake command
in Final Cut Pro exports one or more selected clips into a Shake script, opening it
immediately in Shake while Final Cut Pro is running. When you do this, a placeholder is
created in the originating Final Cut Pro project file that automatically corresponds to
the media that will be output from Shake.
Note: Each exported clip from Final Cut Pro is brought into the Shake script using
individual FileIn nodes. This is true even if two or more clips originate from the same
master clip in the original Final Cut Pro project.
For example, you can use Final Cut Pro to superimpose a group of clips that you want
to turn into a single composite using Shake. Final Cut Pro makes it easy to set the In
and Out points of each clip, and how they overlap. You can then send the media to
Shake along with each shot’s edit decision information, freeing you from having to
reconstruct the media arrangement within Shake.
You can also move an entire sequence of clips into a Shake script. For example, you
might do this to add operations to each individual clip in that scene to perform color
correction, or keying.
Once you’re finished in Shake, you can render the FileIn node that was automatically
created when you used the Send command from Final Cut Pro, and easily relink the
resulting media in the original Final Cut Pro project.
How Sent Clips Are Arranged in Shake
Regardless of how you move Final Cut Pro clips into Shake, how they’re assembled in
the newly created Shake script depends on whether they were sequentially arranged
within a single video track, or vertically superimposed using several video tracks.
Imported Final Cut Pro clips are arranged within the node tree using Select and
MultiLayer nodes:
• Clips edited sequentially on the same video track in Final Cut Pro are connected to a
single Select node when exported to Shake. The Select node switches between clips
at their In and Out points, reflecting the editing decisions made on the track in Final
Cut Pro. If the clips were originally superimposed across multiple video tracks, each
video track that contains a clip results in a corresponding Select node being created
in the Shake script. All clips that were edited into the same video track are connected
to the same Select node.
Note: The actual edit points for each FileIn node attached to the Select node are
stored within the branch parameter. The data stored within this parameter is not
intended to be editable; any attempt to do so will disrupt the edit points of the
affected nodes.
• All the Select nodes are connected to a single MultiLayer node, which determines
which clips are in the foreground of the composition, and which are in the
background. Their arrangement reflects the arrangement of video tracks in the
original Final Cut Pro sequence.
For example, if you used the Send to Shake command on the following three
sequentially edited clips:
The result would be the following Shake script, with one Select node and one
MultiLayer node.
Sequentially edited clips in Final Cut Pro
Resulting arrangement in Shake
If you used the Send to Shake command on the following superimposed clips:
The result would be the following Shake script, with three Select nodes, and one
MultiLayer node:
While it is possible to slide footage within edits by adjusting the placement of
imported clips in Shake’s Time View, you are better off making these adjustments in
Final Cut Pro and re-sending the media to Shake. Shake’s Time View makes it difficult to
determine whether there is sufficient underlying footage to prevent gaps in the
sequence.
Unsupported Media and Effects
Since QuickTime is the file format used for all media exchange between Final Cut Pro
and Shake, the following media and settings are not imported into Shake from Final
Cut Pro:
• QuickTime audio tracks
• Standalone audio files
• Still image files
• Generators
• Composite modes
• Transformations (referred to in Final Cut Pro as motion effects)
• Filters
Warning: Audio clips and tracks from the original QuickTime files are not imported
into Shake. Any timing changes you make in Shake will result in the adjusted clips
going out of sync with the audio in the originating Final Cut Pro project file.
Sending Clips From Final Cut Pro
To send one or more selected clips (or a single sequence) from Final Cut Pro to Shake,
use the Send to Shake command in Final Cut Pro.
To send one or more clips from Final Cut Pro to Shake:
1 Clean up your project timeline, so that you are able to select only the clips you intend
to send.
2 Do one of the following:
• Select one or more clips in the Timeline or Browser.
• Select a sequence in the Browser.
3 Do one of the following:
• Choose File > Send > Send to Shake.
• Control-click (or right-click) the selected clips or sequence, then choose Send > To
Shake from the shortcut menu.
4 When the Send to Shake dialog appears, select the appropriate options:
• Resulting Sequence Name: Type a name for the sequence that you’ve selected, prior
to sending the media to Shake.
• Save as Shake Script: Type a name for the Shake script to be created, then click
Choose to pick a location on disk to save it to.
• Save Placeholder QuickTime movie (FileOut) to: Type a name for the placeholder
QuickTime movie that will correspond to the FileOut node in the newly created Shake
script, then click Choose to pick a location on disk to save it to.
5 Check the Launch Shake box if you want to automatically open the newly created
Shake script and start working on it.
6 Click Export.
When you click Export, four things happen:
• A duplicate sequence appears in your Final Cut Pro project, containing duplicates of
the selected media.
• A Shake project is created on disk.
• A placeholder QuickTime file is created on disk.
• The placeholder QuickTime file appears in a new video track that is created as the
topmost track in your sequence (the original media remains where it was).
The placeholder QuickTime clip in your Final Cut Pro project corresponds to the media
that will eventually be rendered out of Shake—specifically, from the FileOut node
appearing at the end of the generated Shake script.
Sending Media Back to Final Cut Pro
When you’re finished working in the Shake script that was generated from Final Cut
Pro, all you have to do is render the originally created FileOut node. The newly rendered
media file takes the place of the original placeholder QuickTime file, ready for use by
the original Final Cut Pro project.
When you reopen the originating Final Cut Pro project file containing the original
placeholder QuickTime file, you’ll need to use the Reconnect Media command to relink
the clip in your Timeline to the media that was rendered out of Shake.
The TimeRange of Scripts Generated From Final Cut Pro
The timeRange Global parameter in the Shake script that’s created by the Send to
Shake command is automatically set with the appropriate range of frames for the
media it references.
Important: Clicking the Auto button to update the timeRange is not recommended.
This can result in many more frames being referenced than expected, depending on
the total duration of the source media files that are referenced.
4 Using Proxies
Shake has a sophisticated proxy system that lets you
dynamically adjust the resolution of the images to speed
your workflow. This chapter covers how to tailor Shake’s
proxy system to suit your needs.
Using Proxies
This section discusses how to use proxies to speed up your workflow. This includes
using Shake’s interactive scale setting, creating and assigning proxies to footage in your
script, creating custom proxy settings, working with offline full-resolution elements,
and pre-generating proxies.
What Are Proxies?
A proxy is a lower-resolution image that can be temporarily substituted for the
high-resolution plates in your script, thereby enabling you to work and see rendered
tests faster. Because the images are smaller, you drastically decrease the processing time,
memory requirements, and amount of time spent reading and writing files as you
work. Naturally, the trade-off is that the quality of the image displayed in the Viewer
suffers, which is why proxies are generally used only when creating low-resolution
comps, and creating test previews. After assembling a script using proxies, you can
return your script to the original, full resolution in order to render the final output.
You can also use proxies to temporarily view anamorphic images in flattened space. For
more information, see “About Aspect Ratios and Nonsquare Pixels” on page 209.
Proxies and Final Low-Resolution Output Renders
When you work with film plates and you need to generate a high-quality video
output, you have better (but slower) results if you render your plates at full resolution
and then size them down, instead of using the proxies to generate low-resolution
images. The proxies can be used to generate lower-resolution output files, but the
quality is not as high as with full-resolution rendering.
The following example shows a full-resolution image compared to a 1/3 scale proxy
image. You can see that the proxy uses 1/9th of the space, which potentially requires
only about 11 percent of the processing time, memory, and I/O activity.
As a result, there is a dramatic difference in quality when the clip is viewed at the same
resolution, as you can see by the softening of the image to the right.
Shake automatically adjusts the values of pixel-based parameters behind the scenes in
order to compensate for the lower resolution of any proxies being used. In other words,
when using a 1/3 proxy, a Pan parameter set to 100 pixels is actually calculated by
Shake to be 33.333 pixels. The actual Pan parameter used in the interactive value field is
not modified, and continues to reflect the actual size of the original media.
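This behind-the-scenes compensation can be sketched as follows. This is a hypothetical illustration of the arithmetic described above, not Shake's actual implementation: the value shown in the interface stays in full-resolution pixels, while the value used for processing is multiplied by the current proxy scale.

```python
# Illustrative sketch only (not Shake code): pixel-based parameter values
# are scaled by the current proxy scale before processing, while the value
# displayed in the interface is left untouched.
def effective_pixels(interface_value, proxy_scale):
    """Return the pixel value actually used at the current proxy scale."""
    return interface_value * proxy_scale

# A Pan parameter set to 100 pixels, processed at a 1/3 proxy:
pan_at_proxy = effective_pixels(100, 1 / 3)
print(round(pan_at_proxy, 3))  # 33.333
```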
Shake’s Three Proxy Methods
There are three basic approaches to using proxies that are controlled via parameters in
the Globals tab.
Full resolution; 1/3 proxy
Images from The Saint provided courtesy of Framestore CFC and Paramount British
Full resolution; 1/3 proxy, enlarged
Enabling a useProxy setting
If processing is slow overall, and you need to speed things up while you’re working,
you can enable one of the proxy settings without needing to pre-render a set of proxy
files. This is a good option if you don’t anticipate working on the project for very long.
Turning on interactiveScale
If the general processing speed for your operations is fine, but the interactivity of
controls that correspond to processor-intensive operations is slowing you down, you
can turn on the InteractiveScale option in the Globals tab. This sets Shake to use a
proxy resolution only while you’re adjusting parameters. This option does not affect
your Flipbooks or FileOut renders.
Enabling useProxy and pre-rendering sets of proxy files
If you’re working with very high-resolution footage, or you’re using footage that’s
stored remotely on networked machines, you may find it best to pre-render a set of
proxy files to your local machine. This is also a good option if the entire project is
extremely processor-intensive, and you’re doing a lot of Flipbook tests.
Important: If you decide to pre-render proxy files for your script within the Shake
interface, make sure that you set the proxySet parameter in the useProxy subtree of the
Globals tab before following the procedures outlined in “Pre-Generating Your Own
Proxies” on page 150.
Note: The pixelScale and pixelRatio parameters are generally obsolete due to the proxy
functions in SFileIn that were introduced in Shake 2.5. These parameters have been
retained for general compatibility, but you probably won’t ever use them.
Using interactiveScale
The interactiveScale setting, in the Globals tab, is designed to speed up Shake’s
interactivity whenever you make adjustments to parameters.
It works by temporarily dropping the image-processing resolution to the proxy
resolution that’s selected in the interactiveScale parameter whenever you adjust a
parameter’s controls. While you make adjustments, the image displayed in the Viewer is
low-resolution, but is updated much more quickly. As soon as you release the mouse
button, the image is rendered at its full resolution (or the current proxy resolution, if
you’ve enabled useProxy). The interactiveScale setting has no effect on your rendered
output or Flipbooks.
You can combine this setting with the useProxy setting if the script you’re creating is
exceptionally slow to render. For example, setting useProxy to P1 lowers the overall
processing resolution to 1/2 by default. If you also set interactiveScale to 1/4,
parameter interactivity is very fast, and you won't have to wait as long for the image
to render at the current proxy setting when you release the parameter control.
The following is an example of how to use the interactiveScale parameter:
1 To follow along, use a FileIn node to read in the saint_fg.1-5# file, located in the $HOME/
nreal/Tutorial_Media/Tutorial_05/images directory.
2 Apply a Filter–RBlur (not Blur or IBlur).
3 Set your oRadius to approximately 300 (note the very slow RBlur).
4 Open the Globals tab, then set the interactiveScale to 1/3.
5 In the Viewer, drag the center control around. Notice that the image drops to a
temporary lower resolution while you make this adjustment. When you release the
mouse button, the Viewer image returns to its original resolution.
Modify a parameter... ...Release the mouse button.
Using Temporary Proxies
Unless you specifically do otherwise, Shake generates temporary proxies (also called
on-the-fly proxies) that are created only for frames that are displayed, as needed, and
that are discarded once your computer’s disk cache is full.
Whenever you set the useProxy parameter to something other than Base, Shake scales
down the resolution of frames at the position of the playhead as you view them, in
order to accelerate your script’s performance. Unlike the interactiveScale setting, the
image is left at the proxy resolution until you return the useProxy parameter to the
Base resolution.
Important: The useProxy parameter affects both Flipbooks and FileOut nodes.
Temporary proxy images are written first into memory, and then to the disk cache as
memory runs out. Shake keeps track of how many times each cached frame has been
viewed, and eliminates the least viewed frames first when the cache runs out of room.
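The "least viewed frames first" policy described above is a least-frequently-used (LFU) eviction scheme. Shake's cache is internal, so the following is only a minimal model of that policy, with hypothetical names throughout:

```python
# Illustrative model of LFU eviction (not Shake's implementation): track a
# view count per cached frame, and discard the least-viewed frame when the
# cache is full and a new frame must be stored.
class FrameCache:
    def __init__(self, capacity):
        self.capacity = capacity  # maximum number of cached proxy frames
        self.frames = {}          # frame id -> cached proxy image data
        self.views = {}           # frame id -> number of times viewed

    def view(self, frame_id, render):
        """Return a cached frame, rendering (and possibly evicting) as needed."""
        if frame_id not in self.frames:
            if len(self.frames) >= self.capacity:
                # Eliminate the least-viewed frame first.
                victim = min(self.views, key=self.views.get)
                del self.frames[victim]
                del self.views[victim]
            self.frames[frame_id] = render(frame_id)
            self.views[frame_id] = 0
        self.views[frame_id] += 1
        return self.frames[frame_id]

cache = FrameCache(capacity=2)
for frame in [1, 1, 2, 3]:
    cache.view(frame, render=lambda f: "image %d" % f)
# Frame 2 was viewed least, so it is evicted to make room for frame 3.
```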
To change to a lower proxy using the default proxy resolutions, do one of the
following:
• In the Globals tab, switch useProxy from Base to P1, P2, or P3.
• Use the pull-down button menu in the title bar.
When activated, the proxy button in the title bar is highlighted with the selected proxy.
The default proxy settings are:
Proxy Setting    proxyScale    proxyRatio
Base             1             1
P1               1/2 (.5)      1
P2               1/4 (.25)     1
P3               1/10 (.1)     1
By default, you can select from the predefined proxy sets in the useProxy subtree of the
Globals tab. These are common proxy settings, but you can also use your own.
To temporarily set a custom proxy:
1 In the Globals tab, open the useProxy subtree.
2 Change the proxyScale and proxyRatio parameters to the desired settings.
Note: You can also enter the desired proxyScale and proxyRatio, separated by a
comma, directly into the useProxy value field.
As soon as the proxyScale or proxyRatio is modified, the useProxy Base/P1/P2/P3
buttons turn to Other, since there is no longer a correspondence to any of the preset
proxy settings.
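The frame size a proxyScale/proxyRatio pair produces can be estimated with a short sketch. This is an assumption-laden illustration, not Shake code: per this chapter, proxyScale scales both axes, and proxyRatio acts as an additional scale on the vertical axis only.

```python
# Sketch (not Shake internals): estimate proxy dimensions from a base
# resolution. proxyScale scales both axes; proxyRatio scales the Y axis only.
def proxy_size(width, height, proxy_scale, proxy_ratio=1.0):
    return (round(width * proxy_scale),
            round(height * proxy_scale * proxy_ratio))

# The default P1 setting (scale 1/2, ratio 1) on a 2048 x 1556 plate:
print(proxy_size(2048, 1556, 0.5))       # (1024, 778)
# A flattening setting (scale 1/2, ratio 1/2) halves the height again:
print(proxy_size(2048, 1556, 0.5, 0.5))  # (1024, 389)
```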
In the following example, the proxyRatio is set to .5. This setting has the added benefit
of correcting the anamorphic distortion of the image while simultaneously reducing its
resolution.
To return to a preset proxy setting:
• Click Base/P1/P2/P3 (or Other) in the upper-right corner of the Shake window.
Customizing P1/P2/P3 for a Script or Session
If you consistently require a different group of proxy settings for your project,
customized useProxy subparameters can be saved within that script.
To customize the proxy settings for an individual script:
1 In the Globals tab, open the useProxy parameter to reveal its subparameters.
2 Open one of the proxyXDefaultFile subparameters (where X is 1, 2, 3, or 4). In this
example, the proxy1DefaultFile and proxy2DefaultFile subparameters are being
customized.
Full resolution; .5 proxyRatio
Changing the Aspect Ratio to 0.5
To ensure that your nodes understand the aspect ratio as 0.5 (for example, Rotate),
open the Globals tab and set the defaultAspectRatio format to 0.5. This does not
modify the image, but only affects nodes such as Twirl and Move2D that are created
after you set this ratio. For more information, see “About Aspect Ratios and
Nonsquare Pixels” on page 209.
3 Modify the proxy1DefaultScale and proxy1DefaultRatio parameters.
• For example, suppose you want to create a proxy setting that lowers the resolution
of an anamorphic image by resizing the image vertically to correct the anamorphic
distortion. You want to set P1 to be the same width as the base file (the full
resolution image) but flattened, and P2 to be 1/4 scale and also flattened. To do this,
use the following values:
• proxy1DefaultScale 1
• proxy1DefaultRatio .5
• proxy2DefaultScale 1/4
• proxy2DefaultRatio .5
Note: You can now close the useProxy subtree to clean up the Globals tab.
Click P1 or P2 to toggle to the flattened versions of the full-resolution image.
Note: Do not use the ratio parameter to change your aspect ratio on the fly if working
with pre-rendered proxies. For more information, see “Anamorphic Images and
Pre-Generated Proxies” on page 155.
Permanently Customizing Shake’s Proxy Settings
The previous section described saving custom proxy settings into a script. However, if
you always use the same custom settings for all new scripts, you can create a new set
of default P1/P2/P3 settings by creating a .h file in your startup directory.
Modified P1; Modified P2
When an SFileIn node is created, three pieces of information are taken from the File
Browser:
• The file name
• The proxy level that corresponds to this file name (Base, P1, P2, or P3)
• The set of images to use for that proxy level
The chosen file name appears in the FileIn node’s proxy1File parameter, and the
settings for the selected proxy level from the selected proxy set are used to set the
other subparameters of the fields.
Next, the remaining proxy set paths defined in the proxy set are filled into the
remaining proxyNWhatever fields in the SFileIn node, with path modifications and
substitutions made as defined in the DefProxyPath or DefBasePath statements for the
proxy set.
The SFileIn node’s parameters are set via the values, modifications, and substitutions
defined in the proxy set, in much the same way that a Grad node takes its initial width
and height parameter values from the current defaultWidth and defaultHeight values
in the format subtree of the Globals tab.
.h File Syntax for Custom Proxy Sets
The syntax for defining proxy sets is as follows:
DefProxyGroup("proxySet",
    DefBasePath(
        "baseDefaultFile",
        "baseDefaultFileType",
        defaultAlwaysAdd,
        "baseDefaultReplace"),
    DefineProxyPath(
        const char *proxyPath="../proxy.50/<base>.<ext>",
        float scale=.5,
        float aspect=1,
        int bytes=GetDefaultBytes(),
        const char *fileFormat="Auto",
        int render=0,
        int alwaysAdd=1,
        int index=1,
        string substitutionString
    );
Variable Definitions
This section explains the declarations made in the above script.
proxyPath
Defines the default location for pre-generated proxies. (See the example below.) Note
that you can use variables to grab strings from the baseName:
• <base> = image name + frame range
• <ext> = format extension
• <basename> = image name (no frame range)
• <range> = the frame range
• <dir>, <dir2>, and so on = the name of the parent directory, two directories up, and
so on.
scale
The proxy scale.
aspect
The aspect ratio (the Y-axis scale).
bytes
The default bytes for pre-generated proxies. This does not affect on-the-fly generation
of proxies, so you maintain your bit depth if you do not pre-render your proxies.
fileFormat
The file format for pre-rendered proxies. (See below.)
render
Turns on or off the render lights on the Render Proxies menu. When on (1), the files are
rendered with the Render Proxies menu. (See below.)
alwaysAdd
When set to 1, an entry is added to an SFileIn node at the time of its creation.
index
Sets whether you are P1, P2, or P3 (1, 2, 3). This replaces Shake’s default settings, unless
you set the index to 0, in which case it is appended.
substitutionString
When the first string is found in the base file name, it is replaced with the second string;
for example, in the first line, “4096x3112” is replaced with “2048x1556.” This string is not
always necessary; the proxyPath may already take care of differentiating the
proxy files if all of the names of the files are the same except for a root directory stating
the size of the proxies.
Example
This example sets a proxy of .25 with an aspect ratio of .5. It takes the default bytes
setting, turns on the render light for the Render Proxies menu, adds an entry into an
SFileIn, and is set as P1:
DefineProxyPath("../proxy.25.5/<base>.<ext>", .25, .5,
    GetDefaultBytes(), "Auto", 1, 1, 1, "substitutionStrings");
You can also create and use predefined proxy sets in the useProxy subtree (in the
Globals tab), where you can choose the proxyScale values for P1, P2, and P3. The
following example assumes file names such as “4096x3112/name_4k.#.cin,” “2048x1556/
name_2k.#.cin,” “1024x778/name_1k.#.cin,” and “410x366/name_sm.#.cin.” To set a
group, use the following code in a startup .h file:
DefProxyGroup("4K Fullap",
    DefBasePath("../4096x3112/.",
        "Auto",
        1,
        "2k|4k;1k|4k;sm|4k"),
    DefProxyPath("../2048x1556/.",
        .50,
        1.,
        GetDefaultBytes(),
        "Auto",
        0,
        1,
        0,
        "4k|2k;1k|2k;sm|2k"),
    DefProxyPath("../1024x778/.",
        .25,
        1.,
        GetDefaultBytes(),
        "Auto",
        0,
        1,
        0,
        "4k|1k;2k|1k;sm|1k"),
    DefProxyPath("../410x366/.",
        .10,
        1.,
        GetDefaultBytes(),
        "Auto",
        0,
        1,
        0,
        "4k|sm;2k|sm;1k|sm")
);
// This sets the default set at startup, so you only set it once.
script.proxySet = "4K Fullap"
The first line names the group “4K Fullap.” The next line describes the base file name.
The next three entries, which begin with DefProxyPath, describe the subproxies using
the DefineProxyPath definition described earlier. As for the substitutionStrings: when
the first string is found in the base file name, it is replaced with the second string. For
example, in the first entry, “4096x3112” is replaced with “2048x1556.”
Note: Because the syntax is somewhat intricate, you may find it easier to copy an
example from the /shake.app/Contents/Resources/nreal.h file and paste it
into a new file.
Using Pre-Generated Proxy Files Created Outside of Shake
It is sometimes convenient to begin working using only pre-rendered proxy-resolution
media files (which are typically generated during the scanning process), and to
substitute the full-resolution media files later. Shake’s proxy structure allows you to do
this.
Note: In the following example, it is assumed that all users in your production pipeline
are using a standardized naming convention.
To read pre-generated proxies into a script:
1 Open the useProxy subtree in the Globals tab.
2 Do one of the following:
• Choose an option from the proxySet pop-up menu as described in the previous
section. This automatically enters names for the file path names for the base file and
the proxy sets.
• Create custom proxyNDefaultFile settings as appropriate to match the resolution of
the pre-generated proxy files you’ll be using.
3 If you want to substitute a string in the file name, you can use the defaultReplace
parameters in the Globals tab. These subparameters are located within the useProxy
subtree, inside of each DefaultFile subtree.
These parameters allow you to replace text in the original DefaultFile paths.
For example, suppose that your P1 proxies are already available on the computer
“MyMachine,” but your full-resolution elements are not. You start compositing with the
proxies, intending to work on the full-resolution elements later. When the full-resolution
elements become available, they’re located on “Server1.” The easiest way to
substitute the proxies with the full-resolution media is to use the defaultReplace
parameters.
If the proxy was named:
//MyMachine/project1/shot1/plate1/proxy1/myfile_proxy1
and the full resolution elements are:
//Server1/project1/shot1/plate1/full/myfile_full
you enter MyMachine|Server1; proxy1|full as your baseDefaultReplace. Note the
semicolon to split the entries.
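The defaultReplace rule above can be sketched as a small function. This models only the behavior described in this section (semicolon-separated "old|new" pairs applied to a path); it is not Shake's implementation, and the function name is hypothetical:

```python
# Sketch of the defaultReplace substitution rule: entries are separated by
# semicolons, and each "old|new" pair replaces occurrences of old with new.
def apply_replace(replace_spec, path):
    for entry in replace_spec.split(";"):
        old, new = entry.strip().split("|")
        path = path.replace(old, new)
    return path

proxy = "//MyMachine/project1/shot1/plate1/proxy1/myfile_proxy1"
print(apply_replace("MyMachine|Server1; proxy1|full", proxy))
# //Server1/project1/shot1/plate1/full/myfile_full
```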
4 Create a FileIn node in the Node View to read the pre-rendered proxies into your script.
When the File Browser appears, choose the proxy files and specify the proxy setting it
corresponds to in the “Load as proxy” setting at the bottom of the Browser. In this
example, P1 is the proxy setting at which the selected files are read in.
When you import proxy files without corresponding base files, the imageName
parameter in the Parameters tab appears olive green.
Later, when you’re ready to start using the full-resolution media that corresponds to the
proxy files you’ve been using, you can read in this media using the baseFile
subparameter of the FileIn node’s imageName parameter.
When the full-resolution elements are brought online, the imageName field goes back
to its normal gray appearance, and you may toggle to the Base full-resolution mode to
display the media at its highest resolution.
Important: Use caution when working with anamorphic elements. The proxy ratio
parameter determines the relationship of the proxy to the base file. Therefore, if you
load in low-resolution flattened images, ensure that your ratio reflects the proper ratio
of the height to the width in the base files. See “Anamorphic Images and
Pre-Generated Proxies” on page 155.
Pre-Generating Your Own Proxies
Ordinarily, if you set useProxy to P1, P2, or P3, the proxies created for each frame of the
composition are written on the fly to memory. Eventually, the computer’s memory fills
up, and these temporary proxy images are written to disk—into the cache.
The cache is a closed set of files that are viewable only by Shake. Additionally, remote
renders do not recognize the cache directory if not explicitly specified to do so. Finally,
you may have so many files that the cache mechanism becomes overworked. In these
cases, it makes sense to pre-generate your proxies when you start the project with an
initial rendering process. The proxy files are then pulled from these precalculated
images rather than generated on the fly.
Important: If you decide to pre-render proxy files for your script within the Shake
interface, make sure that you set the proxySet parameter in the useProxy subtree of the
Globals tab before you generate your proxies.
You can either pre-generate the files inside the interface, or you can load them after
they are created by an external process.
To quickly pre-generate files using Shake’s default settings:
1 Open the Globals tab and open the useProxy subtree.
2 Choose the type of proxies you want to generate from the proxySet pop-up menu.
There are six options in this menu:
• Custom: This option is automatically selected whenever you choose your own
resolutions, names, and format.
• No_Precomputed_Proxies: No proxies are generated.
• Relative: Three sets of proxies are generated, with defaultScale values that are
calculated relative to the original size of the image files:
• 1/2
• 1/4
• 1/10
• 2K Academy: This option is suitable if your original image files have a resolution of
1828 x 1556. Three sets of proxies are generated, with the following absolute
defaultScale values:
• 914 x 778
• 457 x 389
• 183 x 156
• 2K Fullap: This option is suitable if your original image files have a resolution of 2048
x 1556. Three sets of proxies are generated, with the following absolute defaultScale
values:
• 1024x778
• 512x389
• 205x156
• 4K Fullap: This option is suitable if your original image files have a resolution of
4096x3112. Three sets of proxies are generated, with the following absolute
defaultScale values:
• 2048x1556
• 1024x778
• 410x366
When you choose an option, the proxyXDefaultFile fields are automatically populated
with the appropriate path names. For example, if you choose Relative from the
proxySet pop-up menu, the proxyXDefaultFile fields are populated with the following:
By default, proxies are stored in new directories that are created in the same location as
the source media on disk. If necessary, you can change the path where the proxies are
saved, the directory into which they’re saved, the names of the generated files, and the
format of the proxies that are rendered. For more information, see “How Proxy Paths
Are Defined” on page 155.
3 As an optional step, select the FileIns in the Node View that you want to generate as
proxies.
Note: Even if proxies have already been rendered, they will be rendered again. Shake
has no way of checking to see if proxy files are already present and/or valid.
4 Choose Render > Render Proxies.
The Render Proxy Parameters window appears.
5 Turn on the proxies you want to generate. In the following screenshot, only the
proxy1Default proxy set will be rendered, because the other two sets are turned off.
The Render Proxy Parameters window has the following parameters:
renderProxies
Specifies whether all FileIn nodes in the currently open script are rendered as proxies,
or just the selected ones.
timeRange
Sets the range of frames to render as proxies. When Auto is clicked, the entire frame
range for each clip is rendered. Note that proxies are not generated beyond a clip’s
frame range.
maxThread
The number of processors used for rendering on multiprocessor systems.
createProxyDirectories
Creates appropriate directories for the new proxy image files.
sequential
When generating many files, it may be more efficient to process each file individually,
one after the other, rather than simultaneously. This is equivalent to the
-sequential flag on the command line.
previewFrames
Displays the thumbnails of the new proxy frames as they’re rendered.
Render proxy Defaults
Each proxy set you want to be rendered must be enabled. You can also open each
proxyXDefault’s subtree to modify any parameters. If you create your own custom
proxy setting with a .h file (see above), you can specify if this button is on or off by
default.
6 When you’re finished changing the settings, click Render.
When Shake finishes rendering, the proxies are ready to be used in your script.
Activating the P1, P2, or P3 settings of the useProxy parameter results in Shake loading
the pre-generated proxy files you’ve just created, rather than generating them on the fly.
Rendering Proxies on the Command Line
If you’re rendering your proxies from the command line, there are three additional
parameters you can specify.
-renderproxies
Only renders the proxy subsets of the FileIn nodes. If no subsets are specified, what is
saved in the script is rendered. Otherwise, you can use -renderproxies p1 p2 p3.
-proxyscale
You can specify numerical scale and ratio parameters, or use the keywords Base, p1, p2, p3.
-createdirs
Creates directories for the -renderproxies command. Does nothing for normal FileOut
nodes.
Warning: Currently, the command-line proxy keywords referenced above do not
correspond to the proxy keywords in the useProxy parameter of the Shake graphical
interface. As a result, p1 (command line) = Base (graphical interface), p2 (command
line) = p1 (graphical interface), and p3 (command line) = p2 (graphical interface).
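The off-by-one mapping in the warning above can be expressed as a simple lookup table. This is a hypothetical helper, not part of Shake; it only restates the correspondence between command-line and interface keywords:

```python
# The command-line proxy keywords are shifted by one relative to the
# useProxy settings in the graphical interface (see the warning above).
CLI_TO_INTERFACE = {"p1": "Base", "p2": "P1", "p3": "P2"}

def interface_proxy(cli_keyword):
    """Translate a command-line proxy keyword to its interface equivalent."""
    return CLI_TO_INTERFACE[cli_keyword.lower()]

print(interface_proxy("p2"))  # P1
```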
Pre-Generated Proxy File References in FileIn Nodes
When you open a FileIn node’s parameters in the Parameters tab, the imageName
parameter shows which proxy image files are currently being used. For example,
generating a set of proxies for an image sequence located in a directory named “Media”
in the $HOME directory results in the following path appearing when you set useProxy
to P2:
If you open the imageName subtree in the Parameters tab, four proxyNFile fields
display information about the media files referenced for each proxy setting.
Anamorphic Images and Pre-Generated Proxies
Do not use the proxyRatio parameter to change your aspect ratio on the fly if working
with pre-rendered proxies. This parameter dictates the relationship of the height-to-width
ratio between the proxy file and the base file as they exist on disk. Therefore,
either ensure you are using flattened pre-generated proxies (these can be a second
proxy set), or use the global parameter viewerAspectRatio to flatten the anamorphic
proxies. In the following example, image A is the full-resolution anamorphic frame, so it
looks squeezed. Image B is a half-size anamorphic-resolution proxy. Image C is a half-size,
flattened image. This is how the raw images appear on disk.
Therefore, your proxy settings should be:
• P1: Image B, scale of .5, ratio of 1, as the ratio of the height to the width is the same as in image A.
• P2: Image C, scale of .5, ratio of .5. The width is the same as image B, but the height-to-width ratio is half that of image A.
How Proxy Paths Are Defined
The default paths for proxies generated by Shake use variables that reference the name
and format of the original source media files. If the proxySet pop-up menu is set to
Relative, then the path of the proxy1DefaultFile is:
../proxy.50/<base>.<format>
This path translates into the following:
• ../ references the directory level above the directory containing the original media files.
• <base> represents the image name and the frame range of the original media files.
• <format> represents the format extension of the original media files.
Therefore, if the original media files being referenced are named MyAmazingFootage.1-100#.cin:
• <filename> = MyAmazingFootage
• <framerange> = 1-100#
• <base> = MyAmazingFootage.1-100#
• <format> = cin
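As an illustration, this substitution can be sketched in Python. The variable names <filename>, <framerange>, <base>, and <format>, and the parsing rule, are assumptions based on the example above; this is not Shake code:

```python
# Sketch of proxy path template expansion. Splitting the source file
# name into its parts is a simplified assumption.
def expand_proxy_path(template, source):
    # "MyAmazingFootage.1-100#.cin" -> ("MyAmazingFootage", "1-100#", "cin")
    filename, framerange, fmt = source.rsplit(".", 2)
    base = filename + "." + framerange
    return (template
            .replace("<filename>", filename)
            .replace("<framerange>", framerange)
            .replace("<base>", base)
            .replace("<format>", fmt))

print(expand_proxy_path("../proxy.50/<base>.<format>",
                        "MyAmazingFootage.1-100#.cin"))
# -> ../proxy.50/MyAmazingFootage.1-100#.cin
```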
For example, suppose the source media of an image sequence using the file name
plate.# is referenced by the following path:
/MyHiResImages/plate.#.cin
By default, proxy image files are rendered and saved into new directories, which are created at the directory level referenced by ../, using the following names:
/proxy.50/plate.#.cin
/proxy.25/plate.#.cin
/proxy.10/plate.#.cin
You can always change the name of the directories that are created to hold the proxies
generated by Shake. For example, you can use the <filename> variable, which defines the file name, plus a suffix, such as:
../<filename>_half/<base>.<format>
You can also change the image format you want the proxy files to be written to by changing <format> to a specific file suffix. For example, if you want to write the proxy files as a .tiff sequence, you simply type:
/proxy.50/plate.#.tiff
Organizing Proxy Files
When specifying proxy directories, there are two ways you can organize the proxy files
that are generated.
You can place all the images at the same directory level, for example:
images/bluescreen1/bs1.#.cin,
images/bluescreen2/bs2.#.cin,
images/cg_plate/cg_plate.#.iff
In the following case, the default path puts all proxies into the same subdirectory:
images/bluescreen1/bs1.#.cin,
images/bluescreen2/bs2.#.cin,
images/cg_plate/cg_plate.#.iff
images/proxy.50/bs1.#.cin
images/proxy.50/bs2.#.cin
images/proxy.50/cg_plate.#.iff
Proxies of YUV Files
Use caution when making proxies of YUV files, as they are always at a set resolution.
Be sure to toggle your FileType to a different format. By default, the proxies are
switched to .iff format.
If you have many plates and a high frame count, you may want to put the images for
each proxy resolution into separate directories. For example, you can provide a file path
such as:
../<filename>_p.50/<base>.<format>
This approach keeps the file count down in each directory, but increases the overall
number of directories referenced by your script. Examples of this are:
images/bluescreen1/bs1.#.cin,
images/bs1_p.50/bs1.#.cin
images/bluescreen2/bs2.#.cin,
images/bs2_p.50/bs2.#.cin
images/cg_plate/cg_plate.#.iff
images/cg_plate_p.50/cg_plate.#.iff
Alternatively, all of your images may be in subdirectories based on resolution, for example:
images/bluescreen1/2048x1556/bs1.#.cin
images/bluescreen2/2048x1556/bs2.#.cin
images/cg_plate/2048x1556/cg_plate.#.iff
In this case, the default naming scheme works fine:
../proxy.50/<base>.<format>
This gives you:
images/bluescreen1/2048x1556/bs1.#.cin
images/bluescreen1/proxy.50/bs1.#.cin
images/bluescreen2/2048x1556/bs2.#.cin
images/bluescreen2/proxy.50/bs2.#.cin
images/cg_plate/2048x1556/cg_plate.#.iff
images/cg_plate/proxy.50/cg_plate.#.iff
You can of course use the following as your path to return numerical values for a half-proxy:
../1024x778/<base>.<format>
You can also use the <dirname> variable to access the name of a parent directory. For example:
../1024x778/<dirname>.<framerange>.<format>
creates:
images/bluescreen1/1024x778/bluescreen1.#.cin
Full-Resolution Proxies and Network Rendering
If your script references media that resides on a remote network machine, it can
sometimes be convenient to create full-resolution duplicates of this media on your
local machine.
Using local files can speed your compositing work by eliminating the need for your
computer to access media over the network. As an added benefit, using local files
speeds up renders on your machine. On the other hand, having your script reference
the original files over the network can speed up network renders by preventing
networked machines from having to access media on your computer.
You can create a local set of media files by setting the proxyScale of one of the proxy
settings (typically P1) to 1. This creates full-resolution duplicates of the files from the
network server on your local machine when you use the Render Proxies command.
Doing this allows you to switch back and forth between your local media, and the
original network media. When you switch the proxyMode to P1, the local copies of the
media files are used. When proxyMode is switched back to Base, the media referenced
from the network is used.
To specify this on the command line, use the following commands:
To use the original files:
shake -proxyscale Base
To use the local full-resolution copies:
shake -proxyscale 1 1
Customizing the Format of Pre-Generated Proxies
Instead of using the default proxy settings, you can open any of the proxyNDefaultFile
subtrees and change the scale, ratio, format, and bit-depth parameters for proxies
generated using that setting. If you customize these subparameters, the proxySet
parameter automatically changes to Custom.
Note: Changing the subparameter settings within a proxyNDefaultFile set does not
automatically rename the directory that will be created to hold the proxies that are
generated.
The following example uses one of the tutorial clips to illustrate how you can create
custom proxy settings to create half-height proxies for anamorphic footage. Do not
read the tutorial images in right away.
To pre-generate customized proxies from the Shake interface:
1 In the Globals tab, open the useProxy subtree.
2 Set your proxy1DefaultFile to:
/TEMP/saint_p.1x.5
3 Set your proxy2DefaultFile to:
/TEMP/saint_p.25x.5
Note: This example requires you to use the tutorial images, and it is likely (if you are at
a large facility) that you do not have permissions to create files and directories in the
install directory. Therefore, set the proxy directories to be in an open user area.
(Typically, you do not save your proxies into the TEMP directory.)
4 Open the proxy1 subtree, and set scale to 1, ratio to .5, format to .iff, and turn
proxy1DefaultAlwaysAdd on.
5 Open the proxy2 subtree, and set scale to .25, ratio to .5, format to .iff, and turn proxy2DefaultAlwaysAdd on.
6 Open proxy3 and disable AlwaysAdd (in this example, you only need two proxy sets).
This group of parameters should now look like this:
7 Now, create a FileIn node, and read in the saint_fg.1-5# and saint_bg.1-5# files from the
$HOME/nreal/Tutorial_Media/Tutorial_05/images directory.
8 Choose Render > Render Proxies.
9 In the Render Proxy Parameters window, set the frame range to 1-5.
10 Enable Render Proxy1Default and Render Proxy2Default.
11 Click Render.
Your proxies are rendered and available for use. When you click P1, you switch to the
half-height images. When you click P2, you switch to the quarter-resolution, half-height
images. When you click P3, you are using 1/10 resolution images, but these are
generated on the fly, as they were not pre-rendered.
Pre-Generating Proxies Outside of the User Interface
You can also pre-generate proxies for a script from the command line. There are two
methods to generate proxies outside of the interface.
Pre-Generating Proxies From the Command Line—Method One
If the base-resolution images are already loaded into a script and you have checked
that the proxy paths are correct (see above), you can launch a proxy-only render on the
command line with the -renderproxies command:
shake -exec myscript.shk -renderproxies p1 p2 p3 -t 1-100 -v -createdirs
This automatically creates the appropriate subdirectories when -createdirs is activated.
There is no checking for file status, so all images are still re-rendered, even if they
already exist. Also, each sequence is only rendered for its actual length, so a five-frame
sequence is not rendered out to 100 frames.
The command to specify your proxies looks like the following example, and can be
entered into a startup .h file or in a script. Its format is identical to what is listed above:
DefineProxyPath("../proxy.25.5/<base>.<format>", .25, .5, GetDefaultBytes(), "Auto", 1, 1, 1);
Now, no further work is needed to load these proxies back into the user interface, as all
paths are determined by the default proxy settings that were saved into the script.
Pre-Generating Proxies From the Command Line—Method Two
If you only have the raw image files, you can use the -z or -zoom functions to render
your images:
shake fullres.#.cin -z .5 -fo halfres.#.cin -t 1-100 -v
shake fullres.#.cin -zoom .5 .5 -fo halfres_halfheight.#.cin -t 1-100
-v
shake fullres.#.cin -zoom .5 .5 -bytes 1 -fo
halfres_halfheight_8bit.#.sgi -t 1-100 -v
The first command renders half-resolution Cineon files. The second renders half-resolution flat files (to squeeze scope images). The third command does the same, but writes out 8-bit SGI files.
Using Pre-Generated Proxies in Your Script
After pre-generating your proxies, you need to set up your script so that it references
both the original media and the proxies you’ve created.
To use pre-generated proxies in a script via the user interface:
1 Read the full-resolution images into a script with a FileIn node. The Load as proxy
parameter at the bottom of the Browser lets you choose whether the resolution of the
image corresponds to the Base, P1, P2, or P3 proxy resolution.
2 In the FileIn parameters, expand the imageName subtree. The baseName is already
supplied, but you’ll notice that proxy1File probably has an incorrect default name. Click
the folder to browse to the correct location of the proxy media files.
After you load the appropriate files for each proxyNFile parameter, the scale and ratio parameters in the proxyNFile subtrees should be automatically set relative to the full-resolution baseFile image. Otherwise, they’re calculated according to the defaultWidth and defaultHeight parameters in the format subtree of the Globals tab.
Keeping High-Resolution Elements Offline
If you have not yet loaded your full-resolution images onto a disk and are loading a
proxy into the interface, click Cancel to close the File Browser. Then, in the Globals tab,
set the useProxy parameter to Base. This helps to automatically calculate the proxy
size. Next, return to the FileIn and indicate the proxy set of your image with the
Browser. Later, when the full-resolution elements go online, you can load in your
elements with the FileIn node. If your layer does not have a full-resolution element the
same size as the option chosen in the format pop-up menu in the Globals tab, you
must manually adjust the scale and ratio parameters for the proxy set of that FileIn.
Note: When you toggle the useProxy parameter from Base to P1, P2, or P3, you do not
necessarily load a FileIn node’s corresponding proxy1File, proxy2File, or proxy3File
media. The proxy mechanism loads the set that is closest to the global settings, and
does a further scale based on that set. For example, if you haven’t loaded a 10-percent
pre-generated proxy and you set useProxy to P3, a 10-percent file is generated on the
fly from the pre-generated P2 proxy.
When Not to Use Proxies
Proxies are fine for gauging color and relative position. However, proxies do not work
well for pixel-sensitive operations such as DilateErode (where you may chew just one
pixel in or out), or for tracking nodes, due to round-off errors. This is not true for all
filters. Blur, for example, works fine at a proxy resolution.
If you’re performing an operation where it’s not advisable to use proxies, an easy way
to increase Shake’s refresh speed while working on high-resolution images is to
activate the Viewer DOD button.
For more information, see “The Domain of Definition (DOD)” on page 82.
Do Not Use Proxies for Tracking
This is worth repeating. Because proxies eliminate detail by lowering an image’s
resolution, proxies will introduce rounding errors with Shake’s motion tracking nodes—
including the MatchMove, Stabilize, and Tracker nodes.
Proxies and Z-Depth
Proxies also interfere with Z-depth compositing, as there is no anti-aliasing with the Z
channel. The left image is a full-resolution Z composite. The right image is the same
composite at a proxy resolution. The anti-aliasing of the depth channel significantly
alters the image quality.
This quality loss can be somewhat minimized by using a Box filter for your proxies, but
then the rest of your image quality suffers as well.
Proxy Parameters
The following tables list proxy parameters everywhere they appear in Shake, in the
Globals tab, and in each FileIn node’s Source tab.
In the Globals Tab
The useProxy subtree in the Globals tab has the following parameters that let you
customize how Shake handles proxies for the media used by your script:
useProxy
Specifies the proxy set to be used. The sizes are determined by opening proxy1File, proxy2File, and so on, and setting the scale/ratio parameters. The two numbers are the scale and ratio of the proxy.
useProxyOnMissingFile
When active, substitutes a proxy generated from an associated proxy file when a
missing image is encountered.
proxyScale
A temporary setting (though it is saved into a script) for the proxy resolution that is
overwritten when useProxy is set. If setting this parameter matches up with a proxy set,
the proxy set is automatically activated.
proxyRatio
A temporary setting (though it is saved into a script) for the squeeze on the Y axis, which compensates for nonsquare-pixel images; it is overwritten when useProxy is set. If setting this parameter matches up with a proxy set, the proxy set is automatically activated.
proxyFilter
The filter used in the sampled images. The default is generally fine, although you may
want to switch to Box when working with Z-depth files.
pixelScale
An obsolete function to scale all pixel values.
pixelRatio
An obsolete function to scale all pixel values.
proxySet
A pop-up menu with pre-defined sets of proxy resolutions.
baseDefaultFile
This is used when you bring in pre-rendered proxies before loading in the full-resolution elements. It is assumed you are using a standardized naming convention. Therefore, by using this naming convention, Shake can establish the name of the full-resolution elements, based on the name of the proxy you supplied. Opening the baseDefaultFile subtree reveals three more parameters:
• baseDefaultFileType: The anticipated file type of the full resolution element.
• baseDefaultAlwaysAdd: Whether to add this into the FileIn.
• baseDefaultReplace: Lets you replace strings in the name of the first loaded proxy set with a second string, using the format source|replace;source|replace. For example, if you always have _lr at the end of a low-resolution file name and _hr at the end of a high-resolution file name, you could use _hr|_lr to automatically change myfile_hr to myfile_lr for that proxy set.
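The replace-rule format can be illustrated with a short sketch (not Shake’s implementation):

```python
# Apply replace rules in the "source|replace;source|replace" format
# described for baseDefaultReplace.
def apply_replace_rules(name, rules):
    for rule in rules.split(";"):
        if not rule:
            continue
        source, replacement = rule.split("|", 1)
        name = name.replace(source, replacement)
    return name

print(apply_replace_rules("myfile_hr.1-100#.cin", "_hr|_lr"))
# -> myfile_lr.1-100#.cin
```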
proxyNDefaultFile
The default file path for each of the four proxy sets you can specify. Relative paths are
relative to the image read in with a FileIn. The proxyNDefaultFile subtree has six
additional parameters:
• proxyNDefaultScale: The scale setting for that proxy set, which also sets the corresponding P1, P2, and so on. For example, proxy1 is P1.
• proxyNDefaultRatio: The Y scaling for that proxy set. If working with pre-rendered
elements and with anamorphic elements, be sure that this setting reflects the
height-to-width relationship between the proxy and base files as they actually exist
on disk. See “Anamorphic Images and Pre-Generated Proxies” on page 155.
• proxyNDefaultFileType: When pre-rendering your proxy files, they are stored in this
format. This has no effect with on-the-fly proxies.
Adding Your Own Entry to the proxySet Pop-Up Menu
You can define your own proxy set to appear in this menu via a .h file. This
automatically sets the paths and sizes for each set. You can also declare a proxy set
for a specific FileIn during browsing.
A predefined proxy set looks like this:
DefProxyGroup("4K Fullap",
    DefBasePath("../4096x3112/<base>.<format>"),
    DefProxyPath("../2048x1556/<base>.<format>", .50, 1., GetDefaultBytes(), "Auto", 0, 1, 0, "4096x3112|2048x1556"),
    DefProxyPath("../1024x778/<base>.<format>", .25, 1., GetDefaultBytes(), "Auto", 0, 1, 0, "4096x3112|1024x778"),
    DefProxyPath("../410x366/<base>.<format>", .10, 1., GetDefaultBytes(), "Auto", 0, 1, 0, "4096x3112|410x366")
);
• proxyNDefaultBytes: The bit depth for pre-rendered proxies. This has no effect with
on-the-fly proxies.
• proxyNDefaultAlwaysAdd: When enabled, this proxy set is added to a FileIn node
when created.
• proxyNDefaultReplace: See baseDefaultReplace.
textureProxy
The proxy level at which texture-rendered images that are used by the MultiPlane node’s hardware-rendering mode are displayed in the Viewer. This is similar to the interactiveScale setting, in that the proxy level that’s set here is used to generate on-the-fly Viewer images.
interactiveScale
Sets a temporary proxy setting that is active only when modifying a parameter. Fully
compatible with the other proxy methods.
FileIn
Each FileIn node also contains parameters for defining proxy usage.
BaseFile
The original, high-resolution image sequence or movie file associated with this node.
Opening this subtree reveals one additional parameter:
• baseFileType: Tells Shake what the format is if the file suffix is ambiguous. Generally,
you do not need to set this, as “Auto” automatically detects the format. If you have a
problem reading an image, try setting the format to correct this.
proxyNFile
The directory for pre-generated proxies. This is ignored for on-the-fly proxies.
• proxyNScale: The scaling factor for that proxy set.
• proxyNRatio: The Y scaling factor for that proxy set, used for anamorphic files.
• proxyNFileType: The file type for pre-generated proxies, ignored for on-the-fly proxies.

5 Compatible File Formats and Image Resolutions
The first part of this chapter covers the many file formats with which Shake is compatible. The second part covers how to control image resolution.
File Formats
The FileIn node can read in two kinds of media—image sequences and QuickTime files.
Image sequences are simply collections of image files, where each frame of film or
video corresponds to one image file. QuickTime files, on the other hand, contain every
frame of media inside of a single file. Which media format is more useful for you
depends on your production pipeline.
Note: QuickTime files can only be read and written by Shake on Mac OS X.
Image Sequences
The individual image frames in an image sequence can be saved in a wide variety of
formats. Interlaced frames for video contain both fields within the image file for that frame.
Each frame of an image sequence has the frame number saved as part of its file name.
These frame numbers can contain padding to keep the length of the file names
constant (Image0001, Image0002, Image0003, and so on) or padding can be left off
(Image1, Image2, Image3, and so on).
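The two naming conventions can be illustrated with a short sketch (not Shake code):

```python
# Build a frame file name with or without zero-padding.
def frame_name(prefix, frame, padding=0):
    return prefix + str(frame).zfill(padding)

print(frame_name("Image", 1, padding=4))  # -> Image0001
print(frame_name("Image", 1))             # -> Image1
```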
When creating an image sequence for use by Shake, it is good practice to include the
file extension (for example, .iff, .cin, .tif, and so on), but Shake does not necessarily need
it. In general, you use the extension to define the input or output format.
Shake Does Not Support HDV
Shake does not support long-GOP formats, including HDV, MPEG-2, or MPEG-1. If you
want to use HDV media that was captured with Final Cut Pro or Final Cut Express HD
in Shake, recompress it with the Apple Intermediate Codec first.
Shake is a hybrid renderer—it adapts its rendering from either scanlines or a group of
tiles. This means it never has to load the entire image, just a single piece of the image,
making a much smaller memory footprint than other compositors. Sometimes you
cannot load just a single line, for example, when using a Rotate node, in which case
Shake internally breaks the image down into small tiles to work with more
manageable bits.
QuickTime Files
Shake on Mac OS X supports the reading and writing of QuickTime files. QuickTime
support in Shake is limited, however. Embedded Flash and SMIL content is ignored, as
are all of QuickTime’s interactive features.
Audio tracks are also ignored by the FileIn node. If you read in a QuickTime file, you
must import its audio via the Audio Panel in a separate process. For more information
on using the Audio Panel, see Chapter 9, “Using the Audio Panel,” on page 277.
If you reference a QuickTime movie with more than one video track, Shake only reads
the first video track into your script; all others are ignored.
Note: You cannot write out (export) a QuickTime movie with a dynamically varying frame size. The resulting file will be unusable.
Which Codec Is Best?
Compositing with Shake is best accomplished using source media files with little or no
compression. Ideally, you should then capture or create QuickTime movies for use with
Shake using codecs that apply the least amount of compression possible. QuickTime
includes the Uncompressed 8 and 10-bit 4:2:2 QuickTime codecs, as well as other codecs
for various formats of standard- and high-definition video. There are also a variety of
third-party codecs available that provide other bit depths and compression ratios.
Note: Always check with the developer regarding the compatibility of third-party
codecs with Shake.
Image Formats That Support tmp Files
Shake creates temporary files (tmp files) when writing certain formats of images, or
when running out of memory. These temporary files are written like swap files, but are
used before memory-intensive activity occurs to avoid the slowdown of swapping.
Normally, Shake reads in only the portion of an image that it can fit into memory—
either a group of scan lines or a tile of the image. This means that any given image is
accessed not just once, but many times, with only the necessary portion of it being
read each time in order to save memory and processing time.
The ideal format to support this behavior is the native Shake .iff format (this format is
also licensed to Alias/Wavefront for their Maya software), but several other image
formats support this behavior as well.
There are some formats that do not support the ability to efficiently read a random
portion of the image. As a result, these images can take significantly longer to load, and
may require more memory.
Note: QuickTime files do not support the creation of tmp files.
To calculate the maximum disk space needed to accommodate tmp files for each FileIn
node during I/O, use the following formula:
tmp file = width * height * number of channels * bytes
where bytes = 1 for 8 bit, 2 for 10 or 16 bits, 4 for float.
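For example, applying the formula to an RGBA image at a typical 2K film resolution of 2048 x 1556, stored at 16 bits (2 bytes) per channel:

```python
# tmp file = width * height * number of channels * bytes
width, height, channels, bytes_per_channel = 2048, 1556, 4, 2
tmp_bytes = width * height * channels * bytes_per_channel
print(tmp_bytes)                    # 25493504 bytes
print(round(tmp_bytes / 2**20, 1))  # about 24.3 MB per frame
```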
The following formats create temporary files:
• Alias
• BMP (depending on orientation)
• Cineon (depending on orientation)
• JPEG
• PBM
• Softimage
• Targa (depending on orientation)
• TIFF (depending on orientation)
• YUV

The following formats do not create temporary files:
• AVI
• DPX
• GIF
• IFF
• Mental Images
• .mov/QuickTime
• OpenEXR
• PNG
• RLA
• SGI
• Side FX
Table of Supported File Formats
The table in this section outlines all of the image formats that Shake supports, with
columns for the file extension, image format, supported input and output channels,
compression options, bit depth, and tmp file support of each format.
Shake supports several combinations of the following input channels:
• BW: Black and White
• RGB: Red, Green, and Blue
• A: Alpha Channel
• Z: Z channel, for depth
For example, BW[A] is either BW or BWA. BW[A][Z] is any combination of BW, alpha, and
Z. RGB[A] and RGB[A,Z] are optional additions of alpha or Z channels.
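The bracket notation above can be expanded mechanically. The following sketch (not Shake code) lists every combination of a required channel group plus optional channels:

```python
from itertools import combinations

# Expand a required channel group plus optional (bracketed) channels
# into every supported combination.
def expand_channels(required, optional):
    result = set()
    for r in range(len(optional) + 1):
        for combo in combinations(optional, r):
            result.add(required + "".join(combo))
    return result

print(sorted(expand_channels("BW", ["A", "Z"])))
# -> ['BW', 'BWA', 'BWAZ', 'BWZ']
```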
Note: Targa and SGI have different input/output options for channels. When you write
a BWA image, it is converted to RGBA. Also, many options must be explicitly stated
when in command-line mode. For example, Cineon and JPEG files always write in RGB
unless you specify every argument found in the FileOut node for Cineon or JPEG in the
Shake interface.
Compression Controls
Compression controls indicate any special compression techniques. Note that Cineon
and YUV have no compression.
Nodes That Create tmp Files
Certain nodes also create tmp files of their own during the processing of a script. This
is required for nodes that drastically change the X, Y position of an image’s pixels
during rendering. For example, if you rotate an image 90 degrees, the pixel in the
lower right now moves to the upper right. In order to process this, Shake creates a
tmp file that includes as much information as necessary to calculate the image. A
tiling system is used, so these tmp files are typically much smaller than those created
during I/O. The nodes that create tmp files include the following:
• Move2D
• Move3D
• Rotate
• Orient
• Flip
• All Warps (Warper, Morpher, WarpX, DisplaceX, Turbulate, and so on)
By default, temporary files are written to: /var/tmp.
To relocate the temporary directory, set the environment variable TMPDIR:
setenv TMPDIR /more_disk_space/tmp
An asterisk indicates additional format notes (following the table).

Extension | Image Format | Input Channels | Output Channels | Compression | Bit Depth | tmp Files
----------|--------------|----------------|-----------------|-------------|-----------|----------
.iff* (or no extension) | Shake native | BW[A, Z], RGB[A, Z] | Same | | 8, 16, float | No
.nri | Shake icon (only for interface icons) | RGB[A] | Same | | 8 | No
.iff* | Alias/Wavefront Maya (licensed from Apple) | RGB[A, Z] | Same | | 8, 16, float | No
.als, .alias, (pix) | Alias/Wavefront Alias | RGB | Same | | 8 | Yes
.alsz | Alias/Wavefront Alias Z buffer | Z, BW[A, Z], RGB[A, Z] | Same | | 8, 16, float | Yes
.avi* | Microsoft video file format | RGBA | Same | Lossy, from 0 to 1, 1 = high quality | |
.bmp, .dib | BMP | RGB | Same | | 8 |
.ct, .ct16, .mray | Mental Ray | RGBA | Same | | 8, 16, float | No
.cin* | Kodak Cineon | RGB[A] | Same | None | 16 (10 on disk) | Yes
.dpx | DPX (reader courtesy of Michael Jonas, Das Werk Gmbh, modified by Apple) | RGBA | Same | N/A | 8, 16 | Yes
.exr | OpenEXR | RGB[A, Z] (supports any number of additional channels) | Same | Options for both lossless and lossy compression, ratios from 2:1 to 3:1 | 16-bit float, 32-bit float (32-bit int supported in Z channel) | No
.gif (read only) | GIF | RGB | N/A | N/A | 8 | No
.jpeg, .jpg, .jfif* | JPEG | BW, RGB | Same | Lossy, from 0 to 100%, 100 = high quality | 8 | Yes
.pbm, .ppm, .pnm, .pgm | PBM | BW, RGB | Same | | 8 | Yes
.pic | Softimage | RGB[A] | Same | | 8 | Yes
.png | PNG | RGB[A], BW[A] | Same | | 8, 16 | No
.psd* | Adobe Photoshop | RGB[A] | RGBA | | 8, 16 |
.mov, .avi (QuickTime*) | Apple video file format, multiple codecs supported | RGB[A] | Same | Lossy, from 0 to 1, 1 = high quality | 8, 16 |
rgb, sgi, bw, raw, sgiraw* | SGI | BW[A], RGB[A] | RGB[A] | Lossless RLE | 8, 16 | No
.rla* | Alias/Wavefront RLA (supports Z buffer) | BW[A, Z], RGB[A, Z] | Same | | 8, 16, float | No
.rpf* | RLA Rich Pixel Format. Use this type when saving RLA files with Z depth to be read into Adobe After Effects. Make sure the file extension is still .rla, but set the format to .rpf. | BW[A, Z], RGB[A, Z] | Same | | 8, 16, float | No
.tdi | Alias/Wavefront Explore format (identical to .iff) | BW[A, Z], RGB[A, Z] | Same | | 8, 16, float | No
.tdx | Alias/Wavefront Explore Tiled Texture Map | BW[A, Z], RGB[A, Z] | Same | | 8, 16, float | No
.tga | Targa | RGB[A] | RGB[A] | On/Off | 8 | Yes
.tif, .tiff | TIFF | BW[A], RGB[A] | Same | 4 options, see below | 8, 16, float | Yes
.xpm | XPM | RGB[A] | Same | | 8 |
.yuv, .qnt, .qtl, .pal* | YUV/Abekas/Quantel | RGB | Same | Uncompressed files with YUV encoding | 8 | Yes
.yuv (10-bit) | Same | RGB | Same | Uncompressed files with YUV encoding | 16 (10) | Yes

Format Descriptions
The following section discusses some of the more useful image formats (those indicated with an asterisk in the table above) in greater detail.
IFF
The Shake IFF (.iff) format is not the same as the Amiga format with the same extension, although they share certain structural similarities. The IFF format is licensed to Alias/Wavefront for use with Maya, so Shake is ideally suited to work with Maya. Since Shake deals with this format internally, you get the best performance by maintaining your intermediate images in this format as well. The IFF format can accommodate 8, 16, or 32 bits per channel, as well as maintain logarithmic information, alpha, and Z channels. Currently, not many packages explicitly support this format, but if the package supports the old TDI (.tdi) format, it works with IFF as well (for example, with Interactive Effects’ Amazon 3D Paint).
CIN
Shake works with images bottom-up, meaning 0,0 is at the bottom-left corner. The Cineon and TIFF formats allow you to write the files either bottom-up or top-down. Because of Shake’s bottom-up nature, the I/O time (actual render time remains the same) is four times greater when dealing with top-down Cineon or TIFF files. You can set how Shake writes the images—reading either way is no problem, except for the speed hit. This information is placed in a startup.h file.
To set Shake to write images in top-down mode:
m Add the following lines to a .h file in your startup directory:
script.cineonTopDown = 1;
script.tiffTopDown = 1;
You can also set environment variables in your .cshrc (or .tcshrc or whatever):
setenv NR_CINEON_TOPDOWN
setenv NR_TIFF_TOPDOWN
(For more information on setting up your own .h files, see Chapter 14, “Customizing
Shake.”)
By default, Cineon and TIFFs are set to the slower top-down mode, since many other
software products do not recognize bottom-up images. If you write a bottom-up image
and it appears upside down in another software package, you have four choices:
• Reset the TopDown switch/environment variable, and save the image again in Shake.
• Flip the image in Shake before saving it.
• Flip the image in the other software.
• Call the other vendor and request that they properly support the file formats in
question.
DPX
The reading and writing of DPX images has been improved for greater compatibility
with more film recorders.
When you read in a DPX image, its header data is passed down through the node tree.
If you read in a DPX image, process it with single input nodes, such as color, filter, or
transformation nodes, and then render (with a FileOut node) the result as another DPX
file, the header data is passed through the node tree and written out to the resulting
file. For more information about Shake’s support for custom file header metadata, see
“Support for Custom File Header Metadata” on page 178.
When rendering a DPX file with a FileOut node, an additional parameter allows you to
specify the orientation of the output image as either Top to Bottom (default), or Bottom
to Top.
OpenEXR
OpenEXR (.exr) is an extremely flexible, cross-platform file format developed and
maintained by Industrial Light & Magic. Key features of the OpenEXR format include
support for the efficient storage of high dynamic-range image data using the 16-bit
float “half” format, and support for auxiliary image data channels. OpenEXR 16-bit float
and 32-bit float data channels can be read directly into Shake’s RGBAZ data channels.
In addition, OpenEXR 32-bit unsigned integer channel data can be read into Shake’s Z
data channel, although Shake’s image processing nodes cannot process this data.
Note: 32-bit unsigned integer channel data will only be useful to custom plug-ins with
built-in logic capable of processing the data within the Z channel.
A major feature of the OpenEXR format is its ability to support an extremely wide
dynamic range. Thanks to its floating-point support, a contrast range of up to 30 f-stops
can be supported with no loss of precision. Color resolution in 16-bit float (“half”) files is
1024 steps per f-stop.
Another advantage of the OpenEXR format is support for any number of additional
auxillary data channels, in addition to the standard RGBAZ channels. For example,
additional channels can be written to store luminance, surface normal direction
channels, velocity channels, and even individual lighting passes written from a 3D
rendering application.
Shake Support for Auxiliary OpenEXR Data Channels
Internally, Shake only supports the processing of RGBAZ channels down the processing
tree. However, the FileIn node provides channel remapping options in the Image tab of
a FileIn node’s parameters. Each channel that Shake supports has a corresponding pop-up menu. Each menu presents a list of every compatible channel within the referenced
OpenEXR file. Color channels that correspond to Shake’s supported channels are
mapped by default.
To remap any image channel:
1 Load the parameters of a FileIn node that references an OpenEXR file, and open the Image tab.
2 Choose a new channel to map to from any color channel pop-up menu.
Channel remapping has the following restrictions:
• Any 16-bit or 32-bit float channel can be remapped within the RGBAZ channels.
• 32-bit integer channels can only be remapped to the Z channel.
Note: If you want to access multiple 32-bit integer channels within a script, duplicate
a number of FileIn nodes equal to the number of channels you want to access, then
remap each 32-bit integer channel to a Z channel of one of the duplicate FileIn nodes.
FileIn channel pop-up menus for two files with different image channels
Support for Data Compression
The OpenEXR format supports several codecs, with options for either lossless or lossy
compression. Compression ratios range from 2:1 to 3:1.
Note: By default, FileOut nodes set to output OpenEXR images default to the Piz codec.
The following codec information appears courtesy of Industrial Light & Magic:
• none: No compression is applied.
• RLE: (Lossless) Differences between horizontally adjacent pixels are run-length
encoded. This method is fast, and works well for images with large flat areas. But for
photographic images, the compressed file size is usually between 60 and 75 percent
of the uncompressed size.
• ZIP: (Lossless) Differences between horizontally adjacent pixels are compressed using
the open source zlib library. ZIP decompression is faster than PIZ decompression, but
ZIP compression is significantly slower. Photographic images tend to shrink to
between 45 and 55 percent of their uncompressed size.
• PXR24: (Lossy) After reducing 32-bit floating-point data to 24 bits by rounding,
differences between horizontally adjacent pixels are compressed with zlib, similar to
ZIP. PXR24 compression preserves image channels of type HALF and UINT exactly,
but the relative error of FLOAT data increases to about 3 × 10⁻⁵. This compression
method works well for depth buffers and similar images, where the possible range of
values is very large, but where full 32-bit floating-point accuracy is not necessary.
Rounding improves compression significantly by eliminating the pixels’ eight least
significant bits, which tend to be very noisy and difficult to compress.
• Piz: (Lossless) This is the default compression method used by Shake. A wavelet
transform is applied to the pixel data, and the result is Huffman-encoded. This
scheme tends to provide the best compression ratio for typical film images. Files are
compressed and decompressed at roughly the same speed. For photographic images
with film grain, the files are reduced to between 35 and 55 percent of their
uncompressed size.
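As a toy illustration of the RLE scheme described above (our sketch, not ILM’s implementation), differencing horizontally adjacent pixels turns flat areas into long runs of zeros, which run-length encode compactly:

```python
def delta_rle_encode(scanline):
    """Toy encoder: difference horizontally adjacent pixel values, then
    run-length encode the differences as (value, count) pairs."""
    if not scanline:
        return []
    # First entry is kept as-is; the rest are horizontal differences.
    deltas = [scanline[0]] + [b - a for a, b in zip(scanline, scanline[1:])]
    runs = []
    for d in deltas:
        if runs and runs[-1][0] == d:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([d, 1])       # start a new run
    return [tuple(r) for r in runs]

# A flat region collapses to two short runs: the first value, then zeros.
print(delta_rle_encode([10, 10, 10, 10]))  # [(10, 1), (0, 3)]
```

This is why RLE works well for images with large flat areas but poorly for photographic images, whose per-pixel differences rarely repeat.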
OpenEXR Proxy Handling
Shake can read both tiled and scanline OpenEXR images. Scanline files contain a single
image at a set resolution, but tiled files hold several versions of the same image at a
variety of resolutions, for use as proxies in supporting applications.
Shake’s proxy mechanism does not take advantage of tiled images. As a result, Shake
defaults to reading in the highest available tiled resolution.
For More Information
More information about the OpenEXR format can be found at http://www.openexr.com.
JPEG
In the FileOut node you can set the quality level of these image formats (.jpeg, .jpg, .jfif)
and determine which channels are present in the file.
MOV, AVI
The QuickTime (.mov) and AVI (.avi) formats are available on Macintosh systems only;
AVI files are written through QuickTime. If you select either as your output format,
you have three additional options:
• codec: A pop-up menu with a list of all available compressors with Animation (RLE
compression) as the default codec.
• compressQuality: A value from 0 to 1. Higher values produce higher quality and a larger file size.
• framesPerSecond: This setting is embedded in the file.
When QuickTime files are rendered from the interface, the Shake Viewer displays the
thumbnail. You must close this window before the QuickTime file is actually completed.
For this same reason, you cannot write over the file on disk while it is still being viewed.
Important: When using the FileOut node to render uncompressed QuickTime movies,
use the Apple Uncompressed 8- or 10-bit 4:2:2 codecs to obtain the highest quality.
PSD (Photoshop)
There are two ways to import Photoshop files. First, you can use a FileIn node to import
a .psd file and select either the merged layers or a single layer. These controls are
located in the FileIn parameters.
The second way to import a Photoshop file is to use the File > Import Photoshop Script
command. Each layer is imported as a separate file and fed into a MultiLayer node. For
information on the Photoshop layering modes, see “Importing Photoshop Files” on
page 473.
RGB, SGI, BW, RAW, SGIRAW
With each of these image formats you have the option to set the channels saved into
the output file.
RLA, RPF
Adobe After Effects and Autodesk 3ds max do not properly support the original
Wavefront RLA file specification for the Z channel. Therefore, you must write the
image in a specific format: choose rpf (Rich Pixel Format) in the FileOut node, but use
the .rla file extension in the file name, because these packages do not otherwise
recognize the file.
YUV, QNT, QTL, PAL
You have the choice to write out these image formats in NTSC, PAL, or 1920 x 1080 4:2:2
8 bit. You can also write them out as 10-bit YUV files.
Note: The .qnt and .qtl options do not appear in the fileFormat list in the FileOut
parameters, and must be entered manually when setting your FileOut name.
When yuvFormat is set to Auto, the resolution is automatically determined by the
resolution of the FileIn node. The selected resolution is the smallest possible to fit the
entire image. For example, if the image is smaller than NTSC, it is NTSC. If it is between
NTSC and PAL, it is PAL; otherwise it is HD. You can also manually select the resolution.
The script.videoResolution is no longer used for this purpose.
Note: The YUV file reader and writer supports Rec. 709 and Rec. 601-1 colorimetry
coefficients, used primarily in HD YCbCr (HD-SDI).
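The Auto rule described above (the smallest standard raster that contains the image) can be sketched as follows; the helper function and raster list are illustrative, not part of Shake:

```python
# Hypothetical sketch of the yuvFormat "Auto" rule: pick the smallest
# standard raster that can contain the entire input image.
FORMATS = [
    ("NTSC", 720, 486),
    ("PAL",  720, 576),
    ("HD",  1920, 1080),
]

def auto_yuv_format(width, height):
    for name, w, h in FORMATS:
        if width <= w and height <= h:
            return name
    return "HD"  # inputs larger than HD still fall back to HD

print(auto_yuv_format(640, 480))   # NTSC
print(auto_yuv_format(720, 576))   # PAL
print(auto_yuv_format(1280, 720))  # HD
```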
Support for Custom File Header Metadata
Internal support for blind data allows for the preservation of metadata from custom file
formats for facilities using special file translators.
If you design a file translator that places a file’s header metadata into Shake’s blind data
container, it will be passed down through the node tree. For example, if you read in an
image with custom metadata using such a file translator, process it with a series of
single input nodes, then render out the result into the same format with a FileOut node,
the blind header data is passed through the node tree and written to the resulting file.
If two images with metadata in the blind data container are combined in a node, such
as Over, Outside, or MultiLayer, the data from the image connected to the node’s
leftmost input (frequently labeled the foreground input knot) is propagated down the tree.
If, at some point in the node tree, you wish to assign the blind header data from one
image to another, use the Copy node. This can be useful if you have a complex node
tree that uses many layer nodes combining several images, yet you want the final file to
render out with specific header data taken from one of the FileIn nodes.
To assign blind header data from one image to another:
1 Add a Copy node to the node tree, so that the nodes providing the RGBA data you
want to use are connected to the Foreground input knot.
2 Attach a second FileIn node containing the blind header information you want to use
to the Copy node’s background input knot.
3 In the Copy node’s Parameters tab, turn on the copyBData parameter.
The resulting output from the Copy node contains the blind header data from the
second FileIn node. This operation replaces any header data that was originally in the
image.
Table of File Sizes
In the following table, all sizes are listed in MB for 3-channel images. Note that many
formats support optional alpha or Z channels, which add to the file size. A single-channel
image is typically one-third the size. The two sizes listed in each cell are for a Ramp (an
example of extreme compression) and for a completely random image. Normal plates
tend to fall in between, usually closer to the higher value, so file sizes for a given format
can vary widely. For example, an .iff goes from 2.5 MB for a 2K 8-bit ramp to 9.1 MB for
a random image of the same size and depth. If only one entry is listed, it is an
uncompressed file.

Extension        NTSC, 8 Bits   NTSC, 16 Bits   NTSC, Float   2K, 8 Bits    2K, 16 Bits    2K, Float
.cin (10 bits)   1.3                                          12.2
.iff             .74 | 1        1.7 | 2         2.9 | 3.9     2.5 | 9.1     11.5 | 18.2    22.5 | 35.8
.jpg (100%)      .02 | .48                                    1.4 | 4.3
.mray            1.3            2.7             5.3           12.2          24.3           48.6
.pic             .9 | 1                                       1.5 | 9.1
.rla             .8 | .92       1.8 | 2         4             2.3 | 9.2     11.5 | 18.4    36.5
.sgi             .74 | 1        1.8 | 2                       2.3 | 9.3     16.5 | 18.5
.tif             .04 | 1.4      .07 | 1.6       3 | 3.7       .25 | 12.5    .2 | 17.8      27.4 | 33.3
.xpm             .68 | 1.2                                    6.1 | 10.6
.yuv             .68                                          4 (HD)

Controlling Image Resolution
Shake has no formal working resolution to set. Setting the defaultWidth and
defaultHeight parameters in the format subtree of the Globals tab only affects the size
of newly created Shake-generated images, including those created by the Checker, Color,
ColorWheel, Grad, QuickPaint, QuickShape, Ramp, Rand, RGrad, RotoShape, and Text nodes
in the Image tab. The actual resolution of your composition is primarily determined by
the resolutions of the input images, and can be modified by a variety of operations as
you work down your node tree. For example, if you read in a D1-resolution image, your
project starts out at D1 resolution.
Note: While it may seem that choosing a new setting from the format pop-up menu in
the Globals tab changes the size of the image in the Viewer, this is not true. What
you’re really seeing is a change to the defaultViewerAspectRatio parameter, which
changes how the image is scaled in the Viewer to compensate for images that use
nonsquare pixels (such as standard-definition NTSC and PAL video). This parameter has
no actual effect on the resolution of your final image.
Combining Images of Differing Resolution
When you composite images with different resolutions using one of the Layer nodes,
you can select which image defines the output resolution of this operation using the
clipMode parameter. This parameter allows you to select either the foreground (first
image) or background (second image) resolution to use for the final image.
For example, if you read in 50 2048 x 1556 images, you are working at 2048 x 1556. If
you composite all of the images over a 720 x 486 background, choosing the
background resolution for that composite crops the foreground elements to the video
resolution; choosing the foreground resolution keeps the composite at 2048 x 1556.
Because of the Infinite Workspace, you never have to crop a lower-resolution element
to match a higher-resolution element when compositing or applying transformations.
Note: You can view the resolution of an element in the title bar of the Viewer, or
position the pointer over the node and look at the help text in the lower-right window.
Changing Resolution
You can change the resolution of an element in your composition in several ways. In
the following examples, a 640 x 480 foreground element is composited over a 720 x
486 NTSC D1 element. The dark gray area designates space outside of the image frame.
Changing Resolution by Layering
One of the simplest ways you can handle this case is by using layering nodes to crop
the frame.
To set resolution by compositing two images of different resolutions:
m Select the background or foreground resolution in the clipMode parameter in the
layering node.
Note: This method works even when compositing a pure black plate generated with
the Color node.
In the following example, a 320 x 240 black frame is created with an Image–Color node.
The resolution of the foreground and background elements is set to 320 x 240 by
assigning the background clipMode in the second Over node.
Cropping the Frame Using the Crop, Window, or Viewport Nodes
The following three nodes, located in the Transform tab, crop the frame without scaling
and filtering the pixels.
• Crop: You supply the lower-left and upper-right corners, and the frame is cut at those
coordinates. Also turns off the Infinite Workspace.
• Window: You supply a lower-left coordinate, and then the image’s resolution. Cuts off
the Infinite Workspace.
• Viewport: Same as Crop, except it maintains the Infinite Workspace. This is useful for
setting resolutions in preparation for later use of onscreen controls, as the controls
are always fitted to the current resolution.
In the following example, a Crop node is attached, with the Crop parameters set
manually to 244, 92, 700, and 430. These settings return a 456 x 338 resolution (this is
completely arbitrary). Notice the onscreen control you can use to adjust the resolution
manually.
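The output resolution in that Crop example follows directly from the corner coordinates; a trivial illustrative sketch (not Shake code):

```python
def crop_resolution(left, bottom, right, top):
    """Resolution produced by cropping from (left, bottom) to (right, top)."""
    return (right - left, top - bottom)

# The example settings from above: 244, 92, 700, 430.
print(crop_resolution(244, 92, 700, 430))  # (456, 338)
```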
Using the Resize, Fit, or Zoom Node to Scale the Frame
The following three nodes change the resolution by scaling the pixels.
• Resize: You set the output resolution of the node, and the image is squeezed into
that resolution. This usually causes a change in aspect ratio.
• Fit: Like Resize, except it pads the horizontal or vertical axis with black to maintain
the same aspect ratio.
• Zoom: Same as Resize, except that you supply scaling factors, so a zoom of
1, 1 keeps the same resolution; 2, 2 is twice the size; .5, .5 is half the size, and so on.
For more information on the individual transform nodes that can be used to change
resolution, and tables listing the differences between the scaling functions, see
Chapter 26, “Transformations, Motion Blur, and AutoAlign,” on page 763.
Nodes That Affect Image Resolution
All of the nodes in this section can be used to modify image resolution.
Fit
The Fit node changes the image resolution, resizing the image to fit inside of a frame.
Fit does not stretch the image in either axis; it zooms X and Y by the same amount until
either value fits in the new resolution (that you specify). For example, if you have a 100
x 200 image, and fit it into a 250 x 250 resolution, it zooms the image by a factor of 1.25
(250/200), and pads black pixels on the left and right edges.
Note: The Fit node allows you to use different scaling filters to scale the image
horizontally and vertically.
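The Fit computation from the example above can be sketched as follows (illustrative Python; the helper name is ours, not a Shake function):

```python
def fit_scale_and_pad(src_w, src_h, dst_w, dst_h):
    """Uniform scale factor that fits the source inside the destination,
    plus the black padding (in pixels) added on each padded side."""
    scale = min(dst_w / src_w, dst_h / src_h)  # same zoom on both axes
    scaled_w, scaled_h = src_w * scale, src_h * scale
    pad_x = (dst_w - scaled_w) / 2  # left and right padding
    pad_y = (dst_h - scaled_h) / 2  # top and bottom padding
    return scale, pad_x, pad_y

# The 100 x 200 image fit into 250 x 250 from the example:
print(fit_scale_and_pad(100, 200, 250, 250))  # (1.25, 62.5, 0.0)
```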
Parameters
This node displays the following controls in the Parameters tab:
xSize, ySize
The new horizontal and vertical resolution. These parameters default to the width and
height expressions.
xFilter, yFilter
The methods used to scale the image horizontally and vertically. Choosing Default uses
the sinc filter when scaling the image down, and the mitchell filter when scaling the
image up. For more information, see Chapter 28, “Filters.”
preCrop
Turns off the Infinite Workspace so that the letterbox area remains black.
Resize
The Resize node resizes the image to a given resolution.
Parameters
This node displays the following controls in the Parameters tab:
xSize, ySize
The new horizontal and vertical resolution. These parameters default to the width and
height expressions.
filter
The method to use to scale the image. Choosing Default uses the sinc filter when
scaling the image down, and the mitchell filter when scaling the image up. For more
information, see Chapter 28, “Filters.”
subPixel
Turns on quality control.
• 0 = low quality
• 1 = high quality
If the new width or height is not an integer (either because it was set that way, or
because of a proxy scale), you have a choice to snap to the closest integer (subpixel
off) or generate transparent pixels for the last row and column (subpixel on).
Zoom
The Zoom node resizes the image by a given scaling factor.
Parameters
This node displays the following controls in the Parameters tab:
xScale, yScale
The scaling factor, for example, .5 = half resolution, 2 = twice the resolution.
filter
The method to use to scale the image. For more information, see Chapter 28, “Filters.”
subPixel
Turns on quality control.
• 0 = low quality
• 1 = high quality
If the new width or height is not an integer (either because it was set that way, or
because of a proxy scale), you have a choice to snap to the closest integer (subpixel
off) or generate transparent pixels for the last row and column (subpixel on).
Remastering to a Different Resolution With Proxies
You can remaster your scene’s resolution using proxy images. As an example, you can
load NTSC and PAL images simultaneously, with one set as a proxy image. With the
proper scaling factors, the scene can be reset to the other resolution by switching the
proxy set.
For more information on working with proxies and high-resolution images, see
Chapter 3, “Adding Media, Retiming, and Remastering.”
Cropping Functions
This section describes several nodes you can use to crop your images. Window,
Viewport, and Crop are located in the Transform tab, and AddBorders is located in the
Other tab.
AddBorders
The AddBorders node is similar to a Crop, except it adds an equal amount of space to
the left and right sides, or to the top and bottom sides. It is located in the Other tab
because of its infrequent use.
Parameters
This node displays the following controls in the Parameters tab:
xBorder
The number of pixels added to the left and right borders.
yBorder
The number of pixels added to the bottom and top borders.
Crop
This node crops an image by defining the lower-left corner and the upper-right corner
of the crop. Since the numbers can be greater or less than the boundaries of the image,
you can make the image smaller, or expand the image with a black border. A Crop cuts
elements beyond the frame area, so if you later use another transform node (for
example, Pan, Move2D, and so on) to move the image, black is brought in. This is how
Crop differs from Viewport—Viewport does not cut off the image. Window is also the
same command, but you set the lower-left corner, and then the X and Y resolution.
Crop is particularly helpful in that it sets a new frame area. Shake only processes nodes
within the frame, so to speed up an operator you can limit its area with a Crop. This is
essentially what is done with the Constraint node.
Parameters
This node displays the following controls in the Parameters tab:
cropLeft
The number of pixels to crop from the left of the image. This parameter defaults to 0,
the leftmost pixel of the image.
cropBottom
The number of pixels to crop from the bottom of the image. This parameter defaults to
0, the bottommost pixel of the image.
cropRight
The number of pixels to crop from the right of the image. This parameter defaults to
the width expression.
cropTop
The number of pixels to crop from the top of the image. This parameter defaults to the
height expression.
Viewport
The Viewport node is exactly like the Crop node, but it keeps the image information
outside of the frame so you can perform later transformations.
Viewport Node Example
The following tree has a large input image (scaled down in the illustration) which is
piped into a Crop node and a Viewport node, each with the same values. Both nodes
are then piped into Move2D nodes with the same xPan values. The Crop result has black
on the right edge after the pan; the Viewport result does not.
Parameters
This node displays the following controls in the Parameters tab:
cropLeft
The number of pixels to crop from the left of the image. This parameter defaults to 0,
the leftmost pixel of the image.
cropBottom
The number of pixels to crop from the bottom of the image. This parameter defaults to
0, the bottommost pixel of the image.
cropRight
The number of pixels to crop from the right of the image. This parameter defaults to
the width expression.
cropTop
The number of pixels to crop from the top of the image. This parameter defaults to the
height expression.
Window
The Window node is exactly like the Crop node, but you enter the lower-left corner, and
then the X and Y resolution of the image.
Parameters
This node displays the following controls in the Parameters tab:
cropLeft
The number of pixels to crop from the left of the image. This parameter defaults to 0,
the leftmost pixel of the image.
cropBottom
The number of pixels to crop from the bottom of the image. This parameter defaults to
0, the bottommost pixel of the image.
xRes
The number of horizontal pixels that equals the horizontal resolution of the image. This
parameter defaults to the width expression.
yRes
The number of vertical pixels that equals the vertical resolution of the image. This
parameter defaults to the height expression.
6 Importing Video and Anamorphic Film
Shake provides support for nearly any video or
anamorphic film format in use. This chapter covers the
parameters that must be set up—and the special
considerations given—for these formats.
The Basics of Processing Interlaced Video
Although Shake’s origins lie in film compositing (the processing of high-resolution,
non-interlaced, or progressive-scan, images), Shake can also create
composites using video images from nearly any format, standard definition or high
definition.
definition. The individual frames of image sequences may be interlaced as well as
progressive-scan, enabling Shake users on any platform to work with video clips. On
Mac OS X, Shake supports QuickTime, which allows for an even wider variety of video
clips to be imported from applications such as Final Cut Pro.
When you read interlaced clips into Shake, there are a variety of parameters that you
must set to ensure that the image data in each field of every frame is properly
processed. If these parameters are incorrectly set, the result may be undesirable motion
artifacts that are not apparent on your computer screen, but that leap out at you when
the exported composite is played back on a broadcast video monitor.
Preserving, Eliminating, and Creating Interlacing
When you process interlaced video in Shake, you have the option of either preserving
both fields from every frame for interlaced output, or eliminating the interlacing
altogether and outputting progressive-scan clips. Additionally, you have the option of
taking non-interlaced source media and turning it into an interlaced clip for use in an
interlaced project.
Each of these processes requires you to set up parameters within each FileIn node, as
well as those found within your script’s Globals tab, in specific ways to ensure proper
rendering, and to maximize the quality of the processed result.
Understanding Video Interlacing
Dividing each frame of video into two fields is a technique originally developed for
television broadcasting to solve a number of technical difficulties with early TV
equipment. In essence, interlacing reduces the perceived strobing of 30 images playing
every second on a television screen. Interlacing divides each frame into two fields, each
of which contains half of the image information. Consequently, the television screen
displays 60 fields each second, resulting in smoother motion.
The following example depicts an animation sequence showing frames 1 and 3 of a
moving image. These are non-interlaced, full frames—each frame contains the entire
image at that instant in time.
The images below depict the same frames up to (but not including) frame 3 as they
appear when they’re interlaced. The image on the left shows some information from
frame 1, field 1 and from frame 1, field 2 (unconventionally labeled here as frame 1.5).
The image on the right shows frame 2, field 1 and frame 2, field 2 (unconventionally
labeled frame 2.5 for illustration purposes).
As you can see, each field contains only half of the horizontal lines of the image. If
you’ve ever seen a still photograph of an interlaced source clip, you’ve probably already
seen this type of image: the moving subject appears to be in two places at once,
blurred because each image contains only half the scan lines.
This effect occurs because video fields are recorded one after the other, just like frames.
When a moving subject is recorded using an interlaced video format, the subject is in
one position when the first field is recorded, and in another position when the second
field is recorded. When the video is played back, and each field is played in the correct
order, the image appears normal. However, looking at both fields together as a single
frame, the change in position can be seen to occur within the one frame.
The following images show field 1 and field 2 of a single frame as separate images.
Notice the black lines across the images—these are the lines that are filled by the
opposing field. This example clearly illustrates that each field contains only half of the
total image data in a given frame.
The following illustration depicts a close-up of the interlaced frame, with both fields
combined. Because of the subject’s change in position from one field to the next, the
fields appear offset.
Because each interlaced frame of video consists of two fields that contain half the
information for that frame, each field can be thought of as a half-height version of the
originating frame. Because, during playback, the television displays these images
quickly, one after the other, the human eye is fooled into perceiving the image as
having a higher resolution than each individual field actually possesses. To sum up, each
field sacrifices quality in terms of vertical resolution (perceived as image sharpness) for
the benefit of improved temporal quality (perceived as smoothness of motion).
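The split of a frame into two fields by scanline parity can be sketched as follows (illustrative Python, with scanlines counted from the top; which field plays first temporally is the clip’s field dominance):

```python
def split_fields(frame_rows):
    """Separate a frame (a list of scanlines) into its two fields."""
    field_a = frame_rows[0::2]  # scanlines 0, 2, 4, ...
    field_b = frame_rows[1::2]  # scanlines 1, 3, 5, ...
    return field_a, field_b

def weave_fields(field_a, field_b):
    """Re-interlace two fields into a full frame."""
    frame = []
    for line_a, line_b in zip(field_a, field_b):
        frame.extend([line_a, line_b])
    return frame

frame = ["line0", "line1", "line2", "line3"]
a, b = split_fields(frame)
print(a, b)                      # ['line0', 'line2'] ['line1', 'line3']
print(weave_fields(a, b) == frame)  # True
```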
Common Issues When Compositing Interlaced Images
In order to avoid pitfalls when compositing interlaced media, it’s important to
understand the peculiarities of the format. This section describes the compositing
operations in Shake that are affected by improperly setting a script’s interlacing
parameters.
Parameter Animation Across Fields
The first problem occurs when you animate any parameter. The animation must be
understood and applied to every field of video—at half-frame intervals. If you read in
an interlaced clip and apply a static Gamma, no problems occur because both fields
receive the same correction. If, however, you animate the gamma correction, you must
enable field rendering in order to apply the correct gamma value to each of the two
fields, in the correct order.
Transforms Applied to Fields
The second, and trickier, issue arises when you apply spatial effects with a node such as
the Blur or a Move2D node. For example, panning an image up by 1 pixel in the Y axis
has the inadvertent effect of reversing the two fields within that frame, because the
even lines are moved to the odd field, and the odd lines are moved to the even field.
The resulting motion artifacts will appear as a jittering effect because the fields are
playing backwards even though the frames are still playing forwards. This would be the
same as if you reordered a standard film frame sequence 1, 2, 3, 4, 5, 6 to play as 2, 1, 4,
3, 6, 5.
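The field reversal described above can be demonstrated directly: shifting every scanline by one row moves each even line to an odd position and vice versa, swapping the temporal order of the fields within the frame (illustrative sketch, not Shake code):

```python
def pan_up_one_line(rows, fill="black"):
    """Shift the image up by one scanline, bringing in a blank line."""
    return rows[1:] + [fill]

rows = ["even0", "odd0", "even1", "odd1"]  # scanlines labeled by field parity
panned = pan_up_one_line(rows)
# Lines that sat on odd rows now sit on even rows, so the fields within
# the frame play in the reverse temporal order.
print(panned)  # ['odd0', 'even1', 'odd1', 'black']
```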
Another issue arises when you apply image rotation and scaling to an interlaced clip. In
the following images, a rotation effect has been applied to two images, one with field
rendering and one without field rendering. The right image will appear correct when
played back on a broadcast monitor, because the interlaced lines are properly arrayed.
The lack of clearly defined fields in the left image will cause undesirable artifacts during
video playback.
Filters and Fields
Field rendering doesn’t just affect spatial transformations. The examples below depict
close-ups of frame 1 (from above), without and with a Blur node applied. Look at the
second image. When the blur effect is applied uniformly to both fields, the result is an
actual loss of image data because the individual fields are improperly combined.
No field rendering Field rendering
Original image Blurred without field rendering
To illustrate what happens when fields are improperly combined, we’ve removed one
field from the image on the left below. Notice that information from both fields
intermingles due to the blur, as pixels from a different moment in time bleed into the
current field. Turning on field rendering gives you the correct image, shown on the
right. No image information bleeds between the two fields.
Setting Up Your Script to Use Interlaced Images
To make sure that interlaced footage is rendered properly in your script, you need to
make sure that your script is set up to correctly process and view the fields and frames
in your composition. This procedure involves four steps:
Step 1: Set the deInterlacing parameter of each FileIn node
In the Parameters tab of each FileIn node in your project, set the deInterlacing
parameter (located in the Source tab) to match the field dominance of that media.
Step 2: Set the reTiming parameters of each FileIn node
In the Parameters tab of each FileIn node, set the reTiming parameter (located in the
Timing tab) to Convert. Next, adjust the reTiming subparameters to the settings
appropriate to your desired output, whether to preserve or eliminate interlacing. For
more information, see “Setting the reTiming Parameters of Each FileIn Node” on
page 198.
Step 3: Set the Inc (Increment) parameter in the Time Bar to 0.5
Setting the Inc parameter in the Time Bar to 0.5 allows you to view each field
independently while you work on your composition. This allows you to eliminate the
blurring caused by overlapping fields.
One field of improperly blurred image    Properly blurred image with field rendering
Step 4: Set the OutputFrameInterlaced and fieldRendering parameters when
you’re finished compositing
Once you’ve completed your composite and you’re ready to render, turn on the
OutputFrameInterlaced parameter within each FileIn node’s Timing tab, then set the
fieldRendering parameter in the renderControls subtree of the Globals tab to the same
field dominance used in your other clips.
Important: Make sure you leave fieldRendering set to off until you’re ready to render.
Otherwise, the Viewer will display both fields of every frame together, eliminating your
ability to see each individual field.
Setting the deInterlacing Parameter of Each FileIn Node
When you create a FileIn node to read interlaced media into your project, the first thing
you should always do is to set the deInterlacing parameter within the Source tab to the
correct field dominance for that media. By default, deInterlacing is turned off.
There are three field rendering settings:
• off/0: The image is not deinterlaced. This is the appropriate setting for progressive-frame footage, including progressive-frame video formats and scanned film frames.
• odd/1: Use this setting for video media with a field dominance of odd (counting
from the top) first. This is generally the setting for PAL images. In Final Cut Pro, this
setting is referred to as Upper (Odd).
• even/2: Use this setting for video media with a field dominance of even. This is
generally the setting for NTSC images. In Final Cut Pro, this setting is referred to as
Lower (Even).
When the deInterlacing parameter of a FileIn node is set to either odd or even, Shake
separates the two fields within each frame, placing field 1 at frame 1, and field 2 at
frame 1.5. This effectively doubles the number of frames processed within your script,
but keeps them within the same duration of the Time Bar. Turning on deInterlacing for
each FileIn node ensures that all animation, transforms, and filters are properly handled
by Shake’s field rendering.
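The frame-to-field mapping described above can be sketched in Python (a simplified model of the Time Bar behavior, not Shake's internals; `field_at` is a hypothetical helper):

```python
def field_at(playhead):
    """Map a Time Bar position to (source_frame, field_number):
    field 1 sits on whole frames, field 2 on the half frames."""
    frame = int(playhead)
    field = 1 if playhead == frame else 2
    return frame, field

# Stepping with Inc = 0.5 visits each field of each frame in order.
steps = [1.0 + i * 0.5 for i in range(4)]
print([field_at(t) for t in steps])  # [(1, 1), (1, 2), (2, 1), (2, 2)]
```

With deInterlacing set to odd or even, a 10-frame clip therefore yields 20 such field images while still occupying 10 frames of the Time Bar.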
Setting the reTiming Parameters of Each FileIn Node
After you’ve set a FileIn node’s deInterlace parameter to the appropriate field
dominance, an optional step is to open that FileIn node’s Timing tab and set the
reTiming parameters.
Preserving Interlacing
This step is not strictly necessary if you’re not removing, reversing, or adding interlacing
to the source media, but will improve the processing of clips in Shake when you’re
making transformations to interlaced footage—even when you’re preserving the
original interlacing for video output.
To preserve the interlacing from the original source media:
1 Click the right side of the FileIn node to load its parameters into the Parameters tab.
2 Open the Timing tab.
3 Set the reTiming parameter to Convert.
Additional parameters appear below.
4 Set the InputFrameRate to match the format of the original interlaced video media
(NTSC or PAL).
5 Open the InputFrameRate subtree, and turn on InputFrameInterlaced.
The InputFrameDominance defaults to the same setting as the deInterlacing parameter
in the Source tab.
6 Set the OutputFrameRate to match the InputFrameRate parameter.
7 While you’re working in Shake, leave the OutputFrameInterlaced parameter in the
OutputFrameRate subtree turned off.
Once you’ve finished and you’re ready to render the resulting composite from your
script as an interlaced image sequence, turn OutputFrameInterlaced on.
How to Determine Field Dominance
If you’re not sure of the field dominance of your media, scrub through the first few
frames in the Time Bar until you find a range of frames with an obvious interlacing
artifact (generally found in frames with a fast-moving subject). Choose a field
rendering setting in the Parameters tab, set the frame Inc setting in the Time Bar to
0.5, and then step through that group of frames to see if the motion looks correct. If
you perceive any stuttering, that’s an immediate sign that the field dominance you
selected is incorrect (which results in every two fields playing backwards) and should
be reversed.
If the motion of the clip is subtle and you’re still not sure, it may be a good idea to
render a test clip that you can output to an interlaced broadcast video monitor. Any
motion artifacts should be immediately visible.
Removing Interlacing
If you’re reading in interlaced media with the intention of rendering out a
non-interlaced result, or if you’re adding an interlaced shot to other, non-interlaced
media, you can permanently remove the interlacing, recreating progressive frames
from the original field information.
Note: This is a much higher-quality method than the DeInterlace node found in
previous versions of Shake.
To remove interlacing from the original source media:
1 With a FileIn node’s parameters loaded into the Parameters tab, open the Timing tab.
2 Set the reTiming parameter to Convert.
Additional parameters appear below.
3 Set the InputFrameRate parameter to match the format of the original interlaced video
media (NTSC or PAL).
4 Open the InputFrameRate subtree, and turn on InputFrameInterlaced.
The InputFrameDominance defaults to the same setting as the deInterlacing parameter
in the Source tab.
5 Set the OutputFrameRate to the desired output frame rate, either to match the media’s
original frame rate, or to convert the media to another format.
In this example, the media is being converted to 24p (progressive-scan).
6 In the OutputFrameRate subtree, turn off the OutputFrameInterlaced button.
Creating Interlacing From Non-Interlaced Source Media
You can also use the reTiming parameters to interlace previously non-interlaced media.
To create interlaced output from non-interlaced source media:
1 With a FileIn node’s parameters loaded, open the Timing tab.
2 Set the reTiming parameter to Convert.
Additional parameters appear below.
3 Set the InputFrameRate parameter to match the format of the original media.
4 Open the InputFrameRate subtree, and make sure that InputFrameInterlaced is turned
off.
5 Set the OutputFrameRate to the desired video format (NTSC or PAL).
6 Open the OutputFrameRate subtree and do the following:
a Turn on the OutputFrameInterlaced parameter.
b Set the OutputFrameDominance parameter to the desired field dominance.
Displaying Individual Fields in the Viewer
In order to be able to see each individual field for purposes of paint or rotoscoping, you
must set the Inc (Increment) parameter in the Time Bar to 0.5.
With Inc set to 0.5, the playhead moves in half-frame increments as you scrub through
the Time Bar. When you use the arrow keys to move back and forth in the Time Bar,
you’ll actually be moving from field to field. The first field is displayed when the
playhead is on whole frames, and the second field is displayed on the half-frame marks (1.5, 2.5, and so on).
When Shake deinterlaces the media in your script, the even and odd fields displayed in
the Viewer are actually interpolated to fill in the gaps where the lines of the opposing
field were previously pulled out. This makes it easier for you to work with the individual
fields, and also improves Shake’s overall processing quality. For example, if you only
displayed the actual lines found in each video field, the images would look something
like this:
Field 1 appears in frame 1. Field 2 appears in frame 1.5.
Setting the deInterlacing parameter for each FileIn node not only separates each field
internally to Shake, but it sets the Viewer to display each field with interpolated lines of
resolution added, so that each field appears as a complete image. The default
interpolation quality is somewhat low, but is fast to work with. To improve the display
and processing quality of individual fields, see “Setting the reTiming Parameters of Each
FileIn Node” on page 198.
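The line interpolation can be sketched as follows (a toy model: an image is a list of scanline values, and each missing line is rebuilt by averaging the kept lines around it; Shake's actual filtering and quality settings differ):

```python
def deinterlace_field(rows, keep_odd):
    """Keep one field's scanlines and rebuild the missing lines by
    averaging the kept neighbors above and below (edges use the one
    available neighbor)."""
    n = len(rows)
    kept = {i for i in range(n) if (i % 2 == 1) == keep_odd}
    out = []
    for i in range(n):
        if i in kept:
            out.append(rows[i])
        else:
            near = [rows[j] for j in (i - 1, i + 1) if 0 <= j < n and j in kept]
            out.append(sum(near) / len(near))
    return out

# Keeping the even field of rows 0, 10, 20, 30 fills the odd rows in.
print(deinterlace_field([0, 10, 20, 30], keep_odd=False))  # [0, 10.0, 20, 20.0]
```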
Important: While you’re working on your composition, you should leave the
fieldRendering parameter in the renderControls subtree of the Globals tab turned off.
Otherwise, the Viewer simultaneously displays both fields for each frame regardless of
the Inc setting in the Time Bar. Later, when you’re ready to render your composition,
you must then turn fieldRendering on to output interlaced media.
Zooming In on Interlaced Images in the Viewer
Zooming in on interlaced images in the Viewer can result in some undesirable artifacts.
These artifacts appear only on your computer screen, not in the final rendered
output. They disappear when you set the Viewer back to a 1:1
viewing ratio (move the pointer over the Viewer area, and press Home).
De-interlaced field 1    De-interlaced field 2
Note: You can also click the Home button in the Viewer to reset the ratio to 1:1.
Exporting Field Interlaced Footage
If you’re working on media that will be output to an interlaced video format, you have
to set one additional global parameter. Once you’ve finished your composite, you need
to set the fieldRendering parameter in the renderControls subtree of the Globals tab to
the appropriate field dominance.
Field Rendering Settings
By default, fieldRendering is set to off.
There are three field rendering settings:
• off/0: Field rendering is turned off. This is the appropriate setting for progressive-frame media, including progressive-scan video formats and scanned film frames.
• odd/1: Field rendering with the odd field (counting from the top) first. This is
generally the setting for PAL images.
• even/2: Field rendering with the even field first. This is generally the setting for NTSC
images.
When fieldRendering is set to odd or even, Shake re-interlaces both fields of each
frame when rendering out the final media.
Viewer artifact example
In the following example, the image has been resized from 640 x 480 to 720 x 486. The
image on the left has field rendering off, while the image on the right has field
rendering on. In the left example, resizing the image without removing interlacing first
has resulted in undesirable inter-field bleeding (in other words, the lines in the
alternating fields have been enlarged and distorted, and no longer line up). The right
example benefits from having been de-interlaced before image resizing. By reapplying
interlacing after the resize transformation, field-to-field continuity has been preserved
(in other words, the field lines line up properly).
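The difference between the two resizes can be sketched as a toy 1-D model (Python, not Shake's resampling: an image is a list of scanline rows, and resizing is nearest-neighbor row selection). Resizing each field separately and re-interlacing keeps each moment in time on its own lines:

```python
def resize_rows(rows, new_len):
    """Nearest-neighbor vertical resize of a list of scanline rows."""
    old = len(rows)
    return [rows[min(old - 1, i * old // new_len)] for i in range(new_len)]

def field_aware_resize(rows, new_len):
    """Split into fields, resize each, then re-interlace."""
    f1, f2 = rows[0::2], rows[1::2]
    r1 = resize_rows(f1, new_len // 2)
    r2 = resize_rows(f2, new_len // 2)
    out = []
    for a, b in zip(r1, r2):
        out += [a, b]
    return out

# Even rows sample moment A, odd rows sample moment B.
frame = ['A0', 'B0', 'A1', 'B1', 'A2', 'B2', 'A3', 'B3']
naive = resize_rows(frame, 6)         # ['A0', 'B0', 'A1', 'A2', 'B2', 'A3']
aware = field_aware_resize(frame, 6)  # ['A0', 'B0', 'A1', 'B1', 'A2', 'B2']
```

In the naive result, line 3 carries moment-A data on a moment-B line, which is exactly the inter-field bleeding shown on the left.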
Integrating Interlaced and Non-Interlaced Footage
If you need to integrate interlaced and non-interlaced media within the same script,
the best strategy is to decide which format the video needs to be in—interlaced or
non-interlaced—and convert all media in your script to that format.
You can use the Convert setting of the reTiming parameter in the Timing tab of each
FileIn node’s parameters to do this. For more information, see “Setting the reTiming
Parameters of Each FileIn Node” on page 198.
JPEGs and Fields
Using the JPEG output for field rendering is not recommended. The compression
bleeds information from one field into the other, which results in unwanted artifacts.
Resize without field rendering    Resize with field rendering
Video Functions
Shake has several other video-oriented functions. When using these features, make
sure that field rendering is off, because field-rendering options may interfere with the
desired result. These functions include the following:
• Globals tab, timecodeMode: Sets the timecode format displayed in the Time Bar.
• Time Bar, T on keyboard: Toggles timecode/frame display.
• Image tab, FileIn: Has de-interlacing, as well as pulldown/pullup capabilities under
the Timing subtree. See “Using the FileIn (SFileIn) Node” on page 110 for more
information.
• Color tab, VideoSafe: Limits your colors to video-legal ranges. See “VideoSafe” on
page 208.
• Layer tab, Constraint: Limits effects by certain criteria, either zone, change tolerance,
channel, or field. Naturally, field is of interest here. You can affect a single field with
this node. This is generally done with field rendering off. See Chapter 16,
“Compositing With Layer Nodes,” for more information.
• Layer tab, Interlace: Interlaces two images, pulling one field from one image, and the
second field from the other image. You can select field dominance. This is generally
done with field rendering off. See “Interlace” on page 205.
• Other tab, DeInterlace: Retains one field from an image and creates the other field
from it. You have three choices as to how this is done. The height of the image
remains the same. This is generally done with field rendering off. See “DeInterlace”
on page 206.
• Other tab, Field: Strips out one field, turning the image into a half-height image.
Generally done with field rendering off. See “Field” on page 207.
• Other tab, SwapFields: Switches the even and odd fields of an image when
fieldRendering is off. To do this when fieldRendering is on, just switch from odd to
even or even to odd. Generally done with field rendering off. See “SwapFields” on
page 207.
Interlace
Located in the Layer tab, this node interlaces two images. You can control field
dominance, whether the input images are themselves separate field images (for
example, half Y resolution), or if the fields are extracted from every other line.
Parameters
This node displays the following controls in the Parameters tab:
clipMode
Toggles between using the foreground (0) or background (1) images to define the
resolution.
field
Specifies which field the first image uses.
• 0 = even field
• 1 = odd field
mode
Tells Shake if the input image is the same height as the result image.
• 0 = replace. Takes every other field from the input images; input images have the
same height as the result.
• 1 = merge. Takes the entire input image; input images are half the result image
height.
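The two modes can be sketched as follows (a hypothetical Python helper over lists of scanline rows; this is an illustration, not Shake's API):

```python
def interlace_images(img1, img2, field=0, mode=0):
    """Weave two images into one interlaced result.
    mode 0 (replace): inputs are full height; take every other line.
    mode 1 (merge):   inputs are half height; use every line.
    field: which lines img1 supplies (0 = even lines, 1 = odd lines)."""
    if mode == 0:
        a, b = img1[field::2], img2[1 - field::2]
    else:
        a, b = img1, img2
    out = [None] * (len(a) + len(b))
    out[field::2] = a
    out[1 - field::2] = b
    return out

top = ['t0', 't1', 't2', 't3']
bot = ['b0', 'b1', 'b2', 'b3']
print(interlace_images(top, bot, field=0, mode=0))          # ['t0', 'b1', 't2', 'b3']
print(interlace_images(top[:2], bot[:2], field=0, mode=1))  # ['t0', 'b0', 't1', 'b1']
```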
DeInterlace
Located in the Other tab, this node is unlike the de-interlacing parameter available in
the FileIn node because it is designed to permanently deinterlace a clip, discarding the
upper or lower field. The DeInterlace node has three different modes you can use to
replace the alternating field lines that are removed:
• replicate: Replaces one field with the other by duplicating the remaining fields.
• interpolate: Replaces a field with an average of the fields above and below it.
• blur: Replaces a field with an average of the fields above and below it, combined
with the original field itself. More precisely, the field is replaced with a mix of 50
percent of itself, 25 percent of the field above, and 25 percent of the field below.
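The three replacement modes can be modeled on a 1-D column of scanline values (an illustrative sketch, not Shake's implementation; the 50/25/25 weights for blur come from the description above):

```python
def deinterlace(rows, field=0, mode=0):
    """Keep one field (0 = even lines, 1 = odd lines) and rebuild each
    discarded line: mode 0 = replicate, 1 = interpolate, 2 = blur."""
    n = len(rows)

    def kept(i):
        # Value of the nearest kept line at or just below index i.
        return rows[i] if i % 2 == field else rows[max(field, i - 1)]

    out = []
    for i in range(n):
        if i % 2 == field:
            out.append(rows[i])
            continue
        above = kept(max(0, i - 1))
        below = kept(min(n - 1, i + 1))
        if mode == 0:                      # replicate the kept neighbor
            out.append(above)
        elif mode == 1:                    # average the kept neighbors
            out.append((above + below) / 2)
        else:                              # 50% self + 25% above + 25% below
            out.append(0.5 * rows[i] + 0.25 * above + 0.25 * below)
    return out

print(deinterlace([0, 100, 20, 60], field=0, mode=1))  # [0, 10.0, 20, 20.0]
```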
Note: The Convert option in a FileIn node’s Timing tab provides de-interlacing options
superior to those found in this node. For more information, see “Setting Up Your Script
to Use Interlaced Images” on page 196.
Parameters
This node displays the following controls in the Parameters tab:
field
The field that is retained.
• 0 = even field
• 1 = odd field
mode
The mode in which the removed field is replaced (see above).
• 0 = replicate
• 1 = interpolate
• 2 = blur
Field
Located in the Other tab, this node extracts the even or the odd field of the image,
returning an image of half the Y resolution.
Parameters
This node displays the following controls in the Parameters tab:
field
Specifies the field that is extracted.
• 0 = even field
• 1 = odd field
SwapFields
Located in the Other tab, this node switches the even and odd fields of an image.
There are no parameters in the SwapFields node.
VideoSafe
Located in the Color tab, this node clips “illegal” video values. As such, it is generally
placed at the end of a composite. You can set the node for NTSC or PAL video, based on
luminance or saturation. There is also an example (in the videoGamma subtree) of a
conditional statement that toggles between 2.2 (NTSC) and 2.8 (PAL). Generally, these
values are not touched.
Parameters
The VideoSafe node displays the following controls in the Parameters tab:
videoType
Toggles between NTSC and PAL.
• 0 = NTSC
• 1 = PAL
processingType
Determines whether the VideoSafe node affects the luminance or chrominance
(saturation) of the image.
• 0 = luminance-based calculation
• 1 = saturation-based calculation
chromaRange
The pseudo-percentage of the clip, as specified by the actual video hardware.
compositeRange
The pseudo-percentage of the clip, as specified by the actual video hardware.
videoGamma
The gamma basis, based upon the videoType. NTSC uses a gamma value of 2.2 and PAL
uses a gamma value of 2.8. This parameter defaults to the following expression, which
links it to the videoType parameter:
videoType ? 2.8 : 2.2
The result of this expression is that if videoType is not zero (in other words, videoType is
set to PAL), videoGamma is set to 2.8. If videoType is set to 0 (NTSC), videoGamma is set to 2.2.
For more information about how expressions work, see Chapter 30, “Installing and
Creating Macros,” on page 905.
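The expression uses C-style ternary syntax (condition ? a : b). Its behavior can be modeled in Python (illustration only):

```python
def video_gamma(video_type):
    """Python equivalent of the Shake expression `videoType ? 2.8 : 2.2`:
    nonzero videoType (1 = PAL) yields 2.8, zero (0 = NTSC) yields 2.2."""
    return 2.8 if video_type else 2.2

print(video_gamma(0), video_gamma(1))  # 2.2 2.8
```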
About Aspect Ratios and Nonsquare Pixels
Shake has several controls in the Globals tab to help you work with nonsquare pixel
images. These images are typically video images, or anamorphic film images. Different
controls are used for the two types, due to the nature of the data that is manipulated.
• In order to avoid mixing up each frame’s field information, nonsquare pixel distortion
is corrected by extending the image horizontally (in the X direction) and not
vertically.
• For anamorphic film plates, because the primary concern is the amount of data that
is calculated, the image is vertically squeezed. This has the added benefit of reducing
frame size of the image, which lets Shake process your script faster. In the case of
CinemaScope, this not only corrects the anamorphic distortion, but also speeds
Shake’s interactivity by a factor of two.
When you correct nonsquare pixel images, you need to know the aspect ratio of the
image in order to see the transformations and corrections without distortion. For this
discussion of different aspect ratios, film anamorphic plates are used to illustrate how
to work with such frames. Although this solution applies specifically to film plates, the
principles and problems are similar for anamorphic and non-anamorphic video,
although the aspect ratios vary depending on the video format.
What is Anamorphic Video?
Anamorphic processes such as CinemaScope create film frames that allow for an
extremely wide-screen aspect ratio when projected. This is accomplished by filming
with a special lens that squeezes the incoming image by half horizontally,
in order to fit twice the image into a conventional frame of film.
When viewed without correction of any kind, each frame appears very thin on the
physical negative. When the film is projected in the theater, a reverse lens is used to
expand the image by 2:1 horizontally, which returns the image to its original (wide)
format. It is important to understand that the recorded image is only widescreen in two
places—in front of the lens when filming, and on the projection screen. During the
postproduction process, you are usually working with the squeezed image.
This is a fundamental principle when compositing anamorphically squeezed
elements—the actual image should never actually be scaled, but in order to work on
the image, you still need to see the results as they will look in widescreen. Shake has
specific parameters that allow you to preserve the original anamorphic data, while
viewing the frame at the proper unsqueezed ratio for purposes of layering other
images, rotoscoping, and painting.
Anamorphic Examples
The following screenshot depicts an anamorphic frame. The image resolution is 914 x
778, or half of a standard 1828 x 1556 anamorphic plate. You can see from the shape of
the circle that the image is squeezed along the X axis.
Properly Viewing Squeezed Images
There are two ways to view this image in its unsqueezed, widescreen proportions. You
might be tempted to change the resolution of the image with a Zoom or Resize node,
but this would be the wrong thing to do. Zooming the image horizontally (by the Y
axis) down by 2 (or up by 2 in the X axis), compositing, and then zooming it back to its
squeezed proportions would result in an unacceptable and unnecessary loss of quality.
The correct way to work with an anamorphic image is to modify either the Viewer’s
aspect ratio, or the script’s proxyRatio. The script proxyRatio is better for film elements,
and the Viewer aspect ratio is better for video elements.
• Video: To change the Viewer aspect ratio, expand the format subtree in the Globals
tab and set the defaultViewerAspectRatio parameter to 2 (the anamorphic ratio is 2:1
for this film plate).
For video, use the appropriate ratio, which is listed in the Table of Common Aspect
Ratios at the end of this chapter. Doing this renders the image at full resolution, and
then doubles the Viewer’s width. Because you are modifying only the Viewer
parameters, this has no effect on your output resolution, and images take exactly the
same amount of time to render.
The only speed hit is in the interactivity to adjust the viewed frame. This is the
parameter you should use when dealing with video clips, since you change the X axis
and leave the Y axis, which contains the sensitive field information, alone.
• Film: With film media, you should set the proxyRatio (in the useProxy subtree of the
Globals tab) to .5 (1/2). Use of the proxy system reduces your render time for
interactive tests by half. Unlike viewerAspectRatio, this procedure halves the Y value,
rather than doubling the X value. However, the proxy system affects your output files,
so be sure to set the proxyRatio back to 1 when you render your images to disk.
Remember, you want to send squeezed files to the film recorder.
Following the above guidelines for this anamorphic film plate, open the useProxy
subtree and enter a proxyRatio of .5 to correct the squeezed element.
Node Aspect Ratio and the defaultAspect Parameter
Certain functions need to be aware of the current aspect ratio—specifically functions
dealing with circles or rotation. For example, if you apply a Rotate node to an
anamorphic plate, the image is distorted.
The Rotate node has an aspectRatio parameter. Set the parameter to .5, and the
rotation is no longer distorted.
The RGrad node is backward from the other nodes. The aspectRatio for this should be
1/defaultAspect (what it uses as its creation rule). Here, an RGrad with an aspectRatio of
1 is composited on the image.
Since it is distorted, change the aspectRatio of the RGrad to 2 and the world is
beautiful.
Compositing Square Pixel Images With Squeezed Images
In this example, the following image represents a square pixel frame, which is typical of
3D-rendered (CG) images.
When composited over the image, there is distortion because of the proxyRatio.
There are two options to correct this. You can scale the X parameter by half, or increase
the Y parameter by two. The first option ensures the highest quality, but means you
have to render the original CG element at twice the resolution of the second option. In
this example, the image is scaled in the Y parameter by 2 with either a Transform–Scale
or Zoom node:
Inheritance of the defaultAspect Parameter for Individual Nodes
If you’re working on a nonsquare pixel video composite, it’s important that you
correctly set the defaultAspect parameter in the format subtree of the Globals tab
before you begin working on your composition. The aspectRatio parameter in Rotate
and other nodes inherits this parameter automatically whenever the nodes are created.
Changing the defaultAspect after the fact does not change any nodes you’ve already
created; it affects only nodes that are created after you set the defaultAspect.
The following nodes inherit the defaultAspect parameter when they’re created:
• ApplyFilter
• Checker
• CameraShake
• Blur
• Defocus
• DilateErode
• EdgeDetect
• FilmGrain
• Grain
• IBlur
• IDefocus
• IDilateErode
• IDisplace
• IRBlur
• ISharpen
• PercentBlur
• Pixelize
• Sharpen
• RBlur
• AddText
• MatchMove
• Stabilize
• Text
• PinCushion
• Randomize
• Turbulate
• AddShadow
Tuning Parameters in Squeezed Space
In addition to Rotate and RGrad, other nodes should be tuned with a squeezed aspect
ratio. In the following example, a Blur node is applied to the image.
Because the default yPixel value is set to the xPixel value, you get twice the blur on the
X parameter in the squeezed space. To correct this, double the Y value (or halve the X
value) with an expression in the yPixels parameter:
xPixels*2
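The compensation can be checked numerically (a sketch assuming the 2:1 squeeze used in this example): in squeezed space each X pixel spans twice as much of the scene as a Y pixel, so equal pixel blurs produce unequal blurs on screen.

```python
def scene_blur(x_pixels, y_pixels, squeeze=2.0):
    """Blur extents measured in unsqueezed scene units, given pixel
    extents in the squeezed image (each X pixel covers `squeeze` units)."""
    return x_pixels * squeeze, float(y_pixels)

# Default yPixels = xPixels: twice the blur horizontally on screen.
print(scene_blur(10, 10))      # (20.0, 10.0)
# yPixels = xPixels * 2 restores a visually uniform blur.
print(scene_blur(10, 10 * 2))  # (20.0, 20.0)
```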
The blur now looks proportionately correct.
3D Software Renders
If your software allows, render your scene with the proper aspect ratio. This ensures
the highest quality in your composite.
Rendering Squeezed Images
Once your composite is complete, reset the proxyRatio (in the Globals tab) to 1, and
render. Do not change any other parameters. The resulting image appears squeezed on
the X axis, but this distortion is corrected during the film projection process in the
theater.
Although you worked on the image in a squeezed state, all elements are properly
positioned when the proxyRatio is returned to 1. Use of the proxy system temporarily
squeezes the image down and then stretches it back out, thereby maintaining the
pixel data.
Handling Video Elements
Nearly all standard-definition video formats use nonsquare pixels, which results in a
squeezed image similar to that caused by anamorphic distortion. In addition, each
video format has its own aspect ratio. Using the proxyRatio parameter is not
recommended for video elements because proxyRatio squeezes the image along the Y
axis, which causes problems with field separation.
The correct way to account for video pixel ratios is to use the viewerAspectRatio
parameter (within the format subtree of the Globals tab) to set the aspect ratio of the
Viewer, leaving the fields of your video frames untouched. This also only affects the
Viewer—rendered images are not affected. However, Viewer playback performance
may be slightly affected.
You need only to change the defaultAspect for proper rendering. Unlike using the
proxyRatio method, you set defaultAspect to 1/YourVideoAspectRatio. For example, 16:9
PAL uses 1.422 as its aspect ratio. Therefore, set viewerAspectRatio to 1.422, and
defaultAspect to 1/1.422. Shake resolves the expression of 1 divided by 1.422 to become
0.703235. All other principles of image manipulation apply.
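The relationship between the two parameters is a simple reciprocal, which a quick check confirms (the values match the table of common aspect ratios below; `default_aspect` is just an illustrative helper):

```python
def default_aspect(viewer_aspect_ratio):
    """defaultAspect is the reciprocal of the Viewer aspect ratio."""
    return 1.0 / viewer_aspect_ratio

for fmt, ratio in [("4:3 NTSC D1", 0.9), ("16:9 NTSC", 1.2), ("16:9 PAL", 1.422)]:
    print(fmt, round(default_aspect(ratio), 6))
```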
Table of Common Aspect Ratios
Below is a table of common aspect ratios, containing the values you should assign to
specific parameters. For nodes, the aspectRatio parameter is taken from the
defaultAspect value at the time the node is created. It is not changed if you later
change the defaultAspect.
Additionally, RGrad’s aspectRatio is the inverse of the other nodes’ (that is,
1/defaultAspect), which accounts for its own column in the table. Initially setting the
defaultAspect guarantees that all nodes automatically get the proper aspect ratio.
Finally, follow the guidelines in Chapter 14, “Customizing Shake,” and save your
interface settings to preserve default values for these parameters. For more
information, see “Customizing Interface Controls in Shake” on page 359.
Format                               aspectRatio  proxyRatio  viewerAspectRatio  defaultAspect       aspectRatio (common nodes)  aspectRatio (RGrad)
2:1 Anamorphic Film                  2            .5          NA                 .5                  .5                          2
4:3 NTSC D1 (720 x 486, 720 x 480)   .9           NA          .9                 1/.9 = 1.1111       1.1111                      .9
16:9 NTSC (720 x 486)                1.2          NA          1.2                1/1.2 = .8333       .8333                       1.2
4:3 PAL D1 (720 x 576)               1.066        NA          1.066              1/1.066 = .938086   .938086                     1.066
16:9 PAL (720 x 576)                 1.422        NA          1.422              1/1.422 = .703235   .703235                     1.422
Preset Formats
The format pop-up menu in the Globals tab provides a number of preset formats. You
can also create your own formats.
7 Using the Node View
The Node View is the heart of Shake’s graphical
compositing interface. This chapter covers all aspects of
navigating, customizing, and organizing nodes using this
powerful tool.
About Node-Based Compositing
The Node View in Shake displays all of the nodes that are used in a script. This
amalgamation of all the nodes used in a script is referred to as the node tree.
Each node in the tree performs a specific function, and is connected to other nodes by
lines referred to as noodles. The noodles that stretch between each node represent the
flow of image data from node to node.
Noodles are connected to input knots on each node. The output knot of one node is
usually attached to the input knot of the next node down in the node tree. In this way,
image data is passed top-down, from one node to the next. Thus, the image is
modified bit by bit until the final result is achieved.
Note: Knots are only visible when the pointer is positioned over a node.
This node-based approach has many advantages. By expressing the entire compositing
task as a big flowchart, of sorts, the flow of image data is easy to keep track of.
Graphical manipulation of the individual nodes in the tree also lets you make changes
by quickly turning on and off different functions in the composite, adding and
removing nodes where necessary.
Where Do Nodes Come From?
All of the nodes available in Shake are available in the Tool tabs, found in the lower-left
area of the interface.
You can also access the full list of nodes by right-clicking anywhere within the Node
View, and choosing a node from the Nodes submenu of the shortcut menu.
Shake on Mac OS X also provides a Tools menu in the menu bar, with submenus for
each Tool tab.
Clicking a node in the Tool tabs or choosing a node from the Node View shortcut menu
creates that node in the Node View. For more information about creating nodes, see
“Creating Nodes” on page 226.
Navigating in the Node View
Every effect in Shake is created by an individual node that has been inserted into the
node tree, and each node has its own specific function and parameters. As you build
progressively larger node trees, you’ll find yourself spending more and more time
navigating around the Node View as you make adjustments to nodes at various levels
of the node tree.
As with all the other areas of the Shake interface, you can pan and zoom in the Node
View to navigate around your node tree.
To pan in the Node View:
m Press the middle mouse button, or Option-click or Alt-click, and drag.
To zoom into or out of the Node View, do one of the following:
m Control-Option-click or Control-Alt-click, then move the mouse left to zoom out, or
right to zoom in.
m Press the + or - key.
The Node View Overview
A dynamic overview available in the Node View is a helpful navigation tool, especially
for complex node trees.
To toggle the overview on and off:
m Press O to display the overview. Press O again to hide it.
Drag within the overview to pan across the entire node tree.
The overview can be resized, making it easier to see.
To resize the node overview:
• Drag the upper-right corner, the top edge, or the right edge of the overview.
Favorite Views
If you’ve assembled an exceptionally large and complex node tree, you can navigate to
specific areas of your node tree and save that position of the Node View as a Favorite
View. This is mainly useful for saving the position of a collection of nodes that you’ll be
adjusting frequently.
Later on, when you’re looking at a different part of the tree and you realize you need to
adjust some of the nodes at one of your Favorite Views, you can instantly jump back to
that part of the tree. You can save and recall up to five favorite views. For more
information, see “Saving Favorite Views” on page 28.
To define a Favorite View:
1 Pan to a position in the Node View that contains the region you want to save as a
Favorite View. If necessary, adjust the zoom level to encompass the area that you want
to include.
Note: A Favorite View can optionally also recall the state of the nodes—in other words, which ones are currently loaded into the Viewer and Parameters tabs.
2 To save the Favorite View, do one of the following:
• Right-click anywhere within the Node View, then choose Favorite Views > View N > Save (where N is one of the five Favorite Views you can save) from the shortcut menu.
• Press Shift-F1 through Shift-F5, where F1 through F5 correspond to the five Favorite Views.
To restore the framing of a Favorite View, do one of the following:
• Right-click in the Node View, then choose Favorite Views > View N > Restore Framing (where N is one of the five Favorite Views you can save) from the shortcut menu.
• Move the pointer into the Node View, and press F1 through F5, where each key corresponds to one of the Favorite Views.
The Node View returns to the originally saved position and zoom level.
To restore the framing and state of a Favorite View, do one of the following:
• Right-click in the Viewer, Node View, or Curve Editor, then choose Favorite Views > View N > Restore Framing & State (where N is one of the five Favorite Views you can save) from the shortcut menu.
• Move the pointer into the Node View, and press Option-F1 through Option-F5 (or Alt-F1 through Alt-F5).
The nodes that were loaded into the Viewer and Parameters tabs when the view was saved are reloaded.
Using the Enhanced Node View
The enhancedNodeView subtree in the Globals tab provides additional display options for the Node View. These options can be turned on all together or individually, making it easy to spot image bit depth at different parts of the tree, animated nodes, node concatenation, and expressions linking one node to another.
Unlike most other guiControls parameters, which are simply off or on, these parameters—showTimeDependency, showExpressionLinks, showConcatenationLinks, and noodleColorCoding—have three states. This allows you to toggle all of them at once using the enhancedNodeView command from within the Node View. The three states are:
• off: Always off, regardless of whether or not enhancedNodeView is turned on.
• on: Always on, regardless of whether or not enhancedNodeView is turned on.
• enhanced: On when enhancedNodeView is on, and off when enhancedNodeView is
off.
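If you want the same display options in every session, these parameters can presumably also be set in a .h preference file, using the same syntax shown for the stipple-pattern defaults later in this chapter. The lines below are a sketch only: the variable names are assumed to follow the documented gui.nodeView pattern, and the numeric encoding of the three states (0 = off, 1 = on, 2 = enhanced) is likewise an assumption; check the include files of your installation for the exact names and values.

gui.nodeView.showTimeDependency = 2;     // assumed name; 2 = "enhanced" (encoding assumed)
gui.nodeView.showExpressionLinks = 2;    // assumed name
gui.nodeView.showConcatenationLinks = 2; // assumed name
gui.nodeView.noodleColorCoding = 2;      // assumed name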
To toggle enhanced Node View off and on:
1 Before turning on enhanced Node View, make sure the proper subparameters are
turned on in the enhancedNodeView subtree of the Globals tab.
2 Move the pointer over the Node View, and do one of the following:
• Right-click, then choose Enhanced Node View from the shortcut menu.
• Press Control-E.
• Open the Globals tab, then click enhancedNodeView.
Enhanced Node View Parameters
There are seven parameters in the enhancedNodeView subtree.
showTimeDependency
This parameter, when turned on, draws a bluish glow around nodes that are animated. This includes FileIn nodes that reference a QuickTime movie or a multiple-image sequence, nodes containing keyframed parameters, and nodes using expressions that change a parameter's value over time. In the following screenshot, the alien FileIn node is highlighted because it is a multi-frame animation. The Stabilize1 node is highlighted because it contains motion-tracking keyframes. The Ramp1 node is not highlighted because it's only a single image, and the Rotate1 node is not animated.
showExpressionLinks
Turning this parameter on draws a light purple line connecting nodes that use
expressions to reference other nodes. An arrow pointing from the linked node toward
the referenced node indicates their relationship. In the following screenshot, the
Move2D1 node contains expressions that link it to both the Rotate1 and Pan1 nodes.
Note: When you clone a node by copying it and then pasting it with the Paste Linked
command, the resulting cloned node displays an expression link arrow when
showExpressionLinks is turned on.
showConcatenationLinks
When this parameter is turned on, a green line connects a series of nodes that
concatenate. For example, three transform nodes that have been added to a node tree
in sequence so that they concatenate appear linked with a green line connecting the
left edge of each node. As a result, nodes that break concatenation are instantly
noticeable. In the following screenshot, the Rotate1, CameraShake1, and Pan1 nodes are
concatenated.
Note: As is often repeated, node concatenation is a very good thing to take advantage of. You are encouraged to group nodes that concatenate whenever possible, to improve the performance and visual quality of your output.
Noodle Color Coding
Turning on Shake’s noodle color coding parameters provides an additional way to
visually distinguish the bit depth and channel information of the image data flowing
down your node tree. With color coding turned on, you can see where in the node tree
the bit depth is promoted or reduced, which noodles contain RGBA channel data
versus BW data, and so forth.
Note: The Node View redraw speed of extremely large scripts may be reduced with
noodleColorCoding turned on.
There are two kinds of coding used to identify the image data that noodles represent.
stipple8Bit, stipple16Bit, stipple32Bit
The stipple pattern of a noodle indicates its bit depth. In the following screenshot, three renamed Bytes nodes output 8-bit, 16-bit, and 32-bit (float) data, respectively. The stippling indicates each bit depth at a glance.
noodleColorCoding
Different combinations of color channels being propagated down the node tree are
represented by different colors. In the following screenshot (showing four renamed
Reorder nodes), the first node is disconnected from any incoming image data, while the
next two are respectively propagating BWA and RGB channels. The color channels
represented by each noodle are clearly distinguishable (this screenshot is viewable in
color via the onscreen help).
Noodle Display Options
There are many ways you can customize the display of noodles in the Node View. If you find the color coding and stipple patterns distracting, you can also leave these options turned off.
Note: For purposes of simplicity, most of the examples in the Shake documentation
have these noodle display options either turned off or left to their defaults.
Noodle Tension
The noodleTension parameter, within the guiControls subtree of the Globals tab, lets
you adjust how much “slack” there is in the way noodles are drawn from knot to knot.
Higher values introduce more slack, and noodles appear more curved. Lower values
reduce the slack, and noodles are drawn in more of a straight line.
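If you prefer a fixed amount of slack in every session, the noodleTension parameter can presumably be set in a .h preference file as well. The variable name and value below are illustrative assumptions patterned after the other gui preferences in this chapter, not confirmed settings:

gui.noodleTension = 0.25; // assumed name and scale; lower values draw straighter noodles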
Customizing Noodle Color Coding
In the noodleColors subtree of the colors subtree of the Globals tab, there are 12 different parameters for
noodle color coding, corresponding to each possible combination of color channels in
Shake. Each combination has a color control you can use to set up whichever color
scheme makes the most sense to you.
The top parameter, noodleColor, lets you change the base color of noodles in the Node
View. Noodles are white by default, but you can use the color control to change this to
anything you like.
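As a sketch of how such a color scheme might be stored in a .h preference file: the lines below use assumed variable names (patterned after the documented gui.nodeView preferences) and assumed normalized RGB values, so treat them as illustration rather than confirmed syntax:

gui.nodeView.noodleColor = {1.0, 1.0, 1.0};     // assumed name; the default white base color
gui.nodeView.noodleColorRGBA = {0.9, 0.9, 0.3}; // assumed name; a custom color for RGBA noodles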
Customizing Noodle Stippling
You can customize the stipple patterns of noodles in the enhancedNodeView subtree
of the Globals tab. You can also customize the colors used to identify noodle bit depth
in the noodleColors subtree of the colors subtree of the Globals tab.
In the enhancedNodeView subtree, the stipple8Bit, stipple16Bit, and stipple32Bit
parameters each have five different stipple patterns you can choose from, for
maximum clarity.
Creating Nodes
All effects in Shake are performed by creating and attaching nodes to one another in
the Node View tab.
To add a node to the Node View:
1 To add a node to an existing tree, select the node you want to attach the new node to.
Note: If no node is selected, new nodes that are added are free-floating, not connected
to anything.
2 Do one of the following:
• In Mac OS X, choose a node from the Tools menu at the top of the screen.
• Click a node in the Tool tabs in the lower-left quadrant of the Shake interface.
• Right-click in the Node View, then choose a node from the Nodes submenu of the shortcut menu. For example, to add an Atop node, choose Nodes > Layer > Atop.
Creating Custom Stipple Patterns
Different stipple patterns can be set in a .h preference file. Each stipple pattern is
defined by a four-byte hex number that, when converted to binary, provides the
pattern of the line drawn for each bit depth—each 1 corresponds to a dot, and each
0 corresponds to blank space.
For example, 0xFFFFFFFF is the hex equivalent of 32 consecutive 1s, which creates a solid line. 0xF0F0F0F0 is the hex equivalent of 11110000111100001111000011110000, which creates a dashed line.
The default settings are:
gui.nodeView.stipple8Bit = 0x33333333;
gui.nodeView.stipple16Bit = 0x0FFF0FFF;
gui.nodeView.stipple32Bit = 0xFFFFFFFF;
• Right-click any Tool tab to display a shortcut menu of the available node functions.
The modifier keys (see below) work here as well. Additionally, if you lower the Tool tabs so that only the tabs are visible, you can also access the pop-up menus with the left mouse button (a cool trick).
Note: To add several nodes from the Tool tab shortcut menu at once, right-click the
tab, then right-click each node you want to create in succession. To close the menu,
left-click in the menu.
New nodes appear underneath the selected node in the tree. Adding a node to a
selected node that’s already connected to another node inserts the new node in
between the two nodes.
You can also choose to add one type of node to every selected node in the Node View.
To add one type of node to multiple nodes in the Node View:
1 Select two or more nodes in the Node View.
2 In the Tool tabs, right-click the node you want to add, then choose Insert Multiple from
the shortcut menu.
The new node is inserted after each selected node.
Selecting and Deselecting Nodes
There are numerous ways you can select nodes in the Node View.
To select one or more nodes, do one of the following:
• Click a single node to select it.
• Drag a selection box around one or more nodes in the Node View.
• Shift-click individual nodes to add them to the current selection.
• Shift-drag over nodes in the Node View to add nodes to the current selection.
• Control-click individual nodes to deselect them, leaving other nodes selected.
• Control-drag over nodes in the Node View to deselect nodes, leaving other nodes selected.
• Press Command-A or Control-A to select every node in the Node View.
To select every node in a tree that’s attached to a particular node:
1 Select the first node.
2 Do one of the following:
• Press Shift-A.
• Right-click the first node, then choose Select > Associated Nodes from the shortcut
menu.
To select every node that’s connected above (upstream from) a selected node,
do one of the following:
• Press Shift-U.
• Right-click the selected node, then choose Select > Upstream Nodes from the shortcut menu.
Note: To limit the selection to only the node directly above the currently selected one, choose Select > Upstream 1 Level (or press Shift-Up Arrow).
To select every node that’s connected below (downstream from) a selected
node, do one of the following:
• Press Shift-D.
• Right-click the selected node, then choose Select > Downstream Nodes from the shortcut menu.
Note: To limit the selection to only the node directly below the currently selected one, choose Select > Downstream 1 Level (or press Shift-Down Arrow).
To select a node by its name:
1 Press Command-F or Control-F in the Node View.
2 When the Select Nodes by Name window appears, enter the name of the node you’re
trying to find into the Search string field.
Nodes are selected in real time as you type. There are several options you can use to help you select the right nodes:
To invert the selected nodes, reversing which ones are selected and
deselected:
• Right-click any node, then choose Select > Invert Selection from the shortcut menu.
To deselect all nodes in the Node View:
• Click an empty area of the background to deselect all nodes.
Connecting Nodes Together
A node must be connected to the overall node tree in order to have any effect. The
main method for doing this is to drag a noodle from one node’s knot to another.
To connect one node to another by dragging:
1 Move the pointer over the knot of the first node you want to connect.
2 Click the knot, and drag the resulting noodle from the first node to the input knot of
the second node.
The Select Nodes by Name window provides the following options:
• Select by name: Enter the search string, and matching nodes are immediately activated. For example, if you enter just f, FileIn1 and Fade are selected. If you enter fi, just FileIn1 is selected.
• Select by type: Selects by node type. For example, enter Move, and all Move2D and Move3D nodes are selected.
• Select by expression: Allows you to enter an expression. For example, to find all nodes with an angle parameter greater than 180, type "angle>180."
• Match case: Sets case sensitivity.
You can also connect one knot to another by Shift-clicking. This is a more convenient
method to use if the two knots you want to connect are far away from one another in
the Node View.
To connect one node to another by Shift-clicking:
1 Select the first node you want to connect.
2 Move the pointer over the second node you want to connect so that the knot is visible,
then Shift-click the knot to connect both nodes together.
You can also use this technique to connect a group of nodes to a single multi-input
node, such as the MultiLayer, MultiPlane, or Select node.
To connect several nodes to a multi-input node at once:
1 Select all of the nodes you want to connect to the multi-input node.
2 Shift-click the plus sign input of the multi-input node.
All selected nodes are connected.
One Input, Many Outputs
Any single input knot on a node can be connected to only one noodle. This is true even for multi-input nodes. If the input you want to connect is already attached to another noodle, the previous connection is broken and replaced by the noodle you're dragging.
You can also drag a noodle from an input knot to the output knot of a different node.
For example, you can drag a noodle from the Over1 node to the moon node.
On the other hand, you can drag as many connections from a node’s output as you
want. For example, you can connect a FileIn node’s output knot to several nodes. This
creates multiple separate branches of image processing, which can be recombined
later on in the tree.
Breaking Node Connections
Node connections are broken by deleting the noodle that connects them.
To delete the connection between two nodes:
1 Select the noodle you want to delete by positioning the pointer over it so that it turns
red (toward the bottom) or mustard yellow (toward the top).
2 To delete it, do one of the following:
• Press Delete or Backspace.
• Right-click the noodle, then choose Edit > Delete from the shortcut menu.
To switch two inputs around:
• Drag the bottom half of a noodle (it turns red when selected) to another input. When you release the mouse button, the inputs are automatically switched.
Inserting, Replacing, and Deleting Nodes
Once you create a node tree, there are a number of different ways to adjust your
composite by reorganizing the nodes in your project.
Inserting Nodes Into a Tree
You can insert nodes into the middle of the node tree in the Node View using either
the Tool tabs, or the Nodes submenu in the shortcut menu of the Node View. There are
several shortcuts you can use to create and insert nodes.
To insert a new node between two nodes, do one of the following:
• Select a parent node in the Node View, and click a new node in the Tool tabs.
• Select a parent node in the Node View, then right-click it and choose a node to create from the Nodes submenu of the shortcut menu.
Note: When you insert a node after a node that branches to two other nodes, both branching nodes are connected to the output knot of the newly inserted node.
To insert a disconnected node between two existing nodes:
1 Drag a node directly over a noodle so that both its knots appear highlighted. You can
select which input of a multi-input node to insert by dragging the node to the left or
right, so that the desired input knot is highlighted.
2 When you release the mouse button, the node is automatically inserted.
To create a new branch, do one of the following:
• Select a parent node in the Node View, then Shift-click a new node in the Tool tabs.
• Select a parent node in the Node View, then right-click a node in the Tool tabs, and choose Branch from the shortcut menu.
• Select a parent node in the Node View, then hold down the Shift key while you right-click it, and choose a node from the Nodes submenu at the top of the shortcut menu.
To replace an already existing node with a different one, do one of the
following:
• Select the node you want to replace in the Node View, then Control-click the new node in the Tool tabs.
• Right-click a node in the Tool tabs, then choose Replace from the shortcut menu.
• Select the parent node in the Node View, then hold down the Control key while you right-click it, and choose a node from the Nodes submenu at the top of the shortcut menu.
To create a floating node that’s not connected to anything, do one of the
following:
• Control-Shift-click the node you want to create in the Tool tabs.
• Right-click a node in the Tool tabs, then choose Create from the shortcut menu.
• Deselect all nodes in the Node View, then right-click in the background area of the Node View and choose a node from the Nodes submenu at the top of the shortcut menu.
Deleting and Disconnecting Nodes From a Tree
There are several ways to remove nodes from a tree, either by isolating the node, or
eliminating it completely.
To delete a node, do one of the following:
• Select one or more nodes and press Delete or Backspace.
• Select one or more nodes, then right-click one of them and choose Edit > Delete from the shortcut menu.
To extract a node from a tree without deleting it, do one of the following:
• Select the node and press E (for Extract).
• Select a node, then right-click it and choose Extract Selected Nodes from the shortcut menu.
• Click the node, and with the mouse button held down, drag it quickly to the left and right several times to "shake" it loose.
To disconnect a noodle without affecting the nodes above and below, do one
of the following:
• Control-click a noodle.
• Move the pointer over the noodle, and when the noodle turns red, press Delete or Backspace.
• Right-click the noodle, then choose Edit > Delete from the shortcut menu.
Copying and Pasting Nodes
Nodes can be cut and pasted within the Node View.
To copy a node or group of nodes, select one or more nodes and do one of the
following:
• Right-click the selected node, then choose Edit > Copy from the shortcut menu.
• Press Command-C or Control-C.
After copying a node, you can now paste or “clone” it.
To paste or “clone” a node, do one of the following:
• Right-click the background of the Node View, then choose Edit > Paste from the shortcut menu.
• Press Command-V or Control-V.
Moving Nodes
To move a node, select it and drag it within the Node View. If you drag a node past the edge of the Node View, the workspace scrolls, allowing you to move the selection farther in that direction.
Loading a Node Into a Viewer
You can view the image output from any single node in your node tree. For example, if
you have several color nodes attached in a row, you can load any of these operations into
the Viewer to see the result of all of the nodes that have been attached up to that point.
To load a node into the Viewer, do one of the following:
• Click the left side of a node to load that node into the current Viewer.
A green Viewer indicator appears on the left side of the node.
• Double-click anywhere on a node to simultaneously load it into the Viewer, and its parameters into the Parameters tab.
The Viewer indicator appears on the left side, and the parameters indicator appears on
the right side of the node.
When a node is loaded into the Viewer, a number and a letter appear underneath the
Viewer indicator, identifying the compare buffer that the node’s image occupies.
For more information on working with multiple viewers, see “Using and Customizing
Viewers” on page 45.
[Figure: two nodes with Viewer indicators, one displayed in Viewer 2, buffer A, and the other in Viewer 1, buffer A]
Loading Node Parameters
In order to modify a node’s parameters, you must first load them into one of the two
Parameters tabs. The Parameters tabs remain empty until you click the right side of a
node (or double-click the node).
To load a node’s parameters into the Parameters1 tab, do one of the following:
• Click the right side of the node.
The parameters indicator appears on the right side of the node.
Note: The node does not have to be selected in order to load its parameters into either
of the Parameters tabs.
• Double-click anywhere on the node to load its parameters into the Parameters1 tab and its image into the Viewer.
To load a node’s parameters into the Parameters2 tab:
• Shift-click the right side of the node.
Loading a node's parameters into a tab automatically clears out whatever parameters were previously loaded. If necessary, you can clear a Parameters tab at any time.
To clear a tab so that no parameters are loaded into it:
• Right-click the Parameters1 or Parameters2 tab, then choose Clear Tab from the shortcut menu.
It’s important to bear in mind that you can load the image from one node into the
Viewer, while loading another node’s parameters into the Parameters tab.
For example, you can view the resulting image from the bottommost node in a tree,
while adjusting the parameters of a node that’s farther up in that tree.
The Viewer indicator shows which nodes are loaded into Viewers, and the parameters
indicator shows which nodes have been loaded into one of the Parameters tabs.
For more information on adjusting parameters, see “The Parameters Tabs” on page 72.
[Figure: a node's indicators. Clicking the right side loads the node's parameters; clicking the left side displays the node in the Viewer; double-clicking loads both. The Viewer indicator shows that the node is loaded into Viewer 1, buffer A, and the parameters indicator shows that it is loaded into one of the Parameters tabs.]
Ignoring Nodes
Nodes in the node tree can be disabled without actually removing them from the tree,
using the Ignore command. Ignored nodes have no effect on your script whatsoever,
and are never rendered.
This is a good way to see what effect a node is having on your composition. Ignoring
nodes also allows you to disable nodes you may not need any more, without
permanently deleting them in the event you change your mind later on.
To toggle nodes off and on, ignoring and restoring them, do one of the
following:
• Select one or more nodes, then press I.
• Load a node into the Parameters tab, then turn on the ignoreNode parameter.
Ignored nodes appear with a red diagonal slash in the node tree. To reactivate a node,
press I again.
Renaming Nodes
For organizational purposes, you may find it useful to rename a node in order to keep
track of its function in your composition. This can be especially useful if you have
numerous identical nodes in close proximity to one another in a dense node tree.
To rename a node:
1 Load the node’s parameters into the Parameters1 tab.
2 Enter a new name into the nodeName field of the Parameters tab.
FileIn and FileOut nodes are automatically named based on the media files they’re
linked to on disk.
Arranging Nodes
Shake has several commands to help you organize and navigate complex node trees.
Keeping your node trees clean and organized will save you much time later on, when
you’re fine-tuning a massively involved 200-node tree.
Grid Snap
You can toggle a grid in the Node View to line up nodes evenly.
To activate a grid in the Node View, do one of the following:
• In the Node View, right-click, then choose Snap to Grid from the shortcut menu.
• Open the guiControls subtree of the Globals tab, and turn on the gridEnabled parameter.
When the Node View grid is enabled, the nodes you move are automatically aligned.
To temporarily activate grid snapping when Snap to Grid is disabled:
• Press Shift as you drag a node.
To temporarily deactivate grid snapping when Snap to Grid is enabled:
• Press Shift as you drag a node.
Grid Parameters in the Globals Tab
Five parameters, located in the Globals tab in the guiControls subtree, allow you to
define the spacing of the Node View grid.
gridWidth, gridHeight
Specifies how wide and tall each rectangle of the grid is.
Name Nodes Carefully
Here are some rules about names to avoid using:
• Avoid using spaces or non-alphanumeric characters (such as ', ., or !).
• Don’t name any node “color.”
• To avoid confusion, don’t give a node another node’s name, for example, renaming
a Brightness node to Fade.
• Don’t use a name that’s used by a local variable within that node.
• Don’t name nodes with single characters that typically have other meanings within
Shake, such as x, y, z, r, g, b, or a.
gridEnabled
Lets you turn grid snapping on and off. This control also toggles the background grid
pattern in the Node View if gridVisible is turned on.
gridVisible
Displays the grid as a graph in the background of the Node View. This graph is only
displayed when gridEnabled is turned on.
layoutTightness
This parameter affects the Layout Arrangement commands in the next section. It lets
you specify how closely nodes should be positioned to one another when they’re
newly created, or whenever you use one of the arrangement commands.
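To reuse the same grid setup in every session, these parameters can presumably also be set in a .h preference file. The variable names, units, and values below are assumptions following the gui.nodeView pattern used for the stipple defaults; verify them against your installation before relying on them:

gui.nodeView.gridWidth = 40;       // assumed name; width of each grid rectangle (units assumed)
gui.nodeView.gridHeight = 20;      // assumed name; height of each grid rectangle
gui.nodeView.gridEnabled = 1;      // assumed name; turns grid snapping on
gui.nodeView.gridVisible = 1;      // assumed name; draws the grid in the Node View background
gui.nodeView.layoutTightness = 60; // assumed name and scale; smaller values create tighter trees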
Automatic Layout Arrangement
Shake has several commands to help you automatically arrange nodes in the Node
View. Use these commands with caution—complicated trees with lots of cross-branching
may produce odd results.
When nodes are created or organized using the Node Layout commands, they are
spaced according to the layoutTightness parameter in the guiControls subtree of the
Globals tab. Newly created nodes are spaced this distance from their parents, so a
smaller number creates tighter trees.
To automatically organize a group of nodes:
1 Select one or more nodes.
2 Do one of the following:
• Right-click one of the selected nodes, then choose Node Layout > Layout Selected
from the shortcut menu.
• Press L.
The selected nodes are automatically rearranged in an organized manner.
To align nodes vertically (on the same X axis):
• Select one or more nodes and press X.
To align nodes horizontally (on the same Y axis):
• Select one or more nodes and press Y.
To compress and align nodes vertically:
• Select one or more nodes and press Shift-L.
The selected nodes are lined up and stacked together one against the other.
Groups and Clusters
A group is a collection of several nodes that are collapsed to appear as a single object
in the Node View. Grouped nodes save space and allow you to organize your node tree
into related sets of nodes that perform specific and easily labeled functions.
When you collapse several nodes into a group, the input and output noodles for the
topmost and bottommost nodes in the group connect into and out of the group
object.
To create a group of nodes:
1 Select one or more nodes.
2 Do one of the following:
• Right-click in the Node View, then choose Groups > Group Selected Nodes from the shortcut menu.
• Press G.
• To create a group and immediately open it into a cluster, right-click in the Node View,
then choose Group Selected Nodes and Maximize from the shortcut menu (or press
Option-G or Alt-G).
To consolidate two or more groups into a larger group:
1 Select two or more groups in the Node View.
2 Do one of the following:
• Right-click a group, then choose Groups > Consolidate Selected Groups from the
shortcut menu.
• Press Shift-G.
To ungroup nodes, do one of the following:
• Right-click a group, then choose Groups > Ungroup Selected Nodes/Groups from the shortcut menu.
• Select a group, then press Control-G.
• Open the group into a cluster, then click the Ungroup button in the cluster's title bar.
The nodes go back to being a set of individual nodes in the tree. The original
connections remain the same.
Clusters
Groups are typically collapsed to save space, but are easily expanded to reveal their
contents for further modification. Expanded groups are referred to as clusters.
Even if you don’t keep a collection of nodes collapsed into a group, having an
expanded cluster of nodes in the Node View allows you to color-code them. This color
coding can aid visual navigation within a large node tree.
To expand a node group into a cluster, do one of the following:
• Click the Expand button on the group node.
• Select the group, then press G.
The group expands into a cluster.
Once a group is expanded into a cluster, the group node includes two additional
controls:
The Ungroup button
The Ungroup button (on the left side of the group node) removes all grouping/cluster
information.
When you click this button, a warning window appears that states, “You are about to
ungroup the selected group. Continue?” Click Yes to ungroup the selected group.
The Collapse button
The Collapse button (on the right side of the group node) closes the cluster back into a
normal group.Chapter 7 Using the Node View 249
Group Parameters
Loading the parameters of a group into the Parameters tab allows you to change the
color of the cluster background (see below), add notes, and expose selected
parameters.
To add a note to a group:
1 Load the group into the Parameters tab.
2 Type your annotation into the Notes parameter field.
Cluster notes appear at the bottom-left corner of an open cluster.
To change the background color of a cluster:
1 Load the group into the Parameters tab.
2 Open the revealParameters subtree, and use the Background Color control to pick a
new color.
The revealParameters button lets you isolate important sliders from multiple nodes and
load them into a single Parameters tab, saving you the trouble of jumping between
multiple nodes.
To expose individual parameters in a group:
1 Load the parameters of the group (click the right side of the group node).
2 In the group Parameters tab, click the revealParameters button.250 Chapter 7 Using the Node View
The Expose Group Parameters window appears.
3 To select one or more node parameters from nodes within the cluster, do one of the
following:
• To expose every parameter within a node, click Select All.
• To expose individual parameters within a node, expand the node’s Select All subtree,
then enable the desired parameters.
4 Click OK.Chapter 7 Using the Node View 251
The selected nodes parameters appear in the group Parameters tab.
Opening Macros
If you’re using macros within your script, they can be opened and closed in much the
same way as groups.
To examine the contents of a macro, do one of the following:
m
Right-click a macro, then choose Macro > Show Macro Internals from the shortcut menu.
m
Press B.
When a macro is open, you can view any parameter or stage of the macro, but you
cannot edit parameters or rewire nodes. This functionality is primarily useful for
understanding the workings of a macro you may be unfamiliar with.
To close a macro, do one of the following:
m
Click the macro.
m
Position the pointer on an empty area, and press Option-B or Alt-B.
Cloning Nodes
It’s possible to create clones of nodes within your node tree. For example, if you’ve
created a color-correcting node with specific settings, and you want to use it multiple
times in your script, you can create clones. 252 Chapter 7 Using the Node View
The principal advantage to cloned nodes is that changes made to one cloned node are
automatically applied to every other cloned duplicate of that node within your script.
To create a clone of a node:
1 Copy a node by pressing Command-C or Control-C.
2 Paste a clone of that node by doing one of the following:
• Right-click in the Node View, then choose Edit > Paste Linked from the shortcut
menu.
• Press Command-Shift-V or Control-Shift-V.
Note: You cannot clone Tracker, LookupHSV, or FilmGrain nodes in the node tree using
the Paste Linked command.
The cloned node is pasted, and automatically named OriginalNode_CloneN, where N is
the number of clones that have been made. You can clone a node as many times as
you want, and all cloned nodes are linked to the original node. In the following
screenshot, the Gamma1 node has been cloned twice, in order to apply an identical
gamma correction to the other two images in the tree.
The links between cloned nodes can be viewed by turning on ShowExpressionLInks in
the enhancedNodeView subtree of the Parameters tab (and then turning on
enhancedNodeView). For more information, see “Using the Enhanced Node View” on
page 221.Chapter 7 Using the Node View 253
Thumbnails
By default, thumbnails are automatically generated in the Node View for image nodes,
including but not limited to the FileIn, Grad, Ramp, and RotoShape nodes. These
thumbnails are meant to help you navigate the Node View by showing where the
originating images in your script are. In order to prevent these thumbnails from
slowing down Shake’s processing speed, they do not automatically update to reflect
changes made to them. You can manually update them, if necessary, a process
described later in this section.
Thumbnails are stored in the cache and, since they are rarely modified, are fast to load
when you load a script.
Customizing Thumbnail Display
Four parameters in the displayThumbnails subtree of the guiControls subtree of the
Globals tab allow you to customize how thumbnails are displayed in the Node View.
displayThumbnails
Turns all thumbnails in the Node View on and off.
thumbSizeRelative
Makes thumbnails a similar size or relative to their actual sizes. By default, all
thumbnails are displayed at the same width. To display thumbnails at their relative
sizes, turn on thumbSizeRelative. 254 Chapter 7 Using the Node View
In the following images, the greenscreen image is PAL and the truck image is 410 x 410.
thumbSize
Lets you adjust the size of thumbnails in the Node View. If thumbSizeRelative is turned
on, all nodes are resized relative to one another.
thumbAlphaBlend
Turns thumbnail transparency on and off. When thumbAlphaBlend is on, moving one
thumbnail over another results in a fast look at how the nodes might appear when
composited together in the Viewer. More usefully, it gives you an instant view of which
images have transparency in them.
Once you’ve customized the thumbnail settings for your project, you can save these
parameter settings.
To save the new settings:
m
Choose File > Save Interface Settings.
Adding Thumbnails to Nodes
Most nodes are not created without thumbnails. However, you can display a thumbnail
for any node in the Node View. This can help you to emphasize key points in your node
tree that you’d like to use as roadsigns to show how things are working.
To toggle a thumbnail on or off for any node:
1 Select one or more nodes.
2 Do one of the following:
• Right-click the selected node, then choose Thumbnails > Show/Hide Selected
Thumbnails from the shortcut menu.
thumbSizeRelative deactivated thumbSizeRelative activatedChapter 7 Using the Node View 255
• Press T.
When a node with a thumbnail appears in the middle of a node tree, the input noodles
feed into the top of the thumbnail.
Updating Thumbnails
To prevent unnecessary processing, thumbnails are not automatically updated, so they
may not reflect the frame that’s at the current position of the playhead. Furthermore,
nodes aren’t updated when you change their parameters. This can be especially
noticeable in the thumbnails of QuickPaint and RotoShape nodes.
Thumbnails can be updated manually, so that they more accurately represent the
current state of the node tree.
To update a thumbnail:
1 Select one or more nodes.
2 To display a different frame, move the playhead in the Time Bar.
3 Do one of the following:
• Right-click in the Node View, then choose Thumbnails > Refresh Selected Thumbnails
from the shortcut menu.
• Press R.256 Chapter 7 Using the Node View
Toggling Thumbnails Between Color and Alpha Channels
When the pointer is positioned over a thumbnail, a number and letter appear in the
upper-left corner, indicating which frame is loaded as a thumbnail, and whether you
are looking at the RGB/Color view (C) or the Alpha view (A). Thumbnails can be toggled
between Alpha and Color view on an individual basis.
To toggle thumbnails between Color and Alpha view:
1 Select one or more nodes.
2 To display the Color view, right-click in the Node View, then choose Thumbnails > View
RGB channels from the shortcut menu.
3 To switch back to the Alpha view, right-click in the Node View, then choose
Thumbnails > View Alpha channels from the shortcut menu.
You can also place the pointer over a thumbnail and press A or C to toggle between
the Alpha view and the Color view.
Defining Which Nodes Are Created With Thumbnails
You can declare any specific type of node to always contain thumbnails upon creation.
For example, to add FileOuts into the list of nodes receiving a thumbnail, set the
following ui.h code:
nuiNodeViewEnableThumbnail(“FileOut”);
You can also disable an enabled node with the following ui.h code:
nuiNodeViewDisableThumbnail(“FileOut”);
For all nodes, use NRiFx as your class. Note, however, that specifying downstream nodes
(especially FileOut nodes) can cause pauses at script load time, as the entire tree must
be calculated to derive the proper thumbnail. Use with caution.
To have thumbnails always off, disable the thumbnails in the guiControls of the Globals
tab, then choose File > Save Interface Settings, or add the following ui.h code:
script.displayThumbnails = 0;
Indicates a thumbnail
displaying frame 1, in
Alpha view.Chapter 7 Using the Node View 257
The Node View Shortcut Menu
The following commands are available in the shortcut menu that appears when you
right-click in the Node View.
Shortcut
Menu Option Keyboard Desription
Nodes Create nodes directly in the Node View from the
node list.
Edit Cut Command-X or
Control-X
Removes selected nodes and places them into
the paste buffer.
Copy Command-C or
Control-C
Copies the selected nodes into the paste buffer.
Paste Command-V or
Control-V
Pastes the buffer into the Node View. You can also
copy nodes from the Node View and paste them
into a text document.
Delete Del or
Backspace
Deletes the selected nodes. If the branching is
not complicated, the noodles between the
parent(s) and children are automatically
reattached to each other.
Undo Command-Z or
Control-Z
Undo up to 100 steps. Rearranging nodes counts
as a step.
Redo Command-Y or
Control-Y
Redo your steps unless you have changed values
after several undos.
View Zoom In + Zooms into the Node View (also use ControlOption-click or Control-Alt-click).
Zoom Out – Zooms out of the Node View (also use ControlOption-click or Control-Alt-click).
Reset View Home Centers all nodes.
Frame
Selection
F Frames all selected nodes into the Node View.
Render Render
Flipbook
Renders a Flipbook of the node visualized in the
active Viewer.
Render Disk
Flipbook
Mac OS X only. This option launches a disk-based
Flipbook into QuickTime. This has several
advantages over normal Flipbooks. It allows for
extremely long clips and allows you to attach
audio (loaded in with the Audio Panel in the main
interface). You can also choose to write out the
sequence as a QuickTime file after viewing,
bypassing the need to re-render the sequence.
Render
FileOuts
Opens the render window, which lets you set
how you want to render FileOuts in your script.
Render Proxies Opens the render proxy parameters window.
Overview
On/Off
O Turns on the Overview window to help navigate
in the Node View.258 Chapter 7 Using the Node View
Enhanced
Node View
On/Off
Control-E Turns the selected enhanced Node View options
off and on.
Snap to Grid
On/Off
Turns gridEnabled on and off in the Globals tab.
Select Find Nodes Command-F or
Control-F
Activates nodes according to what you enter in
the Search string field in the Select Nodes by
Name window.
• Select by name. Enter the search string, and
matching nodes are immediately activated. For
example, if you enter just f, FileIn1 and Fade are
selected. If you enter fi, just FileIn1 is selected.
• Select by type. Selects by node type. For
example, enter Transform, and all Move2D and
Move3D nodes are selected.
• Select by expression. Allows you to enter an
expression. For example, to find all nodes with
an angle parameter greater than 180:
angle >180
• Match case. Sets case sensitivity.
All Command-A or
Control-A
Selects all nodes.
Associated
Nodes
Shift-A Selects all nodes attached to the current group.
Invert Selection ! All selected nodes are deactivated; all deactivated
nodes are activated.
Select
Upstream
Shift-U Adds all nodes upstream from the currently active
nodes to the active group.
Select
Downstream
Shift-D Adds all nodes downstream from the currently
active nodes to the active group.
Select
Upstream 1
Level
Shift-Up Arrow Adds one upstream node to the current selection.
Add
Downstream 1
Level
Shift-Down
Arrow
Adds one downstream node to the current
selection.
Node Layout Layout
Selected
L Automated layout on the selected nodes.
Align Selected
Vertically
X Snaps all selected nodes into the same column.
Align Selected
Horizontally
Y Snaps all selected nodes into the same row.
Shortcut
Menu Option Keyboard DesriptionChapter 7 Using the Node View 259
Thumbnails Refresh
Selected
Thumbnails
R Activates/refreshes the thumbnails for selected
nodes.
Show/Hide
Selected
Thumbnails
T Turns on/off selected nodes. If you haven’t yet
created a thumbnail (R), this does nothing.
View RGB
Channels
C Displays the RGB channels.
View Alpha
Channels
A Displays the alpha channel.
Groups Group/
Ungroup
Selected Nodes
G Visually collapses selected nodes into one node.
When saved out again, they are remembered as
several nodes. To ungroup, press G again.
Group Selected
Nodes and
Maximize
Option-G or
Alt-G
Groups nodes and automatically maximizes the
group, for immediate editing.
Maximize/
Minimize
Selected
Groups
M Opens a group into a subwindow.
Consolidate
Selected
Groups
Shift-G Consolidates two or more selected groups into a
larger group.
Ungroup
selected
Nodes/Groups
Command-G or
Control-G
Turns a group of nodes back into a collection of
individual, ungrouped nodes.
Ignore/
UnIgnore
Selected Nodes
I Turns off selected nodes when activated. Select
them again and press I to reactivate the nodes.
You can also load the parameters into the
Parameter View and enable ignoreNode.
Extract
Selected Nodes
E Pulls the active nodes from the tree, and
reconnects the remaining nodes to each other.
Save Selection
as Script
S Saves the selected nodes as a script.
Force Selected
FileIn Reload
When an image is changed on disk and you have
already looked at that image in the interface,
Shake does not recognize that the image has
changed. Selecting these two functions forces the
checking of the date stamp for the images on
disk.
Force All FileIn
Reload
See above.
Macro Make Macro Shift-M Launches the MacroMaker with the selected
nodes as the macro body.
Shortcut
Menu Option Keyboard Desription260 Chapter 7 Using the Node View
Show Macro
Internals
B Opens a macro into a subwindow so you can
review wiring and parameters. You cannot
change the nodes inside the subwindow.
Hide Macro
Internals
Option-B or
Alt-B
Closes up the macro subwindow when the
pointer is placed outside of the open macro.
Shortcut
Menu Option Keyboard Desription8
261
8 Using the Time View
The Time View provides a centralized representation of
the timing for each image used in a script. This chapter
covers how to navigate this interface, and how to make
adjustments to the timing parameters of each image.
About the Time View
While the Node View allows you to arrange and adjust the nodes that comprise your
composite, the Time View lets you view and arrange the timing of your nodes.
Specifically, the Time View displays all image nodes, as well as nodes with more than
one input, that appear in your node tree.
In the Time View, nodes are displayed as horizontal bars. You can select them and load
them into the Viewer or Parameters tab, just as you can in the Node View. In addition,
you can also set In and Out points for your clips, as well as shift a clip’s start and end
frames to change its duration. Finally, the Time View allows you to change the looping
behavior of clips in order to extend their duration.
To display the Time View, click the Time View tab (on the right side of the Tool tabs).
The Time View appears, displaying each node stacked over a second Time Bar that
mirrors the Time Bar found at the bottom of the Shake interface.262 Chapter 8 Using the Time View
The Time View lets you modify the timing parameters that are found inside each FileIn
node in your node tree. This means that you have the option of modifying these timing
parameters either numerically, in the Parameters tab, or graphically, in the Time View.
Either way, the effect is the same.
As soon you make changes to a clip’s timing, an internal node called IReTime is
associated with a FileIn node. The IReTime node is saved into the script, but is invisible
in the Node View. The IReTime parameters are controlled by the FileIn node parameters
and in the Time View.
Viewing Nodes in the Time View
The Time View displays only image nodes and compositing nodes with more than one
input. Other nodes are not represented. For example, in the following node tree, the
Pan node is not visible in the Time View.
It’s possible to further reduce the number of nodes displayed in the Time View,
allowing you to concentrate on only a select group of nodes.
To display only currently selected nodes:
m
Turn on the Select Group, located at the bottom-left side of the Time View.Chapter 8 Using the Time View 263
Clip Durations in the Time View
The duration of image sequences and movie files (hereafter referred to as clips)
referenced by a FileIn node is simply that of the source media on disk. The duration of
single image files, and of image nodes generated by Shake, is considered to be infinite.
When two or more nodes are combined, as with the Over or MultiLayer node, the final
duration is that of the longest clip in the operation.
When a FileIn node is created, its timing parameters are referenced via that node’s
Timing tab in the Parameters tab. Other image nodes have a timing subtree within the
main Parameters tab. Additional nodes that are connected to a clip inherit the source
clip’s timing. You cannot adjust the timing of non-image nodes.
Adjusting Image Nodes in the Time View
In the Time View, nodes representing images (whether clips, single image files, or
Shake-generated images such as gradients and rotoshapes) have several controls
attached to them. Handles at the beginning and end of image nodes allow you to
adjust their timing, while other controls let you load parameters, ignore and “unignore”
nodes, and perform other functions.
Trimming and Looping QuickTime Clips and Still Images
The methods described in this chapter for trimming and looping clips in the Time
View do not work the same with QuickTime clips, Still Images, or Shake-generated
images. The following exceptions apply:
• QuickTime clips cannot be trimmed using the timing handles, because QuickTime
clips do not have corresponding startFrame and endFrame parameters in the
Source tab of the FileIn parameters. However, QuickTime clips can be looped for
their full duration.
• Still Images and Shake-generated images cannot be looped, as a single image’s
duration is infinite by default.264 Chapter 8 Using the Time View
Image Node Controls
When you move the pointer over an image node in the Time View, three controls
appear: the Viewer indicator, the parameters indicator, and the Ignore control.
Image Sequence Timing Controls
In the Time View, image nodes also have timing handles, located at the beginning and
end of the bars. Timing handles allow you to adjust the In and Out points of a clip or
Shake-generated image. Non-image nodes have no timing handles.
You can adjust the In and Out points of an image node by dragging its timing handles
to the left or right. You can also drag image nodes and compositing nodes in the Time
View, changing their location in time.
In the following screenshot, the RGrad1 and bus2 image nodes both have timing
handles, at the left and right edges of the bars. However, the Over1 node has no
handles because, as a non-image compositing node, it inherits the duration of the
longest image node to which it is connected.
Adjusting a Clip’s timeShift Parameter
The timeShift parameter corresponds the the clip’s position in the Time View—
adjusting the timeShift parameter moves the entire clip forward or backward along the
Time View.
Click the Viewer
indicator to load
the node into the
Viewer.
Click the parameters
indicator to load the
node’s parameters
into the Parameters
tab.
Click the Ignore
node to ignore
the node.
In and out timing handlesChapter 8 Using the Time View 265
To shift a node in time:
m Drag an image node in the Time View to the left or right.
That node’s timeShift parameter changes, and the start and end frames of the node are
moved together. The inPoint and outPoint parameters, however, remain the same, since
this operation does nothing to change the clip’s duration.
All compositing nodes that are attached to a time-shifted image node are
automatically shifted in time to match. When you drag a non-image compositing node
in the Time View, such as a layering node, the timing of all image nodes connected to it
is also modified.
In the above example, when the Over1 node is dragged, all nodes above the Over1
node (including the invisible Pan1 node) are shifted, as well. This means that the frames
of bus2 are shifted in time, and any animation curves on Pan1 and RGrad1 are shifted as
well. If you drag only the bus2 clip, then only the bus2 clip is modified in time—unless
the Shift Curves control (in the bottom-left corner of the Time View) is activated.
When the Shift Curves control is enabled, all curves that are attached to the shifted
nodes are shifted as well, so the animation is carried with the shift. This control is
enabled by default, and in most cases should be left on.
Important: If Shift Curves is disabled, the curves remain locked in their previous
positions, and will be offset from the new position of the clip.
Adjusting an Image Sequence’s firstFrame and lastFrame Parameters
You can trim frames off of the beginning or the end of a clip by adjusting its firstFrame
and lastFrame parameters—either in the Source tab, or in the Time View.
Note: QuickTime movies do not have firstFrame and lastFrame parameters.266 Chapter 8 Using the Time View
To adjust the startFrame and lastFrame points of an image sequence:
m
In the Time View, drag the left handle of an image node to adjust its inPoint parameter,
or drag the right handle to adjust its outPoint parameter.
Notice that the firstFrame value for that clip is labeled on the left side of the bar, and
the lastFrame value is labeled on the right side—in the example above, the inPoint
parameter is 40 and the outPoint parameter is 138. These numbers are dynamic, and
change while you modify a clip’s In or Out point.
If you drag the timing handles of an image node in the Time View, you change the
node’s duration. For example, if you drag the ends of RGrad1, you set the boundaries
within which that node appears in time. When you drag the left or right handles of a
Shake-generated node (such as a Color, Grad, or Ramp node) in the Time View, the
duration of that image is extended to fill the new time range.
Using the inPoint and outPoint to Repeat Clips
You can extend the beginning or end of an image sequence or QuickTime movie
beyond its actual duration without retiming it by setting the clip to repeat frames.
To extend the inPoint or outPoint frame of a clip so that it repeats:
m
In the Time View, Control-drag the In or Out handle of a clip.
The clip’s duration extends as you pull a new handle to the right or left. The original
firstFrame and lastFrame timing handles and parameters remain as they were, but the
inPoint and outPoint parameters update to reflect the extended duration.
By default, the newly extended area of the clip is represented by a blue bar, which tells
you that the expanded time range in the clip has been filled by a freeze frame of the In
or Out point. In the example below, Control-dragging the Out point of the bus2 clip to
frame 100 creates an extended freeze frame effect from frame 41 to frame 100.
Repeated part of clip
is colored blue,
which designates a
freeze frame.Chapter 8 Using the Time View 267
You can change the repeat mode of an extended-duration clip at any time using the
controls in the Timing tab of that image node. A clip’s repeating behavior is controlled
by the inMode and outMode parameters in the Timing tab of the FileIn parameters.
The inMode parameter controls the looping behavior of frames between the firstFrame
looping handle and the inPoint, and the outMode parameter controls the frames
between the outPoint and the lastFrame looping handle.
The following table lists the available repeat modes.
Mode Result Example
Black No frames are repeated. Instead,
black frames are inserted.
Freeze The first/last frames are repeated as a
freeze-frame.
Repeat The clip is continuously repeated,
always starting from the first frame.
Mirror The clip repeats, first in reverse order,
and then forward, indefinitely. To
provide a smooth transition, the first
frame does not repeat between the
loops.
Inclusive
Mirror
The entire clip repeats, first in reverse
order, and then forward, indefinitely. 268 Chapter 8 Using the Time View
Clips With Infinite Duration
Image nodes such as RGrad and Ramp have no preset range because they are
generated by Shake. In the Time View, these types of nodes have infinity symbols on
their left and right edges, indicating that these images have no end. To limit these
nodes, grab the handles as you would with clip nodes.
Customizing How the Last Frame Is Represented
The Out point of an image node represents different things to different users,
depending on which media applications they’re used to. For video editing applications
(usually), an Out point is the frame at which there is no more image (and is therefore
black). For a clip of 50 frames, video editing applications usually consider the Out point
to be frame 51. For CG artists, the Out point is the last frame to render, making it frame
50 in this example. To add to the confusion, keep in mind that even if you have 50
frames, you have meaningful information up to frame 50.99999—incremental frames
are necessary for motion blur calculations and field rendering.
You can right-click in the empty part of the Time View to reveal a shortcut menu that
lets you change how the Out point is represented. The shortcut menu contains the
following options:
In/Out Point Display
This option toggles the display of the In/Out point in the Time View. When enabled, the
In and Out points are displayed whenever you click the end of a clip.Chapter 8 Using the Time View 269
Const Point Display
When Const Point Display is enabled, the frame considered as the Out point is toggled
to the frame at which it becomes black, or the last frame on disk.
FileIn Trim
The FileIn Trim option controls what happens when you drag the outPoint handle and
right looping handle past each other (first image). With FileIn Trim off, Control-dragging
a timing handle past a looping handle collapses the looping portion of the clip as the
clip’s total duration is changed.
When FileIn Trim is turned on, you can Control-drag a timing handle past a looping
handle. This is useful if you want to keep the original frame range of the clip as a
reference.
Reversing a Clip
To reverse a clips so that it plays backwards, simply switch the firstFrame and lastFrame
values in the Timing tab of the clip’s FileIn node. This cannot be done with QuickTime
clips, because QuickTime clips do not have firstFrame and lastFrame parameters.270 Chapter 8 Using the Time View
In the following example, a clip that begins at frame 40 and ends at frame 80 is
reversed by manually swapping inPoint and outPoint values.
The modified clip now begins at frame 80, then plays in reverse until it reaches frame 40.
There are no interface controls in the Time View for this functionality.
The Transition Node
The Transition node, located in the Other tab, is an editing node to mix or cut two clips
together. It is unique in that it is really a shell to drive other functions that determine
the mixing. Modifying it also modifies the timing of the second input.
The mixers parameter of the Transition node can be set to cut, which simply cuts to the
second clip at the frame that it starts. The Transition node also can be set to any of the
following transitions:
• Dissolve
• HorizontalWipe
• VerticalWipe
The duration of the transition is determined by the overlap value, and starts at the
frame where the second clip appears. If you modify the timing of the second clip, the
overlap value automatically changes as well.
A third parameter, mixPercent, appears whenever the mixer parameter is set to
anything other than “cut.” mixPercent determines the timing for the mixing. For
example, for dissolve, if mixPercent is at 20, the second image is 20 percent mixed in
and the first image is 80 percent. You can tune a curve interactively in the interface to
adjust timing.Chapter 8 Using the Time View 271
In the following example, two clips have been added to the script.
Connecting both FileIn nodes to a Transition node (located in the Other tab) offsets the
second input clip from the first. Notice that the transition node appears underneath,
spanning the combined duration of both clips.
You can now route the combined output of the Transition node to as complicated a
node tree as you like, and both image nodes feeding the transition node will be treated
as a single image stream.
So, how is this different from just compositing the clips with an Over node and
offsetting the second clip in time? The advantage is that you can easily dial in an
overlap value to determine how many frames they overlap, and add a simple transition
effect. In the following screenshot, the overlap value is increased in the Transition node,
and the second node shifts to the left as it increases. You can also shift the second clip,
and read the overlap value in the Transition node.
Note: When the mixer parameter is set to cut, the cut point occurs at the beginning of
the second clip, not at the end of the first clip.
Parameters in the Transition Node
The Transition node has the following parameters:
overlap
This parameter sets the amount that the second clip is shifted earlier (to the left) to
provide overlap of the two clips.272 Chapter 8 Using the Time View
mixer
Other default choices are:
• cut
• dissolve
• horizontalWipe
• verticalWipe
You can also add your own custom effects. For more information, refer to “Customizing
the Transition Node” below.
clipMode
This option appears when mixer is set to Dissolve, allowing you to choose which image
sets the DOD.
channels
This parameter appears when mixer is set to Dissolve, allowing you to choose which
channels are dissolved.
blur
This parameter appears for horizontal and verticalWipe, and is used to soften the
wiping edge.
reverse
This parameter appears for horizontal and verticalWipe, and is used to flip the
direction, for example, from left to right to right to left.
mixPercent
This parameter, with accompanying graph, appears whenever the mixer parameter is
set to anything other than “cut.” mixPercent determines the timing for the mixing. For
example, for dissolve, if mixPercent is at 20, the second image is 20 percent mixed in
and the first image is 80 percent. You can tune a curve interactively in the interface to
adjust timing.
Customizing the Transition Node
If you like, you can create custom mixers for use by the Transition node.
To create your own custom mixers in a startup .h file, you must do two things:
m
Create a macro with two image inputs, i1 and i2, and a float parameter named
mixPercent that typically has the default value of
“HermiteV(x,1,[0,50,50]@0,[100,50,50]@100)”. This gives you the animation curve that
can then be tuned in the interface. You can also add other parameters.
m Declare the function as a mixer for Transition with the nfxDefMixer command in a
startup .h file. The first parameter is the name of the mixer as it appears in the list. The
second entry is the call to the macro:
nfxDefMixer(“horizontalWipe”,”HWipe()”);Chapter 8 Using the Time View 273
The following is an example from the include/nreal.h file for horizontalWipe:
image HWipe(
image i1=0,
image i2=0,
float blur=0,
int reverse=0,
float mixPercent=“HermiteV(x,1,[0,50,50]@0,[100,50,50]@100)”
)
{
Color1 = Color(
max(i1.width,i2.width),
max(i1.height,i2.height),
1, 1, red, red, 1, 0);
Crop1 = Crop(Color1, 0, 0, width, height);
Pan1 = Pan(Crop1, mixPercent*width/100*(reverse?-1:1));
BlurMe = Blur(Pan1,blur,0,0);
IMult1 = IMult(i1, BlurMe , 1, 100, 0);
Invert1 = Invert(BlurMe , “rgba”);
IMult2 = IMult(Invert1, i2, 1, 100, 0);
IAdd1 = IAdd(IMult1, IMult2, 1, 100);
return IAdd1;
}
nfxDefMixer(“horizontalWipe”,“HWipe()”);
Notice that in mixPercent the default curve goes from 0,0 to 100,100 for mixPercent.
Also, notice how the Color generator compares the two input resolutions to determine
how large to make the max function.
Create Your Own Transition Type
In this example, you will make your own transition mixer by scaling the radius of an
RGrad node to create a radial wipe.
To create a custom radial transition:
1 Begin with a simple tree that feeds two FileIn nodes into a KeyMix node. The two clips
are named in1 and in2 (to help later).
The following images show the effect that can be achieved by increasing and
decreasing the RGrad radius.
Select all of the nodes in the tree in the Node View, and copy them (press Command-C
or Control-C).
2 Create a text file, and paste (press Command-V or Control-V) the nodes you’ve copied
into it. You’ll notice that the node tree you copied is automatically converted into Shake
script. It should look like this:
RGrad1 = RGrad(720, 486, 1, width/2, height/2, 1,
min(width,height)/4,
min(width,height)/4, 0.5, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0);
in1 = FileIn("myclip.1-39#.iff", "Auto", 0, 0);
in2 = FileIn("myotherclip.1-39#.iff", "Auto", 0, 0);
KeyMix1 = KeyMix(in1, in2, RGrad1, 1, "A", 100, 0);
// User Interface settings
SetKey(
"nodeView.KeyMix1.x", "156.75",
"nodeView.KeyMix1.y", "127",
"nodeView.RGrad1.x", "317.4916",
"nodeView.RGrad1.y", "198.6512",
"nodeView.in1.x", "67",
"nodeView.in1.y", "201.125",
"nodeView.in2.x", "202",
"nodeView.in2.y", "198.3111"
);
3 Save the text file into your $HOME/nreal/include/startup directory. Don’t close the file
yet; some changes still need to be made.
4 You can now prune a lot of the data, keeping the bold sections of the above code, and
format it as a macro. You also want to add the standard parameters of blur, mixPercent,
and reverse. Copy these parameters from the nreal.h file’s HWipe node (at the end of
the file).
Finally, calculate the resolution of the RGrad by comparing the two input sizes. The
script should now look like this:
image RadialWipe(
image in1=0,
image in2=0,
float blur=0,
int reverse=0,
float mixPercent="HermiteV(x,1,[0,50,50]@0,[100,50,50]@100)"
)
{
RGrad1 = RGrad(
max(in1.width,in2.width),
max(in1.height,in2.height),
1, width/2, height/2, 1,
min(width,height)/4, //This is the radius
min(width,height)/4, //This is the falloff
0.5, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0);
return KeyMix(in1, in2, RGrad1, 1, "A", 100, 0);
}
5 The maximum distance to expand the RGrad can be calculated by measuring the
distance from the center to a corner, which can be done with the distance() function.
(For more information, see Chapter 31, “Expressions and Scripting,” on page 935.) Once
this is calculated, multiply it by the mixPercent. Also, plug the blur value into the falloff
parameter, with a check on the radius to see if falloff should equal 0 when radius
equals 0. Also, add the command to load it as a mixer in the Transition node:
image RadialWipe(
image in1=0,
image in2=0,
float blur=0,
int reverse=0,
float mixPercent="HermiteV(x,1,[0,50,50]@0,[100,50,50]@100)"
)
{
RGrad1 = RGrad(
max(in1.width,in2.width),
max(in1.height,in2.height),
1, width/2, height/2, 1,
mixPercent*distance(0,0,width/2,height/2)/100,
radius==0?0:blur,
0.5, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0);
return KeyMix(in1, in2, RGrad1, 1, “A”, 100, 0);
}
nfxDefMixer("radialWipe", "RadialWipe()");
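The radius expression can be checked with ordinary math: the center-to-corner distance is the hypotenuse of half the frame dimensions, and mixPercent scales the radius from 0 to that maximum. A plain-Python sketch of the same arithmetic:

```python
import math

def wipe_radius(width, height, mix_percent):
    """Radius of the expanding radial wipe: at mix_percent=100
    the circle reaches the frame corner, fully revealing the image."""
    max_dist = math.hypot(width / 2, height / 2)  # center to corner
    return mix_percent * max_dist / 100.0

# For a 720x486 frame the corner is about 434.3 units from center:
print(round(wipe_radius(720, 486, 100), 1))
print(round(wipe_radius(720, 486, 50), 1))
```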
6 Now comes the tricky bit—reversing the mix. You may think multiplying by -1 inverts
the transformation, but you’d be wrong. Instead, you often have to subtract the value
from the maximum value that you expect, in this case the distance from the center to
the corner. This is part of a conditional statement that tests to see if reverse is activated.
Also, invert the mask in the KeyMix to help it out.
image RadialWipe(
image in1=0,
image in2=0,
float blur=0,
int reverse=0,
float mixPercent="HermiteV(x,1,[0,50,50]@0,[100,50,50]@100)"
)
{
RGrad1 = RGrad(
max(in1.width,in2.width),
max(in1.height,in2.height),
1, width/2, height/2, 1,
reverse?distance(0,0,width/2,height/2)-
mixPercent*distance(0,0,width/2,height/2)/100:
mixPercent*distance(0,0,width/2,height/2)/100,
radius==0?0:blur,
0.5, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0);
return KeyMix(in1, in2, RGrad1, 1, "A", 100, reverse);
}
nfxDefMixer("radialWipe", "RadialWipe()");
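The reversal trick above, subtracting the travel from the maximum rather than negating it, can be sketched generically in plain Python:

```python
import math

def radial_wipe_radius(width, height, mix_percent, reverse=False):
    """Forward: radius grows from 0 to the corner distance as
    mix_percent goes 0 to 100. Reverse: the same travel subtracted
    from the maximum, so the circle shrinks instead."""
    max_dist = math.hypot(width / 2, height / 2)
    forward = mix_percent * max_dist / 100.0
    return max_dist - forward if reverse else forward

# At any mixPercent, the forward and reverse radii sum to the maximum:
r_fwd = radial_wipe_radius(640, 480, 25)
r_rev = radial_wipe_radius(640, 480, 25, reverse=True)
print(math.isclose(r_fwd + r_rev, math.hypot(320, 240)))  # True
```

Negating instead would produce a radius of the wrong sign, not a wipe running backward.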
7 Save all of this as a .h file in your startup directory.
8 As a final touch, open a ui .h file and add an on/off button for the reverse parameter:
nuxDefExprToggle("RadialWipe.reverse");
Now when you launch Shake, a new mixer in Transition is available for you to use.
9 Using the Audio Panel
The Audio Panel lets you import reference audio clips
that you can use for timing and to generate keyframe
data within your script.
About Audio in Shake
Shake supports the use of PCM AIFF and PCM Wave files in your projects. You can
import one or more audio files, mix them together, extract curves based on frequency,
manipulate the timing of the sound, and save the files back out again. These audio
curves can also be visualized directly within the Curve Editor.
Note: Because audio playback is handled through the use of Macintosh-specific
QuickTime libraries, you can only hear audio playback (either solo, or synchronized with
your video) on Mac OS X systems. You can still analyze and visualize audio in Linux.
Although Shake supports a variety of frequencies and bit depths, playback is always
16-bit, defaulting to a sample rate of 44.1 kHz (the playback sample rate can be changed
via the Playback Rate parameter). Independently of playback, audio is always exported
at the highest quality specified in the Mixdown Options, corresponding to a 27-point
symmetric Kaiser-windowed sinc interpolation.
Audio and QuickTime
When you read in a QuickTime file via a FileIn node, the audio is not automatically
imported into your project. Instead, the audio must be imported in a separate
process using the Audio Panel, just like any other audio file in Shake. This means that
you’ll be reading in the same file twice, once via a FileIn node, and a second time
using the Open Audio File button in the Audio Panel.
Most of Shake’s audio functionality resides within the Audio Panel. To access the audio
controls, click the Audio Panel tab.
The Shake audio panel opens.
Loading, Refreshing, and Removing Audio Files
You can load both AIFF and Wave files into Shake. The first row of buttons on the top of
the Audio Panel controls loading, removing, refreshing, and previewing .aif (.aiff) and
.wav files.
To load an audio file into a script:
1 Open the Audio Panel.
2 In the Audio Panel, click the Open Audio File button.
3 In the Select Audio File window, select the audio file(s) you wish to import, then click OK.
Note: You can also double-click an audio file to import it.
The audio file appears in the audio track list, at the top of the Audio Panel. You can
import several audio files into your script—they all appear in this list.
Details about the selected audio file appear in the audio track list.
The path to the originating media file on disk appears in the File Name area.
If an audio file is on an offline server or drive when a script is loaded, that audio file
becomes unlinked, appearing red in the sound file list. Once the server or drive is
online, you can use the Refresh command to reload the audio file. You can also use the
Refresh control to update an audio file in your script that you’ve changed while it’s in
use by Shake.
To refresh a sound file used by a script:
m Click the Refresh button.
To remove an audio file from a script:
1 Select an audio file in the track list of the Audio Panel.
Note: You can Shift-select or drag to select multiple files.
2 Do one of the following:
• Click Remove Selected Files.
• Press Delete or Backspace.
The selected audio tracks are removed from your script.
Previewing and Looping Audio
You can use the Preview Audio button to listen to and edit audio tracks as they play.
You can also loop an audio track within a designated time range. The preview tool
works independently of other playback and previewing mechanisms in Shake. To view
or sync an image sequence with audio, use the Audio Playback button on the Time Bar.
For more information, see “Playing Audio With Your Footage” on page 282.
To preview an audio file in the Audio Panel:
1 To load the audio file, follow steps 1 through 3 in the “To load an audio file” section,
above.
2 If necessary, perform the following optional steps:
• You can set the timeRange parameter in the Globals tab to a frame range to limit
preview playback to a specific section of the audio.
• You can enable the looping control so that the preview playback loops continuously.
3 To begin playback, click Preview Audio.
If the audio clip’s Time Shift subparameters (at the bottom of the Audio Panel) have
been changed, these parameters will modify playback. For example, if the Source In
parameter (in the Time Shift subtree) has been altered, then audio will begin
previewing at the new Source In point.
While the audio plays, the Preview Audio button turns into the Stop Preview button.
4 To stop audio playback, click Stop Preview.
When you preview an audio clip, the audio meter lights up to display the clip’s audio
level.
5 To change the level of the audio playing back, adjust the Master Gain (dB) slider
underneath the audio meter.
To loop audio playback:
1 To loop a specific frame range, enter a frame range in the timeRange parameter of the
Globals tab.
Note: If a frame range is not specified in the Globals tab, the audio preview continues
to play (beyond the end of the actual audio track) until you click Stop Preview.
Secondary Peak Meter
You can also enable a small peak meter next to the Load/Save script buttons by
entering the following line in a ui.h file:
gui.showTopPeakMeter = 1;
2 Open the Audio Panel, and toggle looping on.
3 In the Audio Panel, click Preview Audio.
The track loops within the designated frame range. Click Stop Preview to halt playback.
Muting and Soloing Tracks
If you have multiple audio tracks in your script, you can use the mute and solo controls
to control which ones play back.
• Muting a track silences it.
• Soloing a track silences all tracks except for those with solo enabled.
Both of these controls are located in the parameters list below the audio track list, and
can be enabled and disabled for each selected track individually.
Playing Audio With Your Footage
This section discusses the tools associated with enabling audio playback with footage,
viewing audio waveforms, and editing audio tracks.
Note: Depending on your hardware configuration, audio playback does not necessarily
sync with the frame indicator in the Time Bar.
To enable simultaneous audio/video playback:
m Turn on the Audio Playback button, located among the playback controls next to the
Time Bar.
When audio playback is enabled, clicking Play results in both the audio and video of
your project playing together.
Important: Because Shake is designed primarily as a compositing application, and not
a real-time editing application, audio sync is not guaranteed due to the excessive
processor demands of most operations. If you want a synchronized preview of your
script, use one of the Flipbook options. For more information, see “Previewing Your
Script Using the Flipbook” on page 90.
Alternately, you can scrub through the audio directly in the Time Bar.
To scrub audio with the playhead:
m Hold the Command or Control key down, and drag the playhead in the Time Bar.
Viewing Audio
If you are syncing animated parameters in your script with specific audio cues, you can
display the waveform of your audio in the Curve Editor. If you have multiple audio files
loaded into Shake, the Curve Editor displays the overall mix.
To view an audio waveform in the Curve Editor:
m In the Curve Editor, click the Draw Waveform button.
This control toggles curve display off and on.
When enabled, the waveform is displayed in the Curve Editor.
Slipping Audio Sync in Your Script
If you need to resync the audio tracks in relation to the visuals in your project, you can
slip each audio track in your project back and forth in time.
To slip an individual audio track in time:
1 In the Audio Panel, select a track in the track list, and enable solo.
The waveform for the track is redrawn in the Curve Editor.
2 Do one of the following:
• Press Shift-Option or Shift-Alt and drag in the Curve Editor.
• In the Audio Panel, enter the slip amount (in frames) in the Time Shift value field.
To slip all audio tracks in time at once:
m Press Shift-Option or Shift-Alt and drag in the Curve Editor.
Note: Although multiple tracks can be highlighted in the track list (and deleted), you
cannot select multiple tracks in the sound file list for slipping, muting, or soloing. Only
the highlighted track with the line around it is actually selected.
Time Shift Subparameters
The following table lists the time shift subparameters associated with every audio track.
These parameters let you adjust the timing of audio clips used in your script.
Time Shift (frames)
Sync slip control in frames. To interactively drag sound in the Curve Editor, press Shift-Option or Shift-Alt and drag.
Time Shift (seconds)
Sync slip control in seconds.
Source In (seconds)
Beginning point of the clip, listed as seconds.
Source Out (seconds)
End point of the clip, listed as seconds.
Start Time (seconds)
Point in the script at which the clip begins, listed as seconds.
End Time (seconds)
Point in the script at which the clip ends, listed as seconds.
Playback Rate Subparameters
These parameters allow you to specify the rate at which audio plays back, resampling
the audio tracks if necessary to make them conform. Audio playback is only possible on
computers using Mac OS X.
Playback Rate (percent)
The playback rate, entered as a percentage of the clip’s original rate.
Playback Rate (Hz)
The playback rate, entered as a specific cycle rate in Hz.
Track Gain Subparameters
These parameters let you adjust the relative volume and stereo imaging of each audio
track you’ve imported into your project. Mixing the different channels of audio allows
you to adjust the loudness of a voiceover track in relation to a separate background
music track.
Track Gain (dB)
Gain applied to the track, in decibels.
Track Pan
Adjusts the stereo effect by panning a track toward or away from one ear.
Track Wide
Controls how much uniqueness exists in the left and right channels, giving a sense of
wider stereo.
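Shake does not document the exact pan law it uses; as a generic illustration of stereo panning (an assumed equal-power formula, not Shake’s actual math):

```python
import math

def equal_power_pan(sample, pan):
    """Generic equal-power pan (a hypothetical formula, not taken
    from Shake): pan = -1 is hard left, 0 center, +1 hard right.
    Channel gains are cosine/sine so total power stays constant."""
    angle = (pan + 1) * math.pi / 4  # map [-1, 1] -> [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

left, right = equal_power_pan(1.0, 0.0)  # centered
print(math.isclose(left**2 + right**2, 1.0))  # power preserved -> True
```

An equal-power law avoids the audible dip in loudness at center position that a simple linear crossfade between channels would produce.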
Extracting Curves From Sound Files
The Create Curves subtree gives you control over sampling of the audio to be
converted into a curve. These curves appear in the Globals tab as a local parameter,
and can then be used by other functions in Shake as standard expressions. This is an
extremely powerful feature, and can be used to synchronize nearly any animated
parameter to the waveform of your audio.
The parameters located in the Create Curves subtree let you analyze the current audio
mix, creating a keyframed curve that’s stored as the Audio parameter, within the
localParameters subtree of the Globals tab.
The audio parameter can be referenced as an expression from any other parameter in
your project.
To create an audio curve:
1 Open the Audio Panel, and import one or more audio files into your project.
2 Open the Create Curves subtree.
3 Click the Update from Globals button to set the time range to be analyzed to match
the timeRange parameter of your project.
4 Adjust the other Create Curves parameters as necessary (see the next section for more
information).
5 To create the audio parameter, click the Create Variable Under Globals button.
A progress bar appears to show you how long this process takes.
Opening the Globals tab reveals the Audio parameter that has been created,
underneath the localParameters subtree. This parameter is now ready for use as an
expression from within other parameters in your project.
Create Curves Options
These parameters in the Create Curves subtree of the Audio Panel let you customize
how keyframe information is extracted from the audio waveform.
Update from Globals
Indicates whether timing for curve creation should come from the script globals or
from the Audio Panel. Click “update now” to read settings from the Globals tab.
Time Range
The frame range of the audio to be extracted.
Interval (frames)
A keyframe is created every N frames, as set by this parameter.
Type
This parameter defines how the volume of the audio in your project is analyzed.
• Peak means the generated value is the highest absolute value the waveform reaches
during that interval.
• RMS (“Root Mean Square”), on the other hand, uses the square root of the mean of
the squares of the absolute values of the waveform over the interval. In other words,
each absolute displacement value in the interval is squared, the average of all those
squared values is calculated, and its square root is taken as the representative value
for that interval. The RMS method is said to be more representative of the actual
perceived volume of an interval of digital audio.
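The two analysis types can be reproduced with a few lines of arithmetic; this plain-Python sketch computes both values for one interval of samples:

```python
import math

def peak_value(samples):
    """Peak: the largest absolute displacement in the interval."""
    return max(abs(s) for s in samples)

def rms_value(samples):
    """RMS: square each sample, average the squares, then take
    the square root of that mean."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

interval = [0.0, 0.5, -1.0, 0.5]
print(peak_value(interval))  # 1.0
print(rms_value(interval))   # ~0.612, closer to perceived loudness
```

A single spike dominates the peak value, while RMS reflects the energy of the whole interval.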
Logarithmic (dB)
When deactivated, the returned value is the actual value from the audio file. The
sample values are normalized from -1.0 to 1.0, with 1.0 = 0 dBFS. Therefore, the usual
range in this mode is 0.0 to 1.0. Values may exceed 1.0 if the audio is clipping
(exceeding 1.0, or 0 dBFS).
If “logarithmic” mode is on, the value generated is converted to a dB scale, normalized
linearly over 90 decibels from 0.0 to 1.0. A value of 0.0 would thus represent -90 dBFS
(or lower), and 1.0 would represent 0 dBFS.
For example, if in peak mode, and the peak audio value over an interval is 0.5
(approximately -6 dBFS), the value entered with logarithmic mode off would be 0.5.
With logarithmic mode on, it would be ( -6 + 90 ) / 90 = 0.9333.
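The worked example can be reproduced directly; a plain-Python sketch of the linear-to-normalized-dB conversion described above:

```python
import math

def normalized_db(value, floor_db=90.0):
    """Convert a linear sample value to the 0.0-1.0 scale described
    above: 0 dBFS maps to 1.0 and -90 dBFS (or below) to 0.0."""
    if value <= 0:
        return 0.0
    db = 20 * math.log10(value)          # 0.5 -> about -6.02 dBFS
    return max(0.0, (db + floor_db) / floor_db)

print(round(normalized_db(0.5), 4))  # ~0.9331 (the manual rounds -6.02 dB to -6)
print(normalized_db(1.0))            # 1.0
```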
Create separate left/right curves
Either one curve is created, or left and right channels are sampled and two curves are
created.
Lowpass Filter
Activates the LPF, as described below.
You can use the lowpass filter to restrict the frequency range that Shake analyzes to
create the resulting curve data. For example, you can filter out the majority of the high-frequency content of a song, allowing the bass (such as drums) to become the primary
source of curve data.
Lowpass Filter Freq (Hz)
This control determines the cutoff frequency for the lowpass filter. Frequencies at or
above this cutoff are strongly attenuated when the filter is activated. This setting is
useful for analyzing strictly low-frequency content, such as bass drums, rumble,
earthquakes, and other subwoofer-shaking phenomena. When the filter is active, its
effect may be heard during playback. (Use “Preview Audio” to hear the effect of sliding
the filter freq in real time.) The lowpass filter setting does not affect mixdown files or
QuickTime renders.
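Shake’s filter design is not documented; as a rough illustration of what a lowpass cutoff does, here is a simple one-pole RC filter (an assumption for illustration only, not Shake’s actual filter):

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100.0):
    """Simple one-pole RC lowpass: content above cutoff_hz is
    progressively attenuated. Illustrative only."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)  # smoothing coefficient
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out

# A sample-rate "buzz" alternating +1/-1 is almost entirely removed:
buzz = [1.0 if i % 2 == 0 else -1.0 for i in range(100)]
filtered = one_pole_lowpass(buzz, cutoff_hz=200.0)
print(max(abs(s) for s in filtered[10:]) < 0.1)  # True
```

Low-frequency content (a slowly varying signal) passes through nearly unchanged, which is why the bass survives to drive the curve.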
Exporting an Audio Mix
If necessary, you can export the audio mix you’ve created within your project as a
separate audio file. If you’ve created an audio mix that must accompany an image
sequence that you’re outputting, the audio mix must be output separately.
Note: Rendering a QuickTime file via a FileOut provides the option to export the audio
and video within a single media file.
Mixdown Options
The following parameters are available in the Mixdown Options subtree:
Update from Globals
Indicates if timing for the audio export should come from script globals or from the
Audio Panel. Clicking update now reads settings from the Globals tab.
Time Range
The frame range to be exported.
Mixdown File Name
The output file name.
Sample Rate, Bit Depth
The sample rate and bit depth of the output file.
Resampling Quality
When input clips are adjusted and scaled in time, their sound samples must be
interpolated to the output sampling rate. The accuracy of this interpolation is
determined by the quality parameter; use “Highest” for material intended for
broadcast. For the technically minded, this corresponds to a 27-point symmetric
Kaiser-windowed sinc interpolation.
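Windowed-sinc interpolation itself is standard signal processing; this plain-Python sketch uses a truncated (rectangular-windowed) sinc rather than Shake’s Kaiser window, purely for illustration:

```python
import math

def sinc_interpolate(samples, t, half_taps=13):
    """Interpolate a sample value at fractional position t using a
    truncated sinc kernel. A 27-point symmetric kernel means the
    center tap plus 13 taps on each side. Shake applies a Kaiser
    window on top of this; here the window is omitted for brevity."""
    n0 = int(math.floor(t))
    total = 0.0
    for n in range(n0 - half_taps, n0 + half_taps + 1):
        if 0 <= n < len(samples):
            x = t - n
            weight = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
            total += samples[n] * weight
    return total

# Landing exactly on a sample returns that sample unchanged:
wave = [math.sin(0.1 * n) for n in range(64)]
print(math.isclose(sinc_interpolate(wave, 32.0), wave[32]))  # True
```

The Kaiser window tapers the kernel’s edges to suppress the ringing a bare truncated sinc introduces.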
Clipping Info
When lit, indicates that the left or right audio channel is clipping.
Save Mixdown
Click Save Mixdown to render out a single audio file.
Track Wide
Controls how much uniqueness exists in the left and right channels, giving a sense of
wider stereo.
10 Parameter Animation and the
Curve Editor
Shake has a flexible keyframing interface for animating
nearly any parameter in your script. This chapter covers
how to create keyframed animation, as well as how to
manipulate keyframed data using the Curve Editor.
Animating Parameters With Keyframes
Several controls exist throughout Shake that allow you to animate the parameters of
various nodes over time. The Viewer, Parameters tabs, Time Bar, and playback controls
all have controls specifically for creating, modifying, and removing keyframes.
Animating Parameters Using Keyframes
It takes a minimum of two keyframes to animate a parameter. Each keyframe you
create stores a particular value for a parameter. When you’ve keyframed a parameter,
Shake automatically interpolates its value at each frame, from the first keyframe to the
last, creating animation.
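The idea can be pictured with a minimal linear interpolator (Shake’s default curves are Hermite, but the principle is the same):

```python
def interpolate(keyframes, frame):
    """Linear interpolation between keyframes, given as a sorted
    list of (frame, value) pairs. Before the first or after the
    last keyframe, the end value is held."""
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

keys = [(1, 0.0), (10, 90.0)]   # e.g. an angle parameter
print(interpolate(keys, 1))     # 0.0
print(interpolate(keys, 5.5))   # 45.0
print(interpolate(keys, 10))    # 90.0
```

Two identical keyframes in a row hold the value steady, which is exactly how pauses in animation are created (see “Adding Blank and Duplicate Keyframes to Pause Animation,” below).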
To animate one or more parameters:
1 Prior to animating a parameter, do one of the following:
• In the Parameters tab, turn on the Autokey buttons for the parameters you want to
animate.
When a specific parameter’s Autokey button is enabled, keyframes are created
whenever you modify that parameter.
Autokey button
• To keyframe parameter changes you make using a node’s Viewer controls, turn on
the Autokey button in the Viewer shelf.
Whenever you first enable keyframing, a keyframe is automatically created at the
current position of the playhead in the Time Bar for the affected parameters.
2 To create a second keyframe, move the playhead in the Time Bar to another frame, and
then change the value of the parameter.
A keyframe appears in the Time Bar to show this change.
When a parameter is animated, keyframes appear in the Time Bar to indicate each
change to that parameter. If the playhead is directly over a keyframe, that keyframe is
green to indicate that it’s “selected.” In other words, changes you make to that
parameter simply modify the current keyframe, instead of creating a new one.
Note: You can also edit keyframes in the Curve Editor by clicking the parameter’s Load
Curve button (the clock-shaped button to the left of the Autokey button in the
Parameters tab).
To delete a keyframe:
m In the Time Bar, move the playhead to the desired keyframe, then do one of the
following:
• Click the Delete Keyframe button in the Viewer or Curve Editor.
• Right-click the parameter from which you want to delete a keyframe, then choose
Delete Current Key from the shortcut menu.
• Click the Load Curve button for the parameter that contains the keyframe you want
to delete, then click the keyframe in the Curve Editor and press Delete.
Autokey button
Delete Keyframe button
Rules for Keyframing
How keyframes are created and modified depends on two things—the current state of
the Autokey buttons, and whether or not there’s already a keyframe for that parameter
in the Time Bar or Curve Editor at the current position of the playhead.
When animating any parameter, the following rules apply:
• When the Autokey button is off and you adjust a parameter that has no keyframes,
you can freely adjust it at any frame, and all adjustments are applied to the entire
duration of that node.
• When you adjust a parameter that has at least one keyframe already applied, you
must turn on the Autokey button before you can make further adjustments that add
more keyframes.
• If the Autokey button is off, you cannot adjust a parameter at a frame that doesn’t
already have a keyframe. If you try to do so, the change you make to the parameter
doesn’t stick, and that parameter goes back to its original value when you move the
playhead to another frame.
Note: If you’ve made a change that you want to keep, turn on the Autokey button
before you move the playhead to add a keyframe at that frame.
Adding Blank and Duplicate Keyframes to Pause Animation
If you want a parameter to be still for a period of time before it begins to animate,
insert a pair of identical keyframes at the start and end frame of the pause you want to
create.
If you want to delay an animated effect for several frames beyond the first frame, insert
a keyframe with no changes at the frame where you want the animation to begin, then
modify the parameter at the frame where you want the animation to end.
To manually add a keyframe without modifying the parameter:
m Turn the Autokey button off and on again.
A keyframe is created for the current parameter. If the parameter is already animated,
the state of the parameter at the position of the playhead in the Time Bar will be stored
in the new keyframe.
Navigating Among Keyframes in the Time Bar
Once you’ve created a number of keyframes, two keyframe navigation controls let you
move the playhead from keyframe to keyframe, making it easy to jump to and modify
existing keyframes.
Note: These two controls only work when the playhead is within a range of two or
more keyframes in the Time Bar. If the playhead is located either before the first
keyframe, or after the last keyframe, these controls have no effect.
Using the Curve Editor
While the Time Bar is adequate for creating keyframes for one parameter at a time, the
Curve Editor gives you a much more complete environment in which to create, move,
and otherwise modify keyframes. The Curve Editor also provides the only place where
you can see and modify the animation or Lookup curves that represent the
interpolated values that lie in between each keyframe.
Go to previous
keyframe
Go to next
keyframe
Curve Editor controls
Curves
Parameters that are loaded into the Curve Editor
Parameters can be represented by any one of a number of different curve types, each
of which gives you a different way of controlling how a parameter’s values are
interpolated from keyframe to keyframe. You can change a curve’s type and cycling
mode at any time. For more information on specific curve types, see “More About
Splines” on page 316.
In addition to the Curve Editor tab, curve editors also appear within the Parameters
tabs. These are used to edit lookup-style curves that generally relate to color
correction: the Lookup, LookupHLS, LookupHSV, and HueCurves nodes.
Loading and Viewing Curves in the Editor
By default, the Curve Editor appears empty. You have to specify one or more
parameters to be displayed in the Curve Editor. If multiple curves appear in the Curve
Editor, they overlap.
To load curves into the Curve Editor from the Parameters tabs:
1 Load a node into one of the Parameters tabs.
2 To display a parameter’s curve in the Curve Editor, do one of the following:
• Inside of the Parameters tab, click a parameter’s Load Curve button.
A red checkmark appears to let you know the button is enabled, and that parameter
is loaded into the Curve Editor.
Note: Just because a parameter is loaded into the Curve Editor doesn’t mean the
parameter is animated.
• Click a parameter’s Autokey button to create a keyframe at the current playhead
frame.
Note: Whenever you turn on an Autokey button, the corresponding parameter’s curve
loads into the Curve Editor.
In the following example, even though the pan and angle parameters are both
animated, the angle parameter is the only one that’s actually displayed in the Curve
Editor.
Controlling Curves Displayed in the Curve Editor
Several controls let you manage the visibility of curves from within the Curve Editor.
Autoload Controls
Use the Autoload curve viewing controls within the Curve Editor to choose which
parameter curves are automatically loaded into the Curve Editor. There are two options:
• Turn on Current to show all the parameters that are animated within a node that’s
loaded into one of the Parameters tabs, regardless of whether or not the Load Curve
controls are enabled.
• Turn on Selected to show all animated parameters from all nodes that are selected in
the Node View, whether or not any nodes are loaded into one of the Parameters tabs.
Click Current to display
all animated parameters
from the currently
loaded node.
Click Selected to display all animated
parameters from the selected node.
Visibility and Persistence Controls
In the loaded parameters list, additional controls let you toggle the individual visibility
and persistence of parameters in the Curve Editor.
To toggle the visibility of individual curves, do one of the following:
m Click a parameter’s Visibility button to turn the display of a curve on and off in the
Curve Editor, keeping it in the list.
m Select a parameter in the parameters list, and press V to toggle its visibility.
m Click a parameter’s Persistence control to keep that curve loaded in the Editor
regardless of the currently selected Autoload settings.
To remove a curve from the Editor:
m Select the curve in the parameters list and press Delete or Backspace, or click the Load
Curve control in the node’s parameters.
The curve is removed from the Editor, and the Load Curve control is disabled.
Warning: If you rename a node that is already loaded into the Curve Editor, the new
name is not automatically updated in the Curve List. To display the new node name,
remove the node from the Curve Editor, then reload it into the Curve Editor.
Navigating the Curve Editor
There are many controls you can use to move around the Curve Editor.
Useful controls for automatically framing curves:
• To frame one or more selected curves to the size of the Curve Editor, press F.
• To frame all curves to the size of the Curve Editor, press the Home key, or click the
Home button.
• To frame only the selected curves, click the Frame Selected Curves button.
To pan around within the Curve Editor, do one of the following:
m Press the middle mouse button, or Option-click or Alt-click, and drag.
m Click the Navigation button, and drag to pan around.
To zoom into and out of the Curve Editor, do one of the following:
m Press the + or - key.
m Middle-click or Control-Option-click or Control-Alt-click.
m With the pointer over the Navigation button, Control-click and drag.
To zoom the Curve Editor tab to take up the full screen:
m Press the Space bar.
Curve Editor Right-Click Menus
The shortcut menu within the Curve Editor has many controls you can use to manage
curve visibility, and other options for curve editing.
Home button
Frame Selected Curves button
Navigation button
• Add All Curves: Adds all animated curves into the Curve Editor.
• Remove Curves: Removes selected curves from the Curve Editor. Does not delete the animation. (Keyboard: Delete or Backspace)
• Visibility > Hide Curves: Turns off visibility of selected curves. You can also toggle visibility in the parameters list.
• Visibility > Show Curves: Shows selected curves. Select the curves in the parameters list prior to using this function.
• Visibility > Toggle Visibility: Inverts the visibility of all selected curves.
• Select > All Curves: Selects all curves in the parameters list. (Keyboard: Command-A or Control-A)
• Select > CVs: Selects all keyframes on active curves. To select all keyframes on all loaded curves, press Command-A or Control-A, then Shift-A. (Keyboard: Shift-A)
• Display Timecode: Toggles the time display from frames to timecode. (Keyboard: T)
• Sticky Time: When enabled, the current frame is set to the keyframe you are modifying. (Keyboard: S)
• Time Snap: When enabled, keyframes snap to whole frames rather than floating-point values.
• Display Selected Info: When active, displays data on selected curves and keyframes.
The Curve Editor Buttons
The following describes the Curve Editor buttons.
• Set Horizontal/Vertical Lock: Locks off movement on the X or Y axis. You can also press X to restrict movement to the X axis, or press Y to restrict movement to the Y axis. Press the key again to reenable movement.
• Keyframe Move Mode: Determines the behavior when keyframes are moved left and right past other nonselected keys. This behavior is discussed in “Using Keyframe Move Modes” on page 307.
• Reset Tangents: Shake’s Hermite tangents are automatically set to the tangent of the curve, and modifying a key adjusts the tangents of neighboring keyframes. However, if you manually adjust a tangent, Shake recognizes this and disables the automatic adjustment. Click Reset Tangents to reset the tangents to their unset state.
• Flatten Tangents: Sets a Hermite-curve tangent horizontal to ensure smooth ease-ins and ease-outs. You can also press H.
• Apply Curve Operation: Applies an operation to the curve from a pop-up window. These functions are detailed in “Applying Operations to Curves” on page 309.
• Home: Frames all curves.
• Frame Selected Curves: Frames selected keyframes/curves.
• Display Waveform: Displays the waveform of any audio files currently loaded from the Audio Panel.
• Color controls: These buttons appear in the Parameters tab for the Lookup, LookupHSV, and LookupHLS functions. They allow you to pick an input color (RGB, HSV, or HLS) and match it to a different color. Only visible curves receive keyframes. For example, if you only want to modify luminance, ensure the hue and saturation curves are disabled in a LookupHLS node.
Splitting the Curve Editor
You can separate the Curve Editor into two horizontal panes. This is useful when you have two or more curves with completely different Y scales, such as the pan and scale curves from a Move2D node. Each pane has its own set of visibility toggles, so you can disable specific curves in one pane, and enable them in another. The V key is particularly useful here, since the active pane is the last pane your pointer crossed.
To split the Curve Editor:
m Click the top edge of the Editor window and drag down.
Working With Keyframes
This section presents different methods for adding and modifying keyframes within the Curve Editor.
Adding Keyframes
There are many different methods you can use to add keyframes to a curve in the Curve Editor.
To add keyframes to a curve by modifying a parameter:
m In the node’s Parameters tab, click the Autokey button. Modifying a value when Autokey is enabled creates a keyframe at the current position of the playhead.
To create keyframes at the position of the playhead on every loaded curve:
1 In the Parameters tab, click the Load Curve button for each parameter you want to
keyframe.
2 Move the playhead to the frame where you want to create a keyframe.
3 Click the Autokey button.
Keyframes are created for every curve that’s currently loaded in the Curve Editor. For
example, if the Move2D node has its x,yPan, x,yScale, angle, and x,yCenter parameters
set for keyframing, keyframes are created on the curves of each of these parameters.
Note: Creating keyframes in this manner overrides any expressions within those
parameters.
To insert a keyframe anywhere on a curve, do one of the following:
m Shift-click a segment. This lets you insert new keyframes at frames that aren’t at the current position of the playhead.
m Position the pointer over a curve so that it’s highlighted, and press K.
In the Curve Editor parameters list, a keyframe is created whenever you enter or modify
a value in the Val value field. However, the parameter’s Autokey button (in the
Parameters tab, below the Curve Editor) must be activated first.
The Lookup, LookupHLS, and LookupHSV functions have color control pairs embedded in their dedicated Curve Editors, which remain inside the Parameters tab. You can use these controls to enter keys. However, these keys are not related to time but rather to a particular color channel, so they become points on the color-correction curves. For more information, see “The Curve Editor Buttons” on page 299.
Note: In the Curve Editor, when the pointer passes over a curve, the curve name is
highlighted; when the pointer passes over a keyframe, the keyframe values are
displayed.
Selecting Keyframes
You can edit a group of keyframes together by selecting them as a group. You can also
add and remove keyframes from a previously selected group.
To select one or more keyframes, do one of the following:
m Click a single keyframe.
m Shift-click to select multiple keyframes.
m Drag to select a group of keyframes with a selection box.
m Press B while dragging to create a persistent manipulator box that you can use to manipulate a range of keyframes.
This box allows you to scale the keyframe group in X and Y, only X, or only Y. The
manipulator box also allows you to move the group within the boundaries of the
surrounding (not selected) keyframes. For more information on this method, see “Using
the Manipulator Box” on page 304.
To add keyframes to the current selection:
m Shift-click a point on a curve to add an additional keyframe.
To select all keyframes on a curve:
m Position the pointer over the curve so that it’s highlighted (or select the curve in the parameters list), then press Shift-A.
To select all keyframes in the Curve Editor:
m Press Command-A or Control-A (to select all curves), then press Shift-A.
To deselect one or more keyframes:
m Command-drag or Control-drag to remove keyframes from the current selection of keyframes.
Note: To remove keyframes from a group of selected keyframes within a manipulator
box, press Shift and click the keyframes. For more information, see “Modifying
Keyframes” on page 303.
To deselect all keyframes:
m Click an empty area of the Curve Editor.
Deleting Keyframes and Curves
There are several different ways of deleting keyframes in the Curve Editor.
To delete one or more keyframes, do one of the following:
m In the Curve Editor, select the keyframes you want to delete, then press Delete.
Note: In Mac OS X, you must use the Delete key located near the Home and End keys.
If you press the Delete key located under the F13 key, the entire curve will be deleted.
m Move the playhead to the location of the keyframe, then click the Delete Keyframe
button (in the Viewer shelf).
The Delete Keyframe button only deletes keyframes related to the onscreen controls.
To delete an entire curve, do one of the following:
m Position the pointer over a curve (or select the curve in the parameters list), press Shift-A to select the points, then press Delete.
m In the parameters list, convert the curve to the Const (constant) curve type.
m In the node’s Parameters tab, right-click in the parameter value field, then choose Clear Expression from the shortcut menu.
Note: When a curve is deleted, it is replaced with a constant curve (set to the value of
the curve at the point in time the curve was deleted).
To delete keyframes from the Parameters tab:
1 Move the playhead to the keyframe you want to delete.
2 Right-click the parameter’s name, then choose Delete Current Key from the shortcut
menu.
To delete keyframes related to onscreen controls currently in the Viewer:
1 Move the pointer over the Viewer.
2 Press the Delete or Backspace keys.
Modifying Keyframes
You can modify keyframes by selecting and moving the keyframes, creating a
manipulator box, modifying keyframes numerically, or using the value fields in the
Curve Editor parameters list.
Using the Manipulator Box
You can use the manipulator box to move or scale a group of keyframes. The
advantage the box has over simply selecting keyframes is that you can see the scale
borders.
To use the manipulator box:
m Hold down the B key (for box) and drag to select a group of keyframes.
The light gray manipulator box appears around the selected keyframes.
To move the selection:
m Position the pointer within the box, then drag.
To scale the selection in both the X and Y axes:
m Position the pointer at a corner of the box and drag. The box is scaled in X and Y around the corner opposite the one you grab—if you grab the upper-right corner, the center of scaling is the lower-left corner.
To scale the selection in the X axis:
m Position the pointer along either side edge of the box, then drag.
To scale the selection in the Y axis:
m Position the pointer at the top or bottom edge of the box, then drag.
Note: Once you make a selection with the manipulator box, clicking a keyframe or
clicking outside of the box deselects the box.
To add to the keyframes selected in the manipulator box:
m Press Shift and click the additional keyframes.
To remove selected keyframes from the group:
m Press Shift and click the keyframes you want to remove.
Using Transform Hot Keys
You can also move a group of keys using keyboard shortcuts—you are not obliged to
select the keys with the manipulator box.
The following keyboard shortcuts let you make curve adjustments:
• Move: Press Q and drag to move the selected keys.
• Scale: Press W and drag; the first point clicked is the scaling center.
• Drop-off: Press E and drag; the move is applied with a drop-off (falloff).
Using the Keyframe Value Fields
To modify a keyframe numerically, you can also enter values in the fields at the bottom
of the Curve Editor window.
• The Key field is the time of the keyframe.
• The Value field is the value of the keyframe.
• The LTan and RTan fields control the tangents. If the tangents are set to 0, the
keyframe is flattened (on a Hermite curve). You can also press H to set horizontal
tangents.
When multiple keyframes are selected, the Value field displays “-----”. To set all selected keyframes to the same value, enter a number into the Value field.
In the parameters list, the Val field displays the value of the curve at that point in time.
You can also modify the value of a keyframe at that point in time in the Val value field.
For the value to be saved, the Autokey button must be activated in the Parameters tab.
Editing Bezier Curve Tangents in the Curve Editor
Similar to control points on a shape in the Viewer, Bezier points on a curve have
tangents that can be edited.
To break the tangents of a keyframe:
m Press Control and drag a tangent end.
To rejoin a tangent:
m Press Shift and click the tangent end.
To reset a tangent:
m Select the keyframe and click the Reset Tangents button.
Using Keyframe Move Modes
In the Curve Editor, there are four keyframe move modes—Bound, Interleave, Push, and
Replace—that are activated by the four states of the Keyframe Move Mode button.
These modes control the behavior of the keyframes when the keyframes are moved left
or right past non-active keyframes. To change modes, click the Keyframe Move Mode
button in the bottom-left corner of the Curve Editor.
Bound
When the Bound mode is set, the movement of a selected range of keyframes (whether
contiguously or noncontiguously selected) is clamped by the adjacent keyframes. In
the following example, keyframes A, B, and C are selected (highlighted in green) and
moved left in the Curve Editor.
When moved, the selected keyframes cannot move beyond the adjacent keyframe in
the curve, so keyframe A stops at the frame occupied by keyframe 2, and the distance
between A and B shrinks. If you continue to drag left, keyframes A, B, and C are placed
on the same frame.
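The clamping behavior of Bound mode can be sketched in code. This is an illustrative model, not Shake code: `bound_move`, its argument names, and the list-of-frames representation are invented for the example, and it models only the initial clamp against the adjacent unselected keys, not the pile-up that occurs as you keep dragging.

```python
def bound_move(keys, selected, offset):
    """Move the selected key times by `offset` frames, clamping the move
    (Bound mode) so no selected key passes an adjacent unselected key.
    `keys` is a sorted list of frame numbers; `selected` is a set of frames."""
    sel = sorted(k for k in keys if k in selected)
    unsel = [k for k in keys if k not in selected]
    if not sel:
        return sorted(keys)
    # Nearest unselected neighbors bounding the selection.
    left = max((k for k in unsel if k < sel[0]), default=None)
    right = min((k for k in unsel if k > sel[-1]), default=None)
    if left is not None:
        offset = max(offset, left - sel[0])    # cannot move past left neighbor
    if right is not None:
        offset = min(offset, right - sel[-1])  # cannot move past right neighbor
    return sorted(k + offset if k in selected else k for k in keys)
```

For example, dragging keys at frames 30, 40, and 50 left by 25 frames stops when the leftmost selected key reaches the unselected key at frame 20.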
Interleave
When the Interleave mode is set, the selected keyframes jump over the adjacent nonselected keyframes to the next segment of the curve.
Push
When the Push mode is set, the selected keyframes push the other keyframes along
the curve. In the following example, the selected keyframes are pushed to the left of
the Curve Editor. Therefore, keyframe A pushes keyframes 1 and 2, as well as all other
keyframes to the left of keyframe 1.
Replace
When the Replace mode is set, the selected keyframes replace the adjacent nonselected keyframe(s). In the following example, keyframes A, B, and C have slipped past
the position of keyframes 1 and 2, removing them from the curve.
Applying Operations to Curves
To apply operations such as Smooth and Jitter to curves or keyframes, use the Apply
Operation button, located in the lower-left corner of the Curve Editor.
To apply an operation to a curve or to keyframes:
1 Select the curve from the parameters list, or, if applying the function to keyframes,
select the keyframes in the Curve Editor.
2 In the Curve Editor, click the Apply Operation button.
3 In the Curve Processing window, select your operation.
In the following example, the “scale” operation type is selected.
4 Where appropriate, enter the value(s) for the expression in the value fields.
5 Do one of the following:
• To apply the operation to a selected curve, make sure that the Apply Curve/Keys
button is set to Curve, then click Apply.
• To apply the operation to selected keyframes, make sure that the Apply Curve/Keys
button is set to Keys, then click Apply.
The selected operation is applied to the selected curve or keyframes. These operations
include the following:
• scale: You can manually scale a curve using a manipulator box. This scale function in
the Curve Processing window, however, allows you to enter specific scaling values.
Enter the curve center and values. The following curve has a center of 0, 0 and .5, .5
for the scale values.
• smooth: “Blurs” the curve by the Amount value, the value that indicates how many
neighbor keyframes are calculated in the smoothing. The higher the number, the
smoother the result. In the following example, the second curve is the result of a
smooth Amount of 10 applied to the first curve.
• jitter: The opposite of smooth, jitter removes all values except for the noise using the
formula (Unmodified Curve - Smoothed Curve = Jitter). Once the function is applied,
the curve snaps down to approximately the 0 value, so the curve may disappear
(press F or click the Home button to reframe the Editor). This is useful for stabilizing a
jerky camera move. You can keep the overall camera move, but remove the jerkiness.
The vertical scale of this image is much smaller than the first example snapshot.
• reverse: Makes the curve go backward in time.
• negate: Flips the curve around the 0 point, so a value of 300 turns into a value of
-300. Again, the curve may disappear, so press F or click the Home button to reframe
the Editor.
• average: Allows you to average two curves together. When the Operation mode is
set to average, the Dest Curve button appears, allowing you to select a second curve.
Click this button to select the curve that is averaged with the current curve. In the
following example, the random curve was averaged with a cos expression.
• resample: Replaces the curve or expression with a new keyframe sequence. This is useful for two purposes: first, you can “bake” an expression, turning it into a keyframe curve; second, you can adjust the number of keyframes on a curve. To set the resample, enter a frame range. For example, enter 1-50 to create 1 keyframe per frame from frames 1 to 50, or 1-50x10 to create only 5 keyframes, one every 10 frames, and so on.
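The smooth and jitter operations above can be illustrated with a simple moving-average filter, where the window radius stands in for the Amount value. This is a sketch of the relationship the manual gives (Unmodified Curve - Smoothed Curve = Jitter), not Shake's actual filter kernel.

```python
def smooth(values, amount):
    """Moving-average smooth: each sample is averaged with up to `amount`
    neighbors on each side (edges use a smaller window)."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - amount), min(len(values), i + amount + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def jitter(values, amount):
    """Jitter = original minus smoothed; only the high-frequency noise remains."""
    return [v - s for v, s in zip(values, smooth(values, amount))]
```

A perfectly flat curve has zero jitter, and adding the jitter back onto the smoothed curve always reproduces the original.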
Copying and Pasting Keyframes
You can copy and paste keyframes from one curve to another curve.
Note: You cannot simultaneously copy and paste keyframes from multiple curves.
To copy and paste a keyframe:
1 In the Curve Editor, select the keyframe you want to copy and press Command-C or
Control-C.
2 Position the pointer over the curve (the same curve or a separate curve) at the time
you want to paste the keyframe, and press Command-V or Control-V.
The keyframe is pasted at the position of the playhead.
To copy and paste multiple keyframes on a single curve:
1 In the Curve List, select the curve that contains the keyframes you want to copy.
Note: To select a curve, you can also position the pointer over the curve in the Curve
Editor. The curve name is displayed and the curve is highlighted in the Curve Editor.
2 Press Shift-A to select the keyframes on the selected curve.
3 Press Command-C or Control-C.
4 In the Curve List, select the curve into which you want to paste the keyframes.
5 In the Curve Editor, position the playhead at the frame where you want to paste the keyframes (the first keyframe pastes at the position of the playhead).
6 Press Command-V or Control-V to paste the keyframes.
The keyframes are pasted at the position of the playhead.
Modifying Curves
You can modify a curve’s type and its repetition (cycle) mode, as well as apply filter effects (smooth, jitter extraction, and so on) to a curve.
To change a curve type:
1 Select the curve (drag over the curve, or select the curve in the Curve Editor
parameters list).
2 In the parameters list, choose a curve Type and select Hermite, Linear, CSpline, JSpline,
NSpline, or Step curve. The most commonly used curves are Hermite, JSpline, and Linear.
For more information on curve types, see “More About Splines” on page 316.
Note: You can have only one curve type per curve.
You can also modify a value in the Val field of the parameters list.
A curve’s Cycle setting determines the behavior before the first keyframe and after the
last keyframe:
• KeepValue (the default setting): The value of the first and last keyframes is kept before
the first keyframe and after the last keyframe.
• KeepSlope: Continues the tangent.
• RepeatValue: Repeats the curve between the first and last keyframes.
• MirrorValue: Reverses and repeats the curve.
• OffsetValue: Offsets and repeats the curve.
Note: The KeepSlope option cannot be used with curves that have expressions applied
to them.
To learn how to use local variables and expressions to control your curves, see Tutorial
4, “Working With Expressions,” in the Shake 4 Tutorials.
For more information on the cycle types, see “More About Splines” on page 316.
The Right-Mouse Menu
The lower portion of the right-click shortcut menu in the Curve Editor includes
additional options.
• Display Timecode: Toggles between frame count and timecode count.
• Sticky Time: Lets you jump to the time of the keyframe you are modifying (so you
can view the proper frame).
Note: You can also press S to turn Sticky Time on and off.
• Time Snap: Toggles the locking of the keyframes to frame increments.
• Display Selected Info: Shows the numerical information for selected keyframes.
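The Display Timecode toggle maps between the two time representations. For a non-drop frame rate, the conversion is plain integer arithmetic; the sketch below assumes a 0-based frame count and ignores drop-frame timecode.

```python
def frames_to_timecode(frame, fps=24):
    """Convert a 0-based frame count to HH:MM:SS:FF (non-drop-frame)."""
    ff = frame % fps
    ss = frame // fps % 60
    mm = frame // (fps * 60) % 60
    hh = frame // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"
```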
More About Splines
This section presents more technical information about the different spline types
available in Shake.
Natural Splines
NSpline(cycle, value@key1,value@key2,...value@keyN)
NSplineV(time_value, cycle, value@key1,value@key2,...value@keyN)
The second-order continuity of natural splines ensures that acceleration varies smoothly over time, so the motion is visually pleasing. The visual system is very sensitive to first- and second-order discontinuities, but not to higher-order discontinuities. However, to achieve this curvature continuity, the whole curve must be adjusted whenever a keyframe (CV) is moved. In the following example, when keyframe 3 is moved, the segments up to keyframe 6 change. This is not good, even if the influence decreases very quickly as the number of intermediate keyframes increases.
In addition, the keyframes completely define the curve, so there is no tangent control
whatsoever.
Cardinal Splines
CSpline(cycle, value@key1,value@key2,...value@keyN)
CSplineV(time_value, cycle, value@key1,value@key2,...value@keyN)
Cardinal splines trade off curvature continuity for local control. When a keyframe
moves, only four segments are affected (two before and two after the keyframe). In
addition, for any keyframe, tangents are automatically computed to be parallel to the
segment joining the previous keyframe and the next keyframe. They are the
programmer’s best friend because they are so simple to evaluate—only two points are
needed, which simplifies data management (no tangent or other complicated stuff).
Jeffress Splines
JSpline(cycle, value@key1,value@key2,...value@keyN)
JSplineV(time_value, cycle, value@key1,value@key2,...value@keyN)
Jeffress splines are similar to CSplines, except they are guaranteed to never overshoot.
If two keyframes have the same Y value, a flat segment connects them. This is very nice
for animation, since you have a good idea of your limits.
Hermite Splines
Hermite(cycle,(value,tangent1,tangent2)@key1,...
(value,tangent1,tangent2)@keyN)
HermiteV(time_value,cycle,(value,tangent1,tangent2)@key1,..
(value,tangent1,tangent2)@keyN)
Hermite splines also give up on trying to produce curvature continuity, but they add
tangent controls (so the animation is likely to look bad unless you eyeball the
smoothness each time you move stuff around). You also have the ability to break the
tangents (Control-click the handle end in the Curve Editor). It takes some effort to get
right, but you can shape it the way you want.
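A Hermite segment between two keys is determined by the endpoint values and tangents; the in-between values come from the standard cubic Hermite basis functions. This is the textbook formulation, not necessarily Shake's exact parameterization:

```python
def hermite(t, v0, v1, m0, m1):
    """Evaluate a cubic Hermite segment at t in [0, 1].
    v0, v1 are the endpoint values; m0, m1 are the endpoint tangents
    (slopes scaled to the segment length)."""
    h00 = 2*t**3 - 3*t**2 + 1   # weight of the start value
    h10 = t**3 - 2*t**2 + t     # weight of the start tangent
    h01 = -2*t**3 + 3*t**2      # weight of the end value
    h11 = t**3 - t**2           # weight of the end tangent
    return h00*v0 + h10*m0 + h01*v1 + h11*m1
```

With both tangents set to 0, the segment eases in and out of the keys, which is the shape the Flatten Tangents button (or the H key) produces.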
Linear Splines
Linear(cycle,value@key1,value@key2,...value@keyN)
LinearV(time_value, cycle,value@key1,value@key2,...value@keyN)
With Linear splines, there is not much mystery. No smoothness, but you know exactly
what you get.
Step Splines
Step(cycle,value@key1,value@key2,...value@keyN)
StepV(time_value, cycle,value@key1,value@key2,...value@keyN)
Step splines create a stair-stepping spline that maintains its value until the next
keyframe. This is great for toggling functions.
Cycle Types
You can change how the curve cycles its animation before and after the curve ends.
The cycle is represented by a dotted line in the Curve Editor. The value is declared with
the first parameter of a curve, for example, Linear(CycleType, value@frame1,...). Each
cycle type has a numeric code:
• 0 = KeepValue
• 1 = KeepSlope
• 2 = RepeatValue
• 3 = MirrorValue
• 4 = OffsetValue
KeepValue
Keeps the value of the first and last keyframe when a frame is evaluated outside of the
curve’s time range.
KeepSlope
Takes the slope of the curve at the last keyframe and shoots a line into infinity.
RepeatValue
Loops the animation curve. It works best when you set the first and last points at the
same Y value, and maintain a similar slope to ensure a nice animation cycle.
MirrorValue
Also loops the animation, but inverts the animation each time the cycle repeats.
OffsetValue
Also loops the animation, but offsets the repeated curve so that the end keyframes
match up.
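The five cycle types can be modeled as rules for evaluating a curve outside its keyed range. The evaluator below is invented for illustration (it is not Shake's implementation), and KeepSlope is approximated with a finite-difference slope at the end key:

```python
def eval_cycled(f, t0, t1, t, cycle):
    """Evaluate curve f (defined on [t0, t1]) at any time t, applying a
    cycle type: 0 KeepValue, 1 KeepSlope, 2 RepeatValue, 3 MirrorValue,
    4 OffsetValue."""
    period = t1 - t0
    if t0 <= t <= t1:
        return f(t)
    if cycle == 0:  # hold the first/last value
        return f(t0 if t < t0 else t1)
    if cycle == 1:  # extend the end slope as a straight line
        edge, eps = (t0, 1e-3) if t < t0 else (t1, -1e-3)
        slope = (f(edge + eps) - f(edge)) / eps
        return f(edge) + slope * (t - edge)
    n, local = divmod(t - t0, period)  # cycle index and phase within it
    if cycle == 2:  # loop
        return f(t0 + local)
    if cycle == 3:  # ping-pong: reverse every other repetition
        return f(t0 + (period - local if int(n) % 2 else local))
    if cycle == 4:  # loop, offsetting each repetition so the ends match up
        return f(t0 + local) + n * (f(t1) - f(t0))
```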
11 The Flipbook, Monitor Previews,
and Color Calibration
As you work with Shake, the Flipbook lets you preview
your scripts in motion before actually rendering them to
disk. The Monitor Preview control lets you send the
Viewer output to an external monitor, allowing you to
examine your image output on a different screen.
Cached Playback From the Viewer
You can cache frames using the Time Bar playback controls, to preview your work right
in the Viewer.
To begin cached Viewer playback:
m Shift-click either the forward or back playback button in the Time Bar area.
The pointer changes into the cached-playback cursor, and Shake begins to render and
cache all frames within the current frame range.
Once the frames have been cached, cached playback continues in a loop.
Launching the Flipbook
You can also render a temporary Flipbook to preview your work. Once the Flipbook has
rendered into RAM, use the playback buttons (described below) to play back the
sequence. The Flipbook is available on Mac OS X and Linux systems.
Note: On a Mac OS X system, you can create a disk-based QuickTime Flipbook. For
more information, see “Creating a Disk-Based Flipbook” on page 326.
To launch the Flipbook from the Shake interface:
1 In the Globals tab, set the timeRange for the duration you want to preview.
For example: 1-50, 1-50x2.
2 Load the node into the Viewer that represents the image you want to preview.
3 Do one of the following:
• Click the Flipbook button.
• Right-click in the Node View, then choose Render > Render Flipbook from the
shortcut menu. Enter your settings in the Flipbook Render Parameters window and
click Render. The images are rendered into RAM.
A Flipbook window appears, and the specified timeRange is rendered into RAM for
playback.
You can also launch the Flipbook from the command line.
To launch the Flipbook from the command line:
Call up the files by relative or absolute path. In the command line, indicate a time range and a frame placeholder—either # for padded numbers or @ for unpadded numbers. For padding that is not four places, use the @ or # sign multiple times; for example, ##### = 00001, 00002, and so on. For example:
shake final_render.#.iff -t 1-56
shake final_render.#.iff -t 1-56x2
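The placeholder rules above can be expanded programmatically. A sketch (`expand_name` is a hypothetical helper, not a Shake utility): a single # pads to four digits, a single @ is unpadded, and a longer run of either pads to the run length.

```python
import re

def expand_name(pattern, frame):
    """Replace a run of # or @ in `pattern` with the frame number:
    one # pads to 4 digits, one @ is unpadded, longer runs pad to
    the run length (e.g. ##### -> 00001)."""
    def sub(m):
        run = m.group(0)
        width = len(run) if len(run) > 1 else (4 if run == "#" else 1)
        return str(frame).zfill(width)
    return re.sub(r"#+|@+", sub, pattern)
```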
Flipbook Controls
When the Flipbook is finished rendering, there are a number of keyboard commands
you can use to control playback within the Flipbook window.
• Period or >: Plays the sequence forward. The sequence automatically loops when it reaches the last frame.
• Comma or <: Plays the sequence backward. The sequence automatically loops when it reaches the first frame.
• Space bar: Stops rendering and/or playback.
• /: Continues rendering after an interruption.
• Left Arrow key, Right Arrow key: Steps through the sequence one frame at a time.
• Shift-click and drag (in the Flipbook window): Scrubs through the sequence.
• Control->: Plays the sequence forward once, without looping.
• Shift->: Plays the sequence forward, using ping-pong looping when it reaches the last frame.
• + on numeric keypad: Increases the frame rate of playback.
• - on numeric keypad: Decreases the frame rate of playback.
• T: Fixes the frame rate to real-time playback.
• O: Displays information on the image (available on Linux systems only).
As the Flipbook plays, the frame rate is displayed in the title bar. If the Flipbook is playing back in real time, the title bar frame rate is followed by the word “Locked.” If your computer cannot maintain the desired speed, Shake drops frames and indicates the percentage of dropped frames in the title bar.
Viewing, Zooming, and Panning Controls
In the Flipbook, you also have access to the same viewing functions that are available in the Viewer:
• View the R, G, B, alpha, or luminance channel: Press R, G, B, A, or L.
• View the RGB channels: Press C.
• Get the RGBA and x, y values of a pixel: Drag in the Flipbook window; the values appear in the title bar.
• Overlay image information (Linux only): Press O.
• Change color values between 0-1, 0-255, and Hex: Press I.
• Zoom in or out: Press the + or - key.
• Pan the image: Middle-click and drag. Some mouse button behavior may vary, depending on the manufacturer; if the middle mouse button does not pan, try right-clicking.
• Re-center the image: Press Home.
• Close the window: Press Esc.
Memory Requirements
Real-time playback is a function of RAM, processor, image size, clip length, and graphics card. In Shake, images are loaded into memory and then played back. Current systems cannot achieve real-time playback with 2K-resolution images. With sufficient RAM and a good graphics card, files of up to 1K resolution should play back in real time.
Use the following formula to determine the amount of required memory:
width * height * channels * bytes per channel * images = bytes
For example, a single 1024 x 768 RGB 8-bit (1 byte) per channel image is:
1024 * 768 * 3 * 1 = 2359296 bytes
Or, it is approximately 2.4 MB per frame.
To convert from bytes to megabytes (MB), divide by 1024 two times (1024 equals the
number of bytes per kilobyte). Thankfully, all operating systems come with calculators.
For a rough approximation, drop the last 6 digits.
Note: An 8-bit image is 1 byte, a 10- or 16-bit image is 2 bytes, and a float image is 4 bytes.
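The formula translates directly into code. The sketch below reproduces the manual's example; note that dividing by 1024 twice gives binary megabytes (2.25 MB here), while dropping the last six digits approximates decimal megabytes (about 2.4 MB).

```python
def flipbook_bytes(width, height, channels, bytes_per_channel, frames):
    """width * height * channels * bytes per channel * images = bytes"""
    return width * height * channels * bytes_per_channel * frames

# The manual's example: one 1024 x 768 RGB image, 8 bits (1 byte) per channel.
per_frame = flipbook_bytes(1024, 768, 3, 1, 1)   # 2359296 bytes
as_mb = per_frame / 1024 / 1024                  # 2.25 binary megabytes
```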
Creating a Disk-Based Flipbook
Available on Mac OS X systems only, the Render Disk Flipbook command launches a
disk-based Flipbook into QuickTime. This approach has several advantages over normal
Flipbooks. For example, the Disk Flipbook allows you to view very long clips and to
attach audio (loaded with the Audio Panel in the main interface).
Note: Real-time playback performance varies depending on your system hardware.
After viewing the Flipbook, you can write out the sequence as a QuickTime file and
bypass the need to render the sequence again.
Customizing the Flipbook
The following arguments have been added to the Flipbook executable as global
plugs. This lets you specify an external Flipbook to come up as the default. Specify
these plugs using a .h file in the startup directory. The global plugs and their default
values are:
gui.externalFlipbookPath = "shkv"; // the Flipbook's name -- this should include the full path
gui.flipbookStdInArg = "-";        // instructs the Flipbook to take data from StdIn
gui.flipbookExtraArgs = "";        // allows you to enter any extra arguments the Flipbook needs
gui.flipbookZoomArg = "-z";        // sets the zoom of the Flipbook
gui.flipbookTimeArg = "-t";        // the time range argument
gui.flipbookFPSArg = "-fps";       // the frames per second argument
If the specified external Flipbook doesn’t support one of these arguments, setting its
value to an empty string ("") prevents that value from being passed to it.
To render a Disk Flipbook:
Note: It is recommended to select a format for the Flipbook from the format pop-up
menu in the Globals tab before rendering.
1 Choose Render > Render Disk Flipbook.
The Flipbook Render Parameters window appears.
2 In the Flipbook Render Parameters window, set your Disk Flipbook parameters:
• Viewer Scripts, DODs, and Lookups: To apply an active Viewer Script, Viewer DOD, or
Viewer Lookup to the Flipbook render, enable the desired parameter. For example, to
render the Flipbook with the DOD currently set in the Viewer, enable
applyViewerDOD.
• updateFromGlobals and timeRange: By default, updateFromGlobals is enabled.
When enabled, the time range and other Global settings (such as aspect ratio, proxy
setting, motion blur, and so on) for the Flipbook render are constantly updated in the
Flipbook Render Parameters from the Globals tab.
Note: To disable updateFromGlobals, toggle “updated” to “update now.”
• timeRange: To override the updateFromGlobals parameter, enter a frame range in
the time range field and press Return. The updateFromGlobals parameter is disabled.
To automatically set the frame range based on the FileIn, click the Auto button.
• quicktimeCodec: Click the codecOptions button to open the Compression Settings
dialog. Choose a codec from the Compression type pop-up menu. By default, the
Animation codec is selected.
• videoOutput: To enable playback on a broadcast monitor, enable the videoOutput
parameter.
Note: When using a broadcast monitor, ensure that the quicktimeCodec parameter is
the same as the device parameter found in the videoOutput subtree.
The videoOutput subtree contains several options:
• Device: This pop-up menu contains a list of all the devices that can be output to.
This pop-up menu is automatically set to the default, which uses your installed
monitor card.
• Conform: This pop-up menu has three options that let you define how the image
generated by your script is conformed to the frame size of the output device:
• Scale to Fit: Outputs the current DOD to the broadcast monitor.
• Fit Maintaining Aspect Ratio: Fits the DOD to the full screen of the broadcast
monitor while maintaining the aspect ratio.
• Crop To Fit: Crops the image to the DOD, and centers the image in the broadcast
monitor.
• aspectRatio: Modifies the broadcastViewerAspectRatio in the monitorControls
subtree of the Globals tab.
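The Fit Maintaining Aspect Ratio behavior amounts to a uniform scale by the smaller of the two axis ratios. A minimal sketch (illustrative arithmetic only, not Shake code):

```python
def fit_maintaining_aspect(src_w, src_h, dst_w, dst_h):
    """Uniformly scale a source image so it fits inside the output device
    frame without distorting its aspect ratio."""
    scale = min(dst_w / src_w, dst_h / src_h)
    return round(src_w * scale), round(src_h * scale)

# Fit a 1024 x 778 image onto a 720 x 486 output device frame.
print(fit_maintaining_aspect(1024, 778, 720, 486))  # (640, 486)
```

Scale to Fit would instead stretch each axis independently, and Crop To Fit would keep the image unscaled and trim it to the device frame.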
Underneath the videoOutput subtree, there are additional options:
• audio: To render the Flipbook with audio, enable the audio button. In the audio
subtree, you can set the sample rate and audio bits.
• useProxy: You can use proxies in the disk-based Flipbook. For more information on
proxies, see Chapter 4, “Using Proxies,” on page 137.
• quality: Sets the quality for the Flipbook. By default, high (1) is enabled. Click the
“high” button to toggle to “lo” quality.
• motionBlur: Enables and disables motion blur.
• maxThread: If you are working on a multiprocessor system, use the maxThread
parameter to specify how many processors are devoted to the render.
3 Click Render.
The Shake QuickTime Viewer opens and the Pre-Rendering bar displays the render
progress of the Flipbook. When the process is finished, the rendered QuickTime clip
appears.
Note: The Shake QuickTime Viewer is a separate application—when launched, the
viewer application icon appears in the Mac OS X Dock.
To view and save the Disk Flipbook:
1 In the Shake Preview (Shake QuickTime Viewer) window, click the Play button.
Note: You can also press the Space bar to start and stop playback.
2 To loop the playback, choose Movie > Loop.
Note: You can also choose Loop Back And Forth to “ping-pong” the playback.
If you’re using a broadcast monitor, the Movie menu includes the following additional
options:
• Video Output: Enables and disables the Flipbook display in the broadcast monitor.
• Echo Port: Enables and disables the Shake Preview window in the interface. When
disabled, only the playback bar of the Shake Preview window is displayed.
Note: If audio is rendered with the Flipbook and Play Every Frame is enabled, you will
likely lose audio in the playback.
3 To save the QuickTime render, choose File > Save Movie, and specify the name and
location for the saved file.
4 To quit the QuickTime Viewer, do one of the following:
• Press Command-Q.
• Choose Shake QuickTime Viewer > Quit Shake QuickTime Viewer.
• Press Esc.
• Click the Quit button at the top of the Shake Preview window.
Disk-Based Flipbook Temporary Files
You can specify a location (other than the default) for the temporary disk-based
Flipbooks. For example, if you have an Xserve RAID or other setup, you can store the
temporary Disk Flipbooks on the array for real-time playback. The syntax for the default
location for the temporary disk-based Flipbooks (in the nreal.h file) is:
sys.qtMediaPath = "/var/tmp/Shake/cache";
To change the location for the temporary files, create a .h file and place it in the
nreal/include/startup/ directory inside your home directory. For example, create a .h file similar to the
following:
the following:
sys.qtMediaPath = "/Volumes/Scene12/QTtemp";
Note: You must first create the folder to store the files—Shake does not create a folder
based on the information in the .h file.
You do not need to comment out the default path in the nreal.h file. Any .h file in the
startup folder overrides the nreal.h file.
Viewing on an External Monitor
When using the Mac OS X version of Shake, you can preview your work on a second
display. This can either be a second computer monitor, or a broadcast video monitor
connected to a compatible video output card (compatible video output cards support
an extended desktop). For more information on compatible video cards, go to http://
www.apple.com/shake/.
Note: The broadcast viewer option is not available on the Linux version of Shake.
To enable viewing via an external monitor:
1 Click the Globals tab.
By default, the format parameter is set to Custom.
2 Choose the format of your footage from the format pop-up menu. For example, if you
are working with NTSC D1 (4:3) non-drop frame footage, choose NTSC ND (D1 4:3) from
the format pop-up menu.
3 In the Viewer shelf, click the Broadcast Monitor button.
The external video monitor mirrors the image displayed in the Viewer, which means
that you can output the image from any selected node (the node displayed in the
Viewer), as well as the Viewer scripts, VLUTs, and so on. In the Node View, the Viewers
displaying the node are printed under the node (for example, 1A, 2A).
Note: If a broadcast Viewer is spawned prior to setting the correct format, the image
may appear incorrect if the wrong aspect ratio is assigned. Go to the Globals tab and
select the correct ratio from the format pop-up menu.
Customizing External Monitor Output
A group of parameters in the monitorControls subtree of the Globals tab let you adjust
how the image sent to an external monitor will be displayed.
broadcastViewerAspectRatio
By default, this parameter is a link to script.defaultViewerAspectRatio, which
mirrors the setting in the format subtree of the Globals tab. When first launched, Shake
looks at the system’s monitor card and outputs the proper aspect ratio based on the
format you select in the Globals tab. For example, if you have a D1 card and you select
NTSC D1 from the format parameter, Shake displays nonsquare pixels in the Viewer and
sends square pixels to the video monitor.
Note: If you change the value of the broadcastViewerAspectRatio using the slider or
the value field, the link to defaultViewerAspectRatio is removed. As with all Shake
parameters, you can enter another expression in the broadcastViewerAspectRatio
parameter to reset it.
broadcastHighQuality
When the broadcastHighQuality parameter is turned on, the image is fit to the size of
the broadcast monitor in software mode (rather than hardware mode). The
broadcastHighQuality parameter applies a scale node and a resize node, instead of
using OpenGL. The broadcastHighQuality parameter is enabled by default.
broadcastGammaAdjust
Lets you adjust the gamma of your broadcast monitor to ensure proper viewing (for
example, if you are sending an SD NTSC signal to an HD monitor).
broadcastMonitorNumber
By default, Shake looks for the first available monitor with an SD or HD resolution to
use as the external monitor. If you have more than one monitor card installed on your
computer, this parameter lets you choose which monitor to use.
Note: The external display monitor doesn’t have to be a broadcast display. If you have
more than one computer display connected to your computer, the second one can be
used as the external preview display.
Navigating the Broadcast Monitor
You can use the standard Viewer navigation keys, such as pan (hold down the middle
mouse button and drag), zoom (press + or –), and Home in the broadcast Viewer.
To turn off the broadcast Viewer, do one of the following:
• Click the Broadcast Monitor button in the Viewer shelf.
• Position the pointer in the broadcast Viewer, right-click, then choose Delete Viewer
from the shortcut menu.
Monitor Calibration With Truelight
Shake includes Truelight, a color management system that lets you simulate on your
Shake preview monitor, as closely as possible, the image that will eventually be printed
to film or displayed on a high definition monitor or projector. This simulation is based
on calibration data acquired from a variety of film stocks, film recorders, monitors,
digital projectors, and film projectors, and on calibration profiles that you can generate
yourself.
There are three parts to the Truelight tools:
• TLCalibrate, in the Other tab, is a utility node that you can use to accurately calibrate
your monitor’s color characteristics. This node allows you to create a calibration
profile by eyeballing adjustments to a series of ten images using the controls within
this node. Once you’ve made your adjustments, you can save this profile for future
use.
Note: This node also allows you to use calibration profiles generated by a Truelight
Monitor probe.
• The calibration profiles generated using TLCalibrate can then be used by the Truelight
VLUT control in the Viewer shelf.
The Truelight VLUT lets you previsualize the image in the Viewer as it will look after
being output from a film recorder, or displayed by a high definition monitor or
projector. You can use the Load Viewer Lookup command to make adjustments to
the Truelight VLUT parameters in the Parameters tab, choosing which device profile
you want to emulate.
• A second node in the Color tab, Truelight, performs the same function as the
Truelight VLUT, except that it can be added to the node tree. The Truelight node has
parameters that are identical to the Truelight VLUT that let you specify the device
profile, current display profile, and color space for the preview. Additional controls let
you fine-tune the preview.
Important: The Truelight functions are intended for previsualization only. They are not
intended for use as color correction operations.
For more information on using the Truelight plugins, see the Truelight documentation,
located in the Documentation folder on the Shake installation disk.
12 Rendering With the FileOut Node
When you’ve finished your composite, you can set up
one or more sections of your script to be rendered using
FileOut nodes. This chapter covers how to render scripts
from the Shake interface, from the command line, or by
using Apple Qmaster.
Attaching FileOut Nodes Prior to Rendering
After you’ve finished creating the necessary effect in Shake, you export your finished
shot by attaching a FileOut node to the section of the node tree that you want to write
to disk. You can attach an unlimited number of FileOut nodes anywhere along a node
tree, which allows you to simultaneously output different sections of your composite at
a variety of different resolutions, bit depths, or color spaces.
In the following example, three FileOut nodes have been added to different points in
the node tree. The top two FileOut nodes will output each half of the composite
individually, while the bottommost FileOut node will produce the combined results of
the entire composite.
You can also branch multiple FileOut nodes from the same node, to output several
versions of the same result. In the following example, two FileOut nodes simultaneously
write both a 10-bit 2K log Cineon image and an 8-bit video-resolution linear gamma-adjusted frame, in order to obtain a video preview of the composite before sending the
filmout images to a film processing lab.
Note: You cannot export a QuickTime movie with a dynamically varying frame size by
using a FileOut node. The resulting file will be unusable.
The FileOut Node
When you add a FileOut node to your node tree, the File Browser appears. You must
choose a location for the rendered output to be written, and enter a name for the result.
For more information on using the File Browser, see “The File Browser” on page 38.
File Paths
The FileOut node recognizes local, absolute, or URL paths.
Note: Local file paths in a script are local to the location of the script, not from where
Shake is started.
• If the image paths are local (for example, ImagesDirectory/image.#.iff), images are
written relative to where the script is saved.
• If paths are absolute (for example, //MyMachine/MyBigHardDisk/ImagesDirectory/
image.#.iff), images are written to a location that has no relation to where the script is
saved, so the script can easily be moved between directories.
If the script and the image are on different disks, you must specify the proper disk—
local file paths do not work.
• For a URL address, place a // in front of the path. To write to another computer, write
//MyMachine/Drive/Directory/etc.
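The path rules above can be sketched as a small resolver (a hypothetical helper for illustration, not Shake’s own code):

```python
import os.path

def resolve_output_path(script_path, image_path):
    """Resolve a FileOut image path: local paths are taken relative to
    the script's location, not to where Shake was started."""
    if image_path.startswith("//") or os.path.isabs(image_path):
        return image_path  # absolute or machine-qualified path, used as-is
    return os.path.join(os.path.dirname(script_path), image_path)

print(resolve_output_path("/jobs/shot12/comp.shk", "ImagesDirectory/image.#.iff"))
# /jobs/shot12/ImagesDirectory/image.#.iff
```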
Warning: You cannot name a FileOut node “audio.”
File Names
If you write an image without a file extension (for example, image_name instead of
image_name.cin), and you haven’t explicitly set an output format, Shake writes the
image to its native .iff format.
Otherwise, adding a file extension defines the type of file that will be written. For
example, adding .tiff specifies a .tiff sequence, while adding .mov results in the creation
of a QuickTime movie. If you need to change the type of file that’s written later on, you
can select a new file type from the fileFormat pop-up menu in the Parameters tab of
that FileOut node.
If you’re rendering an image sequence, you can also specify frame padding by adding
special characters to the file name you enter:
• The # sign signifies a four-place padded number.
• The @ sign signifies an unpadded number.
You can also use several @ signs to indicate padding to a different number. (For
example, @@@ signifies 001.)
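The padding tokens reduce to a simple substitution; here is a hypothetical helper (not Shake code) that expands them for a given frame:

```python
import re

def expand_padding(pattern, frame):
    """Expand Shake-style padding tokens in a file name:
    '#'  -> four-digit zero-padded frame number
    '@'+ -> frame number padded to the number of '@' signs"""
    def repl(match):
        token = match.group(0)
        width = 4 if token == "#" else len(token)
        return str(frame).zfill(width)
    return re.sub(r"#|@+", repl, pattern)

print(expand_padding("image.#.iff", 1))    # image.0001.iff
print(expand_padding("image.@.iff", 1))    # image.1.iff
print(expand_padding("image.@@@.iff", 1))  # image.001.iff
```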
The following table lists some formatting examples.
Shake Format       Writes
image.#.iff        image.0001.iff, image.0002.iff
image.%04d.iff     image.0001.iff, image.0002.iff
image.@.iff        image.1.iff, image.2.iff
image.%d.iff       image.1.iff, image.2.iff
image.@@@.iff      image.001.iff, image.002.iff
image.%03d.iff     image.001.iff, image.002.iff
Parameters
The FileOut node displays the following controls in the Parameters tab:
imageName
The path and file name of the output image.
fileFormat
If no extension is given, the output format is .iff. To override this behavior, explicitly set
the output format.
QuickTime Parameters
The following parameters become available from the codecOptions button if the
fileFormat pop-up menu is set to QuickTime.
codec
A list of all available compression codecs installed on that computer.
compressionQuality
The quality of the compression algorithm. A value of 1 is maximum size, maximum
quality. 0 is minimum size, minimum quality.
framesPerSecond
The frames per second for the playback of the QuickTime compression.
audio
Turn this parameter on to write audio out to the QuickTime file.
Rendering From the Command Line
To render a Shake script from the command line, each FileOut node is explicitly
accompanied by a -fo (for -fileout). You can add multiple FileOut nodes along your
command string to output different steps of the command.
For the batch system, you can use the -fileout option, or the abbreviation -fo, to write
your image. For example, to copy my_image.cin to a new image file in .iff format, use
the following command:
shake my_image.cin -fo my_image.iff
The interface allows you to view frames anywhere along the node tree using multiple
Viewers. In the script or the command-line mode, however, you may need to explicitly
call intermediate nodes with either -view or -monitor. For example, to show two
Viewers, one image rotated 45 degrees, and the second image rotated and flopped,
use the following script:
shake my_image.rla -rotate 45 -view -flop
If you append a .gz to the end of the file name, Shake further compresses the file.
Shake recognizes the file format and all of its channels when reading or writing one of
these images:
shake uboat.iff -fo uboat.iff.gz
This further compresses uboat.iff, maintains it in .iff format, and retains the Z channel.
For more information on executing scripts, see Appendix B, “The Shake Command-Line
Manual,” on page 1015.
Using the Render Parameters Window
When a render is performed using the Render menu, the Render Parameters window
opens. This window overrides the Global settings for your render. Note that these
settings are not saved into the script; they only control how the Shake interface
renders. To render to disk, you must attach an Image–FileOut node.
To render:
1 Attach Image–FileOut nodes to the nodes you want to render.
Note: To render only specific FileOut nodes, select the FileOut nodes in the Node View.
2 Choose Render > Render FileOut Nodes.
The Render Parameters window opens.
3 In the Render Parameters window, ensure that the timeRange (for example, 1-100) and
other parameters are correct, then click Render.
Parameters in the Render Parameters Window
The Render Parameters window has the following parameters:
renderFileOuts
Indicates whether all FileOut nodes, or just the active nodes, are rendered.
updateFromGlobals
Indicates if your settings match the Globals tab settings (updated), or if you have
modified the settings (update now), in which case the button allows you to update the
settings from the Globals tab.
timeRange
Set a new time range using Shake’s standard frame syntax; for example, 1-100 renders 1
to 100, 10-20x2 renders frames 10, 12, 14, up to 20, and so on.
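The frame syntax can be sketched as a small parser (a hypothetical illustration of the syntax; Shake parses this internally):

```python
def parse_time_range(spec):
    """Parse Shake-style frame ranges such as '1-100' or '10-20x2',
    where 'xN' gives a step of N frames."""
    frames = []
    for part in spec.split(","):
        part = part.strip()
        step = 1
        if "x" in part:
            part, step_str = part.split("x")
            step = int(step_str)
        if "-" in part:
            start, end = (int(n) for n in part.split("-"))
            frames.extend(range(start, end + 1, step))
        else:
            frames.append(int(part))
    return frames

print(parse_time_range("10-20x2"))  # [10, 12, 14, 16, 18, 20]
```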
useProxy
Sets your proxy settings.
quality
When this is set to lo (0), anti-aliasing is disabled. This results in poorer image quality,
but improved render speed.
motionBlur
Motion Blur quality level. 0 is no blur, whereas 1 represents standard filtering. For more
speed, use less than 1. This value multiplies the motionBlur parameter of every node
that uses motion blur in your script.
shutterTiming
A subparameter of motionBlur. Shutter length 0 is no blur, whereas 1 represents a
whole frame of blur. Note that standard camera blur is 180 degrees, or a value of .5. This
value multiplies the shutterTiming parameter of every node that uses motion blur in
your script.
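The relationship between shutter angle and shutterTiming is a simple proportion, since a full 360-degree shutter corresponds to a whole frame of blur (1.0):

```python
def shutter_timing(shutter_angle_degrees):
    """Convert a camera shutter angle to an equivalent shutterTiming
    value, where 360 degrees equals a whole frame of blur (1.0)."""
    return shutter_angle_degrees / 360.0

print(shutter_timing(180))  # 0.5, the standard camera shutter
```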
shutterOffset
A subparameter of motionBlur. This is the offset from the current frame at which the
blur is calculated. Default is 0; previous frames are less than 0. This value multiplies the
shutterOffset parameter of every node that uses motion blur in your script.
maxThreads
Specifies how many processors are devoted to the render on a multiprocessor machine.
sequential
If you have multiple FileOut nodes in your script, it may be more efficient to render the
nodes sequentially. Turning sequential on causes each FileOut node to process every
frame in the tree above it before allowing the next FileOut node to be rendered. When
sequential is turned off, all FileOut nodes are rendered simultaneously. Sequential
rendering is more efficient in cases where FileOut nodes share upstream nodes and
trees. However, if there are too many processes running at the same time they will
compete for CPU and memory resources, which may cause the overall processing time
to increase, in which case turning sequential off may be more efficient.
The Render Menu
There are five options in the Render menu.
• Render Flipbook: Renders a Flipbook of the contents of the current Viewer. It first
launches the Flipbook Parameters window, which allows you to override the Global
parameters. To cancel the render, press Esc in the Flipbook window. See “Previewing
Your Script Using the Flipbook” on page 90 for instructions on how to use the Flipbook.
• Render Disk Flipbook: Available on Mac OS X systems only, this command renders a
disk-based Flipbook to QuickTime. Disk Flipbooks have several advantages over normal
Flipbooks. For example, the Disk Flipbook allows you to view very long clips and to
attach audio (loaded with the Audio tab in the main interface).
• Render FileOut Nodes: Renders FileOut nodes in the Node View. Press F in the Node
View to frame all active nodes. You have the option to render only the active FileOut
nodes or all FileOut nodes.
• Render Cache Nodes: Immediately caches sections of the node tree where Cache
nodes have been inserted. This command lets you cache all Cache nodes in the Node
View over a specific duration. For more information on using Cache nodes, see
Chapter 13, “Image Caching,” on page 343.
• Render Proxies: Renders the proxy files for your FileIn nodes, leaving your FileOut
nodes untouched. For more information on proxies, see Chapter 4, “Using Proxies,” on
page 137.
Support for Apple Qmaster
Apple Qmaster is a system of utilities that provides automated work distribution and
processing for projects created with Shake and other software packages on Mac OS X.
Shake provides an optional interface, available as a subtree at the bottom of the
Render Parameters window, that lets you submit jobs to the Apple Qmaster system.
For more information on setting up and using Apple Qmaster, see the Apple Qmaster
User Manual.
You can enable support for rendering using Apple Qmaster by adding the following
global plug to a .h file in the startup directory:
sys.useRenderQueue = "Qmaster";
This setting causes additional options to appear in the Render Parameters window
when you choose Render > Render FileOut Nodes. These options become visible when
you open the renderQueue disclosure control.
Note: If Apple Qmaster isn’t installed but the sys.useRenderQueue plug is declared, a
message is sent to the console upon startup, and the following options do not appear.
renderQueue Options
The renderQueue parameter group contains the following options:
queueName
The name of the render queue software being used. If Apple Qmaster is installed,
“Qmaster” appears here.
useQueue
When useQueue is turned on, the FileOut nodes specified by the renderFileOuts
parameter are sent to the render queue when you click Render. By default, useQueue is
turned off. Setting renderFileOuts to All sends all FileOut nodes to the render queue
software. Setting renderFileOuts to Selected only sends the selected FileOut nodes to
the render queue software.
jobTitle
Enter the name you want to use to keep track of this job here.
workingDir
The directory in which you want to store the temp script used by the render queue.
The temp script is a temporary duplicate of your script that the computers in the
specified cluster can access to perform the job.
Important: When you submit Shake jobs to a cluster, the working directory should
reside on a shared volume that’s accessible to all the computers in the cluster. This
ensures that the working directory is accessible to the rest of the nodes in the cluster.
cluster
A pop-up menu that allows you to choose which cluster you want to use to perform
the job. All clusters set up in your render queue software will appear here.
refreshClusterList
Shake checks for available clusters during startup. However, the available clusters may
change depending on which computers are available on the network at any given
time. Click this button to refresh the cluster pop-up menu with an up-to-date list of
available clusters.
minFrames
Use this field to specify the minimum number of frames you want to be processed by
each computer in the cluster.
timeout
The time, in seconds, a computer on a cluster can be idle before that part of the job is
re-routed to another computer.
priority
A pop-up menu that allows you to choose the priority of the job. The options are High,
Medium, and Low.
delay
A pop-up menu that allows you to delay when the render queue software starts the
job you’re submitting. The options are 15 minutes, 30 minutes, 1 hour, or 2 hours.
batchMonitor button
Click batchMonitor to launch the Apple Qmaster Batch Monitor application.
13 Image Caching
Shake has a powerful image caching system that keeps
track of previously rendered frames in order to speed
your workflow as you work within the interface. This
system can be customized to optimize how you work.
About Caching in Shake
Shake’s cache is a directory of pre-rendered images, with script information about each
frame. When Shake displays the image data for a node tree at a given frame, it first
checks the cache to see if that frame has been rendered before. If it has, the cached
image is recalled to save time, as opposed to reprocessing the entire tree. Shake keeps
track of how many times each cached frame has been viewed, eliminating the least
viewed frames first when the cache runs out of room.
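The least-viewed-first policy can be modeled with a toy cache. This is a simplified sketch under assumed behavior, not Shake’s implementation, which also weighs image size, memory, and disk limits:

```python
class FrameCache:
    """Toy model of a least-viewed-first frame cache."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = {}  # frame number -> image data
        self.views = {}   # frame number -> view count

    def get(self, frame, render):
        """Return a cached frame, rendering and caching it on a miss."""
        if frame in self.frames:
            self.views[frame] += 1
            return self.frames[frame]
        if len(self.frames) >= self.capacity:
            # Evict the least-viewed frame to make room.
            victim = min(self.views, key=self.views.get)
            del self.frames[victim], self.views[victim]
        self.frames[frame] = render(frame)
        self.views[frame] = 1
        return self.frames[frame]
```

With a two-slot cache, viewing frame 1 twice and frame 2 once means frame 2 is evicted when frame 3 arrives.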
There are two ways you can control Shake’s image caching—using the cacheMode
parameter in the renderControls subtree of the Globals tab, or explicitly within the
node tree using the Cache node.
Important: If you run two instances of Shake, only one instance is capable of reading
from or writing to the cache. When you launch the second instance of Shake, you are
given the option to either move the current cache to a temporary location, or disable
caching.
Cache Parameters in the Globals Tab
The following parameters are found in the renderControls subtree of the Globals tab of
the interface.
cacheMode
This parameter controls the caching behavior of all of the nodes in your script. Every
node in your script is capable of being cached, in one way or another.
You can set the cacheMode to one of four states:
• none: Cache data is neither read nor written.
• read-only: Preexisting cache data is read, but no new cache data is generated.
• regular: The cache is both read from and written to, but normal caching does not
occur when the global time is changed (as when moving the playhead).
• aggressive: The cache is both read from and written to, and normal caching occurs
whenever the global time is changed (as when moving the playhead).
When setting the cacheMode, consider the following guidelines:
• In most circumstances, the regular cacheMode setting should be used.
• Consider setting the cacheMode to aggressive when you are constantly scrubbing
back and forth between two or three frames (for example, when tweaking tracking
or shape control points).
• You should only set cacheMode to none if you are using Shake on a system with
extremely limited RAM and disk space. By setting the cacheMode to none, Shake is
forced to re-compute each image that you select to view, which is the least efficient
way to run.
Using the Cache Node
The Cache node lets you tell Shake to cache image data at specific points in the node
tree. This gives you explicit control over which parts of the node tree require rendering
while you work. For example, if there is a processor-intensive branch of your node tree
that you’re no longer working on, you can insert a Cache node in between the last
node of that branch and the section of the node tree in which you’re now working.
Afterwards, the currently displayed frame is immediately cached. If you want to cache a
range of frames in order to pre-render that branch of the node tree, you can use the
Render Cache Nodes command. All cached image data is stored within the same cache,
in memory or on disk.
Note: Cache nodes cache image data at the currently set proxy resolution.
From that point on, Shake references the cached image data within that node, instead
of constantly reprocessing the nodes that precede it, unless a change is made to one of
the preceding nodes.
Important: Using a Cache node crops the image to the current image size, eliminating
data from the Infinite Workspace from that point in the node tree on.
When the Cache Becomes Full
Cache nodes that can’t be cached appear red in the Node View.
There are two possible situations when a Cache node won’t be able to actually cache:
The input image size is larger than the maximum allowable cache file size.
You can easily tell if this is the case by opening the indicated Cache node into the
Parameters tab, then checking to see if the value of the imageSize (an input image’s Bit-depth * its Width * its Height) is larger than the value of the imageSizeLimit. If this is
the case, you need to either increase the value assigned to the
diskCache.cacheMaxFileSize global plug, or change the size of the incoming image.
(Figure: the section of the node tree above a selected Cache node is cached by that node; a Cache node that is able to cache is shown beside one that is unable to cache.)
The total cache memory limit has been exceeded.
The second possibility is that the amount of memory needed by all the Cache nodes in
your script exceeds the memory assigned to the cache by the diskCache.cacheMemory
global plug. In this case, no additional Cache nodes may be cached without increasing
the diskCache.cacheMemory global plug.
For more information on the global plugs referenced above, see “Cache Parameters in
the Globals Tab” on page 343.
Caching and Updating Frames
The Cache node updates whenever the playhead moves, caching additional frames if
necessary because of changes that have been made in the preceding nodes. If
necessary, you can also render one or more Cache nodes and cache a range of frames
in advance using the Render Cache Nodes command.
If you later make changes to one or more nodes in a section of the node tree that’s
been cached, the affected cached frames are discarded, and can be re-cached.
To use the Cache node:
1 Insert a Cache node after the last node of a section of the node tree that you want to
cache.
2 Load the Cache node’s parameters into the Parameters tab.
3 Select an option from the forceCache parameter. The disk+memory option is the
default forceCache setting, and is almost always the preferred setting to use.
4 If you want to immediately cache that section of the node tree for a specified duration,
choose Render > Render Cache Nodes.
5 The Cache Render Parameters window appears, which automatically updates the
timeRange to the Global timeRange.
6 Click Render to render the Cache node. A Flipbook appears, allowing you to view the
progress of the render, and play through the cached image sequence.
Parameters in the Cache Render Parameters Window
The Cache Render Parameters window has the following parameters:
renderCacheNodes
If you have multiple Cache nodes in the node tree, you can select one or more of these
and render them simultaneously by setting renderCacheNodes to Selected in the
Cache Render Parameters window. Or you can render all Cache nodes in the node tree
by setting renderCacheNodes to All.
timeRange
If necessary, you can change the timeRange to cache a different frame duration.
useProxy
Images are cached using your script’s current proxy setting. You can manually override
the proxy setting in the Cache Render Parameters window, but those cache files won’t
actually be used by Shake until you change the script’s proxy setting to match. This
gives you the option to render multiple sets of cache images to match each proxy
setting you plan on using.
sequential
Turning sequential on causes each Cache node to process the tree above it for each
frame before allowing the next Cache node to process. When sequential is turned off, all
Cache nodes are rendered simultaneously. This is more efficient in cases where Cache
nodes share upstream nodes/trees. However, if there are too many processes running
at the same time they will compete for CPU and memory resources, which may cause
the overall processing time to increase.
Parameters in the Cache Node
The Cache node has the following parameters:
cacheStatus
This is a display-only parameter that shows whether the input image has been cached
or not.
• not cached: Nothing has been written to the cache.
• in disk cache: Input image data has been moved to the disk cache. This is a result of
the memory cache becoming full, or cache images having been saved after exiting a
previous Shake session.
• in memory cache: The input image data has been written to the memory cache.
• in transient memory cache: The input image data has been written to the transient
memory cache.
forceCache
This parameter lets you set how cached image data is stored when you update the
cache with the Render Cache Node command. The selected forceCache behavior
bypasses the Global cacheMode caching behavior. There are two options:
• disk+memory: The input image is written to the memory cache whenever the Cache
node is updated, and then transferred to the disk cache when the memory cache is
full. All frames in the memory cache are moved to disk when Shake quits. In most
cases this is the preferred behavior.
• memory only: Moves the input image into the memory cache every time the Cache
node is updated, but never writes cache data to disk.
Internal Cache Parameter Display
The Cache node also displays the following parameters:
imageSize
The size (in megabytes) of the input image. The imageSize is determined by the
following formula:
Bit-depth * Image Width * Image Height
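As a worked example (assuming here that bit depth means bytes per pixel across all channels, so 4 bytes for an 8-bit RGBA image; per-frame overhead may vary):

```latex
4 \times 720 \times 486 = 1\,399\,680 \text{ bytes} \approx 1.3 \text{ MB}
```

At roughly 1.3 MB per frame, a 128 MB memory cache holds on the order of 90 such frames, consistent with the figure of approximately 86 images quoted for diskCache.cacheMemory later in this chapter.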
imageSizeLimit
The maximum allowable size of the input image, in megabytes. This is set with the
diskCache.cacheMaxFileSize global plug. The default value is 32 MB.
Note: The imageSizeLimit display lets you easily spot situations where a Cache node is
not rendering because the imageSize is greater than the imageSizeLimit.
totalCacheMemory
The total RAM (in megabytes) available for caching to memory. This is set with the
diskCache.cacheMemory global plug. The default value is 128 MB.
For more information on the global plugs referenced above, see “Customizing Image
Caching Behavior” on page 352.
Commands to Clear the Cache
Ordinarily, cached frames in memory are written to disk and cleared as appropriate
whenever you quit Shake. If necessary, you can also choose to clear parts of the cache
manually while Shake is running.
Important: It’s advisable to quit Shake normally whenever possible. If Shake quits
unexpectedly, or you force-quit Shake yourself, the disk cache is invalidated. As a result,
the remaining data on disk is unusable by Shake. Upon startup, you are prompted to
either move the current cache to a temporary location, or disable caching.
Two commands in the File menu in Mac OS X are available to clear the cache:
Flush Cache
When you choose Flush Cache, all appropriate images are copied from the memory
cache to the disk cache (depending on how the cacheMode parameter is set), but the
memory cache is not cleared. This command is similar to what Shake does when you
quit (the delay that occurs when you quit is Shake flushing the memory cache to disk).
Purge Memory Cache
Similar to the Flush Cache command, but the memory cache is cleared afterwards. This
is useful if most of your RAM is filled with cache data, and you want to free it up to
create and play a flipbook without needing to exit Shake in order to clear the memory
cache.
Memory and the Cache in Detail
Shake incorporates two separate caches: an image cache and a processing cache.
The image cache is used to store output images that nodes produce. By storing the
entire output image, the image cache can effectively “bake” portions of the processing
tree, thereby saving re-computation time. Whether or not a node’s image data is sent
to the image cache primarily depends on the node’s position in the node tree. When
editing, the nodes directly above the node that is currently being viewed have the
highest priority, which increases user interactivity. During playback, the node currently
being viewed has the highest priority. The global plugs used to control the image
cache are as follows:
• diskCache.cacheSize
• diskCache.cacheMemory
• diskCache.cacheMemoryLimit (Shake v3.5 and later)
• diskCache.cacheMaxFile
• diskCache.cacheMaxFileSize
The processing cache is mainly used to store image tiles (tiles are portions of the
complete image) generated by nodes that need surrounding pixels to perform their
image processing operations during a render. Example nodes include the Blur,
Transform, Warp, PinCushion, and Twirl nodes. The processing cache also provides
secondary functionality for caching rendering buffers (in particular for the QuickPaint
node that utilizes a full-frame rendering buffer), color look-up tables, and
transformation matrices. The global plugs used to control the processing cache are as
follows:
• cache.cacheMemory
• cache.cacheMemoryLimit (Shake version 3.5 and later)
Limits to Shake RAM Usage
Shake is currently compiled as a 32-bit application, which can theoretically address up
to 4 GB of virtual RAM (2^32). However, because of the RAM needs and constraints
imposed by the operating system, and competition for RAM from other running
applications, most 32-bit applications have a practical limit of approximately 2 GB of
addressable RAM per process.
Because of this, even if you install 4 GB or more of RAM on your workstation, each
Shake process can only take advantage of about 2 GB of that RAM. The good news is
that, if you launch a Flipbook while running Shake on a system with 4 GB of RAM, the
Flipbook (as a separate process) is able to take advantage of the additional 2 GB of
RAM and is less likely to swap to disk, which is slower.
Mac OS X v10.3 and above (a 64-bit operating system) running on a PowerPC G5
computer configured with 8 GB of RAM partly addresses this issue. A 32-bit application
running natively on a 64-bit OS is still limited to approximately 2 GB of addressable
RAM. However, a Macintosh G5 computer configured with 8 GB of RAM running
Panther can keep a larger number of applications in physical RAM without swapping
out any one application’s memory to disk. As a result, Shake is able to allocate larger
contiguous segments of physical RAM, allowing large Shake scripts to be edited and
rendered in less time.
The Image Cache
The main purpose of the image cache is to improve interactivity while you’re working
on a Shake script in the interface. Shake accomplishes this by attempting to output
image data from nodes in the compositing tree at, and near, the portion of the
compositing tree being edited or viewed. All nodes are capable of having their image
data cached.
Similar to the processing cache, the image cache has both a fast RAM-based
component and a slower disk-based component. However, the disk-based component
of the image cache is only active during interface sessions (unlike the processing cache,
which is active in all Shake run modes). In addition, the disk-based component of the
image cache is limited in size and, when the disk cache fills up, Shake discards image
data using an algorithm similar to that used by the processing cache.
Shake assigns cached image data one of three priorities. This determines which part of
the RAM cache images are written to, and how long they’re preserved. These priorities are:
• Low (transient): The low priority (transient) cache contains images which have only
been accessed once. When the cache mode is set to regular, updating a parameter
within a node moves the node directly upstream from it into the transient cache on
the first update.
When you use the Viewer playback controls to play through the frames in your script,
Shake caches every frame that’s been played into the high priority cache.
• Medium (RAM only): Shake keeps images in the medium priority cache as long as
possible—they’re only discarded when the RAM cache is completely full. Medium
priority is assigned to images that have been accessed more than once without
modification.
• High (disk cache): Shake also keeps images designated as high priority in the RAM
cache as long as possible, transferring them to the disk cache when the RAM cache is
full. All entries marked as high priority in the RAM cache are moved to disk when
Shake quits.
Cached images of medium priority are promoted to high priority when they have
been accessed from the image cache four or more times (counting the progression
from low to medium) without modification.
The Processing Cache
The processing cache has a fast RAM-based component and a slower disk-based
component. If the memory limit of the RAM-based component is exceeded, Shake
caches image tiles to disk using an algorithm that is based partially on when the image
was last used. There is no memory limit imposed on the on-disk component of the
processing cache.
Preservation of the Disk Cache
The Shake disk cache is preserved on disk after you quit Shake. When you open a
script with image data in the disk cache the next day, the cached image data from
the previous day is recalled and used.
The size of the RAM-based component of the processing cache is set in the nreal.h file
using the cache.cacheMemory global plug. The default size is 128 MB and Shake
internally sets a 256 MB upper limit on the size of this cache. This internal upper limit
can be modified using the cache.cacheMemoryLimit plug. This is only recommended
when working on systems with over 2 GB of RAM.
The following general guidelines apply when setting the cache.cacheMemory plug:
• For scripts with image resolutions of 2K or less, keeping the cache.cacheMemory at
128 MB should provide good performance.
• For scripts with larger image resolutions (4K or greater) or scripts that include a
large number of nodes that perform warps and distortions, consider increasing the
size of cache.cacheMemory to 256 MB. However, you must first consider the amount
of physical RAM installed in the workstation. If a workstation has 1 GB (or less) of
RAM, it is not advisable to set cache.cacheMemory above 128 MB.
• On computers with 2 GB or more of RAM installed, you can raise cache.cacheMemory
even higher, but you need to make sure that you also raise the
cache.cacheMemoryLimit value.
• When running Shake on computers with limited RAM (for example, 512 MB) or when
running both a Shake interface session and a background render on workstations
with 1 GB (or less) of RAM, you may want to reduce cache.cacheMemory to 64 MB.
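Following these guidelines, a startup .h file for a workstation with more than 2 GB of RAM might contain declarations such as the following. This is a sketch only; the values are illustrative, and the sizes are assumed to be in megabytes, as in the descriptions above (check the nreal.h file for the exact form your version uses):

```
// Raise the processing cache above its internal 256 MB ceiling.
// cacheMemoryLimit must be raised along with cacheMemory.
cache.cacheMemoryLimit = 512;
cache.cacheMemory = 384;
```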
Both the RAM-based and the disk-based components of the processing cache are
active in all of Shake’s run modes—including interface sessions, background renders,
and renders that are started from the command line.
Customizing Image Caching Behavior
This section provides details on how to customize Shake’s caching behavior. The
following parameters for caching in Shake must be manually declared via a .h file in the
startup directory.
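For example, a startup .h file (the file name below is arbitrary) might declare the image cache plugs described in this section. This is a sketch only; the values are illustrative, and the sizes are assumed to be in megabytes, as in the descriptions that follow (check the nreal.h file for the exact form your version uses):

```
// cache_settings.h -- placed in a startup directory
diskCache.cacheMemory = 256;       // RAM-based image cache (default 128 MB)
diskCache.cacheMemoryLimit = 768;  // raises the internal 512 MB ceiling
diskCache.cacheSize = 1024;        // on-disk image cache (1 GB)
```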
diskCache.cacheMemory
This global plug controls the size of the RAM-based component of the image cache.
Larger values enable Shake to cache more of the node tree currently being edited/
viewed. This enhances interactivity, especially when repeatedly viewing nodes both
near the top and near the bottom of the node tree. Larger values also cache a greater
number of images during playback—greatly increasing playback speed.
The default value for diskCache.cacheMemory is 128 MB, which enables Shake to cache
approximately 86 images (8bit @ 720 x 486) into the RAM-based portion of the image
cache.
The following guidelines apply when setting the diskCache.cacheMemory size:
• When editing large node trees in the interface, working at higher bit depths (that is,
float), or repeatedly playing back an image sequence, you should consider increasing
the diskCache.cacheMemory size to 256 MB, depending on the amount of physical
RAM installed in the workstation.
• When running Shake on workstations with limited RAM (512 MB, for example) or
when running both the Shake interface session and a background render on
workstations with 1 GB (or less) of RAM, you should reduce diskCache.cacheMemory
to 32 MB.
diskCache.cacheMemoryLimit
Internally, Shake sets an upper limit of 512 MB on the size of the RAM-based
component of the image cache. This global plug allows you to override that limit.
However, increasing this value is only recommended when working with scripts that
have large image resolutions (greater than 2K) and higher bit depths (float) on
computers with more than 2 GB of RAM. Increase at your own risk.
diskCache.cacheSize
This global plug controls the size of the on-disk component of the image cache. Larger
values enable Shake to keep more “high priority” images around that have been
pushed out of the RAM-based component of the image cache. Remember that this
component of the image cache is inactive during background or command line
renders.
• Now that workstations routinely have disk drives with hundreds of gigabytes of
capacity, it’s safe to increase the diskCache.cacheSize to 1 GB or more. This improves
interactivity in large scripts, as well as scripts with high bit depth images.
• Time Bar playback also results in images being cached to disk. If you scrub or play
through the Time Bar frequently, or play long sequences, increasing the
diskCache.cacheSize to 1 GB or more allows multiple sequences to reside on disk.
• The only reason to reduce diskCache.cacheSize is if a computer has very limited disk
space, or the very unlikely scenario that the workstation is using a remote disk
mounted over a network as its cache drive—under these circumstances, the latency
in retrieving cached images over the network may offset the computational
advantages.
diskCache.cacheMaxFile
This global plug sets the maximum number of files that are stored in the disk-based
component of the image cache. Larger values allow Shake to store more images, since
each cached image is stored as a separate file. However, some file systems have limits
on both the maximum number of open files allowed and the maximum size of those
files, so you can use this parameter to reduce the number of files being used in the
image cache if a particular system’s file limit is being exceeded.
diskCache.cacheMaxFileSize
This global plug sets the maximum file size (in bytes) that can be stored in the
disk-based component of the image cache. Greater values allow Shake to store larger
images, since each cached image is stored in a separate file. However, some file
systems have limits on both the maximum number of open files allowed and the
maximum size of those files, so you can use this parameter to reduce the size of the
files being used in the image cache if a system’s file limit is being exceeded.
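As a sketch, both limits could be declared together in a startup .h file; the values here are illustrative, not recommendations:

```
// Cap the number of cache files, and allow each file to reach
// 64 MB (this plug is specified in bytes):
diskCache.cacheMaxFile = 1000;
diskCache.cacheMaxFileSize = 67108864;  // 64 * 1024 * 1024
```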
diskCache.cacheLocation
The directory to which disk cache files are written. By default, the cache is written to:
/var/tmp/Shake/cache
Note: Shake automatically creates cache directories if they do not already exist.
To free up disk space, you can remove this directory, but all caching information will be
lost. This is not vital to a script; it simply forces Shake to completely rerender the
compositing tree.

14 Customizing Shake
Shake’s graphical interface can be highly customized. This
chapter covers how to create preference files, and
explains the different variables and settings that can be
modified by the user.
Setting Preferences and Customizing Shake
This chapter explains how to customize the appearance of Shake, macro interactivity,
and performance parameters. It also lists environment variables you can set to improve
Shake’s performance.
There are several other sections in the Shake documentation that cover similar
information:
• For information on creating Viewer scripts, see “Viewer Lookups, Viewer Scripts, and
the Viewer DOD” on page 61.
• For information on creating custom kernels for filters, see “Convolve” on page 865.
• For more information about creating macros, see Chapter 30, “Installing and Creating
Macros,” on page 905. For a tutorial on creating a macro, see Tutorial 8, “Working
With Macros,” in the Shake 4 Tutorials.
• For information on scripting, see Chapter 31, “Expressions and Scripting,” on
page 935.
Creating and Saving .h Preference Files
Unlike many applications that control user customizable settings with a preferences
window, Shake provides access to a wide variety of functionality using a system of
user-created preference files. This section discusses where to find the uneditable files
that contain Shake’s default functions and settings, and how to create and store your
own separate preference files to override these settings and customize Shake’s
functionality.
Finding Shake’s Default Settings
Shake uses two important files to set the original default settings. These files are named
nreal.h and nrui.h, in the following directory:
On Mac OS X
/Contents/Resources
On Linux
/include
The first file, nreal.h, lists every function and default setting in Shake. Although this file
should never be modified by the user, you can open it, and copy functions to use as a
formatting reference for creating your own custom .h preference files. The commands
found in the nreal.h file can be copied and saved in .h files within your own startup
directory.
The second file, nrui.h, contains the data Shake uses to build the interface. It assigns
menu names and contents, tabs, buttons, slider settings, and all of the default settings
used by the interface in its default state. The commands in the nrui.h file should be
copied and saved in .h files within your ui directory.
Note: To open these files for the Mac OS X version of Shake, Control-click or right-click
the Shake icon, then choose Show Package Contents from the shortcut menu to view
the Shake package contents (which include the nreal.h and nrui.h files).
Creating Your Own Preference Files
You can create your own preference files (.h files) to change Shake’s default settings,
add functionality, or change Shake’s performance.
To add your own .h files to customize Shake:
1 Using your text editor of choice, create a new file, then enter the variables and settings
you wish it to modify.
These variables are covered in depth later in this chapter, and are referenced
throughout the documentation.
2 Save the file as a plain text file.
Preference files can be given any name (except “nreal.h” or “nrui.h,” which are reserved
names used by Shake for its standard list of functions and settings), and they must
have the .h extension to be recognized by Shake.
Note: A fast way to disable a preference file is to remove its .h extension.
3 Place the new .h file within one of the directories discussed in the next section.
Warning: You should never modify the nreal.h or nrui.h files themselves. Doing so risks
damaging your Shake installation, requiring you to reinstall Shake.
Possible Preference File Locations
Shake .h preference files can be saved in one of several locations (.h files in each of
these locations are read by Shake in the order listed):
• In the Shake directory, Contents/Resources/ and Contents/Plugins/startup
(/include/startup/ui for Linux installs). These directories are scanned every time Shake
is launched by any user running Shake from this directory.
• In any directory listed in the $NR_INCLUDE_PATH list. Set the NR_INCLUDE_PATH
environment variable to point to a list of directories. This is usually done when
sharing a large project among many users.
Note: For information on setting environment variables in Mac OS X, see
“Environment Variables for Shake” on page 393.
Add the following line to a .cshrc or .tcshrc file in your $HOME directory:
setenv NR_INCLUDE_PATH "//MyMachine/proj/include:/Documents/shake_settings/include"
Use the above for facility-wide or machine macros and settings that are read by all
users. Because you can add multiple directories in the path list, you can have several
locations of files.
• In the User directory. This is for settings for your own personal use. Shake
automatically scans the $HOME/nreal/include subtree. A typical way to manage .h
files is to create a directory named ui in the following location:
$HOME/nreal/include/startup/ui
The User directory can have the following subdirectories:
• include/startup/ui: For macros, machine settings, and interface settings.
• Settings: For window layout settings.
• Icons: For personal icons for the interface (also for the icons of macros).
• Fonts: For personal fonts.
• Autosave: For scripts saved automatically every 60 seconds (by default) by Shake.
Installing Custom Settings and Macros
Custom files that change default settings or add macros (see below) all have a .h file
extension, and are located in:
$HOME/nreal/include/startup
For example:
/Users/my_account/nreal/include/startup/memory_settings.h
This is referred to as the startup directory. Files in this location are referred to as startup
.h files.
Installing Custom Interface Settings
Settings that change the interface in some way (including macro interface files) are
usually located in:
/include/startup/ui
These also have a .h file extension, for example:
/Users/my_account/nreal/include/startup/ui/slider_settings.h
This is referred to as the ui directory or sometimes startup/ui directory. Files inside it are
referred to as ui .h files.
Files that change additional default settings or add extra controls are in the templates
directory, which is always within a ui directory:
/Users/me/nreal/include/startup/ui/templates/defaultfilter.h
Installing Custom Icons
Just as you can create preference files, you can create your own icons. The description
of the actual icons can be found in “Using the Alternative Icons” on page 375. Icons can
be found in one of three locations:
• /Contents/Resources/icons (/icons on non-Mac OS X systems): a directory (not to be
confused with the important icons.pak file)
• $HOME/nreal/icons
• In any directory pointed to by $NR_ICON_PATH, set the same way
$NR_INCLUDE_PATH is set
Preference File Load Order
Within a startup or startup/ui directory, files are loaded in no specific order. If it is
important that a file is loaded before another file, this can be accomplished in a variety
of ways.
To explicitly control preference file load order, do one of the following:
m Add an include statement at the beginning of the file. For example, if macros.h relies on
common.h being loaded first, start macros.h with:
#include <common.h>
m
Put all the files you want to load in a directory (for example, include/myprefs) and create
a .h file in startup that contains only include statements (the file names here are
illustrative):
#include <myprefs/common.h>
#include <myprefs/macros.h>
#include <myprefs/settings.h>
Include files are never loaded twice, so it is okay if two .h files contain the same
#include statement.
Troubleshooting Preference Files
If your custom preference files do not appear to be working, check the following:
• Does the file have a .h extension?
• Is the file in a startup directory in one of the three possible locations (as described
above)?
• If you are using tcsh, and the file is in a directory you believe is in
NR_INCLUDE_PATH, is NR_INCLUDE_PATH actually set for that shell? To test this, type
the following in a tcsh window:
echo $NR_INCLUDE_PATH
• Have you checked the text window from which you launched Shake? This is where
syntax problems are shown.
Customizing Interface Controls in Shake
Two forward slashes (//) indicate that a line is commented out and inactive.
Color Settings for Various Interface Items
The following settings let you customize the colors of different interface controls.
Setting Tab Colors
In the ui directory:
nuiPushToolBox("Image");
nuiSetObjectColor("ImageToolBox",
tabRed, tabGreen, tabBlue,
textRed, textGreen, textBlue);
nuiPopToolBox();
This is an excerpt from the include/nrui.h file. The Image tab is opened and assigned a
color for both the tab and the text on the tab. Instead of numbers for the color values,
placeholder variable names are used here to indicate each parameter. Search for these
variable names in nrui.h or enter your own explicit values. Doing this does not
automatically assign color to nodes within the tab.
Setting Colors for the Nodes in the Node View
In the ui directory:
nuiSetMultipleObjectsColor(
nodeRed, nodeGreen, nodeBlue,
textRed, textGreen, textBlue,
"DisplaceX",
"IDisplace",
"PinCushion",
"Randomize",
"Turbulate",
"Twirl",
"WarpX"
);
This command assigns colors to nodes in the Node View. The nodeRed, nodeGreen,
nodeBlue and textRed, textGreen, textBlue arguments take float values. When coloring
the nodes, keep in mind that the default artwork is a medium gray, so you can use
values above 1 for the node color parameters to multiply it up.
Setting Colors for the Time Bar
In the ui directory:
gui.color.timeSliderTop = 0x373737FF;
gui.color.timeSliderBottom = 0x4B4B4BFF;
gui.color.timeSliderFocus = 0x5B5B5BFF;
gui.color.timeSliderText = 0x0A0A0AFF;
gui.color.timeSliderTextFocus = 0x000000FF;
gui.color.timeSliderRange = 0x373737FF;
gui.color.timeSliderRangeReversed = 0x505037FF;
gui.color.timeSliderRangeText = 0x0A0A0AFF;
gui.color.timeSliderLines = 0xFFFFFFFF;
gui.color.timeSliderCurrent = 0x00FF00FF;
gui.color.timeSliderMouseTime = 0xACAC33FF;
gui.color.timeSliderMouseTimeKey = 0xFCFC65FF;
These are just a few of the plugs that change the coloring of the text in all time-based
windows, such as the Curve Editor, Time Bar, and so on. The numbers are in
hexadecimal RGBA format; ignore the 0x prefix and the trailing FF (the alpha
component) when choosing colors. Note that you often have control over a basic color
and its mouse-focused variation.
Setting Colors for Groups in the Node View
In the ui directory:
nuiSetObjectColor("Group", .75, .75, .75);
This sets the color of collapsed groups. If you set them to 1, the group takes on the
color set in the Group’s color setting:
nuiSetObjectColor("Group", 1., 1., 1.);
Setting Colors for the Curves in the Editor
In the ui directory:
gui.color.curveDef = 0x658a61;
gui.color.curveDefFoc = 0xcccc26;
gui.color.curveDefSel = 0xcccc26;
gui.color.curveDefFocSel = 0xffff26;
//Curves starting with 'r' or 'R'
gui.color.curveR = 0xa74044;
gui.color.curveRFoc = 0xff0000;
gui.color.curveRSel = 0xff0000;
gui.color.curveRFocSel = 0xff8888;
//Curves starting with 'g' or 'G'
gui.color.curveG = 0x8de48d;
gui.color.curveGFoc = 0x00ff00;
gui.color.curveGSel = 0x00ff00;
gui.color.curveGFocSel = 0xaaffaa;
//Curves starting with 'b' or 'B'
gui.color.curveB = 0x406bf7;
gui.color.curveBFoc = 0x1818ff;
gui.color.curveBSel = 0x1818ff;
gui.color.curveBFocSel = 0x8888ff;
//Curves starting with 'a' or 'A'
gui.color.curveA = 0x888888;
gui.color.curveAFoc = 0xbbbbbb;
gui.color.curveASel = 0xbbbbbb;
gui.color.curveAFocSel = 0xeeeeee;
There are really only four basic curve types, the normal curve (Def), the focused curve
(DefFoc), the selected curve (DefSel), and the focused, selected curve (DefFocSel). You
then also have additional controls over curves that start with the letters r, g, b, and a.
Setting Colors for Text
In the ui directory:
gui.fontColor = 0xFFFFFF;
gui.textField.fontColor = 0xFFFFFF;
gui.textField.tempKeyBackClr = 0xFFFFFF;
//the color of text on an active tab
gui.tabber.activeTint.red = .9;
gui.tabber.activeTint.green = .9;
gui.tabber.activeTint.blue = .87;
//the color of text on an inactive tab
gui.tabber.tint.red = .65;
gui.tabber.tint.green = .65;
gui.tabber.tint.blue = .63;
These plugs color the text, in hexadecimal format for the first three and normalized
RGB for the tabber tints. There are a series of expressions near the very end of the
nrui.h file that allow you to put in normalized RGB values that are then fed into the hex
number, but you can also determine your color using the Color Picker.
• fontColor: The color of the actual parameter name, messages, and also of macros
without declared coloring.
• textField.fontColor: The color of the values within the value field.
• tempKeyBackClr: A warning color for values entered when not in autoKey mode for
animated parameters. The value is not saved until the Autokey button is enabled.
Some text colors can also be interactively modified in the Globals tab. These are saved
into /nreal/settings when you choose File > Save Interface Settings.
Setting Time View Colors
In the startup or ui directory:
gui.color.timeViewBarStd = 0x737373;
gui.color.timeViewBarTop = 0x909090;
gui.color.timeViewBarBtm = 0x303030;
gui.color.timeViewBarCut = 0x101010;
gui.color.timeViewBarRpt = 0x5a5a5a;
gui.color.timeViewBarMir = 0x5a5a5a;
gui.color.timeViewBarFrz = 0x424242;
gui.color.timeViewBarBlk = 0x0;
gui.color.timeViewBarClpLine = 0x0;
gui.color.timeViewFontInOut = 0x111144;
gui.color.timeViewFontStEnd = 0x441111;
gui.color.timeViewFontStd = 0xFFFFFF;
gui.color.timeViewIgnLine = 0xFF0000;
gui.color.stripedRowColLight = 0x373737;
gui.color.stripedRowColDark = 0x474747;
gui.color.timeViewDarkStripe = 0x373737;
gui.color.timeViewLightStripe = 0x474747;
The BarCut, BarRpt, BarMir, BarFrz, and BarBlk refer to the repeat modes, so each one
has a different color.
Creating a Custom Palette
In the ui directory:
nuiSetColor(1,1,0,0);
nuiSetColor(2,1,0.5,0);
nuiSetColor(3,1,1,0);
etc.
This assigns default colors to the palette icons, with the first number as the button
number.
Custom Stipple Patterns in the Enhanced Node View
Different stipple patterns can be set in a .h preference file. Each stipple pattern is
defined by a four-byte hex number that, when converted to binary, provides the
pattern of the line drawn for each bit depth—each 1 corresponds to a dot, and each 0
corresponds to blank space.
For example, 0xFFFFFFFF is the hex equivalent of binary
11111111111111111111111111111111, which creates a solid line. 0xF0F0F0F0 is the
hex equivalent of 11110000111100001111000011110000, which creates a dashed line.
The default settings are:
gui.nodeView.stipple8Bit = 0x33333333;
gui.nodeView.stipple16Bit = 0x0FFF0FFF;
gui.nodeView.stipple32Bit = 0xFFFFFFFF;
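For example, a ui .h file could override one of the defaults above (the value shown is illustrative):

```
// Draw 8-bit connections with an even dash: 0xF0F0F0F0 is binary
// 11110000 repeated four times (four dots, then four blank spaces).
gui.nodeView.stipple8Bit = 0xF0F0F0F0;
```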
Adding Custom Media Formats to the Format Menu
You can create your own custom entries in the format pop-up menu in the Globals tab
by adding the following declarations to a .h file in the startup directory.
Custom formats take the following form:
DefFormatType(
“string”,
width,
height,
aspectRatio,
viewerAspectRatio,
framesPerSecond,
fieldRendering
);
DefFormatType(
"Academy", 1828, 1556, 1, 1, 24, "24 FPS"
);
Setting the Default Format Whenever Shake Is Launched
Add the following:
script.format = "FormatName";
Setting Format Defaults
In the startup directory:
script.defaultWidth = 720;
script.defaultHeight = 486;
script.defaultAspect = 1;
script.defaultBytes = 1;
script.format = "Full";
Using the script.format overrides the other settings—you either set the first four or the
format settings, as shown above.
Assigning Default Width and Height to a Parameter in a Macro
In either startup or ui (typically inside of a macro’s parameter setting):
image MyGenerator(
int width=GetDefaultWidth(),
int height=GetDefaultHeight(),
float aspectRatio=GetDefaultAspect(),
int bytes = GetDefaultBytes()
)
These four commands check the default global settings and return the value at the
time of node creation; they are not dynamically linked. Therefore, if you change the
default parameters, the node’s values do not change.
Setting Maximum Viewer Resolution in the Interface
In the ui directory:
gui.viewer.maxWidth = 4096;
gui.viewer.maxHeight = 4096;
By default, Shake protects the user from test rendering an enormous image by limiting
the resolution of the Viewer to 4K. If the user accidentally puts a Zoom set to 200 on
the composite, it does not try to render an enormous file, but instead only renders the
lower-left corner of the image cropped at 4K. To change this behavior, set a higher or
lower pixel resolution. These assignments have no effect on files written to disk.
Warning: Setting maxWidth and maxHeight to excessively high values may result in
Shake unexpectedly quitting during certain functions.
Creating Custom Listings for the timecodeMode Pop-Up Menu
In the startup directory:
DefTimecodeMode(
"Name",
fps,
numFramesToDrop,
numSecondsDropIntervals,
numSecondsDropException
);
DefTimecodeMode("24 FPS", 24);
DefTimecodeMode("30 FPS DF", 30, 2, 60, 600);
These define the timecode modes for the timecodeMode pop-up menu in the Globals
tab.
To set the default timecodeMode, use:
script.timecodeMode = "24 FPS";
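The "30 FPS DF" definition above reads: at a nominal 30 fps, skip 2 frame numbers every 60 seconds, except at every 600-second interval. As an illustration only (written in Python rather than Shake's scripting language, and not how Shake implements it), standard NTSC drop-frame counting under those parameters looks like this:

```python
def drop_frame_timecode(frame):
    """Convert an absolute frame count to a drop-frame timecode string.

    Uses the NTSC rule implied by DefTimecodeMode("30 FPS DF", 30, 2, 60, 600):
    2 frame numbers are skipped each minute, except every 10th minute.
    """
    fps = 30
    frames_per_min = fps * 60 - 2                # 1798 counted frames per minute
    frames_per_10min = frames_per_min * 10 + 2   # 17982; the 10th minute drops none
    tens, rem = divmod(frame, frames_per_10min)
    if rem < 2:
        adjusted = frame + 18 * tens
    else:
        adjusted = frame + 18 * tens + 2 * ((rem - 2) // frames_per_min)
    f = adjusted % fps
    s = (adjusted // fps) % 60
    m = (adjusted // (fps * 60)) % 60
    h = adjusted // (fps * 3600)
    return f"{h:02d}:{m:02d}:{s:02d};{f:02d}"
```

For example, frame 1800 (one minute in) displays as 00:01:00;02, because frame numbers ;00 and ;01 are skipped at the start of each non-tenth minute.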
Default Timecode Modes and Displays
In the startup or ui directory:
script.framesPerSecond = 24;
script.timecodeMode = "24 FPS";
gui.timecodeDisplay = 0;
Set either framesPerSecond or timecodeMode; setting timecodeMode allows you to use drop-frame settings. See above to set the timecode modes. The third line determines whether the Curve Editor and Time Bar display frames or timecode: 1 = timecode; 0 = frames. The other timecode modes are "25 FPS," "30 FPS DF," and "30 FPS ND."
Autosave Settings
The following declarations let you modify Shake’s autosave behavior.
Autosave Frequency
In the startup directory:
script.autoSaveDelay = 60;
This sets, in seconds, how often the autosave script is performed. The script is saved automatically in your User directory as autoSave1.shk, autoSave2.shk, and so on, up to autoSave5.shk. It then recycles back to 1. If you lose a script because Shake unexpectedly quits, you can load the autosave version.
Four other autosave behaviors can be customized within a .h preference file.
Autosave Directory
In the startup directory:
script.autoSaveDirectory = "//myMachine/myAccount/myDirectory/";
Setting a directory with this declaration overrides the default behavior of placing autosave scripts in ~/nreal/autosave/.
Autosave Prefix
In the startup directory:
script.autoSavePrefix = "MySweetScripts";
Defines text to be prepended to autosave script names. This is blank by default.
Autosave File Count
In the startup directory:
script.autoSaveNumSaves = 20;
Sets the total number of autosave scripts to be saved. Files are discarded on a first in,
first out basis. The default autoSaveNumSaves value is 5.
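Putting these declarations together, a single .h file in the startup directory can customize all four autosave behaviors at once. The directory and prefix shown here are placeholders, not defaults:

```
script.autoSaveDelay = 120;
script.autoSaveDirectory = "//myServer/shared/autosave/";
script.autoSavePrefix = "nightShift_";
script.autoSaveNumSaves = 10;
```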
Undo Level Number
In the ui directory:
gui.numUndoLevels = 100;
This determines how many steps of undo are available. Undo scripts are stored in the
TEMP directory.
Number of Processors to Assign to the Interface
In the ui directory:
sys.maxThread = nrcGetAvailableProcessors();
This sets the number of processors used by the interface. The nrcGetAvailableProcessors function automatically detects all available processors and assigns them. If you only want to use a limited number of processors, assign that number here.
You can assign the number of processors to be used when batch processing with the
-cpus flag. The default is 1. For example:
shake -exec my_script.shk -cpus 2
Font Size for Menus and Pop-Up Menus
In the startup directory:
// It can take the following values:
// tiny, small, medium, big, std
gui.menu.fontSize = "std";
This should be in a ui .h file, but because it must be set before the interface is built, it goes in a startup file. The example shown uses "std," which is also the default.
Adding Functions to the Right-Click Menu
In the ui directory:
nuiPushMenu("NRiNodeViewPopup",1);
nuiPushMenu(
"This Creates a Sub-Menu",0
);
nuiMenuItem(
"Grad",
nuiToolBoxMenuCall({{Grad()}})
);
nuiPopMenu();
This example creates a subtab called "This Creates a Sub-Menu" in the Node View, and attaches the Grad function to its list. This is just one example. Take a look at the nrui.h file, where all right-click menus are built. The first line declares the menu under which it is built, so typically these commands are added directly into the nrui.h file.
Adding Functions Into a Menu
In the ui directory:
nuiOpenMenu("Render");
nuiMenuSeparator();
nuiMenuItem(
"Highend2D",
LaunchBrowser(
"http://www.highend2d.com",1
)
);
nuiPopMenu();
This creates an entry in the Render menu, split from the other entries by a separator.
Opening Scripts With Missing Macros
If you open a Shake script that contains one or more macros that you do not have on
your system, you have the option to load the script using a substitute node, or to not
load the script at all using the macroCheck parameter in the renderControls subtree of
the Globals tab. To set the default macroCheck behavior to substitute a MissingMacro
node, include the following in a .h file:
sys.useAltOnMissingFunc = 2;
For information on the macroCheck parameter, see “renderControls” on page 96.
Linking an HTML Help Page to a Custom Node
To link a node’s HTML Help button to your own custom page, enter the following line
into its ui.h file:
nuiRegisterNodeHelpURL("MyCustomFunction", "http://www.apple.com/shake/");
The Curve Editor and Time Bar
The following settings let you customize the Time Bar.
Setting the Time Bar Frame Range
In the ui directory:
gui.timeRangeMin = 1;
gui.timeRangeMax = 100;
That pretty much says it all, doesn’t it?
Default Timecode Modes and Displays
In the startup or ui directory:
script.framesPerSecond = 24;
script.timecodeMode = "24 FPS";
gui.timecodeDisplay = 0;
Set either framesPerSecond or timecodeMode; setting timecodeMode allows you to use drop-frame settings. See above to set the timecode modes. The third line specifies whether the frames in the Curve Editor and Time Bar are displayed as frames or timecode. 1 = timecode; 0 = frames.
Customizing File Path and Browser Controls
This section lists ways of customizing the File Browser.
Setting Default Browser Directories
In the ui directory:
gui.fileBrowser.lastScriptDir = "$MYPROJ/shakeScripts/";
gui.fileBrowser.lastExprDir = "//Server/shakeExpressions/";
gui.fileBrowser.lastTrackerDir = "$MYPROJ/tracks/";
gui.fileBrowser.lastAnyDir = "C:/Jojo/";
You can assign specific directories for the Browser to scan when you start the interface.
You can assign different directories to different types of files, such as scripts, images,
trackers, and expressions.
Important: There must be a slash at the end of the path.
Using the UNC File Name Convention
In the startup directory:
script.uncFileNames = 1;
Shake automatically assigns the UNC file name, that is, the entire file path name using the network address starting with //MachineName/DriveName/path. This ensures proper network rendering. However, you can turn this off by setting uncFileNames to 0, at which point local file paths are maintained. You can use local paths in either case, but they are converted when UNC is on.
Using Relative Path Conventions
In the startup directory:
gui.fileBrowser.keepRelativePaths = 1;
Adding Personal Favorites to the Browser
In the ui directory:
nuiFileBrowserAddFavorite(
"D:/icons/scr/"
);
nuiFileBrowserAddFavorite(
"$nr_train/"
);
All directories assigned here appear in your Favorites area of the Directories pop-up
menu in the Browser.
To also bookmark a directory in the Browser, click the Bookmark button and then
choose File > Save Interface Settings. This saves a setting in your $HOME/nreal/settings
directory.
Assigning a Browser Pop-Up Menu to a Parameter
In the ui directory:
nuxDefBrowseControl(
"Macro.imageName",
kImageIn
);
nuxDefBrowseControl(
"Macro.imageName",
kImageOut
);
nuxDefBrowseControl(
"Macro.fileName",
kAnyIn
);
nuxDefBrowseControl(
"Macro.lookupFile",
kExprIn
);
nuxDefBrowseControl(
"Macro.scriptName",
kScriptIn
);
nuxDefBrowseControl(
"Macro.renderPath",
kAnyOut
);
This assigns a folder button to a string parameter so that you can relaunch the File Browser. The Browser remembers the last directory you used for each file type, so you can also assign the type of file the Browser should look for, with kImageIn, kImageOut, and so on. For example, if you have a macro that browses for an image to be read in, use kImageIn; when you click that button, it jumps to the last directory from which you read in an image.
• kImageIn: The last image input directory.
• kImageOut: The last image output directory.
• kAnyIn: The last input directory of any type.
• kAnyOut: The last output directory of any type.
• kScriptIn: The last script input directory.
• kScriptOut: The last script output directory.
• kExprIn: The last expression input directory.
• kExprOut: The last expression output directory.
Automatic Launching of the Browser When Creating a Node
In the ui directory:
nuiToolBoxItem("ProxyFileIn",
{{
const char *filename = getFileInName();
filename ? ProxyFileIn(filename,0,2) :
(image) 0
}}
);
In this example, the Browser is called for the file name parameter in the ProxyFileIn macro. The macro has three parameters: the file name and two numbers (0 and 2). The getFileInName function automatically launches the Browser when the user creates this node in the interface. You can use:
• getFileInName()
• getFileOutName()
• getScriptInName()
• getScriptOutName()
Automatic Browser File Filters
In the ui directory:
gui.fileBrowser.lastImageRegexp = "*.tif";
gui.fileBrowser.lastScriptRegexp = "*.shk";
gui.fileBrowser.lastExprRegexp = "*.txt";
gui.fileBrowser.lastTrackerRegexp = "*.txt";
gui.fileBrowser.lastAnyRegexp = "*";
You can assign specific filters for the Browser for different types of Browser activity. For
example, if you only use Cineon files, you may want to use an assignment such as:
gui.fileBrowser.lastImageRegexp = "*.cin";
Tool Tabs
There are a number of ways you can customize the available Tool tabs.
Setting the Number of Node Columns in a Tool Tab
In the /include/nrui.h or a startup file:
gui.doBoxColumns = 8;
This sets the number of columns for the nodes in the Tool tab, which is sometimes called the "Do Box." Unlike the other ui .h settings, this must go in /include/nrui.h, placed right before the call that starts building the Image tab. To activate it, uncomment the gui.doBoxColumns line in the nrui.h file:
//These control the color of text on an inactive tab
gui.tabber.tint.red = .65;
gui.tabber.tint.green = .65;
gui.tabber.tint.blue = .63;
//gui.doBoxAltFxIcons = 1;
//gui.doBoxColumns = 5;
nuiPushMenu("Tools");
nuiPushToolBox("Image");
nuiToolBoxItem("Average", "const char *fileName = blah
nuiToolBoxItem("Checker", Checker());
nuiToolBoxItem("Color", Color());
nuiToolBoxItem("ColorWheel", ColorWheel());
Using the Alternative Icons
In startup or the /include/nrui.h file:
gui.doBoxAltFxIcons = 1;
gui.doBoxColumns = 8;
This calls the alternative icon set, which concentrates more on the name of the function. The alternative icons are stored in icons/fxAlt, with the same names as the normal icon set, for example, Image.Average.nri, and so on. The dimensions for these icons are 130 x 26. Because they are wider, you typically limit the columns to five in a normal Shake environment. For a macro on generating these icons, see "MakeNodeIcon Macro" on page 998. You can activate the icons in two places: either in a startup file, or by uncommenting the gui.doBoxAltFxIcons and gui.doBoxColumns lines in the nrui.h file:
//These control the color of text on an inactive tab
gui.tabber.tint.red = .65;
gui.tabber.tint.green = .65;
gui.tabber.tint.blue = .63;
//gui.doBoxAltFxIcons = 1;
//gui.doBoxColumns = 5;
nuiPushMenu("Tools");
nuiPushToolBox("Image");
nuiToolBoxItem("Average", "const char *fileName = blah blah blah
nuiToolBoxItem("Checker", Checker());
...
Attaching a Function to a Button in the Tabs
In the ui directory:
nuiPushToolBox("Image");
nuiToolBoxItem("Flock", Flock(0,0,0));
nuiPopToolBox();
This places an icon that you have created into a tab that you assign. In this example,
the icon is placed in the Image tab. If you use a custom name, such as My_Macros, it
creates that tab. The second line first attaches the icon, and then assigns the function
with its default arguments to that button. They do not have to be the same name, but
both are case sensitive. The icon is either found in /icons, your
$HOME/nreal/icons, or in any directory pointed to with $NR_ICON_PATH. The icons have
the following characteristics:
• Although they can be any size, the standard resolution is 75 x 40 pixels.
• Do not use an alpha channel. Assign a SetAlpha (set to 0) or Reorder (set to rgbn) to remove the alpha channel.
• The file name is TabName.Whatever.nri. This example is therefore called
Image.Flock.nri.
• The icon border is added automatically by Shake.
The Flock(0,0,0) portion is the function that the button actually executes. You can assign any function to these buttons: read in scripts, call multiple nodes, and so on. If the function does not have default values for its parameters, they must be provided here.
Attaching a Function to a Button Without an Icon
In the ui directory:
nuiPushToolBox("Image");
nuiToolBoxItem("@Flock", Flock(0,0,0));
nuiPopToolBox();
Note the @ sign before the icon name. This creates a button with whatever text you
supply.
Creating Multiple Nodes With One Function
In the ui directory:
nuiToolBoxItem(
"QuickShape",
Blur(QuickShape())
);
You can create multiple nodes with one button click when you call up a function. For
example, if you always attach a Blur node to a QuickShape, you can do this by
embedding one function within another. The first argument for a function is usually the
image input. By substituting the value (usually 0) with a different function, that function
feeds into your first function. In the above example, QuickShape is fed into Blur.
Light Hardware Mode
In the ui directory:
sys.hardwareType = 1;
This command opens Shake without any borders on buttons, making the interactivity a
little faster for graphically slower machines. A value of 0 is the normal mode; a value of
1 is the lightweight mode. Note the artwork isn’t updated for the darker interface, so it
looks a bit odd.
Customizing the Node View
The Node View defaults can be customized in many ways.
Setting Default Node View Zoom Level
In the ui directory:
gui.nodeView.defaultZoom = 1;
This sets the default zoom level of the Node View.
Using Parameters Controls Within Macros
These are commands typically assigned to help lay out your macros by setting slider
ranges, assigning buttons, and so on. These behaviors are typically assigned to specific
parameters. They can be applied either globally (all occurrences of those parameters)
or to a specific function. For example, if there is a trio of parameters named red, green,
blue, Shake automatically assigns a Color control to it. However, for a parameter such as
depth, you want to specify actions based on whether it is a bit depth-related function
(and therefore assign a button choice of 8-, 16-, or float-bit depth) or a Z-depth related
function (in which case you probably want some sort of slider). To assign a parameter
to a specific function, preface the parameter name with the function name, such as
MyFunction.depth.
All parameters, unless overridden by Shake’s factory-installed rules, are assigned a slider
with a range of 0 to 1.
Assigning a Color Control
In the ui directory:
nuiPushControlGroup("Color");
nuiGroupControl("Func.red");
nuiGroupControl("Func.green");
nuiGroupControl("Func.blue");
nuiPopControlGroup();
nuiPushControlWidget(
"Color",
nuiConnectColorTriplet(
kRGBToggle,
kCurrentColor,
1
)
);
This assigns a button to three sliders so that you can scrub across an image and
retrieve color information. You can select the current color, the average color, the
minimum color, or the maximum color values. You can also assign a toggle switch to
select the input node’s color or the current node’s color. For example, for pulling keys,
you probably want to use the input node color since you are scrubbing (usually) blue
pixels, rather than the keyed pixels. You can also choose to return different color spaces
other than RGB. Assigning a Color control creates a subtree of those parameters.
Notice that you must first group the parameters into a subtree (the first five lines of the
above example).
Color controls automatically appear if you name your trio red, green, blue or red1,
green1, blue1, or red2, green2, blue2.
There are three parameters for the nuiConnectColorTriplet function. The first one is the color space, which can be declared with either a symbolic name (for clarity) or an integer:
• kRGBToggle = 0
• kHSVToggle = 1
• kHLSToggle = 2
• kCMYToggle = 3
The second parameter describes the type of value to be scrubbed—the current,
average, minimum, or maximum. Again, you can use either the word or the integer.
• kCurrentColor = 0
• kAverageColor = 1
• kMinColor = 2
• kMaxColor = 3
The last parameter is a toggle to declare whether you use the current node’s pixel
values or the input node’s pixel values. You use either 0 or 1:
• 0 = current node
• 1 = input node
Use of the current node may cause a feedback loop. Typically, for color corrections, you use the current node; for keyers, the input node.
Therefore, the above example creates a subtree called Color for the function in question. The scrubber returns RGB values, with only the current value returned. When the Color control is called, the Use Source Buffer is turned on.
Assigning the Old Color Control
In the ui directory:
nuiPushControlGroup("Func.Color");
nuiGroupControl("Func.red");
nuiGroupControl("Func.green");
nuiGroupControl("Func.blue");
nuiPopControlGroup();
nuiPushControlWidget(
"MyFunction.Color",
nuiConnectColorPControl(
kRGBToggle,
kCurrentColor,
1
)
);
This is an older version of the Color control without the cool extra controls.
Changing Default Values
In the /include/nrui.h file:
nuiPushToolBox("Color");
nuiToolBoxItem("Add", Add(0,0,0,0,0,0));
nuiToolBoxItem("AdjustHSV", AdjustHSV(0));
nuiToolBoxItem("Brightness", Brightness(0,1));
nuiToolBoxItem("Clamp", Clamp(0));
nuiToolBoxItem("ColorCorrect", ColorCorrect(0));
...
In the include/nreal.h file, most functions, but not all, have their default values declared. To override the default values when you call the function, modify the line that loads the function in the interface. If every parameter in a function has a default value, you can call the function with something like:
nuiToolBoxItem("Clamp", Clamp(0));
Normally, Clamp has about 8 parameters. Here, the 0 represents the first argument, the input image. 0 is used to indicate that no images are expected to be inserted, so you can attach it to the active node. However, you can add additional parameters. For example, the Brightness line above it has (0,1): 0 for the image input (no input) and 1 for the brightness value. Change the 1 to a different value to override it. You only need to supply the parameters up to the one you want. For example, the following is the call for the Text function:
nuiToolBoxItem("Text", Text());
To override the default font for the Text function, you have to supply the width, height, bytes, text, and finally the font. The rest you can ignore afterward:
nuiToolBoxItem("Text", Text(
GetDefaultWidth(),
GetDefaultHeight(),
GetDefaultBytes(),
"Yadda Yadda",
"Courier"
)
);
Grouping Parameters in a Subtree
In the ui directory:
nuiPushControlGroup("Func.timeRange");
nuiGroupControl("Func.inPoint");
nuiGroupControl("Func.outPoint");
nuiGroupControl("Func.timeShift");
nuiGroupControl("Func.inMode");
nuiGroupControl("Func.outMode");
nuiPopControlGroup();
This groups parameters into a subtree that can be opened and closed by the user. This
example, although it says “Func,” is for the FileIn node.
Setting Slider Ranges
In the ui directory:
nuiDefSlider(
"Funct.yPan", 0, height
);
nuiDefSlider(
"Funct.angle", -360, 360
);
nuiDefSlider(
"Funct.aspect", 0, 2, .2, .01
);
You can set slider ranges and precision with this function. The first line assigns a slider range just for the yPan parameter of the Move2D function. Note the use of the height variable so the range adjusts according to the input image. The second line assigns a range for the angle parameter in any node. The third line also has optional precision parameters, which are granularity and notch spacing.
• granularity represents the truncation point. Shake cuts off all values smaller than your truncation value; that is, if your granularity is .01, a value of .2344 becomes .23. Granularity is a "hard" snap: if granularity is set to 0.001, you cannot get anything but multiples of 0.001 when you slide. Also, granularity must be a power of 10 (positive or negative exponent).
• notch spacing represents at what value interval the magnets appear. A value of .1
means the magnets are at .1, .2, .3, and so on. The default value is .1. Notch spacing is
a “soft” snap—the slider tends to stick longer to multiples of notch spacing, but does
not prevent the selection of other values. Think of it as a tiny notch in a flat line
where a ball rolls: The ball tends to get stuck in the notch, but if you keep pushing, it
eventually gets out.
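The two snap behaviors described above are simple arithmetic. The following sketch is in Python rather than Shake's own language, and the pull threshold in soft_snap is an invented illustration constant, not a documented Shake value:

```python
import math

def hard_snap(value, granularity):
    """Granularity: truncate away everything smaller than the step.

    With granularity .01, a value of .2344 becomes .23 (a hard snap).
    """
    return math.floor(value / granularity) * granularity

def soft_snap(value, notch_spacing, pull=0.02):
    """Notch spacing: stick to the nearest notch only when already close.

    Unlike granularity, values between notches remain selectable.
    """
    nearest = round(value / notch_spacing) * notch_spacing
    return nearest if abs(value - nearest) < pull else value
```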
Adding Pop-Up Menus
In the ui directory:
nuxDefMultiChoice("Defocus.shape",
"fast gaussian|fast box|circle"
);
This example, from the Defocus function, attaches a pop-up menu to a string parameter. Note this only supplies strings and not numbers, so you have to do some tricky math inside the macro itself. For more information, see Chapter 31, "Expressions and Scripting," on page 935.
Creating Radio Buttons
In the ui directory:
nuxDefRadioBtnControl(
"Text.xAlign",
1, 1, 0,
"1|ux/radio/radio_left",
"2|ux/radio/radio_center",
"3|ux/radio/radio_right"
);
This example is for the Text node. This code creates a series of radio buttons that are
mutually exclusive. The naming convention assumes that you have four icons for each
name, with the icon names name.on.nri, name.on.focus.nri, name.off.nri, and
name.off.focus.nri. If no icons exist, you can choose to not use icons, which then gives a
label with an on/off radio button instead. The code has these parameters:
nuxDefRadioBtnControl(
const char *name,
int useIcon,
int useLabel,
int animatable,
curve string state0, ....
);
You can place as many icons as you want. The height of Shake’s standard parameters
icons is 19 pixels, though this can change. The output parameter for the Primatte and
Keylight nodes is a good example.
You can make your own radio buttons with the RadioButton function. This function is
discussed in “RadioButton Macro” on page 999.
Creating Push-Button Toggles
In the ui directory:
nuxDefExprToggle("Func.parameter",
"repl.nri|repl.focus.nri",
"interp.nri|interp.focus.nri",
"blur.nri|blur.focus.nri"
);
This assigns a series of buttons to toggle through integers starting at 0. The first line is
assigned a value of 0, the second line assigned a value of 1, the third assigned a value
of 2, and so on. You can place as many toggles as you want. There are two buttons for
each assignment, the normal button, and a second button for when the pointer passes
over the button to signify that you can press it. Note the standard buttons are all in the
subdirectory ux, but this is not a requirement. Shake includes a series of precreated
icons that are packed into the icons.pak file and are inaccessible to the user, but are
understood by this code. Your custom icons can be any size, but the default height is 19
pixels. You cannot have an alpha channel attached to an icon. Use SetAlpha (set to 0) or
Reorder (set to rgbn) to remove the alpha channel. They can be placed in
/icons, the $HOME/nreal/icons, or $NR_ICON_PATH.
Creating On/Off Buttons
In the ui directory:
nuxDefExprToggle("Func.param");
This is similar to the push-button toggles, but you only have two values, on and off: off with a value of 0, and on with a value of 1. The icon assignment is automatic.
Making a Parameter Non-Animatable
In the ui directory:
nriDefNoKeyPControl("DilateErode.soften");
This designates that no Autokey buttons appear.
Placing a Curve Editor Into a Parameters Tab
In the ui directory:
nuiPushControlGroup("colorExpr");
nuiGroupControl("Lookup.rExpr");
nuiGroupControl("Lookup.gExpr");
nuiGroupControl("Lookup.bExpr");
nuiGroupControl("Lookup.aExpr");
nuiPopControlGroup();
//Makes all curves invisible by default
registerCurveFunc("colorExpr");
//This makes all curves visible by default
registerCurveFuncVisible("colorExpr");
gui.colorControl.curveEditorDirection = 0;
//When it is 1, the layout is vertical
//When this equals 0, the layout is horizontal
This code loads a Curve Editor embedded inside the Parameters tab. The first six lines of code simply group parameters together. The registerCurveFunc (or registerCurveFuncVisible) call then attaches the grouped parameters to the Curve Editor embedded in the Parameters tab.
Viewer Controls
This section discusses Viewer settings and onscreen controls.
Setting Maximum Viewer Resolution in the Interface
In the ui directory:
gui.viewer.maxWidth = 4096;
gui.viewer.maxHeight = 4096;
By default, Shake protects the user from test rendering an enormous image by limiting
the resolution of the Viewer to 4K. If the user accidentally puts a Zoom set to 200 on
the composite, it does not try to render an enormous file, but instead only renders the
lower-left corner of the image cropped at 4K. To change this behavior, set a higher or
lower pixel resolution. These assignments have no effect on files written to disk.
Onscreen Controls
Onscreen controls are automatically built according to how you name your parameters
in your macro, with one exception—to make a cross-hair control. The following is the
list of parameters it takes to make certain controls. For the illustrations, the controls are
attached to their appropriate functions. For example, the pan controls are attached to a
Pan function and scaling to a Scale function. Simply naming the parameters does not
necessarily give you the functionality you want.
Panning Controls
In the startup macro file:
float xPan = 0,
float yPan = 0
This gives you the lattice to pan around. You can grab anywhere on the cross bars.
Scaling Controls
In the startup macro file:
float xScale = 1,
float yScale = 1,
float xCenter = width/2,
float yCenter = height/2
This gives you the border and center controls to change the scale center. You can grab
a corner to scale X and Y, or an edge to scale X or Y.
CornerPin Controls
In the startup macro file:
float x0 = 0,
float y0 = 0,
float x1 = width,
float y1 = 0,
float x2 = width,
float y2 = height,
float x3 = 0,
float y3 = height
In the ui file:
nuiPushControlGroup("Func.Corner Controls");
nuiGroupControl("Func.x0");
nuiGroupControl("Func.y0");
nuiGroupControl("Func.x1");
nuiGroupControl("Func.y1");
nuiGroupControl("Func.x2");
nuiGroupControl("Func.y2");
nuiGroupControl("Func.x3");
nuiGroupControl("Func.y3");
nuiPopControlGroup();
Grab any corner or the crosshairs in the middle to adjust the position of your image.
The grouping code for the ui file is included, so you do not have to look at all eight
parameters in your list.
Box Controls
In the startup macro file:
int left = width/3,
int right = width*.66,
int bottom = height/3,
int top = height*.66
In the ui file:
nuiPushControlGroup("MyFunction.Box Controls");
nuiGroupControl("MyFunction.left");
nuiGroupControl("MyFunction.right");
nuiGroupControl("MyFunction.bottom");
nuiGroupControl("MyFunction.top");
nuiPopControlGroup();
This creates a movable box. You can grab corners or edges, or the inside crosshairs. This
example is applied to a SetDOD function. The Layer–Constraint and Transform–Crop
nodes also use these controls. In this example, integers are used for values that assume
you are cutting off pixels, but you can also use float values.
Offset Controls
In the startup macro file:
float xOffset = 0,
float yOffset = 0
This is similar to the Pan controls, but with center crosshairs. This control is available in
the Other–DropShadow node.
Rotate Controls
In the startup macro file:
float angle = 0,
float xCenter = width/2,
float yCenter = height/2
This gives you a rotation dial and a center control. This example is plugged into a Rotate function.
Point Controls
In the startup macro file:
float xCenter = width*.33,
float yCenter = height*.33,
float xPos = width*.66,
float yPos = height*.33,
float myPointX = width/2,
float myPointY = height*.66
In the ui file:
nuiAddPointOsc("Func.myPoint");
These three sets of parameters create a crosshairs control. Center and Pos are default
names—the Center pair is also associated with the angle and the scale parameters.
However, the last point is completely arbitrary, as long as it ends in an uppercase X and
Y. In the ui file, you must also declare that these are an XY pair.
Radius Controls
In the startup macro file:
float radius = width/6,
float falloffRadius = width/6,
float xCenter = width/2,
float yCenter = height/2
This is basically for RGrad, but maybe you can do some more with it.
Template Preference Files
You can add additional parameters and default settings by adding files into the startup/ui/templates directory. Each time Shake is launched, it adds these extra parameters. For
example, if you always want the Proxy Filter to be “box” instead of “default,” and you
always want a slider in the Globals tab called switch1, create a .h file under the
templates directory with:
SetProxyFilter("box");
curve int switch1 = 0;
Basically, take a saved script and strip out the lines you want to set as defaults, and save
it as a .h file into templates.
Changing the Default QuickTime Configuration
You can change the default QuickTime configuration that appears when you set the
fileFormat parameter of a FileOut node to QuickTime. The default QuickTime
configuration is also the configuration that Shake falls back on when a script is opened
with a FileOut node that’s set to use a QuickTime codec that’s not available on that
computer.
The default settings in Shake are limited to the ones you find in the standard
QuickTime Compression Settings dialog.
To change the default QuickTime configuration:
1 Create a script with a FileOut node.
2 Select the FileOut node, then choose QuickTime from the fileFormat pop-up menu in
the Parameters tab.
3 Click codecOptions, then set the codec options in the Compression Settings dialog.
4 Save the script.
5 Open the script in a text editor and find the definition of the FileOut node you created.
After the name of the codec, you’ll see a 0, then a long, seemingly nonsensical string of
text in quotes. Copy the long nonsensical string, but not the quotes.
6 Create a .h file in your include/startup directory. Type:
sys.QTDefaultSettings = "x";
where x is the long string you copied, in quotes, with a semicolon at the very end of the line. Here is an example:
sys.QTDefaultSettings =
"100W@u3000WDcsuHA#M000E@4J3In8q5CBRZ0VY2LKKPgATB3A9KSC7gXaC30q8v
W16OG5Koq10h2A5HIvi00ieKA9WT6a1rS9hH8dqIEiBqOHT0SJEZ8HHc8qtf4rlxS
AP9WYwcYJHfMMCKpWXYn2W893LCsk080000@00000000000;";
Note: The above declaration sets the Uncompressed 10-bit 4:2:2 codec as the default.
Note: If the default codec you’ve specified is not available, a message is sent to the
console, and the default codec reverts to Animation.
Environment Variables for Shake
This section discusses two ways to set environment variables, and the variables
recognized by Shake. At the end of the section, some examples of aliases are provided.
Warning: Incorrectly setting environment variables can lead to problems running
Shake and with your operating system. If you are not comfortable with changing
these types of settings, consult your system administrator for guidance.
Environment variables are strings of information, such as a specific hard drive, file
name, or file path, set through a shell (for example, in Terminal on a Mac OS X system)
that is associated with a symbolic name (that you determine). This information is stored
in a hidden file. Each time you launch Shake, the operating system and the Shake
application look at the hidden file to set the environment variables. In other words,
defining environment variables is the equivalent of setting user-defined system
preferences.
As a simple example, you can set an environment variable that specifies a folder that
Shake scans (on launch) for additional fonts used by the Text or AddText node.
To set environment variables on a Mac OS X system, create and edit a “.plist,” or
property list, file. Using the .plist sets variables for Shake whether it is launched from
the Terminal or from the Shake icon.
Using the above example of a font folder, to instruct Shake to read the /System/Library/
Fonts folder, set the following environment variable in your .plist file:
<key>NR_FONT_PATH</key>
<string>/System/Library/Fonts</string>
Another way to define environment variables is to use the setenv command in a .tcshrc
(enhanced C shell resource) file. Each time the Terminal is launched, the .tcshrc file is
read. The environment variables defined by the .tcshrc file are only read by Shake when
launched from the Terminal.
Using the above example of a font folder, to instruct Shake to read the /System/Library/
Fonts folder, set the following environment variable in your .tcshrc file:
setenv NR_FONT_PATH /System/Library/Fonts
Note: The .tcshrc file can be used on all Shake platforms (Mac OS X and Linux).
A common use for a user’s personal .plist or .tcshrc file is to define commonly used
aliases for commands. As a simple example, you can set an environment variable to
launch Shake from the Terminal.
An alias in the command line is not the same as an alias on the Macintosh operating
system. In the OS, an alias merely points to another file. In the command line, you
create an alias to assign your own name to a command.
Note: If you do not have environment variables set on your Mac OS X system, you can
still launch Shake from the Terminal by typing the complete path to Shake:
/Applications/Shake4/shake.app/Contents/MacOS/shake
To set the Shake path in the Terminal, do the following:
1 Launch Terminal.
2 In the Finder, navigate to the Shake application (usually located in the Shake4 folder in
the Applications folder).
3 Drag the Shake icon to the Terminal.
The Shake path is automatically entered in the Terminal.
4 Set environment variables for Shake. For example, you can specify the location of
important files that your Shake script needs when opened.
5 Specify the Shake directory.
Creating the .plist Environment File
Each time you log in, the system searches for an environment file, named
environment.plist. This file sets an environment for all processes (launched by the
logged-in user). In the Terminal, you create a directory called .MacOSX that contains the
environment file. You also create the environment file (using a text editor), and move
the file into the .MacOSX directory.
To set environment variables in Shake on Mac OS X using the .plist file:
1 Log in using your personal login.
2 Launch Terminal.
By default, you should be in your Home ($HOME) directory. Your Home directory is your
own directory in the Users folder. For example, if John Smith logs in and launches the
Terminal, the following message is displayed in the Terminal:
[john-smiths-Computer:~] john%
3 In the Terminal, type the following command to create a directory in your Home
directory called .MacOSX:
mkdir $HOME/.MacOSX
4 Press Return.
An invisible directory (indicated by the "." in front of the directory name) is created in
your Home directory.
5 To ensure the .MacOSX directory was created, type:
ls -als
6 Press Return.
All files, including the new invisible .MacOSX directory, are listed.
7 Next, launch TextEdit (or another text editor) to create a file to set your variables.
Note: If you have installed and are familiar with the Apple Developer tools, you can use
the Property List Editor application to create or edit variables. The Property List Editor
application is located in Developer/Applications.
8 In the text document, create the following file (if you’re reading this in the PDF version
of the user manual, you can copy the following and paste it into the text document)
and edit the information.
Note: The following is an example file for instructional purposes only.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>MyProject</key>
    <string>/Documents/MyBigFilm</string>
    <key>NR_INCLUDE_PATH</key>
    <string>/Documents/MyBigFilm/macros</string>
    <key>NR_ICON_PATH</key>
    <string>/Documents/MyBigFilm/icons</string>
</dict>
</plist>
This sets the variable MyProject to /Documents/MyBigFilm. This tells Shake that all files
associated with MyProject (that could be a script, directory, and so on) are located in
/Documents/MyBigFilm. As a result, if you type MyProject in the browser, it returns
/Documents/MyBigFilm, and can then be set as a favorite. This file also sets the
NR_INCLUDE_PATH (points to the directory or directories that you want Shake to scan
for macros and personal machine or user interface settings), and NR_ICON_PATH (points
to a directory where you can save your own icons for Shake functions).
9 In TextEdit, choose Format > Make Plain Text.
The document is converted to a .txt file.
10 Choose File > Save.
11 In the “Save as” field of the Untitled.txt window, enter:
environment.plist
Be sure to remove the .txt file extension.
12 Save the file to your Home directory (in TextEdit, choose Home from the Where pop-up
menu), then click Save.
13 In the Save Plain Text window, click “Don’t append.”
The file is saved in your Home directory with the extension .plist.
14 Quit TextEdit.
In order for Shake and your system to access the environment variables, the
environment.plist file must be saved to the .MacOSX directory (created in step 3).
15 To save the environment.plist file to your .MacOSX directory, move the file (using the
Terminal) from your Home directory to the .MacOSX directory. In the Terminal, do the
following:
a To ensure you are still in your Home directory, type the “present working directory”
command:
pwd
Using the example from step 2, this should return:
/Users/john
b Enter the following:
mv environment.plist .MacOSX
The environment.plist file is moved into the .MacOSX directory.
c To confirm the environment.plist file is located in the .MacOSX directory, enter:
cd .MacOSX
This command moves you into the .MacOSX directory.
d Enter:
ls
The content of the .MacOSX directory, the environment.plist, is listed.
16 Log out and then log in again.
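The creation steps above can be condensed into a single Terminal session. The sketch below writes a minimal environment.plist containing only the NR_FONT_PATH variable from the earlier example; DEST stands in for your Home directory, so that the sketch does not touch your real files (in practice you would use $HOME):

```shell
#!/bin/sh
# DEST stands in for $HOME; substitute $HOME to apply this for real.
DEST=/tmp/plist-demo
mkdir -p "$DEST/.MacOSX"

# Write a minimal property list; the key/value pair is the font-path
# example used earlier in this section.
cat > "$DEST/.MacOSX/environment.plist" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>NR_FONT_PATH</key>
    <string>/System/Library/Fonts</string>
</dict>
</plist>
EOF

# Confirm the file landed in the invisible directory.
ls "$DEST/.MacOSX"
```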
To edit the .plist file:
1 In the Finder, choose Go > Go to Folder (or press Command-Shift-G).
2 In the “Go to the folder:” text field, enter the path to the invisible .MacOSX folder:
/Users/john/.MacOSX
3 Click Go.
The environment.plist file appears in the folder.
4 Open the .plist file in TextEdit (or another text editor).
5 Once your changes are made, choose File > Save.
6 Quit TextEdit.
Using the .tcshrc Environment File
You can also set environment variables (or aliases) using a .tcshrc file. Like the above
.plist file example, you can create the .tcshrc file in a text editor, or directly in a shell
using vi, pico, or another shell editor. Unlike the .plist file, however, you do not save the
.tcshrc file to the .MacOSX directory. Instead, the .tcshrc file is saved into your Home
($HOME) directory.
Usually, you define environment variables in tcsh with the setenv command, for
example:
setenv audio /Volumes/shared/footage/audio_files/
This variable instructs Shake to automatically look in /Volumes/shared/footage/
audio_files/ when you import an audio file into Shake.
At login, your computer runs the default /etc/csh.cshrc, followed by any .tcshrc files in
your login directory. This sequence is repeated whenever a new tcsh is spawned—for
example, when you launch Terminal.
Note: As mentioned above, Shake only reads the .tcshrc environment file when Shake is
run from the Terminal (the file is not applied when Shake is launched from the
application icon).
To add a variable for Terminal commands, enter the following (edited to suit your own
project) into $HOME/.cshrc or $HOME/.tcshrc:
setenv NR_INCLUDE_PATH "//MyMachine/proj/include;/Documents/shake_settings/include"
The following is an example of a .tcshrc file for illustration purposes only:
setenv shake_dir /Applications/Shake4/shake.app/Contents/MacOS/shake
setenv shk_demo /Documents/project_03
set path = (. $shake_dir $path)
setenv NR_INCLUDE_PATH /Documents/project_03
setenv NR_FONT_PATH /System/Library/Fonts
alias ese vi $HOME/.tcshrc
alias s. source $HOME/.tcshrc
alias lt ls -latr
alias ui cd $HOME/nreal/startup/ui
alias st cd $HOME/nreal/include/startup
alias shake $shake_dir
This file sets the Shake directory, and points to the directories that you want Shake to
scan for macros, user interface settings, and so on (/Documents/project_03), and for
fonts (/System/Library/Fonts).
Note: Alias definitions or environment variables saved in a .tcshrc file are read the next
time you log in. To make the alias or environment variable effective immediately,
update your alias definition by sourcing the .tcshrc file. Type the following:
source .tcshrc
To edit the .tcshrc file, use pico or vi (or another shell editor). Once your changes are
made, save the .tcshrc file.
Shake Variables
Shake recognizes the following variables:
• shell variables: The File Browser recognizes environment variables; for example,
$pix appears in the Browser if Shake is run with that variable set.
• NR_CINEON_TOPDOWN: When set, that is,
setenv NR_CINEON_TOPDOWN
Cineon frames are written in the slower top-down method for compatibility with other,
less protocol-observant, software.
• NR_FONT_PATH: Points to a directory where you want Shake to scan for additional
fonts used by the Text/AddText functions. In Mac OS X, fonts are stored in
/Library/Fonts and $HOME/Library/Fonts. On Linux systems, fonts are
typically stored in /usr/lib/DPS/AFM.
• NR_ICON_PATH: Points to a directory where you can save your own icons for Shake
functions. Typically, this would be an nreal/include/startup directory that you create.
• NR_INCLUDE_PATH: Points to the directory or directories that you want Shake to scan
for macros and personal machine or Shake interface settings. These directories
should have startup/ui as subdirectories. For example:
setenv NR_INCLUDE_PATH /shots/show1/shake_settings
In this case, /shots/show1/shake_settings/include/startup/ui should exist.
• NR_SHAKE_LOCATION: Points Shake to a nonstandard installation area. Default
installation is /usr/nreal/.
• NR_TIFF_TOPDOWN: This is identical to NR_CINEON_TOPDOWN, except it applies to
TIFF files.
• TMPDIR: Points to the directory you want to use as your temporary disk space
directory.
• NR_GLINFO: Information is printed for Flipbooks.
Using Aliases
An alias is a pseudonym or shorthand for a command or series of commands, for
example, a convenient macro for a frequently used command or a series of commands.
You can define as many aliases as you want (or, as many as you can remember) in a
.tcshrc file.
To see a current list of aliases, type the following in a shell:
alias
To start Shake from the Terminal window:
alias shake /Applications/Shake4/shake.app/Contents/MacOS/shake
To determine how many users are currently working on the system:
alias census 'who | wc -l'
To display the day of the week:
alias day 'date +"%A"'
To display all Shake processes that are running:
alias howmany 'ps -aux | grep shake'
To Test Your Environment Variable
There is a simple way to test if your environment variable exists. In Terminal, type
"echo," followed by the environment variable, for example:
echo $myproj
and the proper value should be returned.
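The same check works in any shell. Here is the sh/bash equivalent, where export replaces the tcsh setenv command; the variable name myproj and its value are the illustrative ones used earlier:

```shell
#!/bin/sh
# sh/bash equivalent of the tcsh command "setenv myproj /Documents/MyBigFilm".
export myproj=/Documents/MyBigFilm

# Echo the variable to confirm it is set; an empty line means it is not.
echo "$myproj"
```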
Interface Devices and Styles
This section discusses considerations when using a stylus, setting mouse behavior,
using a two-monitor system, and setting the monitor resolution.
Using a Stylus
1 In the Globals tab, open the guiControls subtree.
2 Set the virtualSliderSpeed parameter to 0.
When virtualSliderMode is enabled, dragging left or right in a value field decreases or
increases the parameter value beyond the normal slider limits.
Note: The stylus does not allow you to use your desktop space the same way as with a
mouse, so you have to enable virtualSliderMode.
Dual-Head Monitors
Choose View > Spawn Viewer Desktop to create a new Viewer window that floats
above the normal Shake interface. You can then move this Viewer to a second monitor,
clearing up space on the first for node-editing operations.
Important: This only works when both monitors are driven by the same graphics card.
The following is a handy ui directory command to automatically create the second
Viewer Desktop:
gui.dualHead = 1;
// This is an example of what you can do to open a second
// viewer desktop on the other monitor.
if(gui.dualHead) spawnViewerDesktop(1290,10,1260,960);
For information on using a broadcast video monitor, see “Viewing on an External
Monitor” on page 330.
Customizing the Flipbook
The following arguments have been added to the Flipbook executable as global plugs,
allowing you to specify an external Flipbook as the default. Specify these plugs using a
.h file in the startup directory. The global plugs and their default values are:
gui.externalFlipbookPath = "shkv"; // the flipbook's name -- this should include the full path
gui.flipbookStdInArg = "-"; // instructs the flipbook to take data from StdIn
gui.flipbookExtraArgs = ""; // allows you to enter any extra arguments the flipbook needs
gui.flipbookZoomArg = "-z"; // sets the zoom of the flipbook
gui.flipbookTimeArg = "-t"; // the time range argument
gui.flipbookFPSArg = "-fps"; // the frames per second argument
Note: If the specified external Flipbook doesn’t support one of these arguments,
setting its value to an empty string ("") prevents that value from being passed to it.
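As a sketch, a startup .h file that points Shake at an external flipbook might look like the following. The application path and the extra argument are hypothetical; substitute the real path and flags of the flipbook you use:

```
// Hypothetical example: use an external flipbook named "myflip".
// The path and the "-rgb" flag below are illustrative only.
gui.externalFlipbookPath = "/Applications/myflip.app/Contents/MacOS/myflip";
gui.flipbookStdInArg = "-";      // read image data from StdIn
gui.flipbookExtraArgs = "-rgb";  // hypothetical extra flag for this flipbook
```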
Configuring Additional Support for Apple Qmaster
You can enable additional support for Apple Qmaster by adding the following global
plug to a .h file in the startup directory:
sys.useRenderQueue = "Qmaster";
This setting causes additional options to appear in the Render Parameters window
when you choose Render > FileOut Nodes. These options become visible when you
open the renderQueue subtree.
If Apple Qmaster isn’t installed but the sys.useRenderQueue plug is declared, a
message is sent to the console upon startup, and the following options do not appear.
RenderQueue Options
• queueName: The name of the render queue software being used. If Apple Qmaster is
installed, “Qmaster” appears here.
• useQueue: When useQueue is turned on, the FileOut nodes specified by the
renderFileOuts parameter are sent to the render queue when you click Render. By
default, useQueue is turned off. Setting renderFileOuts to All sends all FileOut nodes
to the render queue software. Setting renderFileOuts to Selected only sends the
selected FileOut nodes to the render queue software.
• jobTitle: Enter the name you want to use to keep track of this job here.
• workingDir: The directory in which you want to store the temp script used by the
render queue. The temp script is a temporary duplicate of your script that the
computers in the specified cluster can access to perform the job.
• cluster: A pop-up menu that allows you to choose which cluster you want to use to
perform the job. All clusters set up in your render queue software will appear here.402 Chapter 14 Customizing Shake
• minFrames: Use this field to specify the minimum number of frames you want to be
processed by each computer in the cluster.
• timeout: The time, in seconds, a computer on a cluster can be idle before that part of
the job is re-routed to another computer.
• priority: A pop-up menu that allows you to choose the priority of the job.
• delay: A pop-up menu that allows you to delay when the render queue software
starts the job you’re submitting. The options are 15 minutes, 30 minutes, 1 hour, or
2 hours.
• batchMonitor button: Click batchMonitor to launch the Apple Qmaster Batch Monitor
application.
Part II: Compositing With Shake
Part II contains detailed information on how to perform
compositing tasks using all the tools and functions Shake
provides.
Chapter 15 Image Processing Basics
Chapter 16 Compositing With Layer Nodes
Chapter 17 Layered Photoshop Files and the MultiLayer Node
Chapter 18 Compositing With the MultiPlane Node
Chapter 19 Using Masks
Chapter 20 Rotoscoping
Chapter 21 Paint
Chapter 22 Shake-Generated Images
Chapter 23 Color Correction
Chapter 24 Keying
Chapter 25 Image Tracking, Stabilization, and SmoothCam
Chapter 26 Transformations, Motion Blur, and AutoAlign
Chapter 27 Warping and Morphing Images
Chapter 28 Filters
15 Image Processing Basics
Shake gives you explicit control over every aspect of
image processing. This chapter covers the basics of image
processing, and how to control the flow of image data in
your own Shake scripts.
About This Chapter
This chapter covers important information about topics that are fundamental to
compositing within Shake.
These topics include:
• Resolution handling via the Infinite Workspace
• Bit depths
• Alpha channel information
• Premultiplication
• Logarithmic colorspace support
If you’re a new user, or an experienced user who has always wondered why some
things don’t seem to turn out like you’d expect, it is well worth your time to review this
information to better understand and control how images are processed in your scripts.
Taking Advantage of the Infinite Workspace
One of the most powerful features of Shake is the Infinite Workspace. Shake optimizes
image processing by rendering only the portions of each image that are currently
exposed within the frame, no matter what the original resolution.
This means that if, for example, you have a very small element that is 100 x 100 pixels,
and you pan it 50 pixels in the X axis and 50 pixels in the Y axis, three-fourths of the
image will extend outside of the 100 x 100-pixel frame. Although Shake does not
calculate the “unseen area,” the portion of the image outside the boundary of the
frame is preserved. If, later in your script, you composite that image over another image
with a 400 x 400 pixel frame, the parts of the image that were previously outside of the
frame reappear. Because of the Infinite Workspace, you never lose image data as a
result of transformations or resolution changes.
In the following example, the moon image is scaled, and panned up and to the right,
resulting in the image moving completely offscreen. When the traffic image is later
composited with the moon image using a Screen node, the hidden moon image
appears in its entirety.
Even though Shake calculates only the visible parts of the image within the frame, the
image information outside of the frame is preserved for later use. This powerful feature
optimizes the operations within your script, has almost no memory or calculation cost,
and eliminates many potential difficulties when combining and transforming images of
varying resolutions.
[Figure: node tree in which the Moon image feeds a Move2D1 node and is combined with the traffic image by a Screen1 node]
Nodes that modify image resolution also take advantage of the Shake Infinite
Workspace. For example, if you apply a Crop node to a 20,000 x 20,000-pixel image,
Shake calculates only the area of the image specified in the node. This is true even if
you employ other nodes prior to the Crop. The Infinite Workspace allows Shake to limit
the processing power needed by your script, because only the contents of the Crop
window are calculated. This makes Shake ideally suited for high-resolution functions
such as scrolling a large background image under lower-resolution foreground
elements.
When working with the Infinite Workspace, bear in mind the following:
• You do not need to crop a small image before it is composited with a larger image
when you are panning the image. Simply read in your image, apply the pan, and
composite it over or under the larger image.
• The Blur node gives you the option to blur pixels outside of the frame using the
spread parameter. When set to 0, only pixels inside the frame are considered. When
set to 1, outside pixels are calculated into the blur as well. When you read in an
image and then blur the image, set the spread to 0. Otherwise, black ringing occurs
around the edge because Shake adds the empty black region beyond the image
border into its blur calculation. If you read in the image, scale it up, and then blur,
you should set spread to 1, since there are now non-black pixels outside of the
frame.
• If you transform an object, apply a color correction, then transform the object back
to its original state, the entire image retains the result of the color correction, not just
the portion that was in the frame when you applied the color correction.
Clipped Images
If an image is clipped, it is usually because a Crop node has been applied, or because
you have applied a Blur with a spread set to 0, which is including black outside the
image area. Set spread to 1 in the Blur node’s parameters.
[Figures: results with spread = 1 and with spread = 0]
Note: You must be careful when pulling a bluescreen matte with the ChromaKey
node. The outside black pixels are considered invisible because the node is keying a
non-black color.
To disable the effect of the Infinite Workspace, insert a Crop node and don’t modify its
default values (which does not change the resolution). This cuts off the area outside of
the frame, replacing it with black pixels. The Viewport node is similar to Crop, but it does
not disable the Infinite Workspace.
Note: Be very careful when scaling elements up, applying an operation, then scaling
back down. When you apply an operation to the scaled element, even though your
frame is small, Shake will calculate everything outside of the frame when you scale the
image back down to fit in the frame.
For more information on the Infinite Workspace, see “Color Correction and the Infinite
Workspace” on page 617. You can also see “The Domain of Definition (DOD)” on
page 82.
Bit Depth
Bit depth describes how many values are used to describe the range of colors in an
image. The number of steps in a range of color is calculated by taking 2 to the nth
power, where n represents the number of bits. For example, a 1-bit image gives you
two values—black and white. A 2-bit image gives you 2², or 4 color values per channel.
Bit depth directly affects image quality in several ways.
Comparing Different Bit Depths
Higher bit depths allow you to more realistically represent a wider range of color, by
ensuring that the gradients between similar colors are smooth. Using a bit depth that’s
too low results in what is sometimes described as color banding—where, for example,
you can actually see the limited number of colors in between two shades of blue.
For a better understanding of how this happens, you can look at how a range of color
is represented at varying bit depths on a graph. In a simplification, the following charts
display a grayscale ramp in 1-bit, 2-bit, 3-bit, and 8-bit depths.
Note: These examples of 1-bit, 2-bit, and 3-bit images are not supported by Shake, but
are used for demonstration purposes. In Shake, you ordinarily work with 8-bit, 16-bit, or
32-bit float (floating point) images.
At 1-bit resolution, the graph shows the harsh difference between black and white.
At 2-bit resolution, the graph is still harsh, but there are more colors between.
[Figures: 1 bit, 2 values total, with graph of 1-bit image; 2 bits, 4 values total, with graph of 2-bit image; 3 bits, 8 values total, with graph of 3-bit image]
At 3-bit resolution, you begin to see a gradient from black to white, although the graph
is still choppy.
Finally, at 8 bits, you can see a smooth transition, and the graph line is almost straight.
These graphs demonstrate that more bits used to represent an image results in finer
color transitions. Digital film compositing refers to bit depth on a per-channel basis, so
8 bits refers to 8 bits per channel, or 32 bits total for an RGBA (Red, Green, Blue, and
Alpha) image. Shake can calculate up to 32 bits per channel, or 128 bits total in an
RGBA image. To complicate things, because 8 bits equals 1 byte, images in Shake are
set with a byte value of 1 (8 bits), 2 (16 bits), or 4 (32 bits, or float).
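The 2-to-the-nth arithmetic above is easy to check in the Terminal; this loop prints the number of values per channel at each of the bit depths discussed:

```shell
#!/bin/sh
# Values per channel = 2^n, computed with a bit shift (1 << n).
for bits in 1 2 3 8 16; do
  echo "$bits bits: $((1 << bits)) values per channel"
done
```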
Ultimately, the color depth you decide to work in depends on the destination of the
end result. For example, because of the responsiveness of film and the size of the
screen, an image that looks fine at 8 bits on video can look terrible on film. On the
other hand, higher bit depths are more processor-intensive. You need to strike a
balance between quality and speed.
Avoiding Color Banding
Most non-film composites work fine at 8 bits, which is typically the standard output of
most 3D renderers and paint packages. However, there are times when you need to use
a higher bit depth to process your images—requiring you to increase an image’s bit
depth to 16 bits, or a whopping 65,000 (more or less) values per channel.
A typical example of when higher bit depths are better is whenever you process a
ramp of similar color values (for example, the light-to-medium blue found within an
image of the sky) across a wide screen space. An 8-bit image, though sometimes
indiscernible from 16 bits on a computer monitor, will probably exhibit color banding
when printed to film. If your sky image is generated in 8 bits in a different software
package, there is no immediate improvement if you bump it up to 16 bits in Shake. In
this example, the ramp needs to be generated in 16 bits to take advantage of the extra
precision of 16 bits (for example, using the Shake Ramp or RGrad node).
[Figure: 8 bits, 256 values total, with graph of 8-bit image]
Note: In 8-bit images there is no 50 percent point—you have a smidgen less than 50
percent gray and a smidgen more than 50 percent, but you cannot get an exact 50
percent value. This occasionally becomes an issue when creating and using macros.
If this is the case, then why not always work at 16-bit resolution? Most film houses do,
but it comes at the expense of slower calculations, more memory required, and larger-sized image files requiring significantly more hard disk space.
Float
The Shake 32-bit representation is “float”—values can go above 1 or below 0. In all of
the above examples, the ramp ranges from 0 to 1. If you add two 8-bit ramps together,
the white values are added together (1+1), but clipped at 1. This is fine visually, but you
may later do other mathematical computations in which it is important to realize that
1+1 is 2, not 1. A good example is the Z channel, which is always in float. The Z channel
is usually generated by a 3D render, and supplies the distance on a per-pixel basis from
the object to the “camera.” Therefore, values could go from 0 to infinity. If you swap
your Z channel into your red channel, you do not want it clipped off at 1, because you
could not tell the difference between the pixels that are 2 units away and the pixels
that are 1000 units away. A float representation, however, maintains these values.
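The clipping behavior described above can be sketched numerically. Assuming values are clamped at 1.0 in the integer bit depths, adding two white pixels gives:

```shell
#!/bin/sh
# Adding two white (1.0) pixels: integer bit depths clamp the sum at 1,
# while a float representation keeps the true value of 2.
awk 'BEGIN {
  a = 1.0; b = 1.0
  sum = a + b
  clipped = (sum > 1.0) ? 1.0 : sum
  printf "float: %g  clipped: %g\n", sum, clipped
}'
```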
Bit Depth Independence
Shake recognizes and maintains the bit depth of incoming images—except for 10-bit
Cineon files, which are automatically boosted to 16 bits. Because Shake concatenates
color corrections, in Shake you are penalized less frequently when working at 8 bits
than you are in other software. This is because adjacent color corrections are collapsed
into a single mathematical lookup table, enabling Shake to perform the overall
computation in float. The resulting image is returned to its source bit depth.
With the use of the Bytes node, you have the option of modifying your image to a
higher or lower bit depth. As the name implies, the Bytes node takes bytes as its
argument, so a value of 1 equals 8 bits, 2 equals 16 bits, and 4 equals 32 bits (or float).
(There is no “3 bytes” setting.) For information on the Bytes node, see “The Bytes Node”
on page 413.
You might need to use a higher bit depth when employing certain nodes, such as
Emboss and Blur, since they naturally create smooth gradations. In the following
example, the image on the left has a Blur node and an Emboss node applied. At 8 bits,
terracing appears. By inserting an Other—Bytes node at the beginning of the node tree
set to 2 bytes (16 bits), the Emboss effect is smoothed.
Be sure to increase the bit depth of the image before the Blur node. This does not
mean you need to insert a Bytes node before every Blur. Rather, use the Bytes node
when you plan to apply numerous operations to a blurred image. Why? Because
blurred images are likely to display unwanted banding after multiple operations are
applied.
[Figures: the same image at 8 bits and at 16 bits; a 2-bit image allows 4 values per channel, a 16-bit image allows 65,535 values per channel]
You can seamlessly layer images of different bit depths together. This results in the
lower bit-depth image being automatically promoted to the higher of the two bit
depths. (For example, an Over node compositing an 8-bit image with a 16-bit image
results in a 16-bit image.) This is an automatic operation, invisible to the user.
To reverse this behavior, insert a Bytes node before the Over node on the 16-bit image
to reduce the image to 8 bits.
Bit-depth level is calculated locally in the node tree. In the previous example mixing
8-bit and 16-bit images, only the sections of the node tree that come after the 8-bit to
16-bit conversion in the Over node are calculated at 16 bits.
Cineon File Bit Depth
10-bit Cineon files are automatically promoted to 16 bits when read in or written by
Shake, so you don’t have to worry about any data loss. However, the linearization of the
log files may result in loss unless you first promote your images to float. For more
information, see “The Logarithmic Cineon File” on page 437.
The Bytes Node
The Bytes node converts the input image to a different bit depth. The bit depth is
counted in bytes per channel. To view the current bit depth of an image, you can look
at the title bar of the Viewer, or look at the output of shake -info in the Command
Line field at the bottom of the interface. 10-bit Cineon files are automatically pushed to
16 bits when read by Shake.
Note: When compositing images of different bit depths, you do not need to force the
images to conform; Shake automatically pushes the lower bit-depth image to the
higher bit depth.
Parameters
This node displays the following control in the Parameters tab:
outBytes
Forces the incoming image into a new bit depth. There are three buttons,
corresponding to three values in the outBytes parameter field.
• 1 = 1 byte per channel, or 8 bits per channel.
• 2 = 2 bytes per channel, or 16 bits per channel.
• 4 = 4 bytes per channel, or 32 bits per channel (float).
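In the text of a saved script, the node appears as a function call whose second argument is the outBytes value. A minimal sketch, assuming an existing FileIn node named FileIn1:

```
// Promote FileIn1 to 16 bits per channel (2 bytes).
Bytes1 = Bytes(FileIn1, 2);
```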
Channels Explained
Shake supports and tracks different numbers of channels in an image in your
composition, giving you channel independence as well as bit-depth and resolution
independence.
For information on displaying different channels in the Viewer, see “Using and
Customizing Viewers” on page 45.
Shake uses the following channel codes:
Code                  Description
BW (Black and White)  1-channel grayscale image.
A (alpha)             1-channel matte image.
Z (depth)             1-channel depth image, always in float.
BWA                   2-channel grayscale image with matte channel.
RGB                   3-channel color image, representing Red, Green, and Blue information.
RGBA                  4-channel color image with matte channel.
An additional Z channel can be added to any of the above, so the maximum number of
channels you can use to represent a single image is five: RGBAZ. Unfortunately, the Z
channel does not show up in the Viewer unless you use the View Z script button.
(Whether or not the View Z script is active, you can always check the Viewer title bar to
see whether the currently loaded image contains a Z channel.)
Combining Images With Different Channels
In Shake, you can combine images that use different channels. For example, you can
composite a 2-channel image over a 4-channel image.
Shake is optimized to work on a per-channel basis—a 1-channel image usually
calculates about three times faster than a 3-channel image. For this reason, if you read
in masks from a different package, you are encouraged to make them 1-channel
images to reduce disk space usage and processing time.
If you apply an operation that changes channel information, Shake automatically
updates which channels are used. For example, if you place a Color–Monochrome or
Filter–Emboss node on an RGB image, that image becomes a BW image at that point,
speeding up the calculation of the subsequent nodes. If you then composite the image
over an RGB image or change its color (for example, via a Mult node with values of 1.01,
1, 1), the BW image becomes an RGB image again.
In certain situations, this behavior may seem nonintuitive. For example, a 3-channel
image composited with an Inside node to a matte image still results in a 3-channel
image—no matte channel is added to the result. This eliminates the need to add an
alpha channel to the 3-channel image just to combine it. If, however, you want to add
or remove channels at some point, you can use the Copy, SwitchMatte, Color–Set, or
Color–Reorder node.
Viewing the Number of Image Channels
There are several ways you can see how many image channels are being used by the
current node.
To determine the number of channels in an image, do one of the following:
• Load the image into the Viewer, then look in the Viewer title bar.
• Position the pointer over the node in the Node View and look at the Info field at the
bottom of the interface.
• In the Command Line field, type:
shake my_image -info
• To view the Z channel, use the View Z script button.
For more information on displaying channels in the Viewer, see “Using and Customizing
Viewers” on page 45.
Displaying Individual Channels in the Viewer
If necessary, you can display each individual channel in the Viewer to help you
fine-tune your composite.
To view the alpha channel of an image, do one of the following:
• Position the pointer in the Viewer (or the Flipbook), then press A.
• Click the View Channel button, then choose the alpha channel option from the
pop-up menu.
To return to viewing the RGB channels, do one of the following:
• Position the pointer in the Viewer, then press C.
• Click the View Channel button, then choose the RGB channel option from the
pop-up menu.
To display the R, G, or B channels individually, do one of the following:
• Press R to display the Red channel.
• Press G to display the Green channel.
• Press B to display the Blue channel.
• Click the View Channel button, then choose a color channel from the pop-up menu.
• Right-click the View Channel button, then choose a color channel from the shortcut
menu.
Changing the Number of Image Channels
As stated above, certain operations automatically add or remove channels. For
example, the Emboss and Monochrome nodes change an RGB image to a BW image,
and a non-uniform Add node changes a BW image to an RGB image. You can also
explicitly change the number of channels in an image with specific nodes.
The following nodes also potentially modify image channels:
• Color–Add (adds R, G, B, A, or Z): A value raised above 0 creates the specified channel.
• Color–Brightness (removes RGB): A brightness value set to 0 removes the RGB channels.
• Layer–Copy (adds R, G, B, A, or Z): Copies a channel from the second input into the
first input. If you copy Z, the second image must have the Z channel.
• Filter–Emboss (turns an RGB image into a BW image): This, of course, radically alters
your image.
• Color–Monochrome (turns an RGB image into a BW image): Uses a luminance balance,
but you can adjust this to push specific channels.
• Color–Mult (removes R, G, B, A, or Z): Setting the R, G, B, A, or Z value to 0 removes
the specified channels.
• Color–Reorder (adds or removes R, G, B, A, or Z): By using n or 0, you remove the
specified channel:
• rgbn or rgb0 removes the alpha channel.
• rgbal adds the luminance into the Z channel, thereby creating a Z channel.
• rrra creates a 2-channel image, assuming the “a” channel is not black.
• 000a or nnna turns the image into a 1-channel alpha image.
• Color–Set (adds or removes R, G, B, A, or Z): A value set to 0 removes the specified
channel. A channel parameter set to something other than 0 adds that channel.
• Layer–SwitchMatte (adds A): Copies in any channel from the second input for use as
the new alpha channel for the first input.
Many operations allow you to select which channel is used as the modifying channel.
For example, the SwitchMatte, KeyMix, and IBlur nodes give you the option to select the
R, G, B, or A channel as your control or alpha channel. This often removes the need to
swap your channels before you do many operations. Two exceptions to this are the
Inside and Outside nodes, which always depend on the second image’s alpha channel.
To convert a BW (or BWA) image into an RGB (or RGBA) image without changing its
values, use the Command Line field to specify the following operation:
shake myBlackAndWhiteImage.iff -forcergb -fo myRGBImage
For more information on channel/compositing functions, see Chapter 16, “Compositing
With Layer Nodes,” on page 451.
Compositing Basics and the Alpha Channel
The Shake compositing nodes are located in the Layer tab. The primary compositing
nodes are the Over and KeyMix nodes. For more information on how to use these
nodes, see Chapter 16, “Compositing With Layer Nodes,” on page 451.
Example 1: Compositing Using the Over Node
The following images are used in this discussion to demonstrate how the primary
compositing nodes work:
[Figures: foreground (with mask); foreground alpha channel (part of the foreground image); background]
As its name implies, the Over node places a foreground image with an alpha channel
over a background image. The foreground RGB channels should be premultiplied by
the alpha channel. In a premultiplied image, the RGB channels are multiplied by the
alpha channel.
Important: Premultiplication plays a vital role in compositing, and Shake gives you
explicit control over premultiplying and unpremultiplying your images. For more
information, see “About Premultiplication and Compositing” on page 421.
If the image is not premultiplied, it can be premultiplied in one of two ways:
• Add a Color–MMult node before the Over node in the process tree.
• Use the preMultiply toggle in the Over node’s parameters.
Example 2: Compositing Using the KeyMix Node
KeyMix, the second most important compositing node, mixes a foreground input image
and a background input image through a third, separate, input image—the mask. You
can select which channel of the third image works as the mask. You can also invert the
mask and control its intensity.
As mentioned previously, a successful Over composite requires an alpha channel for the
foreground, and foreground RGB channels that are premultiplied by that alpha channel.
3D-rendered elements are almost always premultiplied. Scanned elements or other
2D-generated plates require an added alpha channel (also called the matte or mask
channel) that is used to premultiply that image with the Color–MMult node.
To get the necessary alpha channel, you have several options:
• Pull a key with a Shake keying node (or combination of nodes).
• Pull a key in a different software package and read the images into Shake. Copy the
key into the alpha channel of the foreground image (with the Copy or SwitchMatte
node). Finally, apply an MMult, and then composite.
• Draw a mask with an Image–RotoShape node.
• Paint a mask with an Image–QuickPaint node.
• All of the above, combining masks with the IAdd, Max, Inside, or Outside node.
[Figures: tree with Over node, and the result]
Example 3: Assigning an Alpha Channel With the SwitchMatte Node
In the following example, the mask is drawn using the QuickShape node and copied in
as the alpha channel for the bus via the SwitchMatte node. Because no color corrections
have been made, the MatteMult toggle is used in SwitchMatte to premultiply the
foreground.
The bus is color corrected in the next example, so preMultiply is disabled in the
SwitchMatte node, and enabled in the Over node. Or, you can also insert a Color–MMult
node between SwitchMatte1 and Over2.
[Figures: tree with SwitchMatte node; matte; bus (no alpha); QuickPaint; background; Over result; premultiplied result of SwitchMatte node]
In the following example, MDiv and MMult nodes are added to color correct a
3D-rendered element. Again, you can alternatively omit the MMult, and enable
preMultiply in the Layer–Over node.
For an example of color correcting premultiplied elements, see Tutorial 3, “Depth
Compositing” in the Shake 4 Tutorials.
Combining Images in Other Ways
Shake also has a set of mathematical and Boolean layering operators. The IAdd, IDiv,
IMult, ISub, and ISubA nodes add, divide, multiply, or subtract two images. The “I”
stands for “Image.” The second subtracting node, ISubA, returns the absolute value of
the difference. If you place a dark gray image (value .2, .2, .2) in the first input, and a
white image (1, 1, 1) in the second input, ISubA returns a light gray image
(.2 - 1 = -.8; taking the absolute value returns .8, .8, .8). This is a quick way to
compare two images.
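The ISubA arithmetic above can be sketched per pixel like this (illustrative Python, not Shake script):

```python
def isuba(a, b):
    """Absolute difference of two pixels, channel by channel."""
    return tuple(abs(x - y) for x, y in zip(a, b))

dark_gray = (0.2, 0.2, 0.2)
white = (1.0, 1.0, 1.0)

# .2 - 1 = -.8; the absolute value gives .8 in each channel.
result = isuba(dark_gray, white)
```

Because the result is the same regardless of input order, ISubA is handy for difference mattes: identical regions of two plates go to black.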
The more flexible tool for generating difference mattes is the Common node, which is
used to isolate common or different elements between two images. The other
mathematical operators are Min and Max, which return the lower or higher values of
two images, respectively. The Boolean operators Inside, Outside, and Xor are also useful
for masking images:
• Inside places the foreground image inside of the background alpha.
• Outside places the foreground image outside of the background alpha.
• Xor reveals only areas without a common alpha area.
Shake contains several effect operators:
• ZCompose for Z-based compositing
• Screen, to composite light elements and preserve highlights
• Atop, to place an element only on the foreground image. (For example, you can use
the Atop node to place a smoke element over a CG character, matching smoke in the
background plate.)
[Figures: color correcting, and the result]
Using the ClipMode Parameter of Layer Nodes
You can easily composite elements of any resolution. To set the output resolution of a
composite that contains images of multiple resolutions, go to the compositing node’s
parameters and use clipMode to toggle between the foreground or background (as the
output resolution). This applies to all layering commands. An element composited over
a differently sized background is one way to set your output resolution. For more
information on setting resolution, see Chapter 3, “Adding Media, Retiming, and
Remastering,” on page 107.
As outlined in the “About Channels” section above, you can easily combine 1-channel,
2-channel, 3-channel, 4-channel, or 5-channel images. For example, you can combine a
luminance image with an alpha channel (2 channels) over a 5-channel image, using an
Over node.
For more information on compositing math and the individual layer functions, see
Chapter 16, “Compositing With Layer Nodes,” on page 451.
About Premultiplication and Compositing
An understanding of premultiplication in compositing is essential to understanding
how to combine the tasks of compositing and color correction. This section details the
process that makes standard compositing functions work, and defines what
premultiplication actually is. Regardless of your compositing software, the concept of
premultiplication is important for understanding what can go wrong, and how to fix it.
The definition of a premultiplied image is simple—an image that has its RGB channels
multiplied by its alpha channel. Typically, images from a 3D renderer are premultiplied.
This means that the transparent areas are black in both the RGB channels and the
alpha channel. In premultiplied images, the RGB channels never have a higher
value than the alpha channel.
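In code form, premultiplication is nothing more than a per-channel multiply; a minimal sketch (the function name is illustrative, not a Shake API):

```python
def premultiply(rgb, a):
    """Multiply each RGB channel (0.0-1.0) by the alpha value."""
    return tuple(c * a for c in rgb)

# A half-transparent mid-gray pixel:
rgb_pre = premultiply((0.5, 0.5, 0.5), 0.5)

# For RGB values in the 0-1 range, no premultiplied channel
# can exceed the alpha value.
assert all(c <= 0.5 for c in rgb_pre)
```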
Premultiplication should always be considered whenever you have to modify a
foreground element and composite it over a background image. In particular, the
premultiplication of two or more composited images should be considered whenever
you do one of the following:
• Perform color correction, in particular with nodes that raise the black level, such as
Add, Clamp, ColorMatch, ColorCorrect, Compress, and Contrast
• Apply filter nodes
Some software packages take care of image premultiplication for you—they hide the
state of premultiplication. This works fine nine times out of ten, but for that last
problematic 10 percent, there is no practical solution. Still other compositing packages
pretend premultiplication doesn’t exist and encourage bad habits, such as chewing on
your matte, or modifying foreground/background alpha multiplication curves.
Shake takes a third approach, giving you explicit control over premultiplication for
every image in your composition. Although this can be inconvenient, it helps you get
around the problems that are typical in other software packages.
Problems Caused When Premultiplication Is Ignored
There are two typical problems that occur when the premultiplied state of an image is
ignored:
• Unwanted fringing around a masked subject
• Unwanted side effects that occur when a node affects parts of the image that ought
not to be affected
The following exercise demonstrates these problems.
To experience the heartbreak of premultiplication errors:
1 Open the Image tab, and click the FileIn node.
The File Browser opens.
2 In the File Browser, navigate to the directory where you copied the tutorial media.
The default path is $HOME/nreal/Tutorial_Media/Tutorial_Misc/premult.
3 Select the bg.jpg and munch_pre.sgi images, then click OK.
The nodes appear in the Node View.
The image munch_pre.sgi, rendered in Maya, is premultiplied (the blacks in the alpha
match the blacks in the RGB), and has an embedded alpha channel.
In a Nutshell—the Rules of Premultiplication
If you don’t read the full explanation of the mathematics of premultiplication, here
are the two rules you must always follow when creating a composition in Shake:
• Rule Number 1: Always color correct unpremultiplied images. To unpremultiply an
image, use a Color–MDiv node.
• Rule Number 2: Always filter and transform premultiplied images. To premultiply an
image, use a Color–MMult node.
[Figures: munch_pre.sgi; munch_pre.sgi, alpha channel; bg.jpg]
Images from Munch’s Oddysee courtesy of Oddworld Inhabitants.
4 In the Node View, select the munch_pre node, click the Layer tab, then click the
Over node.
An Over node is added to the munch_pre node.
5 Connect the bg node to the Background input (the second input) of the Over node.
6 Now, insert a Color–ContrastLum node between the munch_pre node and the Over
node, and set the ContrastLum value to .5.
If you zoom into the edges, a white edge appears around Munch. This unwanted fringe
is problem number one.
7 To replace the ContrastLum node, select it in the Node View and Control-click the
Color–Add node.
The ContrastLum node is replaced by an Add node.
8 In the Add parameters, boost the Color (red/green/blue) values to approximately .4.
Note: To adjust the color values, you can also press O (for Offset) and drag the pointer
to the right over the Add color control.
Although the Add node is only applied to the munch_pre node, the entire image is
brightened. A node affecting more of the image than it’s supposed to is problem
number two.
If you ignore the premultiplication of your composites, you may have problems with
edges, or with raised global levels. Many people see these types of errors, and assume
it is a mask problem, so they pinch in the mask a bit. Even more extreme people have
erroneously assumed you cannot color correct premultiplied images. These problems
are easily solved through proper management of premultiplication.
The Math of Over and KeyMix
To understand why premultiplication problems occur, it is best to cut straight to the
heart of compositing by understanding how a standard composite operator (foreground
through a mask over a background) works. This has nothing to do with Shake, but in
fact was worked out in the 1970s by two clever fellows named Porter and Duff.
The following is the math equation they developed. In this equation, A is the
foreground’s alpha channel:
Comp = (Fg * A) + ((1-A)*Bg)
Rather than go into this too deeply, look briefly at the significant part, (Fg*A). This is
the definition of premultiplication—“RGB multiplied by its alpha.” To avoid the math,
you can build the composite with other nodes, in effect constructing an Over node
from scratch.
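In code form, the Porter-Duff equation is a one-line per-channel operation; here is an illustrative Python sketch (not Shake script):

```python
def composite(fg, a, bg):
    """Comp = (Fg * A) + ((1 - A) * Bg), applied per channel."""
    return tuple(f * a + (1.0 - a) * b for f, b in zip(fg, bg))

# A 50%-transparent white foreground over a black background
# yields mid-gray.
pixel = composite((1.0, 1.0, 1.0), 0.5, (0.0, 0.0, 0.0))
```

Wherever the alpha is 1, the background is entirely replaced; wherever it is 0, the background shows through untouched.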
The following example (continued from the above section) shows how to build the
compositing equation.
To build the compositing equation:
1 Read in the munch_unpremult.jpg and munch_mask.iff images from the /Tutorial_Misc/
premult directory.
The munch_unpremult image is an unpremultiplied image. It has no mask, so there is
no correspondence between black pixels in the RGB channels and black pixels in the
mask to describe transparency in the image.
2 The first step in the formula is (Fg*A). To duplicate this using nodes, attach an IMult
node to munch_unpremult and connect the munch_mask node to the IMult background
input.
[Figures: munch_unpremult.jpg; munch_mask.iff]
The result is identical to munch_premult—the RGB is multiplied by the mask.
Next, invert the foreground alpha channel (1-A).
3 To do this, select the munch_mask node, then Shift-click the Color–Invert node.
The Invert node is attached to the munch_mask node as a separate branch.
The next step in the formula ((1-A)*Bg) calls for you to multiply the background by
the inverted alpha.
4 In the Node View, select the Invert node, then click Layer–IMult.
An IMult node is added to the Invert node.
5 Connect the bg node to the Background input of the IMult2 node.
Next, the crucial step—add the two results together.
6 In the Node View, select IMult1, then click Layer–IAdd. Connect the IMult2 node to the
background input of the IAdd node.
Comp = (Fg * A) + ((1-A)*Bg)
[Figures: tree; IMult2]
The result is exactly the same as the Over node.
By punching a hole in the background, the alpha determines what is transparent when
the two plates are added together. The key concept is that because you are adding,
anything that is black (a value of 0) does not appear in the composite.
The KeyMix and Over nodes do this math for you, saving you from having to create four
nodes to do a simple composite. The difference between Over and KeyMix is that Over
assumes that the foreground is premultiplied (identical to IMult1). KeyMix is only for
non-premultiplied images (identical to munch_unpremult). Strictly speaking, the math
breaks down like this:
KeyMix = (Fg * A) + ((1-A)*Bg)
Over = Fg + ((1-FgA)*Bg)
In these formulas, the foreground of Over is already multiplied by the foreground alpha
channel, thus the term premultiplied—it isn’t multiplied in the composite because it
was previously multiplied, usually by the 3D renderer.
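A quick numerical check shows the two formulas agree whenever the Over foreground really is the KeyMix foreground premultiplied by the same alpha (illustrative Python, not Shake script):

```python
def keymix(fg, a, bg):
    # KeyMix = (Fg * A) + ((1 - A) * Bg); Fg is unpremultiplied.
    return fg * a + (1.0 - a) * bg

def over(fg_pre, fg_a, bg):
    # Over = Fg + ((1 - FgA) * Bg); Fg is already premultiplied.
    return fg_pre + (1.0 - fg_a) * bg

fg, a, bg = 0.75, 0.5, 0.25

# Premultiplying the KeyMix foreground turns one formula into the other.
assert over(fg * a, a, bg) == keymix(fg, a, bg)
```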
Unpremultiplying an Image
With this knowledge, you can go back and start to understand the errors that occur
when the ContrastLum and Add functions are used with the Over node.
An Add node was originally attached to the premultiplied Munch.
[Figures: IMult1; IMult2]
To create the same error here, continue with the previous node tree and do
the following:
1 In the Node View, select the IMult1 node, then click Color–Add.
An Add node is inserted into the tree, between the IMult1 node and the IAdd1 node.
2 In the Add node parameters (click the right side of the node to load its parameters into
the Parameters tab), set Color to .5.
The same error occurs—the entire image brightens, not just Munch. This is not what
we want to happen.
For a clue, you can double-click the Add (Add2) node. There is no longer an exact
correlation between the black areas of munch_mask and the black areas of the RGB
channels.
[Figures: Add2; munch_mask]
Now things get a little odd. To reassert the mask, you might be tempted to insert
another IMult node after Add2, and connect the munch_mask node to the IMult3
background input. The image is premultiplied again and appears to work.
Although this looks fine, mathematically speaking, there is an error. You need to have
the correct compositing equation:
Comp = (Fg * A) + ((1-A)*Bg)
But, in fact, the above example multiplies the foreground twice (there are two IMult
nodes), so you have this:
Comp = (Fg * A * A) + ((1-A)*Bg)
While this solution appears to have worked in this example, further manipulation of the
resulting image data can reveal problems further down the tree. To test to see if this
solution looks fine in all cases, switch the Add node to a ContrastLum node.
To test the equation:
1 In the Node View, select the Add (Add2) node in the tree, then Control-click
Color–ContrastLum.
2 In the ContrastLum parameters, set the contrast value to .5.
[Figures: tree with IMult3 inserted; IAdd2]
If you zoom into the Viewer and look very closely, you’ll notice a dark rim appears
around the edge.
3 In the Node View, select the IMult3 node and press I.
The node is ignored, and the same nasty edges appear as before.
4 Press I again so the node is no longer ignored.
So, what is the solution? Simple math will help resolve this issue. Dividing something
by a number and then multiplying by the same number is the same as multiplying by
1—the equivalent of no change at all. If you multiply the foreground by the mask one
too many times, divide it by the alpha to balance it out.
Mathematically, here’s how it looks:
Comp = (Fg * A / A * A) + ((1-A)*Bg)
Or, abbreviating the equation, you return to the proper formula, since the two extra
alphas (A) cancel out:
Comp = (Fg * A) + ((1-A)*Bg)
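You can verify the cancellation with a single semi-transparent edge pixel (illustrative numbers, not values from the tutorial media):

```python
fg, a, bg = 0.8, 0.5, 0.4

correct = fg * a + (1 - a) * bg             # (Fg * A) + ((1 - A) * Bg)
doubled = fg * a * a + (1 - a) * bg         # foreground multiplied twice
repaired = (fg * a / a) * a + (1 - a) * bg  # IDiv cancels the extra IMult

assert doubled < correct    # double multiplication darkens the edge
assert repaired == correct  # dividing by A restores the formula
```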
To translate this into the node tree:
1 In the Node View, select the IMult1 node, then click Layer–IDiv.
An IDiv1 node is attached to the IMult1 node.
[Figures: Add switched to ContrastLum; IAdd1 detail; IAdd1 detail with IMult3 ignored]
2 Connect the munch_mask node to the IDiv1 background input.
The edge appears clean.
3 To test the result, select the IDiv1 node and press I several times to ignore and show
the node.
By dividing by the mask, the image is unpremultiplied and ready for color correction.
Therefore, you can conclude that color correction should be applied to an
unpremultiplied image.
After going through all this, you may be surprised to find out that the insertion of IDiv1
and IMult3 can be bypassed entirely.
To bypass the insertion of IDiv1 and IMult3:
1 Delete the IDiv1 and IMult3 nodes.
2 Select the ContrastLum1 node, then press E to extract it from the tree.
3 Drag the ContrastLum1 node between the munch_unpremult node and the IMult1 node
to snap it back into the tree.
[Figures: IDiv inserted; IAdd1 detail]
Since munch_unpremult is already an unpremultiplied image, you get the same clean
result.
Managing Premultiplication
The above steps are an elaborate illustration to explain Over, KeyMix, and
premultiplication. This explanation can be simplified a bit.
The basic difference between the KeyMix and Over nodes is:
• KeyMix is used for unpremultiplied foreground images. The alpha channel can be
anywhere, including in the foreground image.
• Over is used for premultiplied foreground images, but can also enable
premultiplication if necessary for unpremultiplied images. The alpha channel is in the
foreground element.
Shake does not require you to constantly separate your alpha channel and to perform
IMult and IDiv operations with the second image. Instead, there are nodes to do this for
you. The following examples duplicate the current complex tree with simplified
versions.
Remember This
Rule Number 1: Only color correct unpremultiplied images.
[Figures: IDiv and IMult3 deleted, ContrastLum moved up; IAdd1 detail]
This example uses the KeyMix node, which handles unpremultiplied foreground
elements, the background, and a mask to split between the two. Enable invert in the
KeyMix parameters (you could also switch the foreground and background inputs).
If you are using a premultiplied image that was rendered straight out of a 3D
animation package, there are special tools that unpremultiply and premultiply the RGB
channels by the alpha. These tools are Color–MDiv and Color–MMult. “MDiv” is “Mask
Divide,” and “MMult” is “Mask Multiply.” Used in a node tree, MDiv unpremultiplies the
image, then you color correct, and then premultiply using MMult. When using a
premultiplied image as the foreground, the tree should look something like the
following example.
[Figures: identical tree using KeyMix; identical tree with a premultiplied foreground]
The Over node has a premultiplication parameter built into it. In the following tree, the
preMultiply flag in the Over parameters is turned on, which allows you to omit the
MMult node entirely.
You are not required to apply an MDiv for each color correction. You only need to add a
single MDiv to the beginning of the branch of your node tree where you need to
perform color correction. After the first MDiv, you can stack as many correctors up as
you want.
After you’ve added all the color-correction nodes you require, add an MMult node to
the end, and the image is ready for further compositing.
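The MDiv, correction stack, MMult pattern can be sketched per pixel as follows (illustrative Python, not Shake API; the 0.5 gamma stands in for an arbitrary stack of color corrections):

```python
def mdiv(rgb, a):
    """Unpremultiply: divide RGB by alpha (leave fully transparent pixels alone)."""
    return tuple(c / a for c in rgb) if a > 0 else rgb

def mmult(rgb, a):
    """Premultiply: multiply RGB by alpha."""
    return tuple(c * a for c in rgb)

def correct(rgb):
    """Stand-in for any stack of color corrections (here, a 0.5 gamma)."""
    return tuple(c ** 0.5 for c in rgb)

# One MDiv at the start, any number of corrections, one MMult at the end.
pixel, alpha = (0.25, 0.25, 0.25), 0.5
out = mmult(correct(mdiv(pixel, alpha)), alpha)
```

With no correction in between, MDiv followed by MMult is an exact round trip wherever the alpha is nonzero, which is why a single MDiv/MMult pair can bracket the whole correction branch.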
[Figures: identical tree with premultiplication handled by Over1; Over1 parameters; multiple corrections on one MDiv]
Filters and Premultiplication
Now to the second rule of premultiplication—using filters (such as Blur).
Remember This
Rule Number 2: Only apply Filter and Transform nodes to premultiplied images.
In the next example, an unpremultiplied image is accidentally filtered, to show you
what artifacts to look for.
Adding a filter to an unpremultiplied image—the wrong way:
1 In a preexisting node tree, add a Blur node to the munch_unpremult node, in an
attempt to blur it against the background.
The mask clips any soft gradation.
2 Copy and clone the Blur1 node. (Copy the node, then press Control-Shift-V.)
Note: Press Command-E or Control-E to activate the enhanced Node View and see that
the cloned Blur node is linked to the Blur1 node.
3 Connect the Blur1_Clone1 node to the munch_mask node.
You should notice that an unwanted glowing highlight has appeared around the edge
of the image.
To eliminate this glow, you should apply the blur to an image that has nothing added
to the rim (against a black background)—since black has a value of 0 and therefore
does not add to the filtering.
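A one-dimensional toy example shows why the order matters: blurring the unpremultiplied RGB and the matte separately and then multiplying gives a different (wrong) edge value than blurring the premultiplied image (illustrative Python, not Shake's filter code):

```python
def box_blur(row):
    """3-tap box blur on a 1D row of values, clamping at the edges."""
    n = len(row)
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

fg = [0.8, 0.8, 0.8, 0.0, 0.0]  # unpremultiplied foreground (one channel)
a = [1.0, 1.0, 0.5, 0.0, 0.0]   # matte, with one semi-transparent pixel

premult = [c * m for c, m in zip(fg, a)]

# Right way: premultiply first, then filter.
right = box_blur(premult)

# Wrong way: filter the unpremultiplied RGB and the matte separately,
# then multiply the results together.
wrong = [c * m for c, m in zip(box_blur(fg), box_blur(a))]

# The edge pixels differ: blurring and premultiplying don't commute.
assert abs(right[3] - wrong[3]) > 1e-6
```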
Adding a filter to an unpremultiplied image—the right way:
• To fundamentally change the compositing order, premultiply the foreground with the
mask using a SwitchMatte node, then apply an Over node to create the composite.
(KeyMix is only used for unpremultiplied images.)
The following image shows the same tree with a single premultiplied image. The
preMultiply parameter is disabled in the Over1 node (otherwise a black edge appears
around the image).
Because transforms do a bit of filtering, technically speaking, you should perform
transforms on a premultiplied image as well, although the tolerances are a little more
forgiving.
To Sum Up
Make sure your image is unpremultiplied before you do any color correction, by
applying an MDiv to premultiplied images. Then, apply an MMult prior to applying
transforms and filters. Finally, use any of the layering nodes to composite the images
together.
[Figures: blur on a premultiplied image; Over1 detail]
Nodes That Affect Premultiplication
The following nodes can change the premultiplication status of an image, although any
operation that treats an image’s mask differently from its RGB channels changes that
status as well.
• Color–ColorCorrect: This color corrector has an option in its Misc tab to specify
whether the incoming image is premultiplied. If it is, set the premultiplied parameter
to yes. MDiv and MMult are internally inserted into the computation.
• Color–Reorder: This node allows you to switch a channel into the alpha channel, and
may disrupt your premultiplication status. However, it isn’t often used on images in
the normal chain of color correct, position, and composite operations; it is more of a
utility node.
• Filter–DilateErode: This node is typically used to add to or chew the matte by a few
pixels. Set your channels to just “a.” Because you then modify the matte separately
from the RGB, you make the image unpremultiplied.
• Key–LumaKey, DepthKey, DepthSlice: These keyers all have the option to immediately
set your image to a premultiplied state.
• Key–KeyLight: This keyer can output either a premultiplied or an unpremultiplied
version, or just the alpha channel with the RGB untouched.
• Key–Primatte: This keyer can output either just the alpha or a premultiplied version.
Setting it to unpremultiplied requires an external MDiv.
• Layer–AddMix: This modified Over node allows you to manually break the
premultiplied relationship by tweaking curves that specify how the matte multiplies
the foreground and background images. It was inherited from another package,
where it was used extensively because that system had no control over
premultiplication.
• Layer–SwitchMatte: This node copies a channel from the second image to the alpha
channel of the first image. You have the option to immediately premultiply the
image (enabled by default).
• Other–DropShadow: This node should be applied to premultiplied images only.
Non-Black Premultiplied Images
Some packages output images that are considered premultiplied, but in which the
background is not black. This is mathematically bizarre. To work with these images,
try the AEPremult macro included in Chapter 32, “The Cookbook,” on page 989.
The Logarithmic Cineon File
Kodak created the Cineon file format to support their line of scanners and recorders.
Two things are typically associated with the Cineon file: the Cineon file itself (an
uncompressed 10-bit image file) and the file’s particular color representation. When
graphed, this representation takes the form of a logarithmic curve, hence the term
“logarithmic color” (or “log” for short). Although some refer to it as a “log space,” it is
not actually a “space” such as YUV, RGB, or HSV.
Note: (A caveat, if you will.) This section is to be treated only as an overview of a
subject that causes flame wars of such staggering proportions that everybody walks
away dazed and filled with fear and loathing. You have been warned.
The image set below represents linear space—every value has an equal mathematical
distance in brightness to its neighbor.
The following image set represents a ramp converted to logarithmic color, where the
brightness is weighted.
Linear image Graph
Logarithmic image Graph
The logarithmic image does not appear to have a pure black or white. Its graph shows
that the highlights are flattened (compressed), while the blacks are allotted
proportionally more of the available values. This is why Cineon frames typically seem
to have very low contrast, mostly in the highlights. Because the Y axis represents the
output data, the output spans a smaller range than the input on the X axis. You can
therefore store the image in a smaller file than you might otherwise use, to which the
Cineon 10-bit format is nicely adapted as a good balance between color range and speed.
Log compression is not inherent or unique to the Cineon file format. You can store
logarithmic color data in any file type, and you can store linear color data into a .cin file.
The easiest way to think about a logarithmic file is as a “digital negative,” a concept
encouraged by Kodak literature. Unfortunately, Kodak decided to not actually invert the
image. While the idea of a negative piece of film is easy to understand, this concept
may cause some confusion. To see it properly, you must invert the film. In the same
manner, you cannot work with a “digital negative” in your composite. You must first
convert the image, in this case from the filmic log color to the digital linear color. This is
called a log to lin color correction, and is performed with the Color–LogLin node.
It is important to remember that log to lin (or lin to log) conversion is simply a color
correction. This workflow is explained below. However, it is first necessary to discuss
how this color correction works.
A piece of film negative has up to approximately 13 stops of latitude in exposure. A
stop is defined as a doubling of the brightness. The “digital negative,” the Cineon file,
contains approximately 11 stops. A positive print cannot display as much range as a
negative. The same rule applies to a computer monitor—it can only display about 7
stops of latitude. To be properly handled by the computer, information must be
discarded. In a simplified example, an image contains a rounded-off range of 0 to 11
stops for black to white. Since the monitor can only display about 7 stops (for example,
steps 1 through 8), the rest of the image (below 1 and above 8) is thrown away. These 7
steps, only a portion of the original negative, are then stretched back to 0 (black) and
11 (white). Because what used to be dark gray to medium-bright gray is now black to
white, the image appears to have a higher contrast. However, information has been
thrown away in the process. This loss is permanent when working in 8 or 16 bits, but
can be retrieved when working in 32-bit float. For more information, see “Logarithmic
Color and Float Bit Depth” on page 444.
The extraction process is one way to control exposure. If you select the higher portions
of the image (for example, 4 to 11 rather than 1 to 8), the image appears darker
because you are remapping the brighter parts of the image down to black. This is the
same as selecting to expose your camera for the brightest portion of your scene. The
opposite occurs when selecting the lower portions of the image brightness.
These types of controls are paralleled in the LogLin node with the black and white
point parameters. Every 90 points represents one stop of exposure. You can therefore
control exposure with the LogLin node.
Note: Some Kodak documents state that 95 points represents one stop of exposure.
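For reference, the standard Kodak conversion (without roll-off) can be sketched as follows. The 95/685 code points, 0.002 density increment, and 0.6 display gamma are the published Cineon conventions; the parameter names are illustrative, not Shake's exact LogLin controls:

```python
import math

# The standard Kodak Cineon log-to-lin conversion (no roll-off), using the
# conventional 95 (black) and 685 (white) 10-bit code points, 0.002 density
# per code value, and a 0.6 display gamma.

def cineon_log_to_lin(code, black=95, white=685, density=0.002, gamma=0.6):
    gain = density / gamma
    offset = 10 ** ((black - white) * gain)  # linear value at the black point
    return (10 ** ((code - white) * gain) - offset) / (1 - offset)

print(round(cineon_log_to_lin(685), 6))  # 1.0 -- the white point maps to 1
print(round(cineon_log_to_lin(95), 6))   # 0.0 -- the black point maps to 0

# One stop (a doubling of brightness) is log10(2) * gamma / density, or about
# 90.3 code values, which is where the "90 points per stop" rule comes from:
print(round(math.log10(2) * 0.6 / 0.002, 1))  # 90.3
```

The 90.3 figure also explains the note above: depending on rounding conventions, some Kodak documents quote 95 points per stop instead.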
Once images are converted to linear color, they can be treated like other “normal” linear
images. When a linear image is broken down to its steps of brightness, there is equal
distance between two steps in the dark areas and two steps in the light areas. In a
logarithmic file, however, the digital negative sees light areas differently than it sees
dark areas, as described by the logarithmic curve that it is stored in. The distance
between two steps in the dark areas is different than the distance between two steps
in the bright areas. This is why color corrections and compositing should be done in
linear color. This is explained in “The Hazards of Color Correcting in Logarithmic Color”
on page 439.
Once you have finished your compositing, the images must be converted back to
logarithmic representation (another LogLin node set to “lin to log”) and then rendered
to disk. These images are then properly treated by the film recorder.
A Little Further Reading
Two websites are recommended for more information on this subject. The first is the
specification for the logarithmic conversion and all of the parameters. It is somewhat
dense, but contains useful information:
http://www.cineon.com/conv_10to8bit.php
The second recommended site contains a nice discussion of the film negative’s
response to light:
http://www.slonet.org/~mhd/2photo/film/how.htm
The Hazards of Color Correcting in Logarithmic Color
If the logarithmic format is so great, why bother to convert back to linear color?
First, logarithmic color is unnatural to the eye—you have to convert back to linear color
to see it properly. More importantly, the compression also means that any color
corrections applied in log color produce unpredictable results, since shadows, midtones,
and highlights are weighted unevenly by many color correctors.
In the example below, the first image is the original plate in log color. The second image
has been converted back to linear color, and therefore looks more natural to the eye.
A Color–Mult node is applied to the log image and to the linear version with an
extremely slight red color.
Note: You are invited to view the online PDF documentation to see the color images.
It is difficult to gauge the result in the first image, since it is still in log color. The second
image lends a nice tint to the pink clouds (assuming you want pink clouds).
When the log image is converted to the more natural linear space, the “slight” red
multiplier applied before the conversion has completely blown the image into the
red range. This is bad.
Plate in log color    Plate in linear color
Mult in log color    Mult in linear color
Mult in log color viewed in linear representation
Therefore, you are mathematically urged to color correct in linear color, or view a
conversion to linear using a VLUT while you adjust the color correction.
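The blowout can be quantified with the standard Cineon curve. This is a sketch using the conventional 95/685 code points, not Shake's LogLin internals:

```python
# A sketch of why a "slight" multiply in log color misbehaves, using the
# conventional Cineon curve (95 black point, 685 white point).

def log_to_lin(code, black=95, white=685, gain=0.002 / 0.6):
    off = 10 ** ((black - white) * gain)
    return (10 ** ((code - white) * gain) - off) / (1 - off)

# Multiply the 10-bit log code values by a "slight" 1.05...
for code in (300, 500, 685):
    ratio = log_to_lin(code * 1.05) / log_to_lin(code)
    # ...and the effective linear multiplier is much larger than 1.05,
    # and grows with brightness -- the highlights blow out first:
    print(code, round(ratio, 2))
```

Because the multiply shifts log code values, it acts as an exponent after linearization, which is why the highlights run away fastest.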
Converting Between Logarithmic and Linear Color
As previously mentioned, logarithmic images are simply a result of a color correction.
This correction has been standardized by Kodak. Every conversion function is
essentially the same. The Shake node that handles this correction is the LogLin node,
and is located in the Color Tool tab. Using this node, you can convert from log to linear
color, or linear to log color.
Typically, a LogLin node is applied after a FileIn node to convert the image from log to
linear color. You do your composite in linear color, convert back to log at the very end,
and attach the FileOut node. In the following node tree, LogLin1 has a conversion
setting of log to lin, and LogLin2 has lin to log.
This example also includes an imaginary 3D-rendered element, read in by the FileIn
node named LinearCGElement. These elements are almost always rendered in linear
color, so no conversion is necessary.
If you are simply doing color timing, as in the following example, you have an added
benefit: All the color corrections concatenate into one internal operation.
Wedging and Color Timing
So, why have numbers in the LogLin node? These numbers are used to color correct
your plate as a sort of calibration of your entire input/output system. There are (at least)
two fundamental ways to do this: wedging and color timing. The following two
methods are only suggestions. Every facility probably has its own system—there is no
standard.
When wedging a shot, you ensure that the files output from your entire digital process
match the original film print. There are multiple places that color changes can
intentionally or accidentally occur—exposing film in the camera, printing the
workprint, scanning the data into digital format, digitally compositing, recording the
digital plate onto film, and developing the print. You are partly saved by the fact that
much of this process is out of your hands—the print lab guy was cranky because his
delivered coffee was too cold, so what can you do? To limit discrepancies, use the
Wedge macro in Chapter 32, “The Cookbook,” on page 1000. This is a macro for LogLin
that puts in preset variations of the offset values and contrast values, and steps up and
down to try to find a match of the original scan.
The process usually involves the following steps:
1 Make a workprint of your original negative. This is the standard to which you compare
your digital output.
2 Scan the negative into a digital format (which usually creates log Cineon files).
3 Create a FileIn node for a single frame from the scanned image files, then attach the
Wedge macro.
4 Visually, take your best shot at the exposure level. (You may have to boost the blue up
or down 22 points to see what may work.) Remember that 90 points is 1 f-stop of
exposure and 45 points is half a stop.
5 Render out 48 frames. The Wedge macro automatically brackets your initial pick up and
down by whatever value you set as the colorStep. For a wide bracket, use a high
number (such as 90). For a narrow bracket, use a lower number (such as 22). This
process prints 48 different color, brightness, and contrast tests, then automatically
returns the image to log color with the default settings. The internals of the Wedge
macro are basically LogLin (log to lin) > ContrastRGB > LogLin (lin to log).
6 Record and print your 48 frames.
7 Compare the actual physical pieces of film to the frame of the original workprint on a
light box using a loupe. The exposure numbers are printed on the frame by the Wedge
macro. Select the frame that exactly duplicates the original print. If no frames match
up, adjust your starting points for red, green, and blue, narrow your colorStep, and
wedge again until you have a frame that looks correct.
8 For all composites that use this plate, use the values you selected in the Wedge macro in
your LogLin node, converting from log to linear color. Keep your default values when
returning to log color, with the exception of the conversion setting, which is linear to log.
This process is generally done for every shot. At a larger studio, there is likely to be an
entire department to do the color timing for you. Count yourself lucky to be in such a
wise and far-sighted studio.
The second technique to handle the color process is to not handle it at all—let the
Color Timer for the production or the developing lab handle it. While this is easier, you
do surrender some control over the process. However, this is a perfectly acceptable
technique used in many large effects houses. When you employ this technique, you use
the same values in your LogLin in and out of your color. You may adjust the numbers
slightly, but make sure the same numbers are used in both operators. For example,
because a physical piece of negative has a slight orange cast, your positive scan may
have a slight green cast. You may want to adjust for this in the LogLin node. A good
technique is to adjust your log to lin LogLin, copy the node (Command-C or Control-C),
and then paste a linked copy back in (Shift-Command-V or Shift-Control-V). Next,
adjust the conversion setting of the second LogLin node to lin to log. If you adjust the
original LogLin node, the copy takes the same values since it is a linked copy.
Logarithmic Color and Float Bit Depth
Here is where it gets tricky. If you examine the following logarithmic-to-linear
conversion curve, you can see that clipping occurs above 685.
The LogLin node does have a roll-off operator, which helps alleviate the sharp corner at
the 685 point, but this is inherently a compromise—you are throwing data away. You
do not really see this data disposal in linear color. However, once you convert back to
logarithmic, the color clipping is evident.
In the following grayscale examples, the left image is a ramp with an applied log to lin
conversion. It is now in linear color. The right image is the left image converted back
into log color representation. Quite a bit of clipping has occurred.
Log grayscale converted to linear color
Previous example converted back to log color
The following images are from a log plate. The left image is the original plate. The right
image is the output plate, also in log color, that has been passed through the
log-to-lin-to-log conversion process.
Notice that the highlights in the hair detail are lost.
The following images are graphs that represent the log plates in the above illustrations.
The left graph represents the entire range of the log image. Keep in mind that this
represents all of the potential values—few log plates have this entire range. The right
image displays the result of the log-to-lin-to-log conversion process.
So, why is this happening? In the first part of this chapter (“The Logarithmic Cineon
File” section), using an 11-stop example image, 7 stops were extracted and the rest
thrown away in order to view and work on the image in the computer. These same 7
stops can be thought of as the stops between 95 and 685. As mentioned earlier, the
rest are thrown away, or clipped.
Original log image    Output log image
Graph of original ramp    Graph of output ramp
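The clipping can be reproduced numerically. This sketch uses the conventional 95/685 Cineon code points (an assumption; Shake's LogLin adds more controls) and clamps the linear values to the 0-to-1 range that 8- and 16-bit integer storage imposes:

```python
import math

# Round trip: full-range 10-bit log ramp -> linear -> clamp to 0..1
# (what 8- or 16-bit integer storage does) -> back to log.

GAIN = 0.002 / 0.6
BLACK, WHITE = 95, 685
OFF = 10 ** ((BLACK - WHITE) * GAIN)

def log_to_lin(code):
    return (10 ** ((code - WHITE) * GAIN) - OFF) / (1 - OFF)

def lin_to_log(lin):
    return WHITE + math.log10(lin * (1 - OFF) + OFF) / GAIN

round_trip = [lin_to_log(min(max(log_to_lin(c), 0.0), 1.0)) for c in range(1024)]

# Codes between 95 and 685 survive; everything outside is flattened to the
# clip values -- exactly the lost highlight and shadow detail shown above:
print(round(round_trip[0]))     # 95  (was 0)
print(round(round_trip[500]))   # 500 (unchanged)
print(round(round_trip[1023]))  # 685 (was 1023)
```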
If you look back at the original log-to-lin conversion graph, the curve suggests that it
should continue past 1, but so far, the curve has been clipped at 1. In the following
illustration, the red line represents the potential information derived from the color
conversion. The curve is clipped at 1 because values can only be stored from 0 to 1 in 8
or 16 bits.
As the illustration suggests, if data could be preserved above 1, you could always
access the data, and life would be happy. Fortunately (by design), you can convert your
images to a higher bit depth called “float.” Whereas 8 and 16 bits are always in a 0 to 1
range, float can describe any number and is ideal for working with logarithmic images
when converting to linear representation and back to log representation. If you keep
your images in linear color because you are reading out to a linear format, float is not
necessary.
The following image shows a modification of the compositing tree shown on page 441.
An Other–Bytes node is inserted, and the image is bumped up to 4 bytes (32 bits, or float).
Notice that you are not obligated to promote your 3D-rendered element (here
represented with LinearCGPlate) to float, since it is already in linear color. The second
Bytes node at the end of the node tree is included in case you render to an .iff format—
you may want to convert it down to 16 bits for smaller storage costs. Cineon is always
stored at 10 bits, and therefore does not need the second Bytes node.
The following is a table of file sizes for 2K plates. Draw your own conclusions.
2K RGB Plate | Size
Cineon, 10 bits | 12 MB
IFF, 8 bits | 9 MB
IFF, 16 bits | 18 MB
IFF, float | 36 MB
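The table's numbers follow from simple arithmetic, assuming a 2048 x 1556 full-aperture 2K plate and ignoring file headers (Cineon packs three 10-bit channels into each 32-bit word):

```python
# Rough arithmetic behind the file-size table, assuming a 2048 x 1556
# full-aperture 2K plate (headers and padding ignored).
w, h = 2048, 1556
pixels = w * h
MB = 1024 * 1024

print(round(pixels * 3 * 1 / MB))  # 8-bit RGB IFF:  ~9 MB
print(round(pixels * 3 * 2 / MB))  # 16-bit RGB IFF: ~18 MB
print(round(pixels * 3 * 4 / MB))  # float RGB IFF:  ~36 MB
print(round(pixels * 4 / MB))      # Cineon, 3 x 10 bits per 32-bit word: ~12 MB
```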
In addition to storage requirements, working in float costs you render time, a minimum
of 20 percent, but usually significantly higher than that.
If you are just color timing (doing color corrections that concatenate), there is no need
to convert to float. All concatenating color correctors, including the two correctors to
convert in and out of linear representation, are calculated simultaneously. Therefore,
the following tree is still accurate.
Looking at Float Values
You can use the Float View ViewerScript to view values that are above 1 or below 0.
To look at these values, create a Ramp node, and set depth to float. Then, apply a
Color–LogLin node. When you turn on the Float View, the top 35 percent turns white,
indicating values above 1, and the lower 9 percent turns black, indicating values
below 0. Everything else turns gray, indicating values between 0 and 1.
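The quoted percentages follow (approximately) from the Cineon black and white points:

```python
# Where the Float View percentages come from: on a full-range 10-bit ramp,
# only codes between the black point (95) and the white point (685) land in
# 0..1 after the log-to-lin conversion. (The manual's "35 percent" is
# approximate; the raw code-value fraction is about 33 percent.)
black, white, top = 95, 685, 1023

above_one = (top - white) / top   # converts to linear values above 1.0
below_zero = black / top          # converts to linear values below 0.0

print(round(above_one * 100))   # 33
print(round(below_zero * 100))  # 9
```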
As an alternative to float, you can use the roll-off parameter in the LogLin node.
However, this involves inherent compression and banding of your data. The roll-off
parameter gives your curve a roll-off (compared to the standard log curves), which can
help preserve some highlights, and allows you to stay in 16 bits. However, as shown in
the next example, when the same image is converted back to log representation, there
is still some cutoff in the upper and lower ranges, but the lower ranges also band. Use
at your own risk.
Float Bit Depth and Third-Party Plug-Ins
Most third-party plug-ins, including Primatte and Keylight, do not support float (rather,
only 8 and 16 bits).
To ensure that your highlights are maintained:
1 Convert the images back to log color.
2 Execute the third-party plug-in (assuming it is still accurate on non-linear images).
3 Convert the images to linear representation.
4 Continue with your composite.
If the plug-in works on a single channel (for example, both Primatte and
Keylight can isolate their effect on the alpha channel), do the following:
1 Create one branch for color modifications, and keep that branch in float.
2 Create a second branch from the FileIn node and pull the key.
3 Copy the alpha back to the float RGB chain with a SwitchMatte node.
4 Since the keyers also do color correction, you have to compensate by using a Color–
HueCurves or Key–SpillSuppress node to do your spill suppression.
Graph of linearization with roll-off    Graph of relog with roll-off
The following is a sample tree:
There are two exceptions:
• Keylight allows you to key, color correct, and composite in log color. Simply toggle
the colourSpace control to log.
• Ultimatte preserves float data. No tricks necessary.
Applying gentle pressure to the plug-in manufacturer to support float images is also a
nice alternative, but less productive in the short run.
16 Compositing With Layer Nodes
Layer nodes form the foundation for compositing two or
more images together in Shake. This chapter covers the
basic Shake compositing nodes—how they work, and
how to use them.
Layering Node Essentials
The Shake compositing nodes are located in the Layer Tool tab.
There are three types of layering nodes:
• Atomic nodes: Atomic nodes (Over, IAdd, Atop, and so on) do one thing—combine
two images according to a fixed mathematical algorithm. (Hence the term atomic:
they apply a single, elemental operation to a pair of images.) They are useful for
command-line compositing and scripting, and are also convenient in the Node View
because you can quickly see which type of operation is occurring.
• More flexible nodes: The second type consists of the more flexible MultiLayer,
MultiPlane, and Select nodes. MultiLayer allows you to duplicate most of the atomic
nodes (with the exception of the AddMix, AddText, Interlace, and KeyMix nodes), and these
nodes have the additional benefit of allowing unlimited inputs.
• LayerX node: The third category is the unique node LayerX, which allows you to enter
your own compositing math.
Before getting into the layer nodes in more detail, here are some important rules to
keep in mind when compositing in Shake.
Don’t Mask Layer Nodes
The side-input masks for layer nodes should not be used, as they behave counterintuitively and will not produce the result you might expect. If you want to mask a
layering node, mask the input nodes, or use the KeyMix node.
Remember the Rules of Premultiplication
There are two unbreakable rules that you must always follow when creating a
composition in Shake:
• Rule Number 1: Only color correct unpremultiplied images. To unpremultiply an
image, use the Color–MDiv node.
• Rule Number 2: Only apply Filter and Transform nodes to premultiplied images. To
premultiply an image, use the Color–MMult node.
For more detailed information on premultiplication, see “About Premultiplication and
Compositing” on page 421.
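Rule 1 can be sketched on a single RGBA pixel. The mdiv and mmult helpers below are illustrative stand-ins for the Color–MDiv and Color–MMult nodes, and the 1.2 gain is an arbitrary color correction:

```python
# Why color corrections belong on unpremultiplied images: a sketch of the
# unpremultiply / premultiply round trip on one RGBA pixel (values 0..1).

def mdiv(rgb, a):
    """Unpremultiply: divide color by alpha (leave alpha-0 pixels alone)."""
    return tuple(c / a for c in rgb) if a > 0 else rgb

def mmult(rgb, a):
    """Premultiply: multiply color by alpha."""
    return tuple(c * a for c in rgb)

def gain(rgb, g):
    """A simple color correction: scale and clamp each channel."""
    return tuple(min(c * g, 1.0) for c in rgb)

# A 50%-transparent edge pixel of a premultiplied white element:
premult_pixel, alpha = (0.5, 0.5, 0.5), 0.5

# Rule 1: unpremultiply, correct, then re-premultiply.
good = mmult(gain(mdiv(premult_pixel, alpha), 1.2), alpha)

# Breaking the rule: gain the premultiplied pixel directly. The clamp bites
# at a different point than it does in the element's solid interior, so the
# edge pixel brightens relative to the interior and fringes appear.
bad = gain(premult_pixel, 1.2)
print(good, bad)
```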
Using the clipMode Parameter of Layer Nodes
You can easily composite elements of any resolution. To set the output resolution of a
composite that contains images of multiple resolutions, go to the compositing node’s
parameters and use the clipMode button to toggle between foreground or background
(as the output resolution). This applies to all layering commands. An element
composited over a differently sized background is one way to set your output
resolution. For more information on setting resolution, see Chapter 3, “Adding Media,
Retiming, and Remastering,” on page 107.
Compositing Math Overview
If Shake had only one layer node, it would have to be LayerX, since it can be used to
mimic all of the other compositing nodes. The math for most of the operators is
included in this node, both in general notation and in LayerX syntax. The LayerX syntax
has expressions for each channel.
The following table provides a quick reference to the Shake layer nodes and their
common uses, math, and LayerX syntax. For specific node descriptions, see “The Layer
Nodes” on page 453.
Layer Node | Common Uses | Math | LayerX Syntax
Atop | Add effects to foreground elements, like smoke over a CG character to match the background. | A*Ba+(B*(1-Aa)) | r2+(r*a*a2), or (a2==0 || (r2==0 && g2==0 && b2==0 && a2==0) ? r2 : (a2*r+(1-a2*a)*r2))
Common | Create difference masks. | |
IAdd | Add fire effects, adding mattes together. | A+B | r+r2
IDiv | | A/B | r2==0?1:r/r2
IMult | Mask elements. | A*B | r*r2
Inside | Mask elements. | A*Ba | r*a2
Interlace | Interlace two images, pulling one field from one image, and the second field from the other image. | |
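Several of the atomic operations in this table can be sketched per channel. The following is an illustrative sketch of the table's math, not Shake code; the premultiplied Over formula is included for reference as standard compositing math, since Over itself is not among the rows excerpted above:

```python
# Per-channel sketches of a few atomic layer operations, with A as the
# foreground and B as the background; r/a and r2/a2 follow the LayerX
# naming (channels of image 1 and image 2).

def over(r, a, r2):          # premultiplied Over: A + B*(1 - Aa)
    return r + r2 * (1 - a)

def atop(r, a, r2, a2):      # A*Ba + B*(1 - Aa)
    return r * a2 + r2 * (1 - a)

def iadd(r, r2):             # A + B
    return r + r2

def inside(r, a2):           # A*Ba
    return r * a2

def imax(r, r2):             # if (A > B) then A, else B
    return r if r > r2 else r2

print(over(0.4, 0.5, 0.8))     # 0.4 + 0.8*0.5 = 0.8
print(atop(0.4, 0.5, 0.8, 1))  # 0.4*1 + 0.8*0.5 = 0.8
print(inside(0.4, 0.25))       # 0.1
```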
The Layer Nodes
This section provides a detailed description of each of the layer nodes.
AddMix
The AddMix node is similar to the Over node, except that you have control over curves
to help blend the edges together. For more information on the Over node, see “Over”
on page 466.
Layer Node | Common Uses | Math | LayerX Syntax (continued)
ISub | | A-B | r-r2
ISubA | Find the difference between elements. | absolute(A-B) | fabs(r-r2)
KeyMix | Mix two images through a third mask. | A*(1-M*C)+(B*M*C), where M represents the percentage mix. |
Max | Combine masks. | If (A > B) then A, else B | r>r2?r:r2 or max(r,r2)
Min | | If (A < B) then A, else B | r<r2?r:r2 or min(r,r2)
Import Photoshop File.
The file is imported as a script—each layer in the Photoshop file is imported as a FileIn
node that is then fed into a MultiLayer node. The MultiLayer node is named Composite
by default.
474 Chapter 17 Layered Photoshop Files and the MultiLayer Node
2 Double-click the Composite node to display the Photoshop image (the composite) in
the Viewer and load the MultiLayer node parameters.
Unsupported Photoshop Features and Issues
When you import a Photoshop file that contains one or more layers that are set to an
unsupported transfer mode, the “Unsupported Transfer Mode (Mode Name)” message
appears in the Viewer, and the mode for that layer in the Parameters tab reads
“Unsupported.”
Shake has no support for the following Photoshop features:
• Photoshop layer masks
• Alpha channels created in the Photoshop Channels tab
• Layer styles
• Text layers
• Fill layers
It’s important to name the layers in a Photoshop file carefully. Observe the following
restrictions:
• Never name a layer “color,” “time,” or “track.”
• Never use a name that’s identical to a C++ keyword.
• Never use a name that’s identical to a C++ function call, for example “sin” or “rnd.”
Click and hold to select a different compositing operation.
The postMMult Parameter
In the MultiLayer parameters, postMMult is enabled by default. When turned on, the
postMMult parameter premultiplies the output image produced by the MultiLayer node
(in the same manner as a composite in the Photoshop application).
For more information on the other buttons in the MultiLayer parameters, see “Using the
MultiLayer Node” on page 478.
Photoshop Layer Visibility
When you import a Photoshop file that contains invisible layers (the eye icon is
disabled in Photoshop), the visibility for that layer (the FileIn node) is disabled in Shake.
To show the layer, toggle the layer’s visibility button in the MultiLayer node parameters.
Premultiplication is enabled by default.
Layer Visibility button
Photoshop Layer Opacity
To change the opacity of a layer, expand the subtree for the layer and set the opacity
parameter to a value between 0 and 1. By default, the layer opacity is set to 1. A layer
that is set to a Photoshop transfer mode contains an additional opacity control—the
PSOpacity parameter.
This additional opacity control is necessary because Photoshop and Shake handle
transparency differently. Photoshop’s transparency setting works by varying the
intensity of the alpha channel on the foreground image (prior to the blending
operation). In Shake, the opacity setting on each layer of the MultiLayer node varies the
intensity of all channels (RGBA). The results can differ between the two methods,
depending on the selected blend mode and the way it uses color. The default
PSOpacity setting is the opacity of the layer as set in Photoshop.
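The difference can be sketched for a Multiply-style blend. Both functions below are simplified single-channel models, not quoted from Shake or Photoshop internals; they only illustrate why the two opacity behaviors can disagree:

```python
# Two opacity behaviors for a Multiply-style blend of foreground F over
# background B at opacity t (single channel, values 0..1).

def shake_style(F, B, t):
    # Shake layer opacity: scale all foreground channels, then blend.
    return (F * t) * B

def photoshop_style(F, B, t):
    # Photoshop-style opacity: fade between the background and the full
    # blend result (conceptually, attenuate the foreground's alpha first).
    return B * (1 - t) + (F * B) * t

F, B, t = 0.5, 1.0, 0.5
print(shake_style(F, B, t))      # 0.25
print(photoshop_style(F, B, t))  # 0.75
```

The same inputs produce very different results, which is why the PSOpacity parameter exists alongside Shake's own opacity control.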
Photoshop mode opacity    Shake layer opacity
Note: To view the name of the Photoshop file, click the right side of a FileIn (Photoshop
layer) node to display the FileIn parameters. In the Source tab, the name of the
imported Photoshop file is displayed in the imageName parameter.
Supported Photoshop Transfer Modes
The layer modes listed in the following table are accessed within the MultiLayer node.
Note: The true math of the Photoshop transfer modes is the proprietary information of
Adobe Systems Incorporated. As a result, the descriptions listed are not guaranteed to
be technically accurate.
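Several of the modes described in the table below can be approximated with commonly published formulas. Since Adobe's exact math is proprietary, these per-channel sketches (b = first image, s = second image, values 0..1) are approximations only:

```python
# Hedged per-channel sketches of a few Photoshop-style transfer modes,
# using commonly published approximations, not Adobe's proprietary math.

def darken(b, s):        # identical to Min
    return min(b, s)

def lighten(b, s):       # identical to Max
    return max(b, s)

def linear_dodge(b, s):  # brightens: add, clipped at 1
    return min(b + s, 1.0)

def hard_light(b, s):    # multiply below 50 percent, screen above
    return 2 * b * s if s <= 0.5 else 1 - 2 * (1 - b) * (1 - s)

def pin_light(b, s):     # Min below 50 percent, Max above
    return max(b, 2 * s - 1) if s > 0.5 else min(b, 2 * s)

print(darken(0.3, 0.6), lighten(0.3, 0.6))  # 0.3 0.6
print(hard_light(0.5, 0.25))                # 2*0.5*0.25 = 0.25
```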
Layer Function | Description
ColorBurn | Takes the color of the second image and darkens the first image by increasing contrast. White in the second image has no effect on the first image.
ColorDodge | The opposite of ColorBurn, this lightens the image. Black in the second image does not affect the first image.
Darken | Identical to Min, taking the lower pixel value.
Exclusion | The second image acts as a mask for inverting the first image. White areas of the second image completely invert the first image; black areas of the second image leave the first image unmodified.
HardLight | Screens or multiplies the first image, depending on the strength of the second image. Values below 50 percent in the second image multiply the first image, making it darker. Values above 50 percent in the second image act as a Screen effect. Values of pure black or white in image 2 replace image 1.
Lighten | Identical to Max. Takes the maximum value when comparing two pixels. Good for adding mattes together.
LinearBurn | Similar to ColorBurn, except it decreases brightness, not contrast, to darken the first image. Whites in the second image do not modify the first image.
LinearDodge | The opposite of LinearBurn, brightens the first image. Black areas in image 2 leave the first image unaffected.
LinearLight | A combination of LinearBurn and LinearDodge. Values above 50 percent in image 2 increase brightness of the first image. Values below 50 percent darken the first image. Values of pure black or white in image 2 replace image 1.
Overlay | To shade an image. 50 percent in the second image keeps the first image the same. Brighter pixels brighten the first image, darker pixels darken it.
PinLight | Performs Min or Max, depending on the strength of the second image. If the second image is brighter than 50 percent, a Max (Lighten) is performed. If the second image is darker than 50 percent, a Min (Darken) is performed. Values of pure black or white in image 2 replace image 1.
SoftLight | Raises or lowers brightness, depending on the strength of the second image. Values above 50 percent in the second image decrease the brightness of the first image; values below 50 percent increase the brightness.
VividLight | Raises or lowers contrast, depending on the strength of the second image. Values above 50 percent in the second image decrease the contrast of the first image; values below 50 percent increase the contrast.
Importing a Photoshop File Using the FileIn Node
If you don’t want to import every layer within a Photoshop file, you can simply use a
FileIn node, as you would with any other media in your script. When you do this, you
have the option of reading any single layer from that file into Shake.
To import a Photoshop file using a FileIn node:
m In the Image tab, click FileIn and navigate to the Photoshop file in the File Browser.
Select the file (or files), then click OK.
By default, the image is imported as merged.
To select an individual layer:
1 Load the Photoshop file’s FileIn parameters (click the right side of the node).
2 In the Source subtree, disable readMerged.
3 In the whichLayer parameter, select the layer you want to display.
In the following example, the third layer of the Photoshop file is selected.
When Shake imports a multilayer Photoshop file, the first layer in the file is numbered 0
in the whichLayer parameter. In the following image, the whichLayer parameter is set
to 2—the third layer of the Photoshop file (the first layer is imported as 0, the second
layer as 1, and so on).
Using the MultiLayer Node
The MultiLayer node is a multi-function layering node that distinguishes itself from
most of the other layering nodes in two respects—it accepts an unlimited number of
input images, and each input image has independent settings interior to the MultiLayer
node that let you control that layer’s compositing mode, opacity, and channels. In a
way, the MultiLayer node is its own small compositing environment inside of Shake.
When its parameters are opened, you can rearrange the layers via drag and drop
directly in the Parameters tab, which allows you to work using a layer-based, rather
than node-based logic. This can clean up your tree if you’re compositing many different
images together in a fairly straightforward way.
Selected Photoshop layer
Connecting Inputs to a MultiLayer Node
The MultiLayer node accepts a variable number of inputs. Drag input noodles onto the
+ (plus) sign on the top of the MultiLayer node that appears when the pointer passes
over it. The + sign always appears to the right of all other previously connected knots.
You can also attach several nodes to a single MultiLayer node simultaneously.
To connect several nodes to a multi-input node at once:
1 Select all of the nodes you want to attach to the multi-input node.
2 Shift-click the + sign input of the multi-input node.
All selected nodes are connected.
The Order in Which You Connect Nodes
Unlike the other layer nodes, the order in which you connect input images determines
the initial layer order of the resulting composite. The first input represents the
background layer, the second node is the next deepest, and so on, until you reach the
input furthest to the right, representing the foreground.
Layer Order in the MultiLayer Node
Each image that you connect to a MultiLayer node is represented by its own set of
controls and parameter subtree, located at the bottom of the Images tab.
New images are inserted from the bottom up. Unlike the other layering nodes in Shake,
layer ordering in the MultiLayer node is determined by that layer’s position in the layer
list—the background layer is the one at the bottom, and the foreground layer is the
one at the top. Layers that appear above other layers in the layers list appear in front in
the Viewer. This compositing order can be rearranged by rewiring the node in the Node
View, or by dragging the Reposition control in the layers list of the Parameters tab.
Reposition control
To change layer order:
m Drag that layer’s Reposition control up or down between other layers until a horizontal
line appears, which indicates the new position of that layer.
The compositing order is rearranged, and the nodes are rewired in the Node View. In
the following illustration, Layer_1 is the background, and Layer_2 is the most prominent
foreground layer.
Each layer has associated parameters and controls. In the following illustration, several
controls are visible on each line.
These controls are, in order:
• Input: Shows the input image for the layer in the Viewer.
• Layer Visibility: Toggles the visibility for that layer. Layers with visibility turned off are not rendered.
• Solo: Turns off the visibility of all other layers. You can only solo one layer at a time. Click an enabled solo button to disable solo.
• Ignore Above: Turns off all layers above the current layer, keeping only the current layer and those behind it visible.
In the subtree of a layer, you can control its parameters. Note that the parameter names
are all prefixed by layerName_ (L1_ in the following image). The only tricky parameter is
channels—it determines what channels get bubbled up from below. For example, to
add several files together to fill the matte in a Keylight node, insert all the mattes first,
then the Keylight node on top of the list, using “a” as your channel for that layer.
To rename a layer, expand the subtree and enter the new name in the layerName
value field.
clipLayer
A pop-up menu that lists all the input layers currently connected to the MultiLayer
node. Select which input layer should determine the output resolution.
postMMult
Premultiplies the node.
img
The input image for that layer. This is the area that contains the interface controls for
that layer’s Visibility, Solo, and so on.
• Input Layer Name Field: Displays the name of the input image node. By blanking it out, the node is disconnected, but the layer information remains.
• Composite Mode: This pop-up menu lets you set each layer with its own composite mode, which affects how that image’s color data interacts with other overlapping layers. Certain composite modes add parameters to that layer’s parameter subtree. For more information on composite modes, see “Supported Photoshop Transfer Modes” on page 476.
• Disconnect Node: Clicking the Disconnect Node button disconnects that input image from the MultiLayer node, and removes it from the layer list without removing the input nodes from the node tree.
Layer Parameters Subtree
Each layer in the layers list has additional parameters within a subtree that provide
additional control.
layerName
The name of the layer. All associated parameters for that layer are prefixed by the
layerName.
opacity
Opacity of the input layer. With an imported Photoshop file, there is an additional
PSOpacity parameter. See “Supported Photoshop Transfer Modes” on page 476 for
more information.
preMult
Indicates whether the layer is to be premultiplied.
compChannels
Sets which channels are passed up from below (behind) this layer.
18 Compositing With the MultiPlane Node
The MultiPlane node provides a simple 3D compositing
environment within Shake. This environment can be used
as a way to arrange and animate images within a 3D
space, or as a way to integrate generated or tracked 3D
camera paths into your scripts.
An Overview of the MultiPlane Node
The MultiPlane node provides a compositing environment for positioning 2D layers
within a 3D space. A virtual camera controls the scope of the output image, similar to
that found within 3D animation packages. This camera can be animated via keyframed
parameters, or by importing 3D camera and tracking data from other 3D animation or
3D tracking applications. As with the MultiLayer node, the MultiPlane node accepts
unlimited input images.
The MultiPlane node has two primary uses:
• You can composite background or foreground elements against a moving background plate using 3D camera tracking data, imported from a variety of third-party applications.
• You can arrange multiple layers within a 3D coordinate space for easy simulation of
perspective, parallax, and other depth effects.
Note: There is one important limitation to positioning layers within the 3D space of the
MultiPlane node—there is currently no support for intersecting planes. Planes that
intersect appear either in front or behind, depending on the position of each layer’s
center point.
Additionally, the MultiPlane node provides transform controls for each layer connected
to it. Layers can be moved, rotated, and scaled within the MultiPlane environment.
See Tutorial 3, “Depth Compositing,” in the Shake 4 Tutorials for a lesson on using the MultiPlane node.
Viewing MultiPlane Composites
When you double-click a MultiPlane node to open it into the Viewer, the Viewer
switches to a multi-pane interface, unique to the MultiPlane node.
Each pane of this interface can be toggled among single, double, triple, and quadruple-pane layouts. Each individual pane can be set to display any available camera or angle
in the 3D workspace, to help you position and transform objects from any angle.
Hardware Acceleration in the MultiPlane Node
The MultiPlane node supports OpenGL hardware acceleration of images displayed in
the Viewer. Hardware rendering provides a fast way of positioning layers and the
camera in space, at the expense of rendering quality.
Whichever pane of the Viewer is set to display the currently selected renderCamera
(the default is camera1, although any camera or angle can be in the renderCamera
pop-up list) can be toggled between hardware and software rendering using the
Render Mode button in the Viewer shelf.
• Hardware Rendering Mode: This mode is the fastest for arranging your layers in 3D space, but doesn’t provide an accurate representation of the final output. In hardware rendering mode, every layer is composited with an Over operation, regardless of that layer’s selected composite type.
The Render Mode button only affects the image that’s displayed in the Viewer when
the MultiPlane node is selected. The image output from the MultiPlane node to other
nodes in the tree is always at the highest quality, as are MultiPlane images that are
rendered to disk.
MultiPlane Node Parameters
The Parameters tab of the MultiPlane node is divided into two subtabs, the Images and
Camera tabs:
• The Images tab contains all the parameters for the individual layers that are being
arranged in the 3D space of the MultiPlane node. For more information on Images tab parameters, see “Parameters in the Images Tab” on page 512.
• The Camera tab contains the positioning and optical simulation parameters for each
of the cameras and angles used by the MultiPlane node. For more information on
parameters in the Camera tab, see “Parameters in the Camera Tab” on page 522.
Using the Multi-Pane Viewer Display
As in a 3D application, the MultiPlane node’s multi-pane Viewer interface displays
several angles of the 3D MultiPlane workspace. When you first open a MultiPlane node
into the Viewer, the Viewer switches by default to a four-pane layout, which displays
the Side, Perspective, Camera1, and Top angles.
Whether or not the multi-pane interface appears depends on how you open the
MultiPlane node.
To view the final output from a MultiPlane node only:
m Click the left side of the node to display it in the currently selected Viewer without also loading its parameters.
The Viewer displays the final output from that node, just like any other node.
• Hardware Mode While Adjusting: This mode sets the Viewer to use hardware rendering while you’re making adjustments using onscreen controls. Once you’ve finished, the Viewer goes into software rendering mode to show the image at the final quality. To turn this setting on, click and hold the Render Mode button, then choose this option from the pop-up menu that appears.
• Software Rendering Mode: This mode displays the selected renderCamera as it appears at its actual quality. All composite types are displayed properly.
To work within a MultiPlane node using the multi-pane interface:
m Double-click a MultiPlane node to load its image into the Viewer and its parameters into the Parameters tab. The Viewer then switches to the MultiPlane node’s multi-pane interface.
Once the multi-pane interface is displayed, you can toggle among four different layouts.
To change the MultiPlane Viewer layout:
m Click the Viewer Layout button in the Viewer shelf. Keep clicking to cycle among all the available layouts.
The relative size and orientation of each pane in all four layouts is fixed, although you
can zoom and pan within any pane using the standard methods.
Using Favorite Views in the Multi-Pane Viewer Display
You can use the standard Favorite Views commands within the Viewer to save and
restore the framing of each individual pane. In addition, when you right-click in the
Viewer shelf to access the Favorite Views commands that are available from the
shortcut menu, you can save and restore the state of all visible panes at once.
Changing Angles Within a Pane
Although the multi-pane layouts are fixed, you can change the angle displayed by
each pane at any time. The assigned angles appear in white at the lower-left corner of
each pane.
To change the angle displayed by a single pane, do one of the following:
m Right-click within a pane, then choose a new angle from the shortcut menu.
m Position the pointer within the pane you want to switch, and press one of the numeric keypad shortcuts (0-5) to switch angles.
The following table lists the keyboard shortcuts (numeric keypad only) that are available for changing angles in a pane.
Numeric Keypad Description
0 Cycles through every angle.
1 Displays the currently selected renderCamera.
2 Front
3 Top
4 Side
5 Perspective
Using and Navigating Isometric Display Angles
The various angles that are available are intended to help you position layers and the camera within 3D space. Internally, Shake implements each angle as an invisible camera. The first three angles are isometric views—if you set one of these angles as the renderCamera, you’ll notice that the focalLength parameter is set to 0.
You can pan and zoom within any pane using the middle mouse button, and the same keyboard modifiers that apply to other areas in Shake.
The isometric angles are:
• Front: Useful for transforming a layer’s X and Y pan, angle, and scale parameters. You can pan and zoom to navigate within the Front view.
• Top: Useful for transforming a layer’s X and Z pan, angle, and scale parameters. You can pan and zoom to navigate within the Top view.
• Side: Useful for transforming a layer’s Y and Z pan, angle, and scale parameters. You can pan and zoom to navigate within the Side view.
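The practical difference between the isometric angles and a true camera can be sketched as a projection function. The following Python sketch (a generic illustration with hypothetical helper names, not Shake code) shows why a focalLength of 0 signals an orthographic view: a perspective camera divides by depth, while an orthographic view ignores it.

```python
def project(point, focal_length):
    """Project a 3D point to 2D. A focal_length of 0 means orthographic."""
    x, y, z = point
    if focal_length == 0:
        # Isometric/orthographic angle: depth does not affect screen position.
        return (x, y)
    # Perspective camera: points farther from the camera shrink toward center.
    return (focal_length * x / z, focal_length * y / z)

near = project((1.0, 1.0, 10.0), 50)    # perspective, close to camera
far = project((1.0, 1.0, 100.0), 50)    # same point, farther away: smaller
ortho = project((1.0, 1.0, 100.0), 0)   # isometric angle: depth ignored
```

This is why the isometric angles are convenient for precise positioning: a layer's apparent size in a Front, Top, or Side pane never changes with its distance.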
Using and Navigating Within the Perspective Angle
In addition, there is a single non-isometric angle, Perspective (Persp). This is the only angle in which you can transform a layer’s X, Y, and Z parameters all at once.
In addition to panning and zooming to navigate within this angle, you can also orbit
the Perspective angle.
To orbit the Perspective view:
m Move the pointer within a pane displaying the Perspective angle, press X, and drag
with the middle mouse button held down.
The Perspective angle rotates about the perspective view’s orbit point.
To center the Perspective view on a selected object:
m Select an object that you want to be the new position of the orbit point (the camera or a layer), and press Shift-B.
The Perspective view’s orbit point is moved so that it is centered on the selected
object, and the view within that pane is repositioned to a default position.
Note: You can also use the Shift-B keyboard shortcut in the camera view. However,
Shift-B only changes the interestDistance parameter of the camera to best match the
position of the selected object.
The renderCamera Angle
The camera or angle that’s assigned to the renderCamera parameter in the Camera tab
is special, since it represents the final output of the MultiPlane node. Each MultiPlane
node is created with one default camera, named camera1. However, you can use the
copy button to create duplicate cameras, or import additional cameras into a
MultiPlane node by importing one or more .ma (Maya ASCII) files.
Before and after rotating the Perspective angle
Whichever angle is assigned as the renderCamera has the following additional
properties:
• It’s the only angle that can be switched between viewing the software-rendered final
output, and the hardware-rendered preview.
• It’s the only angle with a compare control (in software-rendering mode only).
• The image border ROI (Region of Interest) appears only for the renderCamera angle.
• There are additional keyboard shortcuts available for transforming a camera. For
more information, see “Manipulating the Camera” on page 517.
How the renderCamera Defines Image Output
Even though the renderCamera angle shows the region defined by the camera target’s
ROI, the image data appearing outside of this area is not cropped. Shake’s Infinite
Workspace once again preserves this information for use by downstream nodes
elsewhere in your script.
MultiPlane Viewer Shelf Controls
When you open a MultiPlane node into the Viewer, a group of controls specific to the
MultiPlane node appear in the Viewer shelf. These controls let you toggle the visibility
of various onscreen controls in the Viewer.
Customizing the MultiPlane Default Camera
You can customize the default angles that appear for new MultiPlane nodes.
Use the following declaration within a .h preference file:
DefDefaultMPCamera( Camera(0, "v4.0", "Front",
    imageWidth/(2*settings->yPixelUnit), imageHeight/(2*settings->yPixelUnit),
    3000, 0, 0, 0, "XZY", 0, 0.980, 0.735, 2, "Fill", 0, 0, 0,
    "default", xFilter, 0, .5, 0 ) );
• Point Cloud Display: Displays/hides the individual locator points that display the imported point cloud data. Displaying this data can make it easier to align layers with the approximate locations of features that were tracked. Hiding this data can make it easier to manipulate the individual layers. For more information on locator points, see “Viewing and Using Locator Points” on page 498.
• XYZ Angle: Displays/hides the X, Y, and Z angle controls, which can make it easier to reposition objects without accidentally rotating them. A third option is available by clicking and holding down the mouse button to reveal a pop-up menu.
Global Parameters That Affect MultiPlane Display
Two subparameters in the Globals tab let you adjust the quality of hardware-accelerated images displayed within the multi-pane Viewer, and the relative scale of distances represented by imported locator points.
textureProxy
Located within the useProxy subtree, textureProxy sets the proxy level at which
texture-rendered images that are used by the MultiPlane’s hardware-rendering mode
are displayed in the Viewer. This is similar to the interactiveScale setting, in that the
proxy level set here is used to generate on-the-fly Viewer images.
multiPlaneLocatorScale
Adjusts the size of locator points that appear in the Viewer when you load data from a
.ma file into a MultiPlane node. This lets you scale them up to make them easier to
select, or down to get them out of the way.
• Path Display: Shows/hides animation paths for image plates.
• Rendering Mode: Toggles the Camera View pane of the Viewer between hardware (HW) and software (SW) rendering. Hardware rendering is faster, but the color and quality of the image is less accurate; this mode makes it easier to position objects. Software rendering is slower, but allows you to accurately see the final image as it will be rendered.
• Viewer Layout: Toggles the Viewer among four different multi-pane layouts: single, double, triple, and quadruple. Each pane can be set to display a different view of the 3D layout.
• Keyframe All or Selected Layers: When this button is turned on, adjusting a single layer using the MultiPlane node produces a keyframe that only affects that layer. Animation applied to any other layer is not affected by this new keyframe.
Connecting Inputs to a MultiPlane Node
Like the MultiLayer node, the MultiPlane node accepts a variable number of inputs.
Drag input noodles from other nodes onto the + sign on the top of the MultiPlane
node that appears when the pointer passes over it. The + sign always appears to the
right of all other previously connected knots.
You can also attach several nodes to a single MultiPlane node simultaneously.
To connect several nodes to a MultiPlane node at once:
1 Select all of the nodes you want to connect to the MultiPlane node.
2 Shift-click the plus sign input of the MultiPlane node.
All selected nodes are connected to the MultiPlane node. By default, all new layers you connect appear centered in the Camera view, rotated to face the camera’s current position.
Using Camera and Tracking Data From .ma Files
The MultiPlane node supports .ma (Maya ASCII) files, allowing you to import 3D camera
paths from a variety of 3D animation packages, or use 3D tracking data clouds
generated by third-party tracking applications. Every new MultiPlane node is created
with one camera, named camera1. Importing a .ma file adds a camera to the
renderCamera pop-up menu in the Camera tab of the MultiPlane node’s parameters.
You can add as many cameras as you like to a single MultiPlane node.
Important: When exporting camera path data from Maya, you must bake the camera
data for it to be usable by Shake.
A single .ma file can also contain data for multiple cameras. Shake imports every
camera that’s found within a .ma file, adding each one to the renderCamera pop-up
menu. Choosing a camera from the renderCamera pop-up menu displays that camera’s
parameters within the Camera tab.
Note: There is currently no support for the culling of 3D tracking data from within
Shake. Any manipulation of point cloud data should be performed before that data is
imported into Shake.
Importing .ma File Data
Data from .ma files is managed using a group of buttons at the bottom of the Camera
tab within the MultiPlane Parameters tab.
To import a .ma file into a MultiPlane node:
1 Load a MultiPlane node into the Parameters tab, then click the Camera tab that
appears.
2 Click the Load button, choose a .ma file, and click OK.
The data from the .ma file appears in the Viewer as a cloud of points.
The camera or tracking data populates the parameters of the Camera tab, and a new
camera appears at the bottom of the renderCamera pop-up menu.
Important: Many 3D applications export camera paths with timing that begins at
frame 0. Shake scripts begin at frame 1, which can result in a one-frame offset. To
correct this, edit the timing of each camera path point as described in “Editing Locator
Point Data” on page 499. (This is not an issue for 3D tracking point clouds—they have
no timing data associated with them.)
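The correction amounts to shifting every keyframe time by +1 so a frame-0-based camera path lines up with a script that starts at frame 1. A sketch of the arithmetic (generic Python, not Shake’s curve format):

```python
def shift_keyframes(keys, offset=1):
    """Shift (time, value) keyframes by a frame offset, e.g. to align an
    exported frame-0-based camera path with a script starting at frame 1."""
    return [(t + offset, v) for t, v in keys]

maya_path = [(0, 5.0), (1, 5.2), (2, 5.5)]   # exported timing starts at frame 0
shake_path = shift_keyframes(maya_path)       # now starts at frame 1
```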
Once a 3D camera path or tracking data cloud has been imported into the MultiPlane
node, you can attach the layer that produced the tracking data to the camera. Doing so
forces the attached layer to follow along with the camera as it moves to conform to the
tracking data. As a result, the attached layer itself doesn’t appear to be moving in the
camera output. You can tell that the correspondence is correct if the imported tracking
points conform to the matching features within the attached layer.
Unattached layers that are positioned within the 3D workspace should now appear as if
they’re moving along with the features within the attached layer. For more information
about attaching a layer to the camera, see “Attaching Layers to the Camera and to
Locator Points” on page 506.
Deleting and Duplicating Cameras
To delete a camera, use the Delete button at the bottom of the Camera tab.
To delete a camera and its data:
1 Choose the camera you want to delete from the renderCamera pop-up menu of the
Camera tab.
2 Press the Delete button (at the bottom of the Camera tab).
You can duplicate any camera or angle in the MultiPlane node with the Copy button.
For example, you might want to create a custom viewing angle, or make some changes
to a camera without losing the original camera path.
To copy a camera, creating a duplicate:
1 Choose the camera you want to copy from the renderCamera pop-up menu of the
Camera tab.
2 Click the Copy button.
A duplicate camera appears in the renderCamera pop-up menu.
Linking to a Camera in Another MultiPlane Node
You can link a camera in one MultiPlane node to a camera in a different MultiPlane
node, so that both cameras move together. When you do so, each camera parameter in
the current MultiPlane node is linked via expressions to the same parameter in the
second MultiPlane node. By default, each linked parameter is locked to prevent
accidental deletion of the link.
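Conceptually, a linked parameter holds an expression that reads the source parameter at evaluation time, rather than a copied value. The following toy Python model is illustrative only (it does not show Shake’s actual expression syntax): changing the source camera is immediately visible through the link.

```python
class Camera:
    """Minimal stand-in for a MultiPlane camera's parameter set."""
    def __init__(self):
        self.params = {"xPos": 0.0, "yPos": 0.0}

source = Camera()   # camera in one MultiPlane node
linked = Camera()   # camera in a second MultiPlane node

# Instead of copying values, store expressions (thunks) that read the source.
expressions = {name: (lambda n=name: source.params[n]) for n_ in () or linked.params for name in [n_]}

source.params["xPos"] = 12.5
# Evaluating the linked camera's parameter always reflects the source camera.
assert expressions["xPos"]() == 12.5
```

Because each linked parameter is an expression rather than a snapshot, clearing the expression (as in the unlink procedure below) is what actually breaks the connection.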
Importing Data Clouds From Maya
In Shake, the aspectRatio parameter of a layer within the MultiPlane node is
determined by the width and height values of the filmBack parameter in the Camera
tab. These values are obtained from the imported .ma file.
When tracking a scene in Maya, do one of the following to make sure that the
resulting point cloud matches the features of the tracked media:
• Set the film back aspect ratio in Maya to match the render resolution aspect ratio.
• Turn on the “lock device aspect ratio” option in the render globals—this also sets the device aspect ratio to match the film aspect ratio in the camera settings.
You can also set up animated MultiPlane composites using multiple MultiPlane nodes,
instead of connecting all your images to a single MultiPlane node. For example, you
might set up a composite using three different MultiPlane nodes—one for background
layers, one for midrange layers, and one for foreground layers. This way you can apply
separate color correction and Defocus nodes to the output of each sub-composite.
In the above example, one of the foreground image sequences has a matching 3D
tracking point cloud that has been imported into the MultiPlane1 node. For this
composition to work, you need to link the cameras in all three MultiPlane nodes
together.
To link a camera in one MultiPlane node to a camera in another:
1 Load a MultiPlane node into the Parameters tab.
2 Open the Camera tab.
3 Click the Link button at the bottom of the Camera tab.
The “Link camera” window appears. The “Link to” pop-up menu presents a list of every
camera and angle within every MultiPlane node in your script.
4 Choose the camera you want to link to from the “Link to” pop-up menu, then click OK.
The camera in the current MultiPlane node is now linked to the camera you chose.
Every parameter in the Camera tab is linked, with expressions, to the matching camera
parameter of the MultiPlane node you chose.
To unlink a camera:
m Unlock each parameter in the Camera tab, then clear each parameter’s expression.
Viewing and Using Locator Points
If you’ve imported 3D tracking data, it’s represented in the Viewer by a cloud of locator
points arranged within the 3D space of the MultiPlane node. Each locator point
corresponds to a feature that was tracked by the camera tracking software that created it.
Note: Depending on how detailed the track is, the point cloud may visibly conform to
the significant features within its corresponding image sequence. This can help you
position elements you’re placing within the scene.
To view an imported point cloud:
m Click the Point Cloud Display button to show or hide the point cloud.
To change the size of the locator points displayed in the Viewer:
m Adjust the multiPlaneLocatorScale parameter, located at the bottom of the guiControls
subtree of the Globals tab.
If the data cloud has individually labeled locator points, these labels are viewable
within Shake. Some 3D tracking applications also allow you to add locator points by
hand—labeling specific areas of the image that may be useful for layer positioning
from within Shake.
To view information about a specific locator point:
m Position the pointer directly over an individual locator point in the Viewer to automatically reveal a tooltip with information about that point’s name and location.
In addition to simply viewing locator points, you can connect layers directly to
individual locator points. For more information about attaching layers to locator points,
see “Attaching Layers to Locator Points” on page 510.
Editing Locator Point Data
You can view and edit the parameters of any locator point. You can also load points
from imported camera paths into the Curve Editor to change their timing.
To reveal an individual locator point’s data in the Parameters tab:
m Position the pointer over the locator point you want to edit, right-click it, then choose Expose Point from the shortcut menu.
Default multiPlaneLocatorScale of 1; multiPlaneLocatorScale set to 3
A new subtree named pointCloud appears at the top of the Images tab of the
MultiPlane node’s parameters. Opening the subtree reveals a list of every point that’s
been added using the Expose command.
Opening an individual locator point’s subtree in this list reveals its xPos, yPos, and zPos
parameters. These parameters can be loaded into the Curve Editor to edit, animate, or
slip their values in time.
Transforming Individual Layers
The MultiPlane node allows 3D transformations of all images that have been connected
to it using either onscreen controls, or parameters within the Layers tab of the
MultiPlane parameters. The onscreen controls are similar to those found within the
Move3D node, except that these transformations occur within an actual 3D workspace.
Layer Transformation Values
Layers can be panned, rotated, and scaled. Transformations made using a layer’s
onscreen controls are relative to each layer’s center point. When moving layers through
space, the numeric values for all transformations are relative to the 0,0,0 center point of
the MultiPlane node’s world view.
Layer Onscreen Viewer Controls
Unlike other nodes that present onscreen controls in the Viewer, the MultiPlane node
lets you select the layer you want to manipulate directly in the Viewer. A layer must first
be selected before you can transform it.
To select a layer, do one of the following:
m Position the pointer within the bounding box of any layer’s image in the Viewer to highlight that layer. When the pointer is over the layer you want to select, click it to expose the 3D transform controls for that layer.
m Right-click in a pane of the Viewer, then choose a layer from the Current Plane submenu of the shortcut menu.
As you move the pointer over different layers in the Viewer, the affected layer’s name
appears in yellow text in the upper-left corner of the screen. Once you’ve selected a
layer, its name appears in white text in the same corner.
If you have many layers stacked up within a single MultiPlane node, you may want to
turn one or more layers invisible to make it easier to manipulate the remaining layers.
Invisible layers aren’t output when the script is rendered. For more information, see
“Showing and Hiding Layers” on page 506.
Layer Controls
When you select a layer, that layer’s onscreen controls appear superimposed over it.
Center Controls
All transformations you make to a layer occur relative to the center point, indicated by
crosshairs in the middle of the onscreen controls.
To move the center point:
m Hold down the Control key, and drag the center point to a new location.
Onscreen layer controls: Rotate X, Y, and Z controls; global axis controls; pan controls; scale controls; center controls
Pan and Center Controls
Selected layers in the Viewer have two sets of onscreen controls for panning in 3D
space—global axis and local axis pan controls.
Global axis controls pan a layer relative to the overall 3D space, even if the layer has
been rotated. Panning a layer up with a global axis control pans it straight up in space.
To pan around the global axis, do one of the following:
m Click a layer anywhere within its bounding box, and drag in any direction.
m Drag one of the three global axis pan controls to constrain the layer’s direction.
m Select a layer, and press Q or P while dragging anywhere within the Viewer to pan a layer without positioning the pointer directly over it.
The local axis pan controls pan the layer relative to its own orientation. If a layer has
been rotated using the angle controls, using the local pan controls moves it along the
axis of its own rotation. Panning a layer up with a local axis control pans it diagonally in
space.
To pan along the Local Axis:
m Drag one of the two local axis pan controls to move the layer in that direction.
Angle Controls
Selected layers have three angle controls that rotate the layer around the X, Y, and Z
axes of that layer’s center point. These angle controls work identically to those found in
the Move3D node. The color of each angle control corresponds to the color
representing each dimension of the global axis pan controls.
The visibility of the angle controls can be toggled by clicking the XYZ Angle button in
the Viewer shelf. Hiding these controls may make it easier to use the pan controls in
some situations.
To rotate a layer in the Viewer without using the angle controls:
1 Select a layer.
2 Press W or O and click in a pane of the Viewer.
3 When the dimension pointer appears, move the pointer in the direction in which you
want to rotate the layer. The colors in the pointer correspond to the angle controls.
4 When you move the pointer, the axis in which you first move is indicated by a single
axis arrow, and the layer rotates in that dimension.
Scale Controls
Selected layers have eight scale controls located around the outer edge of the image.
• The four corner controls rescale the layer, keeping the relative width and height of
the layer constrained.
• The left and right controls let you scale just the width of the layer.
• The top and bottom controls let you scale just the height.
By default, all scale transformations occur about the layer’s center point. Alternatively,
you can hold down the Command or Control key while you drag any of the scale
controls to scale a layer relative to the opposite scale control.
To scale a layer in the Viewer without using the scale handles:
1 Select a layer.
2 Position the pointer over a pane of the Viewer and hold down the E or I key.
The dimension pointer appears.
3 Drag the dimension pointer in the direction in which you want to scale the layer. The
colored arrows correspond to the pan controls.
4 When you move the pointer, the direction in which you first move is indicated by a
single axis arrow, and the layer scales up and down in that dimension.
Creating Layer Hierarchies
You can create hierarchical relationships between layers within a MultiPlane node using
the parentTo parameter. You can lock one layer’s transformations to those of another by
assigning it a parent layer from this pop-up menu, which contains a list of every layer
that’s currently connected to the MultiPlane node. By default, this parameter is set to off.
Note: The MultiPlane node only supports one level of parenting.
To assign a layer a parent layer:
m Choose a parent from that layer’s parentTo pop-up menu.
Once a layer has been assigned a parent layer, it is removed from the parentTo pop-up
menu. When you select a layer that is parented to another, the parent layer appears
with a red border in the Viewer.
You can make local transformations to a layer after you’ve assigned it a parent. This lets
you create an offset between it and the parent layer. Transformations that are applied
to a parent layer also affect all layers that are assigned to it.
Important: Always assign a parent layer before making any local transformations to a
layer itself. Otherwise you may encounter unpredictable results.
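The parent-child relationship described above can be sketched as a composition of transforms. The following Python sketch uses hypothetical values and only pan and scale; it is not Shake’s actual evaluation code, which applies full 3D pan, angle, and scale.

```python
# Minimal sketch of one-level parenting: a child's world position is its
# local offset transformed by the parent's pan/scale. Hypothetical values;
# Shake's MultiPlane handles the full 3D transform internally.
def world_pan(parent_pan, parent_scale, child_local_pan):
    # The parent's scale is applied to the child's local offset, then the
    # parent's pan is added. Only one level of parenting is supported.
    return tuple(pp + ps * cp
                 for pp, ps, cp in zip(parent_pan, parent_scale, child_local_pan))

parent = {"pan": (100.0, 50.0, 0.0), "scale": (2.0, 2.0, 1.0)}
child_offset = (10.0, 0.0, 5.0)   # local offset made after parenting
print(world_pan(parent["pan"], parent["scale"], child_offset))  # (120.0, 50.0, 5.0)
```

Because the child keeps its own local offset, transforming the parent moves every child with it while preserving each child’s relative position.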
Deleting Parent Layers
When you disconnect a layer that’s being used as a parent, a warning appears:
Clicking Yes removes the parent layer, eliminating the parent-child hierarchy. The
remaining layers’ parentTo parameters still show the original layer, in the event you
decide to reconnect it later on.
Showing and Hiding Layers
You can toggle the visibility of layers. For example, if you need to transform a layer
that’s hard to select because it’s obscured by other layers in the composition, you can
“solo” it to hide all other layers, or simply hide the individual layers that are in the way.
Hidden layers are not rendered.
To show or hide layers, do one of the following:
m Right-click a layer in the Viewer, and turn it on or off in the Plane Visibility submenu.
The Plane Visibility submenu also has commands to Hide All Planes and Show All
Planes.
m Open the Images tab, then click the Layer Visibility button for that layer.
To solo a layer:
m Open the Images tab, and turn on the Solo button for that layer.
When you solo a layer, every other layer in that MultiPlane node is hidden.
Animating Layers
Layer transformations within the MultiPlane node can be animated similarly to the
parameters within any of Shake’s transform nodes. For more information on keyframing
parameters, see Chapter 10, “Parameter Animation and the Curve Editor,” on page 291.
Attaching Layers to the Camera and to Locator Points
When you import camera path or 3D tracking data into a MultiPlane node, you gain the
ability to attach layers to the camera, or to one of the locator points distributed within
the 3D workspace.
Attaching Layers to the Camera
To use a MultiPlane node to matchmove one or more images into a scene using 3D
tracking data, you need to do three things:
• Import data from a .ma file.
• Position the images you want to matchmove.
• Attach the originally tracked image sequence to the camera.
Attaching the layer that produced the tracking data to the camera forces the attached
layer to follow along with the camera as it moves according to the tracking data. As a
result, the attached layer itself doesn’t appear to be moving in the camera output,
while the unattached layers that are positioned within the 3D workspace appear as if
they’re moving along with the features in the attached layer.
To attach a layer to the camera:
m Click the Attach to Camera button of a layer in the Images tab—the lock icon closes when Attach to Camera is on.
That layer’s image is automatically locked to the full area of the renderCamera angle in
the Viewer.
When you turn on a layer’s Attach to Camera button, the faceCamera, parentTo, pan,
angle, scale, center, and aspectRatio parameters all disappear from that layer’s subtree
in the Images tab, replaced by a single parameter—cameraDistance.
The cameraDistance parameter lets you adjust the relative spacing between layers that
are attached to the camera, and the other unattached layers that are arranged within
the 3D workspace. This lets you determine which layers appear in front of and behind
attached layers.
Decreasing the cameraDistance value brings the layer closer to the front of the
composition, while increasing the cameraDistance pushes the layer farther away.
Attached layers move back and forth along the lines that are projected from the
camera itself to the four corners of the frustum surrounding the camera target.
Regardless of how the cameraDistance parameter is set, a layer that’s attached to the
camera always lines up with the locator points imported from its matching .ma file.
Similarly, attached layers always appear, within the camera angle, at their original scale,
regardless of their position in the 3D workspace.
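The constant apparent size described above can be sketched numerically: if a layer’s world position is interpolated along the camera-to-target ray by cameraDistance, and its world scale grows in proportion to that distance, perspective shrinkage cancels out. This Python sketch uses hypothetical coordinates and is not Shake’s internal math.

```python
# Sketch of why a camera-attached layer keeps its apparent size: moving it
# along the camera->target ray changes depth, but its world scale grows in
# proportion to depth, so the projected size stays constant.
def attached_layer(camera, target, camera_distance):
    # Point on the ray from the camera toward the target.
    direction = [(t - c) for c, t in zip(camera, target)]
    position = [c + camera_distance * d for c, d in zip(camera, direction)]
    # Perspective projection shrinks by 1/depth; scaling by depth cancels it.
    depth = camera_distance
    world_scale = depth                    # grows with distance...
    apparent_size = world_scale / depth    # ...so the apparent size is constant
    return position, apparent_size

pos, size = attached_layer((0.0, 0.0, 0.0), (0.0, 0.0, -10.0), 0.5)
print(pos, size)   # [0.0, 0.0, -5.0] 1.0
```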
You can attach multiple layers to the camera. A typical example is when you need to
isolate one or more foreground elements in a tracked shot, so that a matchmoved
element can be placed behind it. In the following example, a hot-air balloon is inserted
into a tracked cityscape background plate.
Example: Isolating an Element in a MultiPlane Composite
1 First, duplicate the cityscape image sequence that’s used as the background plate, and
create a rotoshape to isolate the front building.
2 Next, attach the original cityscape image, the isolated building, and the hot-air balloon
images to a MultiPlane node.
They appear within the Images tab as separate layers.
cameraDistance reduced  cameraDistance increased
3 Turn on Attach Layer to Camera for the background and isolated building layers to lock
both images to the full area of the renderCamera angle in the Viewer.
4 Open the subtree for each attached layer, and adjust the cameraDistance parameters to
create the necessary spacing between each element for your composition.
In this case, the front building is moved forward so there’s room between the building
and the rest of the background plate for the hot-air balloon.
5 Connect the hot-air balloon image to the MultiPlane node.
A third layer appears in the layer list.
6 Using the onscreen controls, you can now position this new layer between the isolated
building and the overall cityscape plate.
As a result, the balloon appears positioned between the front building and the rest of
the city in the camera view.
Because both the building and city layers are attached to the camera, they look
identical to the original input image from which they’re derived, regardless of each
layer’s position within the 3D workspace.
Attaching Layers to Locator Points
In addition to attaching layers to the camera, you can also attach a layer to any locator
point in the data cloud. This lets you quickly set a layer’s position in space to match that
of a specific feature in a tracked background plate.
When you attach a layer to a locator point, the layer is transformed to match the
position of the locator point using expressions. As a result, operations that change the
position of locator points, such as changing the sceneScale parameter, also change the
position of any layers that are attached to locator points. In addition, animated locator
points (from a tracking application capable of tracking moving subjects in addition to
backgrounds) will transform any layers that are attached to them as well.
You can attach a layer to either a single locator point, or to a group of three locator
points. Attaching a layer to a single locator point only pans the layer; it is not rotated.
To attach a layer to a locator point:
1 If necessary, move the layer’s center point to a position at which you want the layer to
be attached to the locator point.
2 Right-click a locator point in the Viewer, then choose a layer from the Attach Plane to
Point shortcut menu.
The layer is panned to the position of the locator point, and attached at the layer’s
center point. Expressions are added to that layer’s pan parameters that maintain the
relationship between the attached layer and the locator point.
A balloon image inserted between the front building and cityscape
The resulting camera angle
To attach a layer to three locator points:
1 If necessary, move the layer’s center point to a position at which you want the layer to
be attached to the locator point.
2 Shift-click three locator points in the Viewer.
3 Right-click one of the selected locator points, then choose a layer from the Attach
Plane to Point shortcut menu.
The layer is panned so that its center point is at the center of the triangle defined by
the three selected locator points. In addition, it is rotated to match the orientation of
the plane defined by the three points. The order in which you select the locator points
determines the orientation of the attached layer. If you select the same three points in
the reverse order, then choose Attach Plane to Point, the layer will be flipped.
Note: You can attach a layer to three locator points to orient its rotation, then attach a
layer to a single point afterwards to nudge its position in space, while maintaining the
current rotation.
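The geometry behind the three-point attachment can be sketched directly: the layer’s center pans to the triangle’s centroid, and its orientation follows the plane normal, which is the cross product of two triangle edges. Reversing the selection order negates the normal, which is why the layer flips. This is a hypothetical illustration, not Shake’s expression code.

```python
# Sketch of attaching a layer to three locator points: pan to the centroid,
# orient along the plane normal (cross product of two edges). Selecting the
# same points in reverse order negates the normal, flipping the layer.
def centroid_and_normal(p1, p2, p3):
    centroid = tuple((a + b + c) / 3.0 for a, b, c in zip(p1, p2, p3))
    u = [b - a for a, b in zip(p1, p2)]   # edge p1 -> p2
    v = [b - a for a, b in zip(p1, p3)]   # edge p1 -> p3
    normal = (u[1] * v[2] - u[2] * v[1],
              u[2] * v[0] - u[0] * v[2],
              u[0] * v[1] - u[1] * v[0])
    return centroid, normal

c, n = centroid_and_normal((0, 0, 0), (3, 0, 0), (0, 3, 0))
print(c, n)        # (1.0, 1.0, 0.0) (0, 0, 9)
_, n_rev = centroid_and_normal((0, 3, 0), (3, 0, 0), (0, 0, 0))
print(n_rev)       # (0, 0, -9) -- reversed selection order flips the layer
```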
Adjusting sceneScale
The sceneScale parameter, at the top of the Camera tab, lets you scale the relative
distribution of locator points making up a camera path or tracking data cloud within
the 3D workspace of the MultiPlane node. This lets you increase or decrease the space
between different planes within your composition, to make it easier to arrange layers.
If you adjust the sceneScale parameter in the Camera tab, layers that are attached to
locator points move along with the expanding or contracting point cloud:
• Lowering sceneScale contracts the distribution of locator points, bringing any layers
that are locked to locator points closer to the camera. The locator points themselves
appear to bunch up.
• Raising sceneScale expands the distribution of points, moving any layers that are
locked to locator points away from the camera. The locator points themselves appear
to stretch out.
Changing the sceneScale parameter has no effect on layers that are attached to the
camera, nor does it affect the position of layers that are not attached to locator points.
It does, however, affect the size of the frustum, increasing or decreasing the area that is
seen by the camera. It also changes the position of the camera—if you make a big
enough change to the sceneScale parameter, the camera may move past layers in the
3D workspace.
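The effect of sceneScale on the point cloud can be sketched as a uniform scale of each locator point’s position; any layer attached to a point inherits the new position through its expressions. The points below are hypothetical, and the sketch ignores the camera-side effects the text describes.

```python
# Sketch of sceneScale: each locator point's position is scaled uniformly,
# and layers attached to points follow. Hypothetical point coordinates;
# camera-attached layers are not moved this way.
def rescale_cloud(points, scene_scale):
    return [tuple(scene_scale * c for c in p) for p in points]

cloud = [(1.0, 0.0, -4.0), (2.0, 1.0, -8.0)]
print(rescale_cloud(cloud, 0.5))   # points (and attached layers) bunch up
print(rescale_cloud(cloud, 2.0))   # points stretch out, away from the camera
```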
Parameters in the Images Tab
The first three parameters in the Images tab determine the overall image that is output
from the MultiPlane node.
clipLayer
Defines the output resolution of image data produced by the MultiPlane node.
postMMult
When turned on, the postMMult parameter premultiplies the output image produced
by the MultiPlane node.
autoOrder
When turned off, layer order is determined by the position of layers in the Parameters
tab, much like with the MultiLayer node. When autoOrder is turned on, layer order is
determined by each layer’s position in 3D space.
Note: The layer order of coplanar layers (occupying the same coordinate in the 3D
workspace) is determined by their position in the Parameters tab.Chapter 18 Compositing With the MultiPlane Node 513
Individual Layer Controls
Each image that you connect to a MultiPlane node is represented by its own set of layer
controls and subtree parameters in the Images tab. These controls are similar to those
found within the MultiLayer node, and are labeled in the order in which the layers were
connected to the MultiPlane node. For example, L1 is the name of the first connected
layer, followed by L2, and so on.
Layer number
The number of each layer corresponds to the input knot that image is connected to. L1
is the first knot at the left.

Input
Clicking this button shows the input image for that layer in the Viewer.

Layer Visibility
Toggles the visibility for that layer. Layers with visibility turned off are not rendered.

Solo
Turns off the visibility of all other layers. You can only solo one layer at a time. Click
an enabled solo control to disable solo.

Ignore Above
Turns off all layers above the current layer, keeping only the current layer and those
behind it visible.

Attach Layer to Camera
If a particular layer corresponds to a tracked sequence that’s imported from a .ma file,
turn this button on to attach the image to the camera, so that it’s not transformed
when the camera moves to follow a track. As a result, unattached layers appear locked
relative to the tracking information when composited against the attached layer.

Reposition
Dragging this control up and down lets you rearrange the layer order within the
parameter list. This only affects layers that are coplanar, unless autoOrder is turned off.

Input Layer Name Field
Gives the name of the preceding node that’s actually connected to the MultiPlane node.

Composite Mode
A pop-up menu that lets you set each layer with its own composite mode, which affects
how that image’s color data interacts with other overlapping layers. Certain composite
modes add parameters to that layer’s parameter subtree. New layers default to the Over
mode. For more information on composite modes, see “Supported Photoshop Transfer
Modes” on page 476.

Disconnect
Disconnects that input image from the MultiPlane node, and removes it from the layer
list without removing the input nodes from the node tree.
Individual Layer Parameters
Opening up a layer’s parameter subtree reveals a group of compositing and transform
parameters affecting that particular layer.
layerName
The name of the layer. All associated parameters for that layer are prefixed by the
layerName.
opacity
Opacity of the input layer. With an imported Photoshop file, there is an additional
PSOpacity parameter. See “Supported Photoshop Transfer Modes” on page 476 for
more information.
preMult
When this is on (1), the foreground image is multiplied by its alpha mask. If it is off (0),
the foreground image is assumed to already be premultiplied.
compChannels
Sets which image channels are passed up from below (behind) this layer.
addMattes
This parameter appears when a layer is set to Over. When enabled (1), the mattes are
added together to create the composite.
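The interaction of preMult and addMattes in an Over composite can be sketched for a single RGBA pixel in 0–1 floats. This is a simplified model with hypothetical values, not Shake’s internal implementation.

```python
# Sketch of the Over composite for one (r, g, b, a) pixel. With preMult on,
# the foreground is multiplied by its alpha first; with addMattes on, the
# output matte combines the foreground and background mattes.
def over(fg, bg, pre_mult=1, add_mattes=1):
    r, g, b, a = fg
    if pre_mult:                       # multiply foreground by its alpha
        r, g, b = r * a, g * a, b * a  # (off means fg is already premultiplied)
    out_rgb = [f + (1.0 - a) * bv for f, bv in zip((r, g, b), bg[:3])]
    out_a = a + (1.0 - a) * bg[3] if add_mattes else a
    return (*out_rgb, out_a)

print(over((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))
# (0.5, 0.0, 0.5, 1.0) -- half-transparent red over solid blue
```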
faceCamera
The faceCamera pop-up menu lets you choose a camera. Once set, the layer will always
rotate to face the same direction as the selected camera. If the camera is animated, the
layer animates to match the camera’s movement so that the object remains facing the
camera at every frame. This automatically animates that layer’s angle parameters, even
though no keyframes are applied. In the following example, the camera starts off facing
a 2D balloon layer.
With faceCamera turned off, moving the camera around the layer results in the image
flattening as the camera reaches the side of the layer. When faceCamera is set to
Camera1, the image automatically rotates to face the direction the camera is facing.
Unlike ordinary rotation made with a layer’s angle controls, the automatic rotation
that’s made as a result of faceCamera being turned on is relative to the center point of
the bounding box that defines the actual image of the layer, and not by the layer’s
center point.
Rotated camera with faceCamera turned off  Rotated camera with faceCamera set to Camera1
One useful application of this parameter is to offset a layer’s center point, then use layer
rotation to control layer position, even though faceCamera is turned on. In the
following example, an image of a planet is arranged to the right of an image of the sun.
The planet layer’s center point is offset to the left to match the center of the sun layer.
The arrangement in the Camera pane looks like this:
Rotating the planet layer’s yAngle parameter (corresponding to the green axis arrow)
now moves the planet layer around the sun layer, so that it appears to orbit about the
sun. By default, the 2D planet layer thins out as it approaches the camera. Setting
faceCamera to Camera1 results in the planet layer rotating to face the camera as it
moves around the sun.
Sun and Venus images courtesy NASA/JPL-Caltech
Rotated planet with faceCamera turned off  Rotated planet with faceCamera set to Camera1
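The faceCamera idea can be sketched for rotation about the Y axis only: the layer’s yAngle is set so the image never turns edge-on as the camera circles it. This is a hypothetical simplification of what Shake derives per frame; the real behavior covers all three axes and pivots about the image bounding box center.

```python
import math

# Sketch of faceCamera restricted to the Y axis: derive the yAngle that
# keeps the layer facing the camera's side of the scene, so the image
# never flattens edge-on. Hypothetical positions, not Shake's expressions.
def face_camera_y_angle(camera_pos, layer_pos):
    dx = camera_pos[0] - layer_pos[0]
    dz = camera_pos[2] - layer_pos[2]
    return math.degrees(math.atan2(dx, dz))

print(face_camera_y_angle((0.0, 0.0, 10.0), (0.0, 0.0, 0.0)))   # 0.0
print(face_camera_y_angle((10.0, 0.0, 0.0), (0.0, 0.0, 0.0)))   # 90.0
```

Animating the camera continuously re-evaluates this angle, which is why the layer’s angle parameters animate without any keyframes being applied.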
parentTo
This parameter lets you create layer hierarchies within a single MultiPlane node. You can
link one layer to another by choosing a parent layer from this pop-up menu. Layers
with a parent can still be transformed individually, but transformations that are applied
to the parent are also applied to all layers linked to it. For more information on using
the parentTo parameter, see “Creating Layer Hierarchies” on page 505.
pan (x, y, z)
Corresponds to the onscreen pan controls, allowing you to pan the layer in 3D space.
angle (x, y, z)
Corresponds to the onscreen angle controls, allowing you to rotate the layer in 3D space.
scale (x, y, z)
Corresponds to the onscreen scale controls, allowing you to resize the layer in 3D space.
center (x, y, z)
Defines the center point of the layer, about which all pan, rotate, and scale operations
occur. By default the center of newly added layers corresponds to the horizontal and
vertical center of the layer’s bounding box.
aspectRatio
This parameter inherits the current value of the defaultAspect global parameter. If
you’re working on a nonsquare pixel or anamorphic image, it is important that this
value is set correctly to avoid unwanted distortion in the image.
Manipulating the Camera
The camera can either be positioned or animated manually, like any other layer, or by
importing 3D tracking or camera data via a .ma file.
3D Transform Controls
Click the camera to expose its 3D transform controls.
The camera consists of two groups of controls, the camera itself, and the camera target.
Both controls are connected so that they always face one another—moving one
rotates the other to match its position.
Camera Controls
Similar to layer controls, the camera has rotate x, y, and z controls, and translate (move)
x, y, and z controls to constrain movement of the camera to one of these directions.
Dragging the camera image itself freely moves the camera within the Viewer.
Camera  Camera target
Moving the camera in relation to the camera target
An additional control at the front of the camera constrains its movement so that it
moves either closer to or farther away from the camera target—which also adjusts the
interestDistance parameter in the Camera tab of the MultiPlane parameters.
No matter where you move the camera in space, it rotates to face the position of the
camera target. The orientation of the camera target, in turn, rotates to face the camera.
Before moving interestDistance control  After moving interestDistance control
Before panning camera  After panning camera
Additionally, a special group of keyboard shortcuts lets you move the camera by
dragging anywhere within the renderCamera view, without selecting the camera itself.
Note: These keyboard shortcuts only work within the camera view:

Keyboard    Explanation
V-drag      Rotates the camera about its Z axis.
S-drag      Rotates the camera about the X and Y axes, about the camera’s own
            center point, changing the position of the camera target.
Z-drag      Pans the camera in and out along the Z axis.
D-drag      Pans the camera and camera target together along the X and Y axes.
X-drag      Pivots the camera about the camera target’s orbit point.

To move the camera and the camera target together in any view:
m Press T and drag the camera or camera target controls in the Viewer.
Before T-dragging camera  After T-dragging camera

Camera Target Controls and Frustum
The camera target represents the image that is displayed by the camera1 view. In
addition to the standard angle and panning controls, the camera target displays the
orbit point as a yellow sphere at the center of the target controls. This is the point
about which the camera rotates when you move the camera independently of the
camera target, or when you manipulate the camera target angle controls.
Note: The orbit point is also the point about which the Persp (perspective) view rotates
when you rotate this view using the X key.
The outer bounding box represents the frustum, which defines the camera view frame
that produces the output image. The size of the frustum determines, in part, the area of
the 3D workspace that is output as the renderCamera image that’s output by the
MultiPlane node.
Note: Unlike frustum controls found in other 3D animation packages, Shake’s
MultiPlane frustum does not crop the image outside of the frustum boundary. Thanks
to Shake’s Infinite Workspace, this image data is preserved for future downstream
operations in the node tree.
The controls within let you move the camera target. Using the translate (move) x, y, and
z controls moves the camera target—causing the camera to rotate so that it continues
to face the target.
Using the rotate x, y, and z controls rotates the camera about the camera target’s orbit
point.
Moving the camera target closer to or farther away from the camera adjusts the
interestDistance parameter in the Camera tab of the MultiPlane parameters.
Animating the Camera
The camera can either be animated manually, like any other object, or by importing 3D
tracking or camera data via a .ma file.
Animating the Camera Using .ma Data
You can use the Copy, Load, Delete, and Link buttons at the bottom of the Camera tab
of the MultiPlane parameters to import, use, and clear camera and tracking data from a
.ma file.
Use the multiPlaneLocatorScale parameter at the bottom of the guiControls subtree of
the Globals tab to change the size of the locator points displayed in the Viewer.
Before rotating camera target  After rotating camera target
Parameters in the Camera Tab
All of the parameters that affect the camera are located within the Camera tab of the
parameters.
renderCamera
A pop-up menu that lets you choose which angle provides the output image from the
MultiPlane node.
Lock, Unlock, Reset
Click the Lock button to lock all of the camera parameters at once, preventing them
from being accidentally edited. Click the Unlock button to unlock these parameters
again. Click the Reset button to restore the camera to its default position.
cameraName
A text field that lets you rename the camera or angle to more accurately describe your
setup.
sceneScale
Scales the depth of the virtual space used to distribute the locator points that are
displayed in the Viewer (which represent 3D tracking data clouds imported from .ma
files). This parameter allows you to expand or compress the relative distance from the
camera to the tracked background plate. Adjusting this parameter lets you more easily
position layers in space when camera tracking data comes from a subject that’s either
very far away, or very close. This parameter is for reference only, and has no effect on
the data itself. The default multiPlaneLocatorScale value is 50.
Camera Transform Data
The following parameters contain transform data for the camera, as well as parameters
that determine how the image is resolved based on mathematically simulated camera
attributes such as focal length and film gate resolution. These parameters are adjusted
whether you simply reposition the camera within the 3D workspace, keyframe the
camera manually, or import a camera path or 3D tracking data from a .ma file.
Note: Many of the parameters found within the Camera tab are identical to parameters
exported by Maya. For more information, see the Maya documentation.
Lock, Unlock, Reset
Three buttons let you lock all camera parameters, unlock all camera parameters, or
reset all camera parameters.
focalLength
Sets the focal length of the camera’s virtual lens. Shake measures the focal length in
millimeters. The default focalLength is 35mm.Chapter 18 Compositing With the MultiPlane Node 523
angleOfView
A subparameter of focalLength. This value is provided for convenience, and represents
the horizontal field of view of the frustum (as opposed to the vertical field of view that
the Move3D node uses), based on the current focalLength. The value of the
angleOfView has an inverse relationship to that of the focalLength parameter—raising
one lowers the other, and vice versa.
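The inverse relationship between focalLength and angleOfView follows the usual pinhole-camera relation. The sketch below assumes that relation and a hypothetical filmBack width; it is not Shake’s documented computation. Note that filmBack is measured in inches while focalLength is in millimeters, so the gate width is converted first.

```python
import math

# Pinhole-camera sketch of the focalLength/angleOfView link. The filmBack
# width (0.980 in) is a hypothetical gate value; Shake's exact formula is
# not documented here. Raising focalLength lowers angleOfView, and vice versa.
def angle_of_view(focal_length_mm, film_back_width_in=0.980):
    width_mm = film_back_width_in * 25.4          # inches -> millimeters
    return math.degrees(2.0 * math.atan(width_mm / (2.0 * focal_length_mm)))

print(round(angle_of_view(35.0), 1))   # shorter lens, wider angle
print(round(angle_of_view(70.0), 1))   # doubling focal length narrows the view
```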
translate (x, y, z)
Transform data for the camera’s position. By default, the translate parameters are set to
expressions that automatically set their values.
rotate
Transform data for the camera’s rotation. If you keyframe rotation, the data is stored
here.
rotateOrder
The order in which rotations are executed, by dimension. Defaults to XZY.
interestDistance
This parameter defines the distance of the camera target to the camera. By default, this
parameter is set to an expression that automatically sets its value.
filmGate
The filmGate pop-up menu provides presets for setting the filmBack width and height
parameters. There are options corresponding to most standard film formats.
filmBack (width, height)
Represents the width and height of the image that strikes the virtual film, in inches.
Note: If you’re not importing camera data and you want to avoid squeezing the image
improperly, make sure the aspect ratio of the filmBack width and height parameters
matches that of the input image.
scale
Represents the size of the camera compared to the scene. Lowering this value reduces
the visible area taken in by the virtual lens, increasing the size of the scene relative to
the viewer. Increasing this value increases the visible area seen by the lens, reducing
the size of the scene relative to the viewer.
Changing the scale parameter also changes the effective focal length of the camera. For
example, if the focalLength parameter is set to 50, setting the scale to 2 changes the
effective focal length to 25. Setting the scale to 0.5 changes the effective focal length
to 100.524 Chapter 18 Compositing With the MultiPlane Node
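The scale-to-focal-length relation above reduces to a single division, shown here with the values from the text:

```python
# The effective focal length is the focalLength divided by the camera
# scale, per the examples in the text (50 at scale 2 -> 25; at 0.5 -> 100).
def effective_focal_length(focal_length, scale):
    return focal_length / scale

print(effective_focal_length(50.0, 2.0))   # 25.0
print(effective_focal_length(50.0, 0.5))   # 100.0
```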
fitResolution
A pop-up menu with four options: Fill, Horizontal, Vertical, and Overscan. This
parameter determines how differences between the filmBack aspect ratio and that of
the input image are resolved.
filmFitOffset
If the filmBack resolution is different from that of the clipLayer, this parameter offsets
the image within the filmBack resolution in inches.
filmOffset (x, y)
Offsets the image within the filmBack in relation to the area defined by the clipLayer.
This parameter is measured in inches.
useDeviceAspectRatio
If this parameter is turned off, the camera uses the aspect ratio of the input image. If
this parameter is turned on, the deviceAspectRatio parameter is used instead.
deviceAspectRatio
By default, this parameter uses an expression that computes the aspect ratio based on
the filmBack parameter.
xFilter, yFilter
Lets you pick which method Shake uses to transform the image in each dimension. For
more information, see “Applying Separate Filters to X and Y Dimensions” on page 863.
motionBlur
Sets the motion blur quality level. A value of 0 produces no blur, whereas 1 represents
standard filtering. For more speed, use a value less than 1. This value is multiplied by
the global motionBlur parameter.
• shutterTiming
A subparameter of motionBlur. Zero (0) is no blur, whereas 1 represents a whole
frame of blur. Note that standard camera blur is 180 degrees, or a value of .5. This
value is multiplied by the global parameter shutterTiming.
• shutterOffset
A subparameter of motionBlur. This is the offset from the current frame at which the
blur is calculated. Default is 0; previous frames are less than 0.
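Converting a real-world shutter angle to a shutterTiming value is simple arithmetic, following the note that a standard 180-degree shutter corresponds to 0.5:

```python
# A 360-degree shutter exposes the whole frame (shutterTiming = 1), so a
# shutter angle maps to shutterTiming as angle / 360.
def shutter_timing(shutter_angle_degrees):
    return shutter_angle_degrees / 360.0

print(shutter_timing(180.0))   # 0.5 -- standard camera blur
print(shutter_timing(360.0))   # 1.0 -- a whole frame of blur
```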
.ma Load/Camera Copy, Delete, Link Buttons
Four buttons at the bottom of the Camera tab let you control how .ma camera data is
used within a MultiPlane node.
Load
This button lets you load .ma data from a file. This creates a new camera in the
currently open MultiPlane node.Chapter 18 Compositing With the MultiPlane Node 525
Copy/Delete
These buttons let you duplicate or delete the currently selected renderCamera.
Link
The link button at the right lets you link the camera in the currently open MultiPlane
node to an animated camera within another.
Delete Cloud
Deletes a point cloud, but leaves the camera angle intact. This is useful if you plan on
redoing the track and you want to clear the old point cloud in preparation for
importing a new one.

19 Using Masks
This chapter describes how you can use masks in Shake
to create transparency and to limit the effects of other
functions within your node tree.
About Masks
Masking is the process of using one image to limit another. This typically takes the form
of assigning one image to be used as an alpha channel by another. Masking in Shake
can also take the form of using one image to limit the effect of a particular node in the
node tree.
Masking is closely related to keying. Keying is a process for creating (pulling) a matte,
typically using color (green or blue) or brightness (whites or blacks) information from
the image to mask that image. Masking is even simpler—it’s simply assigning one
image to be used as a matte to another image or operation. (For more information on
keying, see Chapter 24, “Keying,” on page 681.)
Masks in Shake are extremely flexible, and can be combined in any number of different
ways. You can create masks that add to, or subtract from, the existing alpha channel of
images anywhere within your node tree.528 Chapter 19 Using Masks
Using Side Input Masks to Limit Effects
You can attach a mask to the side input of a node, thereby limiting that node’s effect
on the input image. In the following screenshots, a mask image (actually an RGrad
image node modified by a Move2D node) is used to limit the effect of a Brightness node
that’s connected to the source image of the car.
By connecting the mask image to the side input of the Brightness node, parts of the
source image remain unaffected by the Brightness operation.
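The effect of a side input mask can be sketched per channel value in 0–1 floats: the result is a mix of the corrected and original pixel, weighted by the mask. The function and values below are hypothetical; Shake performs the equivalent internally when a mask is attached.

```python
# Sketch of a mask limiting a Brightness node for one channel value: where
# the mask is 1 the corrected value is used, where it is 0 the original
# passes through, and in between the two are mixed. Hypothetical values.
def masked_brightness(pixel, gain, mask):
    corrected = pixel * gain
    return mask * corrected + (1.0 - mask) * pixel

print(masked_brightness(0.8, 1.5, 1.0))   # fully inside the mask: corrected
print(masked_brightness(0.8, 1.5, 0.0))   # 0.8 -- outside the mask, unaffected
print(masked_brightness(0.8, 1.5, 0.5))   # feathered mask edge: a mix
```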
Important: The side input is meant to be used with effects nodes only. Do not use the
side input to mask image nodes.
You can set up a node to use a side input mask in one of two ways. You can connect an
existing mask image to the side input node of any effect node, or you can open an
effect node’s parameters and create an automatically connected side input mask using
the Mask controls.
Source image  Mask image
Masked Brightness node  Corresponding node tree
To attach an image in the node tree to a side input mask:
m Drag a noodle from an image’s output knot, and attach it to a node’s side input mask.
To create a side input mask:
1 Load the parameters of the node you want to mask into the Parameters tab.
2 Do one of the following:
• Click Create to create a new instance of the type of node listed in the pop-up menu
to the right.
• Choose a different type of node from the Create pop-up menu to the right.
A new image node is created, automatically connected to the side input mask.
Connected side input mask
Adding Custom Nodes to the Mask Shape List
To add your own nodes to the Mask shape list, add a line similar to the following in a
ui.h file:
nuiAddMaskCommand("QuickShape", "QuickShape();");
nuiAddMaskCommand("QuickPaint", "QuickPaint(0);");
For more information on ui.h files, see Chapter 14, “Customizing Shake,” on page 355.
Drag noodle to side input.
Parameters Within the Mask Subtree
The Mask subtree, located in the top section of any node’s Parameters tab, contains the
following parameters that let you customize how the input mask image is used:
maskChannel
Lets you choose which channel to use as the mask. This parameter defaults to A
(alpha).
invertMask
Lets you invert the mask, reversing its effect on the image.
clampMask
Turning this parameter on clamps mask image data to a value between 0 and 1. It is
important to enable this parameter when using floating point images as masks.
enableMask
Lets you turn the mask off and on without having to disconnect it.
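When saved in a script, these mask controls become arguments to an invisible Mask node (documented at the end of this chapter). A sketch of how a masked Brightness might appear in a script, using hypothetical node names:

```
// Sketch only: node names are hypothetical. The Mask call follows the
// synopsis documented later in this chapter:
//   Mask(image, mask, maskChannel, percent, invertKey, enableKey)
Brightness1 = Brightness(FileIn1, 0.3);
Masked1 = Mask(Brightness1, RGrad1, "A", 100, 0, 1);  // channel A, not inverted, enabled
```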
Using Masks to Limit Color Nodes
The following example uses images from the tutorial media directory ($HOME/nreal/
Tutorial_Media/Tutorial_Misc/masks) to show how to limit the effects of color nodes
using masks.
Masking a color-correction node to create a lighting effect:
1 In the Image tab, click the FileIn node, and select the car_bg.jpg, woman_pre.iff,
sign_mask2.iff, and car_mask.iff images in the /Tutorial_Misc/masks directory, then
click OK.
2 In the Node View, select the car_bg node.
3 Click the Color tab, and then click Brightness.
A Brightness node is added to the car_bg image.
4 In the Brightness parameters, set the value to .3.
The entire image darkens.
5 To create a mask that gives the appearance of a “spotlight,” do one of the following:
• Create an RGrad node (in its own branch), and connect the RGrad output to the M
(mask) input on the side of the Brightness node.
• In the Brightness parameters, choose RGrad from the Mask pop-up menu.
Note: To create the node type currently displayed in the Mask pop-up menu, click
Create. For information on using the rotoscoping or paint tools and their onscreen
controls to draw and edit masks, see Chapter 21, “Paint,” on page 579.
An RGrad is connected as the mask input for the Brightness node, and the masked
portion of the image is darkened.
6 In the Node View, select the RGrad node, click the Transform tab, then click the
CornerPin node.
7 Using the onscreen controls and the following image as a reference, adjust the RGrad
image to put the circle in perspective.
For more information on transforming with onscreen controls, see Chapter 26,
“Transformations, Motion Blur, and AutoAlign,” on page 763.
8 To invert the mask, open the Mask subtree in the Brightness node, and enable
invertMask.
The mask is inverted, and the masked portion of the image is lightened.
Don’t Use Mask Inputs With Layer Nodes
Mask inputs are useful for color corrections and transforms. However, masks should
not be used for layer nodes. The logic is the complete opposite of what you think it
should be. Honest. As the following example shows, even with color and transform
nodes, masks should be used with caution.
Masking Concatenating Nodes
It is never a good idea to use side input masking with multiple successive
concatenating nodes because doing so breaks the concatenation. The following
example demonstrates the wrong way to use masks.
Breaking node concatenation with side input masks—don’t try this at home:
1 Select the Brightness node and apply a Color–Mult node.
2 In the Color controls of the Mult node parameters, set the Color to blue.
A blue tint is created to color correct the dark areas of the background.
3 Connect the output of the CornerPin node to the M input of the Mult node.
4 To invert the mask on the Mult node, expand the Mask controls and enable invertMask.
The Mult (color-correction) node is masked and the mask is inverted (like the Brightness
node), so that only the dark areas are tinted blue.
5 In the Mult node, adjust the Color controls to a deeper blue color.
Although the result appears fine, there are several problems with the above approach:
• Normally, the Mult and Brightness functions concatenate. By masking either node,
you break concatenation. When concatenation is broken, processing time slows and
accuracy decreases.
• Masking twice with the same node (the RGrad node in this example) slows
processing.
• Your edges get multiple corrections, and tend to degrade. This is evident in the blue
ring around the soft parts of the mask.
A better way to mask a series of concatenating nodes:
1 Disconnect the masks from the previous example.
2 Select the Mult node and add a Layer–KeyMix node.
3 Connect the car_bg node output to the KeyMix node’s Foreground input (the second
input).
4 Connect the CornerPin node to the KeyMix node’s Key input (the third input).
This eliminates the problems described above. The following images compare the two
resulting renders. In the right image, which uses the KeyMix node, there is no blue ring
around the soft part of the mask area.
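In script terms, the difference between the two approaches can be sketched like this. Node names and color values are illustrative; the KeyMix input order follows the steps above (background first, foreground second, key third), with the remaining KeyMix parameters elided:

```
// Wrong: masking each color node separately breaks the
// Brightness/Mult concatenation, and the RGrad mask is evaluated twice.
Brightness1 = Mask(Brightness(car_bg, 0.3), CornerPin1, "A", 100, 1, 1);
Mult1 = Mask(Mult(Brightness1, 0.5, 0.5, 1), CornerPin1, "A", 100, 1, 1);

// Better: let Brightness and Mult concatenate into one operation, then mix
// the corrected result with the original image using a single KeyMix.
Corrected = Mult(Brightness(car_bg, 0.3), 0.5, 0.5, 1);
KeyMix1 = KeyMix(Corrected, car_bg, CornerPin1, ...);  // key is the third input
```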
Masking Transform Nodes
You can also use masks to isolate transforms.
To mask a pan to create depth between the street and the hood of the car:
1 Select the CornerPin node, and apply a Transform–Pan node.
2 Select the CornerPin node again, and Shift-click Layer–KeyMix.
A KeyMix2 node is added to the CornerPin node as a separate branch.
3 Connect the Pan node to the Foreground input of the KeyMix2 node.
4 Connect the car_mask node to the Key input of the KeyMix2 node.
5 Connect the KeyMix2 node to the Key input of the KeyMix1 node.
6 Click the right side of the KeyMix1 node to show the resulting image in the Viewer.
7 Click the left side of the Pan node to show the onscreen controls in the Viewer, and
load the parameters.
8 Using the following illustration as a guide, pan the RGrad up slightly to the left.
Note: If you had simply connected the car_mask image to the M input of the Pan node,
rather than using the KeyMix method, you would have masked normally concatenating
nodes and broken the concatenation between the CornerPin and Pan functions.
Masking Layers
Another form of masking involves using an image as a holdout matte to cut holes in
another image. Masking layers requires a different approach, since you should never
mask a layer using the side input.
Typical nodes used for this task are Inside and Outside. The Inside node puts the first
image in only the white areas of the second image’s alpha, and the Outside node puts
the first image in only the black areas of the second image’s alpha.
The following example continues with the result node tree from the above example.
Since the sign is further in the foreground of the scene, you do not want the sign to
get the brightening effect. Use the Outside node to put the light mask outside of the
sign mask, in effect punching a hole in the light mask with the sign mask.
Using the Outside node to isolate the sign:
1 In the Node View, select KeyMix2 and apply a Layer–Outside.
2 Double-click the Outside node to load it in the Viewer.
Using Images Without an Alpha Channel
A masked image does not need an alpha channel. Connecting an image without an
alpha channel as the mask doesn’t immediately have an effect, however, since by
default mask inputs are expecting an alpha channel.
To fix this, switch the mask channel to R, G, or B in the Mask subtree to select a
different channel to use as a mask. To use the luminance of the image, apply a
LumaKey node to your mask image and leave the channel at A, or apply a
Monochrome node and select R, G, or B.
3 Connect the sign_mask2 node to the Background input (the second input) of the
Outside node.
The light mask is “outside” of the sign mask.
The following example demonstrates the wrong way to combine the sign and
car masks.
m Using the following image as a guide, combine the sign_mask and the car_mask with
an Over (or Max) node.
A slight problem occurs when you try this using the Over node. A matte line appears
between the two masks.
Fortunately, in Shake, there’s always a different method you can try. This problem is
easily solved by substituting a different node for the Over.
A better way to combine the sign and car masks:
1 Select the Over node, and Control-click IAdd.
The Over node is replaced with the IAdd node, and the line disappears.
Next, you can put the woman outside of the new mask IAdd1 using the Outside node.
Since this technique was used in the previous example, try a different approach. Use
the Atop node—similar to Over, except the foreground only appears where there is an
alpha channel on the background image.
2 Add a SwitchMatte node, and connect the KeyMix1 node to the Foreground input of the
SwitchMatte node.
3 Connect the IAdd1 node to the Background input of the SwitchMatte node.
The alpha channel is copied from IAdd1 to KeyMix1.
4 In the SwitchMatte node parameters, enable invertMatte to invert the mask (since Atop
only composites in the white areas in the background alpha mask).
5 In the SwitchMatte node parameters, disable matteMult.
6 Select the woman_pre node and apply a Layer–Atop node.
7 Connect the SwitchMatte node to the Background input of the Atop node.
If the image looks wrong, make sure that matteMult is disabled, and invertMatte is
enabled in the SwitchMatte node parameters.
Masking Filters
Filters have special masked versions of the node that not only mask an effect, but also
change the amount of filtering based on the intensity of the second image. These take
the same name as the normal filter node preceded by an I, for example, Blur and IBlur.
This is much more convincing than using the mask input.
To mask a filter:
1 Create an Image–Text node, and type some text in the text field. (Type in the second
field labeled “text” since the first field is to change the name of the node.)
2 Adjust the xFontScale and yFontScale parameters so the text fills the frame.
3 Create an Image–Ramp, and set the alpha1 parameter to 0.
4 Select the Text node, and add a Filter–Blur node.
5 Connect the Ramp node to the M input of the Blur node.
6 In the Blur parameters, set the xPixels and yPixels value to 200.
The result looks bad, rather like the following. Notice that the right side of the image
merely mixes the completely blurred image with the non-blurred image.
7 Select the Blur node, and Control-click Filter–IBlur.
The Blur node is replaced with the IBlur node.
8 Disconnect the Ramp from the blur node’s M input, and connect it to the IBlur node’s
second image input.
9 In the IBlur parameters, set the xPixels and yPixels value to 200.
The result is much nicer—the right side is blurred to 200 pixels, the middle is blurred to
100 pixels, and the left edge has no blur at all.
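The two approaches can also be written in script form. A sketch, with hypothetical node names: the Mask call follows the synopsis later in this chapter, while the IBlur argument order is an assumption you should verify against the node's synopsis:

```
// Masked Blur: the fully blurred image is simply mixed with the original,
// so a ramp value of 0.5 yields a 50/50 crossfade of sharp and blurred.
MaskedBlur = Mask(Blur(Text1, 200, 200), Ramp1, "A", 100, 0, 1);

// IBlur: the second image input scales the filter amount per pixel, so a
// ramp value of 0.5 produces a true 100-pixel blur instead of a crossfade.
IBlur1 = IBlur(Text1, Ramp1, 200, 200);
```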
The -mask/Mask Node
This node is only used in the script, but is created invisibly whenever you insert a side input mask. The Mask node masks out the previous operation (in command-line mode)
or a node that you specify when in scripting mode. This is how the interface interaction
of setting a mask is saved in script form. For more information, see “About Masks” on
page 527.
Parameters
mask (image)
The image to be used as a mask on the result of the first input image.
maskChannel (string, default "a")
The channel of the mask image to be used as the mask.
percent (float, default 100)
A gain control applied to the maskChannel.
• 100 percent is full brightness.
• 50 percent is half brightness.
• 200 percent is twice as bright, and so on.
invertKey (int, default 0)
A switch to invert the maskChannel.
• 0 = do not invert
• 1 = invert
enableKey (int, default 1)
A switch to turn the key on and off.
• 0 = off
• 1 = on
Synopsis
Mask(
image,
image mask,
const char * maskChannel,
float percent,
int invertKey,
int enableKey
);
Script
image = Mask(
image,
mask,
"maskChannel",
percent,
invertKey,
enableKey
);
Command Line
shake -mask image maskChannel percent...
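For example, a command line in the following form (filenames hypothetical) darkens an image with a brightness operation, then limits that previous operation with a mask image, using its alpha channel at full strength:

```
shake car_bg.jpg -brightness 0.3 -mask circle.iff A 100
```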
Masking Using the Constraint Node
The Layer–Constraint node also helps to limit a process. The Constraint node mixes two
images according to a combination of modes. The modes are Area of Interest (AOI),
tolerance, channel, or field. In the following example, the AOI is enabled and the area
box is set. Only the area inside of the box of the second image is calculated.
Constraint
Constraint is a multifunctional node that restricts the effect of nodes to limited areas,
channels, tolerances, or fields. Toggle the type switch to select the constraint type.
Certain parameters then become active; others no longer have any effect. This is similar
to the KeyMix node in that you mix two images according to a third constraint. KeyMix
expects an image to be the constraint. Constraint allows you to set other types of
constraints.
The Constraint node also speeds calculation times considerably in many cases. The
speed increase always occurs when using the AOI or field mode, and for many
functions when using channel mode. Channel mode decreases calculation time when
the output is a result of examining channels, such as layer operations. Calculation time
is not decreased, however, when it must examine pixels, such as warps and many
filters. The tolerance mode may in fact increase calculation times, as it must resolve
both input images to calculate the difference between the images.
Parameters
This node displays the following controls in the Parameters tab:
clipMode
Toggles between the foreground (0) or the background (1) image to set the output
resolution.
type
Selects the type of constraint you use.
• AOI - Area of Interest (1): Draws a mixing box.
• Threshold (2): Only changes within a tolerance are passed on.
• Channel (4): Only specific channels are modified.
• Field (8): Only a selected field is modified.
Because of the labeling, you can do multiple types of constraining in the script by
adding the numbers together. For example, 7 = AOI (1) + Threshold (2) + Channel (4); in
other words, AOI, Threshold, and Channel are all active.
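The flag arithmetic can be sketched directly, using the values from the list above:

```
// type is a bit field: add the flag values to combine constraints.
//   AOI = 1, Threshold = 2, Channel = 4, Field = 8
//   type = 1 + 2 + 4     = 7   (AOI, Threshold, and Channel active)
//   type = 1 + 8         = 9   (AOI and Field active)
//   type = 1 + 2 + 4 + 8 = 15  (all four constraint types active)
```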
AOI Controls
These are active only if the type parameter is set to 1. (See “type,” above.) They describe
a cropping box for the effect. Opening this parameter reveals left, right, bottom, and
top subparameters.
rTol
If the type parameter is set to 2, the red channel tolerance. If the difference between
corresponding pixels in the two images is less than the Tolerance value, they are
considered common.
gTol
If the type parameter is set to 2, the green channel tolerance. If the difference
between corresponding pixels in the two images is less than the Tolerance value, they
are considered common.
bTol
If the type parameter is set to 2, the blue channel tolerance. If the difference between
corresponding pixels in the two images is less than the Tolerance value, they are
considered common.
aTol
If the type parameter is set to 2, the alpha channel tolerance. If the difference
between corresponding pixels in the two images is less than the Tolerance value, they
are considered common.
thresholdType
Active only when the type parameter is equal to 2. This sets the Tolerance to “lo” or “hi.”
• 0 = “lo.” Changes are made only if the difference between image1 and image2 is less
than the Tolerance values you set.
• 1 = “hi.” Changes are made only if the difference between image1 and image2 is
greater than the Tolerance values.
tReplace
Active when the type parameter is set to 2. Toggles whether the entire pixel is replaced,
or just the channel meeting the Tolerance criteria.
Channel and Field Controls
Opening this parameter reveals two subparameters.
channels: If the type parameter is set to 4 (see “type,” above), the operation only
applies to the specified channels.
field: If the type parameter is set to 8 (see “type,” above), effect only applies to one
field.
• 0 = even field
• 1 = odd field
invert
Inverts the selection. For example, everything beyond a color tolerance is included,
rather than below, and so on.
20 Rotoscoping
Shake provides rotoscoping capabilities with the
RotoShape node. When combined with Shake’s other
image processing, layering, and tracking functions, you
have a powerful rotoscoping environment.
Options to Customize Shape Drawing
Before you start working with Shake’s RotoShape node, you should be aware that there
are several parameters in the guiControls section of the Globals tab that allow you to
customize shape-drawing behaviors and shape-transform controls in the Viewer. You
can change these parameters to make it easier to use Shake’s controls for your
individual needs.
rotoAutoControlScale
An option which, when enabled, increases the size of the transform controls of shapes
based on the vertical resolution of the image to which the shape is assigned. This
makes it easier to manipulate a shape’s transform control even when the image is
scaled down by a large ratio.
rotoControlScale
A slider that allows you to change the default size of all transform controls in the
Viewer when rotoAutoControlScale is turned on.
Note: You can also resize every transform control appearing in the Viewer by holding
the Command key down while dragging the handles of any transform control in the
Viewer.
rotoTransformIncrement
This parameter allows you to adjust the sensitivity of shape transform controls. When
this parameter is set to lower values, transform handles move more slowly when
dragged, allowing more detailed control. At higher values, transform handles move
more quickly when dragged. A slider lets you choose from a range of 1-6. The default
value is 5, which matches the transform control sensitivity of previous versions of Shake.
rotoPickRadius
This parameter provides the ability to select individual points on a shape that fall
within a user-definable region around the pointer. This allows you to easily select
points that are near the pointer that may be hard to select by clicking directly. A slider
allows you to define how far, in pixels, the pointer may be from a point to select it.
rotoTangentCreationRadius
This parameter lets you define the distance you must drag the pointer when drawing a
shape point to turn it into a Bezier curve. Using this control, you can make it easier to
create curves when drawing shapes of different sizes. For example, you could increase
the distance you must drag to avoid accidentally creating Bezier curves, or you can
decrease the distance you must drag to make it easier to create Bezier curves when
drawing short shape segments.
Using the RotoShape Node
The RotoShape node can create multiple, spline-based shapes that can be used as an
alpha channel for an element, or to mask a layer or an effect. You can only create closed
shapes with the RotoShape node. Shapes created using the RotoShape node are
grayscale, and filled shapes are white against a black background. An alpha channel is
automatically created, and has exactly the same data as the R, G, and B channels.
Shapes can be filled or unfilled. For a shape to have an effect on the alpha channel, it
must be filled. Shapes that are filled with white create solid areas in the alpha channel.
Shapes that are filled with black create areas of transparency. Unfilled shapes have no
effect on the alpha channel.
This chapter covers the RotoShape node as it’s used for rotoscoping. For techniques on
using the RotoShape node to apply masks, see Chapter 19, “Using Masks.”
Note: You can copy shapes, either partially or in their entirety, between the RotoShape,
Warper, and Morpher nodes. When copying a shape from a RotoShape node to a Warper
or Morpher node, you can assign it as a source, target, or boundary shape. This is
especially useful in cases where you’ve isolated a subject using a RotoShape node already
and you can use that shape as a starting point for your warp effect. Be aware that you
cannot copy shapes in RotoShape nodes that were created in Shake 3.5 or earlier.
Add Shapes Mode Versus Edit Shapes Mode
When the RotoShape node is active, the associated tools appear on the Viewer shelf.
There are two main modes you’ll toggle between when using the RotoShape controls in
the Viewer shelf.
Add Shapes Mode
You initially create shapes using the Add Shapes mode.
Edit Shapes Mode
You modify and animate shapes using the Edit Shapes mode.
Why Use the RotoShape Node Instead of the QuickShape Node?
The RotoShape node is a newer, faster, more flexible, and more capable rotoscoping
tool that replaces the QuickShape node.
The RotoShape node has the following advantages over the QuickShape node:
• You can create multiple shapes within the same node.
• You can have a soft-edge falloff on each shape that can be modified independently
on each control point.
• You can make one shape cut a hole into another.
• It is much faster to enter keyframes.
• Once you break a tangent, the tangent remains at the angle you specify until you
break the tangent again.
Drawing New Shapes With the RotoShape Node
Drawing new shapes works the same whether you’re creating a source, target, or
boundary shape. In each case, you create a new, unassigned shape first, then assign its
type in a subsequent step. Unassigned shapes appear yellow, by default.
To create a new shape:
1 Add an Image–RotoShape node to the Node View.
2 Click the Parameter control of the RotoShape node to load its parameters into the
Parameters tab, and its controls into the Viewer shelf.
Note: If you’re rotoscoping over the image from a particular node, click the left side of
the node you want to trace in the Node View to load its image into the Viewer. Make
sure the RotoShape node’s parameters remain loaded in the Parameters tab, otherwise
the shape controls will disappear from the Viewer shelf.
3 In the Viewer shelf, click the Add Shape button.
4 If necessary, zoom into the image in the Viewer to better trace the necessary features of
the subject.
5 In the Viewer, begin drawing a shape by clicking anywhere to place a point.
6 Continue clicking in the Viewer to add more points to the shape.
• Click once to create a sharply angled point.
• Drag to create a Bezier curve with tangent handles you can use to edit the shape of
the curve.
7 To close the shape, click the first point you created.
The shape is filled, and the Edit Shapes mode is automatically activated.
Note: If you traced the image from another node, you’ll need to load the RotoShape
node into the Viewer to see the fill.
Important: You can only create filled shapes in the RotoShape node. To create single-point and open shapes, use the Warper or Morpher node.
A single RotoShape node can contain more than one shape.
To create multiple shapes in a single node:
1 To create another shape, click the Add Shapes button again.
2 Use the techniques described previously to create the additional shape.
3 When you’re finished, click the first point you created to close the shape.
Each shape you create has its own transform control.
To duplicate a shape:
1 Click the Edit Shapes button to allow you to select shapes in the Viewer.
2 Move the pointer over the transform controls of the shape you want to duplicate, then
right-click and choose Copy Shape from the shortcut menu.
3 Right-click in the Viewer, then choose Paste Shapes from the shortcut menu.
Editing Shapes
Once you’ve created a shape, there are several ways you can modify it by turning on
the Edit Shapes button.
Note: When you’re editing a RotoShape node containing multiple shapes that are very
close to one another, it may be helpful to turn off the Enable/Disable Shape Transform
Control button in the Viewer shelf. Doing so hides transform controls that may overlap
the shape you’re editing.
To edit a shape:
1 Click the right side of the RotoShape node you want to modify to load its parameters
into the Parameters tab, its controls into the Viewer shelf, and its splines into the
Viewer.
2 In the Viewer shelf, click the Edit Shapes button.
3 Select one or more points you want to edit by doing one of the following:
• Click a single point to select it.
• Shift-click additional points to add them to the selection.
• Click in the Viewer and drag a selection box over all the points you want to select.
• Hold the Shift key down and drag to use another selection box to add points to the
selection.
• Hold the Command or Control key down and drag to use another selection box to
remove points from the selection.
• Move the pointer over the edge, or the transform control, of a shape, and press
Control-A or Command-A to select every point on that shape.
4 When the selected points are highlighted, rearrange them as necessary by doing one
of the following:
• To move one or more selected points, drag them where you want them to go.
• To move one or more selected points using that shape’s transform control, press the
Shift key while you use the transform control.
Note: Using the transform control without the Shift key pressed modifies the entire
shape, regardless of how many points are selected. For more information on using
the transform control, see page 553.
To add a point to a shape:
1 Click the Edit Shapes button.
2 Shift-click the part of the shape where you want to add a control point.
A new control point appears on the shape outline where you clicked.
To remove one or more points from a shape:
1 Select the point or points you want to remove.
2 Do one of the following:
• Click the Delete Knot button in the Viewer shelf.
• Press Delete.
Those points disappear, and the shape changes to conform to the remaining points.
To convert angled points to curves, and vice versa:
1 Select the point or points you want to convert.
2 Click the Spline/Line button to convert angled points to curves, or curves to angled
points.
An optional step is to set the Show/Hide Tangents button to All or Pick to view
tangents as they’re created.
To change a curve by editing a point’s tangent handles:
1 Make sure the Show/Hide Tangents button is set to All to view all tangents, or Pick to
view only the tangents of points that you select.
2 Make sure the Lock Tangents button is turned off.
3 Do one of the following:
• To change the length of one of the tangent handles independently from the other,
while keeping the angle of both handles locked relative to each other, drag a handle
to lengthen or shorten it. You can also rotate both handles around the axis of the
selected point.
• To change the angle of one of the tangent handles relative to the other, along with
its length, press the Command or Control key while dragging a handle around the
axis of the selected point. The selected tangent handle moves, but the opposing
tangent handle remains stationary.
• To keep the angle of both tangent handles at 180 degrees relative to one another,
keeping the lengths of each side of the tangent identical, press the Shift key while
dragging either of the tangent handles around the axis of the selected point. If you
Shift-drag tangent handles that were previously angled, they are reset.
To edit a shape using its transform control:
1 Make sure the Enable/Disable Shape Transform Control button is turned on.
Each shape’s transform control affects only that shape. For example, if a RotoShape
node has three shapes in the Viewer, each of the three transform controls will only
affect the shape it’s associated with. This is true even if you select control points on
multiple shapes at once.
2 When you move, scale, or rotate a shape using its transform control, each
transformation occurs relative to the position of the transform control. To move a
shape’s transform control in order to change the center point about which that shape’s
transformation occurs, press the Command or Control key while dragging the
transform control to a new position.
3 To manipulate the shape, drag one of the transform control’s handles:
• Drag the center of the transform control to move the entire shape in the Viewer. Both
the X and Y handles will highlight to show you’re adjusting the X and Y coordinates
of the shape.
• Drag the diagonal scale handle to resize the shape, maintaining its current aspect
ratio.
• Drag the X handle to resize the shape horizontally, or drag the Y handle to resize the
shape vertically.
Diagonal scale handle
X handle
Y handle
Drag the Rotate handle (to the right of the transform control) to rotate the shape about
the axis of the transform control.
To edit selected control points using a shape’s transform control:
1 Select one or more control points.
2 Hold down the Shift key while you manipulate one or more selected points with the
transform control to modify only the selected points.
Note: Using a shape’s transform control without pressing the Shift key modifies the
entire shape, regardless of how many points are selected.
To change the position of a shape’s transform control:
m Press the Command or Control key while you drag the center of a transform control to
move it in relation to the shape it is associated with.
This moves the center point around which shape transformations occur. For example, if
you move the transform control of a shape to an area outside the shape itself, rotating
the shape results in the shape moving around the new position of the transform
control, instead of rotating in place.
To change the position of all shapes in a RotoShape node simultaneously:
m Turn on the Enable/Disable Transform Control button.
A transform control that affects the entire node appears across the entire frame. Each
shape’s individual transform control remains visible.
Shape Bounding Boxes
Right-click a point, then choose Bounding Box Toggle from the shortcut menu to
display a box around that shape that can be transformed to move and scale the shape.
This works in addition to the shape’s transform control, which appears at the center of
the shape.
Changing a Shape’s Color
You can change the color of individual shapes, to change their effect on the alpha
channel they create. White shapes create solid areas, while black shapes create regions
of transparency.
To change the color of a shape:
1 Right-click a shape’s transform control, or any of its main control points.
2 Choose Black or White from the shortcut menu to change the shape to that color.
Reordering Shapes
You can reorder multiple overlapping shapes to change the effect they have on the
alpha channel. For example, placing a black shape over a white shape lets you create a
transparent area, while placing a white shape over a black shape creates a solid region.
To change the order of multiple shapes in the same RotoShape node:
1 Right-click a shape’s transform control, or any of its main control points.
2 Choose one of the following options from the shortcut menu:
• Move to Back: The selected shape is put behind all other shapes in that node.
• Move Back: The selected shape is moved one level behind other shapes in that node.
• Move Forward: The selected shape is moved one level in front of other shapes in that
node.
• Move to Front: The selected shape is moved in front of all other shapes in that node.
Showing and Hiding Individual Shapes
Each shape in a RotoShape node is labeled in the Viewer with a number based on the
order in which it was created. You can use this information to show and hide individual
shapes. Hidden rotoshapes aren't rendered.
To hide and show shapes, do one of the following:
• Right-click any shape in the Viewer to display a shortcut menu with commands to hide that shape, hide other shapes, or show all shapes.
• Right-click anywhere in the Viewer to display a shortcut menu that allows you to show or hide any shape in that node by its label.
Locking Tangents
When the Lock Tangents button is turned on, the tangent angles are locked when the
control points are moved, rotated, or scaled.
When Unlock Tangents is selected, the tangent angles are unlocked. Select Unlock Tangents when moving, scaling, or rotating points to maintain the shape.
Copying and Pasting Shapes Between Nodes
There are several nodes that use shapes besides the RotoShape node. These include:
• LensWarp
• Morpher
• Warper
Shapes can be copied and pasted between all of these nodes, so that a shape drawn in
one can be used in any other. Animated shapes are copied along with all of their
keyframes.
Note: You cannot copy shapes from RotoShape nodes that were created in Shake 3.5 or
earlier.
To copy a single shape:
• Right-click the transform control, outline, or any point of the shape you want to copy, then do one of the following:
  • Choose Copy Shape from the shortcut menu.
  • Press Control-C.
To copy all visible shapes:
• Right-click anywhere in the Viewer, then choose Copy Visible Shapes from the shortcut menu.
Note: Since this command only copies visible shapes, you can turn the visibility off for
any shapes you don’t want to copy.
To paste one or more shapes into a compatible node, do one of the following:
• Right-click anywhere within the Viewer, then choose Paste Shapes from the shortcut menu.
• Move the pointer into the Viewer, then press Control-V.
Animating Shapes
Animating shapes created with a RotoShape node is a process of creating and
modifying keyframes. When auto-keyframing is enabled, every change you make to a
shape is represented by a single keyframe in the Time Bar, at the current position of the
playhead.
To animate a rotoshape:
1 Click the right side of the RotoShape node to load its controls into the Viewer shelf.
2 Turn auto-keyframing on by clicking the Autokey button.
3 Move the playhead to the frame in the Time Bar where you want to change the shape.
4 Adjust the rotoshape using any of the methods described in this chapter.
Each time you make an adjustment to a shape with auto-keyframing on, a keyframe is
created at the current position of the playhead.
5 If necessary, move the playhead to another frame and continue making adjustments
until you’re finished.
6 When you’re done, turn off auto-keyframing.
Rules for Keyframing
How keyframes are created and modified depends on two things: the current state of
the Autokey button, and whether or not there’s already a keyframe in the Time Bar at
the current position of the playhead.
When animating shape changes, the following rules apply:
• When auto-keyframing is off and you adjust a shape that has no keyframes, you can
freely adjust it at any frame, and all adjustments are applied to the entire duration of
that node.
• When you adjust a shape that has at least one keyframe already applied, you must
first turn auto-keyframing on before you can make further adjustments that add
more keyframes.
• If auto-keyframing is off, you cannot adjust a shape at a frame that doesn’t already
have a keyframe. If you try to do so, the shape outline turns orange to let you know
that the changes will not be permanent. However, you can still adjust a shape if the
playhead is on an already existing keyframe.
Note: If the playhead is not currently on a keyframe and you modify a RotoShape node while auto-keyframing is off, that change disappears when you move the playhead to another frame (the outline turns orange to indicate the temporary state of the change you’ve made). If you’ve made a change that you want to keep, turn auto-keyframing on before you move the playhead, to add a keyframe at that frame.
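The rules above can be written out as a single decision, which may make them easier to internalize. The following is a paraphrase of the manual's rules in Python, not Shake's actual logic:

```python
# Paraphrase of the keyframing rules (not Shake source code): what
# happens when you adjust a shape, given the Autokey state and the
# playhead position.

def adjustment_result(autokey_on, shape_has_keyframes, on_existing_keyframe):
    if autokey_on:
        # Autokey on: every adjustment is recorded at the playhead.
        return "keyframe created or updated at the playhead"
    if not shape_has_keyframes:
        # No keyframes yet: the change applies to the whole duration.
        return "change applies to the node's entire duration"
    if on_existing_keyframe:
        # Playhead sits on a keyframe: you may still adjust it.
        return "existing keyframe is modified"
    # Otherwise the outline turns orange and the change is temporary.
    return "temporary change; lost when the playhead moves"
```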
Animating Single or Multiple Shapes
When a RotoShape node has multiple shapes, you can control whether or not animated
changes you make to a single shape simultaneously keyframe every shape within that
node. If you’re careful about how you place your keyframes, this allows you to
independently animate different shapes within the same RotoShape node without
overlapping keyframes affecting the interpolation of a shape’s animated transformation
from one keyframe to another.
To set keyframes for only the current shape:
• Set Key Current Shape/All Shapes to Key Current Shape.
When this control is turned on, making a change to a single shape within a RotoShape
node produces a keyframe that affects only that shape. Animation applied to any other
shape is not affected by this new keyframe.
To set keyframes for all shapes:
• Toggle to Key All Shapes.
When this control is turned on, making a change to any single shape results in the
state of all shapes within that RotoShape node being saved in the newly created
keyframe.
Seeing the Correspondences Between Shapes and Keyframes
When you position the playhead in the Time Bar over a keyframe, all shapes that were
animated within that keyframe appear with blue control points. Shapes that aren’t
keyframed remain at the current shape color, which is yellow by default.
Cutting and Pasting RotoShape Keyframes
You can copy and paste rotoshape keyframes from one frame of the Time Bar to
another. Whenever you copy a keyframe, you copy the entire state of that shape at
that frame.
To copy a keyframe:
1 Move the playhead in the Time Bar to the frame where you want to copy the current
state of the shape.
2 Right-click the transform control of the desired shape, then choose Copy Keyframe of
Shape from the shortcut menu.
Note: You can copy the state of a shape at any frame, even if there is no keyframe
there. Simply position the playhead anywhere within the Time Bar and use the Copy
Keyframe command. That data can be pasted at any other frame as a keyframe.
To paste a keyframe:
1 Move the playhead in the Time Bar to the frame where you want to paste the copied
keyframe.
2 Right-click the transform control of the desired shape, then choose Paste Keyframe of
Shape from the shortcut menu.
Adding Blank and Duplicate Keyframes to Pause Animation
If you want a shape to be still for a period of time before it begins to animate, insert a
pair of identical keyframes at the start and end frame of the pause you want to create.
If you want to delay a shape’s animation for several frames beyond the first frame, insert a keyframe with no animated changes at the frame where you want the animation to begin, then modify the shape at the frame where you want the animation to end.
To manually add a keyframe without modifying the shape:
• Click the Autokey button off and on.
A keyframe is created for the current state of the shape. If the shape is already
animated, the state of the shape at the position of the playhead in the Time Bar will be
stored in the new keyframe.
Shape Timing
Three parameters within the timing subtree of the RotoShape parameters allow you to
modify when a rotoshape starts and ends. An additional retimeShapes control lets you
retime all keyframes that have been applied to that RotoShape node, speeding up or
slowing down the animation that affects the shapes within.
timeShift
Offsets the entire rotoshape, along with any keyframes that have been applied to it.
This parameter corresponds to the position of that rotoshape in the Time View.
inPoint
Moves the in point of the rotoshape, allowing you to change where that rotoshape
begins. This parameter corresponds to the in point of the rotoshape in the Time View.
outPoint
Moves the out point of the rotoshape, allowing you to change where that rotoshape
ends. This parameter corresponds to the out point of the rotoshape in the Time View.
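The three timing parameters can be sketched as follows. This is a conceptual illustration in Python, not Shake code, and the exact semantics (timeShift as a frame offset, inPoint/outPoint as an inclusive active range) are assumptions inferred from the Time View descriptions above:

```python
# Hedged sketch of the timing subtree (semantics assumed, not Shake code):
# timeShift slides the whole rotoshape and its keyframes along the
# timeline; inPoint and outPoint trim where it is active.

def shape_active(frame, in_point, out_point):
    """Assumes the rotoshape contributes only within [inPoint, outPoint]."""
    return in_point <= frame <= out_point

def source_frame(frame, time_shift):
    """A positive timeShift delays the shape's keyframes by that many frames."""
    return frame - time_shift
```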
Retiming RotoShape Animation
The retimeShapes button, within the timing subtree of the RotoShape Parameters tab,
lets you retime all of the keyframes that are applied to that rotoshape.
Using this command, you can compress the keyframes that are animating a rotoshape,
speeding up the changes taking place from keyframe to keyframe, or expand them,
slowing the animation down.
When you click retimeShapes, the Node Retime window appears. The retimeShapes
command has two modes, Speed and Remap, which affect a RotoShape node’s
keyframes similarly to the Speed and Remap options found within the Timing tab of a
FileIn node.
Speed Adjustments
The default Operation, Speed, lets you compress or expand all of the keyframes within
the RotoShape node by a fixed multiplier.
Suppose you have a group of keyframes spread across the Time Bar. Using the default Amount value of 2.0 and clicking apply contracts the keyframes proportionally, so the same animation plays back in half as many frames.
invert
Turning on the Invert button expands the keyframes by the Amount value, instead of contracting them. This has the same effect as setting Amount to the reciprocal value between 0 and 1 (for example, 0.5 instead of 2.0).
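The effect of a Speed-style retime on keyframe times can be sketched as below. This is conceptual Python, not Shake code; that keyframes are scaled about the first keyframe is an assumption consistent with the Amount = 2.0 example, not something the manual states:

```python
# Sketch of a Speed retime (assumption: keyframe times scale about the
# first keyframe). Amount > 1 contracts the keys; Invert expands them,
# which is equivalent to using the reciprocal Amount.

def retime_speed(frames, amount, invert=False):
    factor = amount if not invert else 1.0 / amount
    first = frames[0]
    return [first + (f - first) / factor for f in frames]

keys = [10, 20, 40]
contracted = retime_speed(keys, 2.0)               # twice as fast
expanded = retime_speed(keys, 2.0, invert=True)    # same as amount=0.5
```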
Remap Adjustments
Setting Operation to Remap provides a way for you to use curve expressions to retime
the current keyframe distribution. This lets you apply a retiming curve from a FileIn node
to the keyframes of a shape that you’ve already animated to rotoscope that image.
Note: If you want to create a curve specifically to use with the Node Retime command,
you can create a local variable within the RotoShape Parameters tab, load it into the
Curve Editor, and create a curve expression which you can then copy and paste from
the local variable into the Retime Expr field.
Attaching Trackers to Shapes and Points
You can attach preexisting Tracker nodes to a shape, or to any one of a shape’s
individual control points. Once a tracker is attached and you are happy with the result,
you can bake the track into the shape’s panX and panY parameters.
Attaching Trackers to Shapes
When you attach a tracker to a shape within a RotoShape node, the individual rotoshape
control points are not changed as the shape moves along the tracked motion path.
Furthermore, the offset between the position of the tracker and the original position of
the shape is maintained as the shape follows the path of the tracker. You can attach
separate trackers to separate shapes within the same RotoShape node.
To attach a track to an entire shape:
• In the Viewer, right-click the lower-center portion of the shape’s transform controls and select an available track from the “Attach tracker to shape” list.
Note: You may have to click more than once for the correct menu to appear.
To remove the tracker from the shape:
• Right-click the lower-center portion of the shape’s transform controls, then choose “Remove tracker reference” from the shortcut menu.
To bake the track into the shape:
• Right-click the lower-center portion of the shape’s transform controls, then choose “Bake tracker into panX/panY” from the shortcut menu.
The selected track information is fed into the shape’s panX and panY parameters.
Attaching Trackers to Individual Control Points
You can also attach multiple trackers to each of a shape’s individual control points. You
can attach as many trackers to as many separate control points as you like.
To attach an existing track to a single control point:
1 Select a shape control point in the Viewer.
2 Right-click the selected point, then choose a tracker from the “Attach tracker to selected CVs in current shape” shortcut submenu.
The selected tracker now animates the position of that control point. The offset
between the position of the track and the original position of the shape control point is
maintained as the point is animated along the path of the tracker.
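The offset-preserving behavior described above can be sketched as follows. This is conceptual Python, not Shake code: the control point follows the tracker's motion relative to a reference frame, rather than snapping to the tracker's absolute position.

```python
# Sketch (not Shake code) of a tracker driving a control point while
# preserving the original offset: the point inherits the tracker's
# motion, measured from a reference frame, not its absolute position.

def tracked_point(original_point, track_positions, frame, ref_frame=0):
    ox, oy = original_point
    tx0, ty0 = track_positions[ref_frame]   # tracker at the reference frame
    tx, ty = track_positions[frame]         # tracker at the current frame
    return (ox + (tx - tx0), oy + (ty - ty0))

# A point at (100, 50) follows a tracker that drifts from (10, 10) to (14, 12):
track = {0: (10, 10), 5: (14, 12)}
moved = tracked_point((100, 50), track, frame=5)
```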
You can also create a new Tracker node that is referenced by a specific control point.
To create a new Tracker node attached to one or more control points:
1 Select one or more points in the Viewer.
2 Right-click one of the selected points, then choose “Create tracker for selected points”
from the shortcut menu.
3 Attach the new tracker to the image you want to track, and use the tracker’s controls to
track the desired feature.
The control point you selected in step 1 automatically inherits the tracking data.
To remove a tracker from one or more control points:
1 Select one or more shape control points in the Viewer.
2 Right-click one of the selected points, then choose “Remove tracker reference on
selected points” from the shortcut menu.
Adjusting Shape Feathering Using the Point Modes
Each shape you create with a RotoShape node actually consists of two overlapping sets
of points. The main shape points define the filled region of the shape itself, and a
second set of edge points allows you to create a custom feathered edge. Since the
shape edge and feathered edges are defined with two separate sets of points, you can
create a feathered edge with a completely different shape.
The following table describes the four point modes that allow you to adjust the shape
and feathered edges either together or independently of one another.
Group Points (F1): Lets you move a main shape point and its associated edge point together. Selecting or moving main shape points automatically moves the accompanying edge points. In this mode, edge points cannot be selected.
Main Points (F2): Only allows you to adjust the main shape points. Edge points cannot be modified.
Edge Points (F3): Only allows you to adjust the edge points. Main shape points cannot be modified.
Any Points (F4): Lets you select and adjust either main shape points or edge points independently of one another.

To create a soft edge around a shape:
1 Click the Edge Points button.
2 Select one or more points around the edge of the shape.
3 Drag the selected points out, away from the shape’s edge. The farther you drag the edge, the softer it becomes.
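The softness you get from dragging edge points out can be pictured as an alpha falloff between the two outlines. The sketch below is conceptual Python, not Shake code, and the linear falloff (illustrated for a circular shape) is an assumption; Shake's actual edge interpolation is not documented here:

```python
# Sketch (not Shake code) of a feathered edge on a circular shape:
# alpha is 1.0 inside the main outline, 0.0 outside the edge outline,
# and ramps linearly between them (linear falloff is an assumption).

def feather_alpha(distance, main_radius, edge_radius):
    """Alpha at `distance` from the center of a circular shape whose
    main outline sits at main_radius and edge outline at edge_radius."""
    if distance <= main_radius:
        return 1.0
    if distance >= edge_radius:
        return 0.0
    return (edge_radius - distance) / (edge_radius - main_radius)
```

Dragging the edge points farther out corresponds to a larger edge_radius, which spreads the ramp over more pixels and softens the edge.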
To reset a soft edge segment to the default hard edge:
1 Click the Edge Points or Any Points controls.
2 Right-click the edge point you want to reset, then choose Reset Softedge from the
shortcut menu.
To adjust both main and edge points at the same time:
1 Click the Group Points button.
2 Select one or more main shape points around the edge of the shape.
Note: In Group Points mode, you can neither select nor adjust edge points.
3 Adjust the selected main shape points.
The accompanying edge points are automatically adjusted to match your changes.
Important: Be careful with the soft edges—if you create a shape with overlapping
lines, rendering artifacts may appear. To clean up minor artifacts, apply a slight blur
using the Blur node.
Linking Shapes Together
When you right-click the transform control, you can set up a skeleton relationship
between your shapes. Right-click and choose Add Child from the shortcut menu, then
click the transform control of the shape you want as a child of the current shape. To
remove the link, right-click, then choose Remove Parent from the shortcut menu.
Once a link is established, modifying a shape affects its children.
Importing and Exporting Shape Data
Two controls let you import and export shape data between Shake and external
applications. These controls are located in the Viewer shelf when editing RotoShape,
Warper, or Morpher nodes.
To export shape data:
• Click the Export Shape Data button in the Viewer shelf, choose a name and destination for the export file in the File Browser, then click OK.
To import shape data:
• Click the Import Shape Data button in the Viewer shelf, choose a compatible shape data file using the File Browser, then click OK.
To support this new feature, a new shape data file format has been defined—named
SSF (Shake Shape File). This format standardizes the manner in which shape data is
stored by Shake for external use. Before shape data can be imported from an external
application, it must first be converted into the SSF format. For more information on the
SSF format, see the Shake 4 SDK Manual.
Right-Click Menu on Transform Control
Shape Visibility > Hide Shapes: Hides all shapes.
Shape Visibility > Hide Other Shapes: Hides all shapes except for the current one.
Shape Visibility > Show All Shapes: Turns the visibility of all shapes on.
Shape Visibility > (list of all shapes): A list of every shape appears within this submenu. Choose a shape to toggle its visibility.
Bounding Box Toggle: Toggles the Bounding Box control for a shape on and off.
Arrange > Move to Back: Moves the shape behind all other shapes.
Arrange > Move Back: Moves the shape back one position in the shape order.
Arrange > Move Forward: Moves the shape forward one position in the shape order.
Arrange > Move to Front: Moves the shape in front of all other shapes.
Select All: Selects all points on the shape.
White: Renders the shape with a white interior.
Black: Renders the shape with a black interior, so it can be used to punch holes in other shapes.
Re-Center: Re-centers the transform tool at the center of the shape. Control-drag to modify it without moving the shape.
Add Child: Click the transform tool of a second shape to make it a child of the current shape.
Remove Parent: Removes the current shape from the skeleton hierarchy.
Delete Shape: Deletes the current shape.
Copy Shape: Copies the current shape.
Copy Visible Shapes: Copies all visible shapes.
Paste Shapes: Pastes copied shapes.
Copy Keyframe of Shape: Copies the state of the shape at the position of the playhead.
Paste Keyframe of Shape: Pastes a copied shape keyframe.
Attach Tracker To Shape: Calls up a list of previously created trackers that may be used to transform the entire shape.

Right-Click Menu on Point
Select All: Selects all points on the shape.
Reset Softedge: Repositions the edge knot on top of the main knot.
Remove tracker reference on selected points: Breaks the link between a tracker and the currently selected control points.
Bake tracker into selected points: Permanently bakes the transformation data from a referenced tracker into the control points.
Create tracker for selected points: Creates a new tracker that’s automatically used to transform the selected control points.
Attach tracker to selected CVs in current shape: Lets you choose from all the trackers available in the current script, in order to attach a tracker to one or more selected control points.

Viewer Shelf Controls
Add/Edit Shapes: Click the Add Shapes button to draw a shape. Click the Edit Shapes button to edit a shape. Closing a shape automatically activates Edit Shapes mode. Rotoshapes render only when Edit Shapes mode is active.
Import/Export Shape Data: Lets you import and export shape data between Shake and external applications.
Fill/No Fill: Controls whether or not the shape is filled.
Show/Hide Tangents: Controls tangent visibility. In Pick mode, only the active point displays a tangent. None hides all tangents, and All displays all tangents.
Lock/Unlock Tangents: When Lock Tangents is on, the tangent angles are locked when control points are moved, rotated, or scaled. When Unlock Tangents is on, the tangent angles are unlocked.
Spline/Linear Mode: New points are created as splines or as linear points. Select a point and toggle this button to specify its type.
Enable/Disable Transform Control: Enables or disables the onscreen transform control used to pan the entire collection of shapes. The default setting is Hide.
Delete Control Point: Deletes the currently selected control point(s).
Point Modes: Determines which points can be selected. Use Group Points mode to select the main shape points and their edge points together. Use Main Points mode to select only the main shape points. Use Edge Points mode to select only edge points, and use Any Points mode to pick either main or edge points.
Key Current Shape/Key All Shapes: When Autokey is enabled (when animating), select Key Current Shape to keyframe only the current rotoshape. Select Key All Shapes to keyframe all rotoshapes.
Enable/Disable Shape Transform Control: Lets you show or hide the transform control at the center of each shape. Hiding these controls prevents you from accidentally transforming shapes while making adjustments to control points.
Path Display: If the main onscreen transform tool is turned on, this button toggles the visibility of the animation path.
RotoShape Node Parameters
The RotoShape node has the following controls:
timing
Three parameters within the timing subtree of the RotoShape parameters allow you to
modify when a rotoshape starts and ends. An additional retimeShapes control lets you
retime all keyframes that have been applied to that RotoShape node.
timeShift
Offsets the entire rotoshape, along with any keyframes that have been applied to it.
This parameter corresponds to the position of that rotoshape in the Time View.
inPoint
Moves the in point of the rotoshape, allowing you to change where that rotoshape
begins. This parameter corresponds to the in point of the rotoshape in the Time View.
outPoint
Moves the out point of the rotoshape, allowing you to change where that rotoshape
ends. This parameter corresponds to the out point of the rotoshape in the Time View.
retimeShapes
The retimeShapes control, within the timing subtree of a rotoshape’s parameters, lets
you retime all of the keyframes that are applied to that rotoshape. Using this
command, you can compress the keyframes that are animating a rotoshape, speeding
up the changes taking place from keyframe to keyframe, or expand them, slowing the
animation down. When you click retimeShapes, the Node Retime window appears.
For more information on using the retimeShapes command, see “Retiming RotoShape
Animation” on page 561.
Res
The width and height of the RotoShape node’s DOD (domain of definition).
bytes
The bit depth of the image created by the RotoShape node. You can specify 8-bit, 16-
bit, or float.
pan
A global pan applied to the entire image.
angle
A global rotation applied to the entire shape—points are properly interpolated
according to the rotation.
aspectRatio
This parameter inherits the current value of the defaultAspect global parameter. If
you’re working on a nonsquare pixel or anamorphic image, it is important that this
value is set correctly to avoid unwanted distortion in the image.
scale
A global scale applied to the entire image.
center
The center of transformation for the angle and x/yScale parameters.
motionBlur
Motion Blur quality level. 0 is no blur, whereas 1 represents standard filtering. For more
speed, use less than 1. This value is multiplied by the Global Parameter motionBlur.
shutterTiming
A subparameter of motionBlur. Shutter length. 0 is no blur, whereas 1 represents a
whole frame of blur. Note that standard camera blur is 180 degrees, or a value of .5. This
value is multiplied by the global parameter, shutterTiming.
shutterOffset
A subparameter of motionBlur. This is the offset from the current frame at which the
blur is calculated. Default is 0; previous frames are less than 0.
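The relationship between shutter angle, shutterTiming, and shutterOffset can be sketched as below. This is conceptual Python, not Shake code; the manual states that 1.0 is a whole frame of blur and that a 180-degree shutter equals 0.5, but the exact placement of the sampled interval relative to shutterOffset is an assumption:

```python
# Sketch (not Shake code) of the shutter parameters as a sampling
# interval. shutter_timing: 1.0 = a whole frame of blur; a standard
# 180-degree camera shutter is 0.5. shutter_offset shifts where,
# relative to the current frame, the blur is calculated.

def shutter_angle_to_timing(degrees):
    """Convert a camera shutter angle to a shutterTiming value."""
    return degrees / 360.0

def shutter_interval(frame, shutter_timing, shutter_offset=0.0):
    """Interval over which motion is sampled (placement is an assumption)."""
    start = frame + shutter_offset
    return (start, start + shutter_timing)
```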
Using the QuickShape Node
The QuickShape node is an image generator intended for animated garbage mattes. It is ideal for plugging into the Mask input of a node, or for use in conjunction with nodes such as Inside, Outside, or KeyMix.
Note: The QuickShape node is an older node for creating rotoshapes. The more flexible (and faster) RotoShape node is recommended; QuickShape is maintained for compatibility purposes. The one advantage QuickShape has over RotoShape is its ability to propagate keyframe changes to other keyframes before or after the current frame.
Since these nodes create images like any node, you can modify the images with other
nodes such as Blur or DilateErode.
You can enable motion blur for an animated QuickShape. Unfortunately, the shape does
not use Shake’s normal high-quality motion blur. It instead draws and renders several
versions of the entire shape, so temporal aliasing occurs with extreme motion.
Creating QuickShapes
When you create a QuickShape node and Display Onscreen Controls is enabled for the
Viewer, you can immediately add points to the shape.
Start in Build mode. In Build mode, each
time you click a blank spot, you append a
new point between the last point and the
first point. Click the points, or, if you hold
down the mouse button, you can drag the
new point around. You can also go back and
change any key or tangent, or insert a point
by clicking a segment.
Once you are finished with the rough shape,
switch to Edit mode (click the Build/Edit
Mode button). When you click a blank spot,
you do not append a new point; instead, you
can drag to select several points and move
them as a group. This also fills in the shape.
Modifying QuickShapes
To select multiple points in Edit mode, drag to select the desired points. The selected
points can then be modified as a group.
The Spline/Line Mode buttons change the
selected points from Linear to Smooth.
Select the points and toggle the button to
the setting you want. In this example, the
two right points have been made Linear.
When Linear is selected, no tangents are
available.
Click the Fill button on the Viewer toolbar to
turn the shape fill on and off. In Build mode,
the shape is not filled. The filled shape is not
just a display feature—it affects the
composite.
The Lock Tangents button locks or unlocks
the tangents of adjacent points when
moving any point. In the first example, the
tangents are unlocked. Therefore, the
middle blue point is moved down. Shake
tries to keep the tangents of the adjacent
points smooth, and therefore moves the
tangents.
If Lock Tangents is on, the adjacent tangents
stay locked in place. This provides accuracy
for adjacent segments, but creates a more
irregular shape.
The Show/Hide Tangents button displays or
hides the tangents on the shape.
The Enable/Disable Transform Control
button turns on and off the display of the
transform tool for the QuickShape.
The Delete Control Point button deletes all
selected points.
To break a tangent:
m
Control-click the tangent.
Note: No tangents are available when the points are set to Linear mode.
To reconnect the tangents:
m
Shift-click the broken tangent.
Use the transform tool to modify the entire shape. The transform tool includes pan,
rotation, and scaling tools for the shape. Since this is a transformation, the points rotate
properly in an angular fashion when interpolating in an animation, rather than just
sliding linearly to the next position. The controls appear at the same resolution as the
QuickShape node, so if you are dealing with 2K plates, you may want to enter a larger
resolution for the QuickShape. Similarly, if you find the transform tool annoying, enter a
resolution of 10 x 10. Neither of these techniques changes rendering speed due to the
Infinite Workspace.
Animating QuickShapes
The following table discusses the QuickShape animation tools.
To easily animate the QuickShape, enable
Autokey and move the points. To enter a
new keyframe, move to a new time, and
change the shape’s position (or the control
points of the shape). In this example, the
shape is smaller on the second keyframe. As
you drag the playhead, the shape
interpolates between the two keyframes.
Delete a keyframe (if present) at the current
frame.
Here, a point is inserted and moved toward
the center at the first keyframe. At the
second keyframe’s position, the shape is still
round because Shake has maintained the
smooth quality of the segment. If you
instead turn on the Propagate buttons
when you modify a point, the second
keyframe’s point position is also modified.
For example, go back to keyframe 1, enable
Propagate Forward, and insert a new point,
dragging it outward. Jump to the second
keyframe, and the new point is positioned
in a relatively similar fashion in the second
keyframe.
If you have several keys, Propagate Forward
or Backward can slow down your
interactivity.
To quickly scrub through an animation,
toggle to Release or Manual Update mode
(in the upper-right corner of the interface),
and then move the playhead. The shapes
draw in real time, but are not rasterized.
QuickShape Node Parameters
The following table lists the QuickShape parameters.
Parameters
This node displays the following controls in the Parameters tab:
width, height
The overall width and height of the frame in which the rotoshape is drawn. Defines the
DOD. These parameters default to the expressions GetDefaultWidth() and
GetDefaultHeight().
bytes
Bit depth, 1, 2, or 4 bytes/channel.
xPan, yPan
A global pan applied to the entire shape.
angle
A global rotation applied to the entire shape. Points are properly interpolated according to the rotation.
aspectRatio
This parameter inherits the current value of the defaultAspect global parameter. If
you’re working on a nonsquare pixel or anamorphic image, it is important that this
value is set correctly to avoid unwanted distortion in the image.
xScale, yScale
A global scale applied to the entire shape.
xCenter, yCenter
The center of transformation for the angle and x/yScale parameters.
motionBlur
Motion Blur quality level. 0 is no blur, whereas 1 represents standard filtering. For more
speed, use less than 1. This value is multiplied by the Global Parameter motionBlur.
shutterTiming
A subparameter of motionBlur. Shutter length. 0 is no blur, whereas 1 represents a
whole frame of blur. Note that standard camera blur is 180 degrees, or a value of .5. This
value is multiplied by the Global Parameter shutterTiming.
shutterOffset
A subparameter of motionBlur. This is the offset from the current frame at which the
blur is calculated. Default is 0; previous frames are less than 0.

21 Paint
Shake provides simple paint capabilities using the
QuickPaint node. This chapter describes how to use the
non-destructive tools found within this node to make fast
fixes to your image sequences.
About the QuickPaint Node
The QuickPaint node is a touch-up tool to fix small element problems such as holes in
mattes or scratches/dirt on your plates. It is a procedural paint tool, which allows you to
change strokes long after they’ve been made. This helps to emphasize its key feature—
it is simply another compositing tool that can easily be used in conjunction with any of
Shake’s other nodes. You can apply an effect and easily ignore it, remove it, or reorder it
after you have applied your paint strokes.
The tools within the QuickPaint node respond to the pressure sensitivity found in most
pen-based digitizing tablets.
Connecting Input Images to the QuickPaint Node
The QuickPaint node has two inputs. The first one lets you connect a background
image to paint on, and also acts as the clone source. The second input is used by the
Reveal tool—Reveal paint strokes expose the image that’s connected to the second
input (for example, a clean background plate), allowing you to replace portions of the
first image with portions of the second image.
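The Reveal behavior amounts to a mask-driven blend between the two inputs. The sketch below is conceptual Python, not Shake code: wherever the stroke is opaque, the second input shows through the first.

```python
# Sketch (not Shake code) of a Reveal stroke as a per-pixel blend:
# stroke_opacity 0.0 keeps input 1; 1.0 fully reveals input 2.

def reveal(pixel_a, pixel_b, stroke_opacity):
    """Blend one channel value between the first and second inputs."""
    return pixel_a * (1.0 - stroke_opacity) + pixel_b * stroke_opacity
```

A semi-opaque stroke (for example, from light pen pressure) would reveal a partial mix of the clean plate rather than replacing the pixel outright.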
Setting the QuickPaint Node’s Resolution
You can apply a QuickPaint node to another node, or you can create an unattached
“floating” QuickPaint node that can be composited later with other nodes using one of
the Layer functions, or used as a mask operator. QuickPaint nodes that are attached to
other nodes assume the resolution of the node tree.
Floating QuickPaint nodes inherit the defaultWidth and defaultHeight of the script. To
change the resolution of a floating QuickPaint node, create a Color or Window node,
and attach the QuickPaint node underneath. Setting the resolution in the Color or
Window node parameters will then determine the resolution of the QuickPaint node.
Note: In the Color node, the alpha channel is set to 1 by default.
It’s important to make sure the resolution of the QuickPaint node is properly set,
because you cannot paint beyond the boundaries of the frame.
Toggling Between Paint and Edit Mode
The QuickPaint node has two modes of operation. In Paint mode, you can create new
brush strokes. In Edit mode, you can modify any previously created paint stroke.
When the QuickPaint node is active, its associated tools appear in the Viewer shelf.
Three subtabs—Paint Controls, Edit Controls, and Paint Globals—appear in the
Parameters tab.
The first button on the Viewer shelf is the Paint/Edit mode toggle.
To toggle between Paint and Edit mode, do one of the following:
• Click the Paint/Edit button.
• Click either the Paint Controls or Edit Controls subtab in the Parameters window to toggle to the Paint or Edit mode.
Paint Tools and Brush Controls
Using the other controls in the Viewer shelf and Paint Controls tab, you can modify the
paint characteristics of new strokes (color, size, brush type, opacity, softness).
There are five basic brush types. One modifier changes the drop-off of the five brush
types. To use a brush, make sure that you are in Paint mode (click the Paint/Edit button
or select a brush), then paint.
To change the size of a selected brush:
• Control-drag in the Viewer. You can also numerically set the brush size in the Paint Controls tab.
To draw a straight line:
• Hold down the Shift key while drawing in the Viewer.
The following table contains the basic brush tools.
Press F9 to select your last-used brush type. With this key command, you can quickly
toggle between the last two brush types you selected.
Picking a Paint Color
There are several ways to pick your paint color and opacity.
The Color control in the Paint Controls tab gives you access to the standard color
tools—the Color Picker, the Virtual Color Picker, or point-and-click color selection in the
Viewer. With the pointer in the Viewer, you can also press F10 or P to open the Color
Picker.
The Color control in the Viewer shelf indicates the current color. It works like any other
color control in Shake.
When you paint, each stroke is unpremultiplied. As a result, adjusting the alpha slider in
the Parameters tab does not affect what you apply to the RGB channels. However,
changing opacity affects all four channels.
Button Description
Hard/Soft When soft is selected, paints any brush type with a soft
falloff. When hard is selected, paints any brush type with a
hard falloff. You can also press F11 to toggle between the
soft and hard falloff.
This is not a brush—it just modifies other brushes.
Paint Brush Applies RGBA color to the first input.
Smudge Smears the pixels. Smudge should always use the hard falloff setting.
Eraser Erases previously applied paint strokes only. Does not affect
the background image.
Reveal Brush Reveals the image connected to the second node input. If
there is no input, the Reveal Brush acts as an Outside node,
and punches a hole through the paint and the first input
source.
Clone Brush Copies areas from the first image input, as well as paint
strokes created in the QuickPaint node.
To move the brush target relative to the source, Shift-drag.
Other Viewer Shelf Controls
The QuickPaint node has the following Viewer shelf controls:
Button Description
Active Channels These buttons indicate which channels are being painted
on. For example, to touch up the alpha channel only, turn
off the RGB channels.
Frame Mode When Frame mode is selected, you only paint on the
current frame.
Interp Mode When Interp (Interpolate) mode is selected, brush strokes
are animated using interpolation from one frame to the
next. For example, paint a stroke on frame 1, and then go
to frame 20, and paint a stroke on frame 20. When you
scrub between 1 and 20, the stroke interpolates. Beyond
frame 20 or before frame 1, the image is black.
To insert a second interpolation stroke, click the Interp
toggle until Interp is selected again, and use the
strokeIndex slider in the Edit Controls tab to select the
stroke you wish to modify.
Persist Mode When Persist (Persistent) mode is selected, the stroke
persists from frame to frame. The stroke does not change
unless you switch to Edit mode and animate the stroke.
History Step Use the History Step buttons to step backward or
forward through your history. Using these buttons
activates Edit mode.
As you step backward, the strokeIndex parameter in the
Edit Controls tab indicates the current stroke number.
Although you can edit any brush at any time, this
parameter sets the point at which you are evaluating
your paint.
You can, for example, step back several strokes, insert a
new stroke, and then step forward. The later strokes are
placed on top of your new stroke because the new stroke
is earlier in the step history.
Magnet Drag Mode When Magnet Drag mode is enabled, and you are in Edit
mode, you can select a group of points on a stroke. If you
click near the middle of the points and drag, the points
near the selected point are dragged more than points
farther away. You can also press and hold Z to
temporarily activate this mode if you are in Linear Drag
mode.
Linear Drag Mode Click the Magnet Drag button to toggle to Linear Drag
mode. When in Edit mode and you drag a group of
points, they all move the same amount.
Erase Last Stroke By default, removes the last stroke you made. If you
select another stroke with the History Step controls or
strokeIndex parameter, this control deletes the currently
selected stroke.
Clear Canvas Removes all strokes from all frames of your canvas.
Modifying Paint Strokes
In Edit mode, you can select any stroke by clicking its path. You can also adjust the
strokeIndex slider back and forth to expose previous strokes numerically.
Once you’ve selected a stroke, you can use the parameters in the Edit Controls tab to
modify its characteristics, changing the tool, softness, stroke mode, color, alpha
channel, brushSize, and all other parameters long after its creation.
You can animate strokes with the Interpolation or Frame setting, or you can modify any
stroke after it has been created using Edit mode. For more information on
converting paint stroke modes, see “Converting Paint Stroke Types” on page 589.
In Edit mode, you can select a stroke in one of three ways:
• Click the stroke.
Its control points and path appear.
• Use the strokeIndex parameter in the Edit Controls tab.
Each stroke is assigned a number, and can be accessed by using the strokeIndex slider.
• Use the History Step buttons.
This not only selects the stroke, but also renders only the existing strokes up to the
selected one. Even though there may be later strokes in the history list, they are not
drawn until you move forward using the History Step button.
To add to your selected points on a paint stroke:
• Press Shift and drag a selection box over the new points.
To remove points from the current selection:
m
Control-drag to remove control points from the current selection.
To drag-select multiple control points:
1 Move the pointer over the stroke you want to edit.
2 Drag to select that stroke’s control points.
3 Edit the points as necessary.
This behavior applies to QuickPaint, RotoShape, and QuickShape objects. For example,
you can display the onscreen controls for the shapes of two different QuickPaint nodes
by loading the parameters of one node into the Parameters1 tab, then Shift-clicking the
right side of the second node to load its parameters into the Parameters2 tab. Also, if
the points you want to drag-select are within a DOD bounding box, move the pointer
over the shape inside the DOD, then drag to select the points.
Deleting Strokes
There are three ways you can delete paint strokes.
To delete the last stroke you created:
• Click the Erase Last Stroke button in the Viewer shelf.
To delete all paint strokes you’ve made on every frame:
• Click the Clear Canvas button in the Viewer shelf.
To delete one or more strokes you made earlier:
1 Change to Edit mode by doing one of the following:
• Click the Edit button in the Viewer shelf.
• Click the Edit Controls tab in the Parameters tab.
2 Move the playhead to the frame with the stroke you want to delete.
3 Select a stroke to delete by doing one of the following:
• Click the History Step button in the Viewer shelf.
• Move the strokeIndex slider in the Edit Controls tab.
4 When the stroke you want to delete is highlighted in the Viewer, click the Erase Last
Stroke button in the Viewer shelf.
Editing Stroke Shape
Once you’ve selected one or more points, you can drag them to a new position to
change the shape of the paint stroke.
To insert a new point:
• Shift-click a segment of the stroke.
To remove a point:
• Select the point, then press Delete.
There are two different drag modes that affect how strokes are reshaped when you
move a selected group of control points.
• If Linear Drag mode is selected, all selected control points move the same amount.
• If Magnet Drag mode is selected, the points nearest the pointer move the most. To
temporarily activate this mode, press and hold the Z key and drag.
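The difference between the two drag modes can be sketched as follows. This is an illustrative Python model, not Shake's actual moveExpression; the linear distance falloff and the radius value are assumptions for the sketch.

```python
# Hypothetical model of the two drag modes described above:
# - "linear": every selected point receives the full offset.
# - "magnet": the offset is weighted down with distance from the
#   grab point, so nearby points move more than distant ones.
def drag_points(points, grab, offset, mode="linear", radius=50.0):
    moved = []
    for (x, y) in points:
        if mode == "linear":
            w = 1.0
        else:  # "magnet": weight shrinks linearly with distance
            d = ((x - grab[0]) ** 2 + (y - grab[1]) ** 2) ** 0.5
            w = max(0.0, 1.0 - d / radius)
        moved.append((x + w * offset[0], y + w * offset[1]))
    return moved
```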
Animating Strokes
There are several ways you can animate paint strokes:
• You can keyframe the movement of selected points using the standard Autokey
controls to set keyframes for paint strokes in Interp mode.
• You can animate the startPoint and endPoint parameters to animate the completion
of a stroke along the defined stroke path.
• You can attach a tracker to a paint stroke.
Attaching a Tracker to a Paint Stroke
In Edit mode, a preexisting track can be attached to a paint stroke.
To attach a track to a paint stroke:
1 Make sure the paint stroke is a persistent stroke.
Note: For information on converting a paint stroke from Frame to Persist mode, see
“Converting Paint Stroke Types” on page 589. You can also convert the stroke after the
track is attached.
2 In the Viewer, right-click the selected stroke, then choose an available track from the
“Add tracker to stroke” shortcut menu.
Note: Although only a single control point appears on the paint stroke when attached
to the track, the track is applied to the entire paint stroke. You cannot apply a track to
individual control points on a paint stroke.
The selected track information is then fed into the stroke’s panX and panY parameters
as an offset.
Once a tracker is attached to a paint stroke, the track information is displayed in the
Viewer on the paint stroke, as well as in the Edit Controls tab, next to the Convert
Stroke button. The track information is only displayed if that stroke is selected in the
strokeIndex or in the Viewer.
To remove the track, right-click the paint stroke in the Viewer, then choose “Remove
tracker reference” from the shortcut menu.
Once the paint stroke is attached to the track and you have achieved the result you
want, you can “bake” the tracker information into the stroke.
To bake the track:
1 Make sure that Edit mode is on.
2 Select the stroke (click the stroke in the Viewer or select the stroke in the strokeIndex
slider in the Edit Controls tab).
3 Right-click the paint stroke, then choose “Bake tracker into paint stroke” from the
shortcut menu.
The keyframes are applied to the paint stroke, and the track information no longer
appears in the Edit Controls subtab.
Modifying Paint Stroke Parameters
You can also use the Edit Controls tab in the QuickPaint parameters to modify your
strokes.
In the Edit Controls tab, click a brush type in the Tool row to switch brush types. You
can also click the Hard/Soft button to switch between a hard and soft falloff, or change
the stroke type with the Convert Stroke button. Additionally, you can alter or animate
the color, alpha, opacity, brushSize, or aspectRatio parameters of the current stroke.
The startPoint parameter determines the point at which the stroke begins, measured as
a percentage offset from the beginning of the stroke’s path. For example, a startPoint
value of 50 displays a stroke that is half the length of its invisible path. The endPoint
parameter works the same way, but from the other end of the stroke. You can animate
a stroke onscreen, creating a handwriting effect by setting keyframes for the endPoint
from 0 to 100 over several frames. All stroke types can be animated in this way.
Remember that the startPoint and endPoint parameters describe a percentage of the
path. Therefore, if you change control point positions relative to each other, you may
introduce unwanted fluctuations in the line-drawing animation.
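The percentage behavior can be pictured with a small sketch (illustrative Python only; this takes the simplified reading that the two parameters mark the visible portion of the path, measured as percentages of its total length).

```python
# Hypothetical model of startPoint/endPoint: both are percentages
# (0-100) of the stroke's total path length, and the painted portion
# lies between them. Keyframing the end percentage from 0 to 100
# grows the stroke along its path -- the "handwriting" effect.
def visible_span(path_length, start_pct, end_pct):
    s = max(0.0, min(100.0, start_pct)) / 100.0 * path_length
    e = max(0.0, min(100.0, end_pct)) / 100.0 * path_length
    return (min(s, e), max(s, e))
```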
The keyFrames parameter (in the Paint Controls subtab) is a placeholder so that
keyframe markers appear in the Time Bar—it has no other interactive function.
Interpolating Paint Strokes
In the following example, frame 1 contains three separate paint strokes, and frame 50
also contains three separate paint strokes. The goal is to interpolate the second paint
stroke on frame 1 (the number “2”) with the second paint stroke on frame 50 (the number “5”).
To interpolate paint strokes from one shape to another:
1 In the Viewer shelf, ensure that Paint mode is enabled.
2 Ensure that Frame mode is enabled.
3 At frame 1, draw three paint strokes.
4 At frame 50, draw three more paint strokes.
Note: In the above illustrations, each number is a single paint stroke.
5 In the Viewer shelf, click the Paint mode button to toggle to Edit mode.
Note: You can also click the Edit Controls tab in the QuickPaint Parameters tab.
6 In the Edit Controls tab, select stroke 2 from the strokeIndex.
7 Click the Convert Stroke button.
The Convert Stroke window opens.
8 In the Convert Stroke window, enable Interp if it is not already enabled.
9 Enter “2, 5” in the Stroke Range field.
This instructs Shake to combine paint strokes 2 and 5 into one interpolated stroke.
Note: Because there is more than one paint stroke on a frame, the comma syntax must
be used for interpolation. If frame 1 contained only one paint stroke, and frame 50
contained only one paint stroke, and you wanted to interpolate the two strokes, you
could enter “1-2” or “1, 2” in the Stroke Range of the Convert Stroke window to
interpolate between paint stroke 1 and paint stroke 2 in the node.
As another example, if you wanted to interpolate between a stroke on frame 1 (stroke
1), a stroke on frame 5 (stroke 2), a stroke on frame 10 (stroke 3), and a stroke on frame
15 (stroke 4), enter “1, 2, 3, 4” to interpolate between all strokes.
10 Click OK.
Scrub between frames 1 and 50, and notice that the 2 (the second paint stroke in the
node) and the 5 (the fifth paint stroke in the node) interpolate.
Because Frame mode was enabled when you drew the paint strokes, the strokes that
are not interpolated (the numbers 1, 3, 4, and 6) exist only at the frame in which they
were drawn (frames 1 and 50).
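Conceptually, what Interp mode does between the two keyed frames can be sketched as a per-point linear blend (an illustrative Python model; Shake's actual stroke interpolation may be more sophisticated, and this assumes both keys have the same number of control points):

```python
# Hypothetical sketch of interpolating one stroke's control points
# between two keyed frames: each point slides linearly from its
# position at frame_a to its position at frame_b.
def interp_stroke(points_a, points_b, frame, frame_a, frame_b):
    t = (frame - frame_a) / float(frame_b - frame_a)
    return [
        (ax + t * (bx - ax), ay + t * (by - ay))
        for (ax, ay), (bx, by) in zip(points_a, points_b)
    ]
```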
Converting Paint Stroke Types
Normally, to paint a stroke that exists in all frames, you select Persist mode in the
Viewer shelf before you draw your strokes. In case you forget this step and draw your
strokes in the default Frame mode, you can use the Convert Stroke feature to convert a
paint stroke, or multiple paint strokes, to Persist mode.
To convert paint strokes from Frame to Persist mode:
1 Once the paint stroke is drawn (in Frame mode), click the Edit mode button in the
Viewer shelf.
2 In the Edit Controls tab, select the stroke in the strokeIndex.
Note: You can change the selected stroke later in the Convert Stroke window.
3 Click the Convert Stroke button.
4 In the Convert Stroke window, enable Persist.
To convert multiple strokes, do one of the following:
• To convert all paint strokes from Frame to Persist, enter the whole stroke range in the
“Convert stroke(s)” value field. For example, if you had a total of 12 strokes, enter
“1-12” to convert all 12 strokes to Persist mode.
• To convert selected strokes, enter the desired range in the “Convert stroke(s)” value
field. For example, enter “3-8, 11” to convert paint strokes 3 through 8 and stroke 11
(of the 12 total strokes).
Note: Enter a frame range using standard Shake syntax (“1-100,” “20-50x3,” and so on).
• To convert a single stroke, enter the stroke number in the “Convert stroke(s)” field.
5 Click OK.
The strokes are converted from Frame to Persist mode.
You can also convert a persistent paint stroke to appear only within a specific frame
range.
To convert paint strokes from Persist to Frame mode:
1 Once the paint strokes are drawn (in Persist mode), click the Edit mode button in the
Viewer shelf.
2 In the Edit Controls tab, use the strokeIndex slider to select the desired stroke.
3 Click the Convert Stroke button.
In the Convert Stroke window, the message “Convert Stroke from ‘Persist’ to ‘Frame’”
and the Frame Range field appear.
4 Enter the frame range for the paint stroke.
For example, to draw the stroke on frames 1 and 3, and from frames 10 to 20, enter “1,
3, 10-20.”
5 Click OK.
The converted stroke appears on frames 1, 3, and 10 through 20.
QuickPaint Hot Keys
The following table lists the QuickPaint node hot keys.
Note: In Mac OS X, Exposé is mapped to F9-F12 by default. To use these keys in Shake,
disable the Exposé keyboard shortcuts in System Preferences.
Key Function
F9 Use last brush.
F10 or P Pick color.
F11 Toggle between hard/soft brush.
Z Magnet drag in Edit mode.
QuickPaint Parameters
The following section lists the QuickPaint node parameters.
Note: The QuickPaint node should not be used inside of macros.
The controls in the QuickPaint node are divided among three nested tabs within the
Parameters tab:
Paint Controls
The parameters in this tab control how new strokes are drawn.
Color
The color of the current paint stroke.
alpha
A slider that lets you change the alpha channel of the current paint stroke. This does
not modify its color, as the strokes are not premultiplied.
brushSize
The size of the brush. You can also Control-drag in the Viewer to set the brush size.
aspectRatio
Aspect ratio of the circular strokes.
opacity
A fade value applied to the R, G, B, and A channels.
constPressure
When this parameter is turned on, the digital graphics tablet’s stylus pressure is
ignored.
keyFrames
This parameter has no purpose except as a placeholder to contain keyframe markers in
the Time Bar. It is not modifiable by the user.
Edit Controls
The parameters in this tab let you modify the qualities of existing brushstrokes. Clicking
the Edit Controls tab puts the Viewer into Edit mode, which lets you pick a brushstroke
with the strokeIndex slider, then modify its properties using the parameters below.
strokeIndex
A slider that lets you pick an individual brushstroke to modify. Each brushstroke is
numbered in the order in which it was created. As you change the strokeIndex number,
the corresponding brushstroke appears with a superimposed path to indicate that it is
selected.
Tool
This parameter lets you change the effect that a brushstroke has on the image after the
fact. For example, you can change a paint stroke into a smudge, eraser, reveal, or clone
stroke at any time.
Convert Stroke
Determines what happens to a stroke after the frame in which it’s drawn. Strokes can
be set to last for only a single frame, or to persist for the duration of the QuickPaint
node, or to change their shape, interpolating from one frame to the next.
The types of strokes available are:
• Persist: Paint strokes are static, and remain onscreen from the frame in which they
were drawn until the last frame of the QuickPaint node.
• Frame: Paint strokes appear only in the frame in which they were drawn.
• Interp: Paint strokes remain onscreen from the frame in which they were drawn until
the last frame of the QuickPaint node. Their shape can be keyframed over time, to
create interpolated animation effects.
Color
A color control that lets you change the color of an existing stroke.
alpha
A slider that lets you change the alpha channel of the currently selected stroke. This
control does not modify the color of strokes, because strokes are not premultiplied. This
parameter affects the R, G, B, and A color channels of the image equally.
brushSize
The size of the brush. You can also Control-drag in the Viewer to set the brush size.
aspectRatio
Aspect ratio of the circular strokes.
startPoint
The point at which the stroke begins, based on a percentage offset. This parameter can
be keyframed to create write-on/off effects.
endPoint
The point at which the stroke stops, based on a percentage offset. This parameter can
also be keyframed to create write-on/off effects.
opacity
A slider that lets you change the transparency of the currently selected brushstroke.
Unlike the alpha slider, this parameter does change the color of the stroke.
keyFrames
This parameter contains keyframes applied to the QuickPaint node, but is not
modifiable by the user.
panX, panY
These parameters let you move strokes, using an offset from each stroke’s original
position.
Paint Globals
The parameters in this tab control how stroke information is captured when using a
digital graphics tablet or mouse.
snapshotInterval
Sets how many strokes are applied before the image is cached. For low-resolution
images, you can set this value lower; if you set it too low when working with film
plates, however, you spend all your time caching 2K plates.
maxPressure
Sets the maximum amount of pressure you can apply.
pressureCurve
You can control the pressure response of the stylus by loading this parameter into the
Curve Editor. You can also, of course, change the graphics tablet’s settings outside of
the software.
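How constPressure and maxPressure condition raw stylus input can be pictured like this (a hypothetical Python model; the real pressureCurve is a curve you edit in the Curve Editor, simplified here to a plain clamp):

```python
# Hypothetical model of stylus-pressure handling: constPressure
# ignores the stylus entirely; otherwise the raw reading is clamped
# into the range [0, maxPressure].
def effective_pressure(raw: float, max_pressure: float = 1.0,
                       const_pressure: bool = False) -> float:
    if const_pressure:
        return 1.0
    return min(max(raw, 0.0), max_pressure)
```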
compressSave
When this control is on, the node is saved in binary format, which is faster and smaller.
When this control is turned off, Shake saves the node in an editable ASCII format
(described below).
moveExpression
This expression controls the drop-off curve for the Magnet drag mode when you move
a group of points.
StrokeData Synopsis
When compressSave is enabled, data is written in a compressed format and is therefore
illegible to you. However, it is faster and more compact when compressed. Each stroke
has the following data in quotation marks when saved in ASCII format:
“FORMAT TOOL MASK
NUMDATA;TIME,TYPE;X;Y;P;X;Y;P;...X;Y;P;TIME,TYPE;X;Y;P;....X;Y;P;
”,
followed by inPoint, outPoint, and so on, up to yOffset.
Exercise Caution With compressSave
If you have a considerable amount of work, you should ensure that compressSave is
enabled. If it is not turned on, you run the risk of creating a file too large for your
computer to read in.
StrokeData Type Function
TOOL int • Paint = 1
• Smudge = 2
• Eraser = 3
• Reveal = 4
• Clone = 5
MASK float The active channels on which the paint is applied.
FORMAT Reserved for future use.
NUMDATA int The number of data pieces per point, placed for future
compatibility reasons. This is currently 3—the X,Y position and the
pressure of each point.
TIME float The time the data corresponds to.
TYPE int The mode the data corresponds to.
• Persist = 0
• Frame = 3
• Interp = 4
• inPoint (Reserved for future use) = 1
• outPoint (Reserved for future use) = 2
X;Y;P float X, Y position, pressure.
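Given the field synopsis above, the repeating X;Y;P point data can be unpacked as in this sketch (Python; the field layout is inferred from the table, and a real saved script may differ in detail):

```python
# Hypothetical parser for the semicolon-separated point data described
# above. NUMDATA is currently 3: X position, Y position, and pressure
# for each sampled point along the stroke.
def parse_points(data: str, numdata: int = 3):
    values = [float(f) for f in data.split(";") if f]
    return [tuple(values[i:i + numdata])
            for i in range(0, len(values), numdata)]

points = parse_points("10;20;1.0;11;22;0.8;13;25;0.5;")
```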
22 Shake-Generated Images
This chapter covers the use of the Shake-generated
image nodes found within the Image Tool tab.
Generating Images With Shake
This chapter covers various nodes that generate images directly within Shake. These
nodes can be used for a variety of purposes—as backgrounds, as masks, or as inputs to
alter the effect of filter or warp nodes.
Checker
The Checker node generates a checkerboard within the boundaries of the image. It is
handy to test warps, or to split a screen in half. To make a specific number of tiles per
row or column (for example, 4 x 4), divide the width and the height by the number—
height/4 yields 4 rows.
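A worked version of that arithmetic (a Python sketch of the parameter math, not Shake syntax):

```python
# Tile-count arithmetic from above: to get a specific number of tiles
# per row and column, divide the image dimensions by that number and
# use the results as xSize and ySize.
def tile_size(width, height, tiles_x, tiles_y):
    return (width / tiles_x, height / tiles_y)

# e.g. 4 x 4 tiles on a 720x540 image -> 180x135-pixel checkers
x_size, y_size = tile_size(720, 540, 4, 4)
```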
Parameters
This node displays the following controls in the Parameters tab:
width, height
The width and height value fields in the Res parameter set the size of the generated
image.
bytes
The bit depth of the generated image. There are three settings: 8 bits, 16 bits, or float
(1, 2, or 4 bytes per channel).
xSize, ySize
The width and height of each checker in the pattern.
Color
The Color node generates a solid field of color within the width and height of the
image.
Parameters
This node displays the following controls in the Parameters tab:
width, height
The width and height value fields in the Res parameter set the size of the generated
image.
bytes
The bit depth of the generated image. There are three settings: 8 bits, 16 bits, or float
(1, 2, or 4 bytes per channel).
Color
A color control that lets you set the color of the generated image.
alpha
A slider that lets you adjust the transparency of the generated image by modifying the
alpha channel.
depth
A slider that lets you adjust the Z depth of the image.
ColorWheel
The ColorWheel node generates a primitive color wheel. It can also be used as a tool to
determine what HSV/HLS commands, such as AdjustHSV and ChromaKey, are doing.
Parameters
This node displays the following controls in the Parameters tab:
width, height
The width and height value fields in the Res parameter set the size of the generated
image.
bytes
The bit depth of the generated image. There are three settings: 8 bits, 16 bits, or float
(1, 2, or 4 bytes per channel).
satCenter
Saturation of center area.
satEdge
Saturation of edge area.
valCenter
Value of center area.
valEdge
Value of edge area.
Note: By adjusting the satCenter, satEdge, valCenter, and valEdge parameters, you can
change the distribution of color and brightness across the color wheel being
generated.
Grad
The Grad node generates a gradient between four corners of different colors. The count
order of the corners is: Corner 1 in the lower-left corner, corner 2 in the lower-right
corner, and so on. For a simple gradient ramp, use the Ramp node. For a radial gradient,
use the RGrad node.
Parameters
This node displays the following controls in the Parameters tab:
width, height
The width and height value fields in the Res parameter set the size of the generated
image.
bytes
The bit depth of the generated image. There are three settings: 8 bits, 16 bits, or float
(1, 2, or 4 bytes per channel).
xMid, yMid
The midpoint of the gradient, where all four colors meet.
LLColor, aLL, zLL
The color, alpha channel value, and Z channel value at the lower-left corner. The color
defaults to 100 percent red.
LRColor, aLR, zLR
The color, alpha channel value, and Z channel value at the lower-right corner. The color
defaults to 100 percent green.
URColor, aUR, zUR
The color, alpha channel value, and Z channel value at the upper-right corner. The color
defaults to 100 percent blue.
ULColor, aUL, zUL
The color, alpha channel value, and Z channel value at the upper-left corner. The color
defaults to black.
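Under the hood, this kind of four-corner gradient amounts to a bilinear blend of the corner colors. The sketch below is an illustrative Python model that assumes a centered midpoint (the node's xMid/yMid parameters shift where the four colors meet):

```python
# Hypothetical bilinear corner blend: u, v are in 0..1 across the
# frame; corners are ordered lower-left, lower-right, upper-right,
# upper-left, matching the node's corner numbering.
def grad_pixel(u, v, ll, lr, ur, ul):
    bottom = tuple(a + u * (b - a) for a, b in zip(ll, lr))
    top = tuple(a + u * (b - a) for a, b in zip(ul, ur))
    return tuple(a + v * (b - a) for a, b in zip(bottom, top))
```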
Ramp
The Ramp node generates a linear gradient between two edges. You can set the
direction of the ramp to horizontal or vertical. The Ramp is, among other things, a useful
tool for analyzing color-correction nodes. Attach a horizontal Ramp node to any of the
color-correction nodes, then attach the color-correction node to a PlotScanline node. For
a radial gradient, use the RGrad node. For a four-corner ramp, use the Grad node.
Parameters
This node displays the following controls in the Parameters tab:
width, height
The width and height value fields in the Res parameter set the size of the generated
image.
bytes
The bit depth of the generated image. There are three settings: 8 bits, 16 bits, or float
(1, 2, or 4 bytes per channel).
orientation
Toggles between generating a horizontal or vertical gradient.
center
The point in the frame at which there’s a 50-percent blend of each color.
Color1, alpha1, depth1
The color, alpha value, and Z channel value at the left (horizontal) or bottom (vertical)
of the frame.
Color2, alpha2, depth2
The color, alpha value, and Z channel value at the right (horizontal) or top (vertical) of
the frame.
Rand
The Rand node generates a random field of noise. The field does not resample if you
change the resolution or density—you can animate the density without pixels
randomly changing. The seed is set to time by default so that it changes every frame,
but you can of course lock this parameter to one value.
Parameters
This node displays the following controls in the Parameters tab:
width, height
The width and height value fields in the Res parameter set the size of the generated
image.
bytes
The bit depth of the generated image. There are three settings: 8 bits, 16 bits, or float
(1, 2, or 4 bytes per channel).
density
The density of the pixels, from 0 to 1. A lower density results in fewer random pixels.
seed
Changes the random pattern of noise that’s created. When Shake generates a random
pattern of values, you need to be able to re-create the same pattern a second time for
compositing purposes. In other words, you want to create different random patterns,
evaluating each one until you find the one that works best, but then you don’t want
that particular pattern to change again.
Shake uses the seed value as the basis for generating a random value. Using the same
seed value results in the same random value being generated, so that your image
doesn’t change every time you re-render. Use a single value for a static result, or use
the keyword “time” to create a pattern of random values that changes over time.
For more information on using random numbers in expressions, see “Reference Tables
for Functions, Variables, and Expressions” on page 941.
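The seeding behavior described above is the same idea as seeding any pseudo-random generator, as this Shake-independent Python sketch shows:

```python
import random

# Same seed -> same "random" pattern, so a re-render doesn't change
# the image. Seeding with the frame number models Shake's keyword
# "time", which yields a different pattern on every frame.
def noise_field(seed, density, count=8):
    rng = random.Random(seed)
    return [1.0 if rng.random() < density else 0.0 for _ in range(count)]
```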
RGrad
The RGrad node generates a radial gradient. You can also control the falloff to make
circles. For a simple color ramp, use the Ramp node. For a four-corner gradient, use the
Grad node.
Parameters
This node displays the following controls in the Parameters tab:
width, height
The width and height value fields in the Res parameter set the size of the generated
image.
bytes
The bit depth of the generated image. There are three settings: 8 bits, 16 bits, or float
(1, 2, or 4 bytes per channel).
xCenter, yCenter
The pixel defining the center of the gradient.
aspectRatio
This parameter inherits the current value of the defaultAspect global parameter. If
you’re working on a nonsquare pixel or anamorphic image, it is important that this
value is set correctly to avoid unwanted distortion in the image.
radius
The non-blurred radius of the center.
falloffRadius
The blurred edge radius (the total width of the circle is radius+falloffRadius).
falloff
The midpoint, in terms of percentage, of the falloff. 0 or 1 equals a hard-edge circle; .5
is a smooth ramp.
centerColor, aCenter, zCenter
The color, transparency, and Z depth at the center of the gradient.
edgeColor, aEdge, zEdge
The color, transparency, and Z depth at the edge of the gradient.
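The radius arithmetic above (a solid center out to radius, a blend over falloffRadius, and an outer edge at radius + falloffRadius) can be sketched as a weight between the center and edge values. This is an illustrative Python model assuming a linear ramp; the node's falloff parameter actually shapes that ramp:

```python
# Hypothetical weight function for a radial gradient: 1.0 means the
# center color/alpha/depth, 0.0 means the edge values. The blend
# happens over falloff_radius beyond the solid inner radius.
def rgrad_weight(distance, radius, falloff_radius):
    if distance <= radius:
        return 1.0
    if distance >= radius + falloff_radius:
        return 0.0
    return 1.0 - (distance - radius) / falloff_radius
```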
Text
The Text node calls on software that is basically identical to GL Render for character
generation. You can type in your text, or use variables and links to insert characters,
making it an ideal tool for generating slates.
You can use any TrueType (.ttf) and Type 1 (PostScript, .pfa for ASCII and .pfb for binary)
font. If an Adobe Font Metrics (.afm) file is present for the font (for example, you have
MyFont.pfa and MyFont.afm), it is supported and provides kerning for the font. Shake
first looks for fonts in its distribution directory, under fonts. You can also place them in a
direct path by setting the environment variable NR_FONT_PATH. Finally, Shake also
detects fonts placed in the standard directories for your OS:
• Macintosh OS X: All “Installed” directories.
• Linux: /usr/lib/DPS/AFM
The Text node uses the Shake implementation of the GL Render. It lets you
manipulate the characters not only in 3D space (including X, Y, and Z position,
rotation, and scaling), but also within a camera field of view. Because of this, it is
better to animate text within the Text (or AddText) node to ensure crisp, clean edges.
To select a font in the interface:
1 In the Text node parameters, open the font pop-up menu.
To preview a font in the Viewer, right-click the font name in the list.
2 To choose a font from the list, click the font name.
Note: For long font lists, drag the scroll bar to see more fonts.
You can also use the following shortcuts at any time without any special formatting:
Text Shortcut Writes
{parameter} Prints either the local or global parameter value, for example:
Script={scriptName}
writes
Script=My_Script.shk
{nodeName.parameter} Prints the parameter’s value from a selected node, for example:
Red Val = {Mult1.red}
could write
Red Val = .6
\n New line. Example:
Hello\nWorld
returns
Hello
World
%f Unpadded frame number
%F Four-digit padded frame number
%t Short non-dropframe timecode: no 00: (if not needed)
%T Long non-dropframe timecode: hr:mn:sc:fr
%tD Short dropframe timecode: no 00: (if not needed)
%TD Long dropframe timecode: hr:mn:sc:fr
%H Host name
%U Username
%c,%C Locale’s century
%d Locale’s day 01-31
%D Locale’s abbreviated day name: Wed
%E Locale’s full day name: Wednesday
%m Locale’s month: 01-12
%M Locale’s abbreviated month name: Nov
%N Locale’s full month name: November
%x,%X Full date representation: mm/dd/yy
%y Year without century: 00-99
%Y Year as ccyy
Examples:
Text String Writes
My name is Peter My name is Peter
My name is %U If login = Dufus: My name is Dufus
My name is %U.\nToday is %M. %d My name is Dufus. Today is Nov. 12
Mult red = {Mult1.red} Assuming the node Mult1 exists, and the red value is .46: Mult red = .46
To get special characters, such as umlauts, copyright symbols, and so on, use octal and
hexadecimal codes preceded by a \ (backslash). These codes can be found in UNIX in
the man page for “printf,” in its special characters section. The following example was
provided by Thomas Kumlehn, at Double Negative. Copy and paste it into the Text node:
Auml=\xC4, Ouml=\xD6, Uuml=\xDC, \n auml=\xE4, ouml=\xF6, uuml=\xFC
\n Szett=\xDF \ntm=\x99, Dot=\x95, (R)=\xAE, (C)=\xA9
A good reference website for characters can be found at www.asciitable.com. (Thanks
to Christer Dahl for this tip.)
To use expressions, preface the text with a “:” (colon), without the enclosing
quotation marks.
All printed strings must be enclosed in quotation marks. For example, if you want
to print “Hell” from frames 1 to 10 and “o World” from frame 11 onward, enter:
:time<11?"Hell":"o World"
Finally, you can also use full C formatting for your strings. This is also initialized with
a “:” at the start of the text string:
: stringf(
"Red = %4.2f at Frame %03f",
Grad1.red1, time
)
To append strings from another parameter, use something like:
in Text1.text: Hello
in Text2.text: : Text1.text + " World"
Parameters
This node displays the following controls in the Parameters tab:
width, height
The width and height value fields in the Res parameter set the size of the frame
containing the generated image.
bytes
The bit depth of the generated image. There are three settings: 8 bits, 16 bits, or float
(1, 2, or 4 bytes per channel).
text
A text field where you enter the text you want to generate in the Viewer.
font
A pop-up menu that lets you choose a font.
xFontScale, yFontScale
Two sliders that let you change the horizontal and vertical size of the generated text.
By default, yFontScale is linked to xFontScale.
leading
The amount of spacing between each line if there are multiple lines of text.
xPos, yPos, zPos
The horizontal, vertical, and Z depth position of the text in the frame. The text is
positioned relative to the center point that’s defined by the xAlign and yAlign
parameter settings.
xAlign
Three buttons that let you define how the generated text should be aligned,
horizontally. The options are:
• left: Aligns the text from the left edge.
• center: Aligns the text from the center.
• right: Aligns the text from the right edge.
yAlign
Three buttons that let you define how the generated text should be aligned, vertically.
The options are:
• bottom: Aligns the text from the bottom edge.
• center: Aligns the text from the middle.
• top: Aligns the text from the top edge.
Color
A color control lets you set the color of the text.
alpha
A slider lets you adjust the transparency of the generated text.
xAngle
A slider lets you create a 3D effect by spinning the text vertically, relative to the
position of the yAlign parameter.
yAngle
A slider lets you create a 3D effect by spinning the text horizontally, relative to the
position of the xAlign parameter.
fieldOfView
The aperture angle in degrees of the virtual camera used to render the 3D positioning
of the xAngle and yAngle parameters.
kerning
The spacing between each letter. Larger values space the letters farther apart, while
smaller values bring the characters closer together. You can also use negative values to
make the characters overlap.
fontQuality
The polygonalization factor of the font splines. This is conservatively set to a high value.
For flat artwork, you can probably get away with a value of 0. When you have extreme
perspective, you should keep it set to a high value.
Tile
The Tile node is located in the Other tab. Tile does not generate an image, but makes
small tiles of an image within that image. The more tiles created, the slower the
processing (for example, more than 40 tiles).
Parameters
This node displays the following controls in the Parameters tab:
nXTile
The number of times the image is duplicated and shrunk horizontally.
nYTile
The number of times the image is duplicated and shrunk vertically.
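The duplicate-and-shrink behavior can be sketched in Python. This uses nearest-neighbor sampling for simplicity; Shake presumably filters the image when shrinking it:

```python
def tile(image, nx, ny):
    """Fill the frame with an nx-by-ny grid of shrunken copies of the image
    (a list of rows), keeping the original resolution."""
    h, w = len(image), len(image[0])
    return [[image[(y * ny) % h][(x * nx) % w] for x in range(w)]
            for y in range(h)]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
tiled = tile(img, 2, 2)  # each tile is a half-size copy, repeated 2 x 2
```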
23 Color Correction
Shake’s color-correction and pixel-analyzer functions
provide many ways of analyzing and manipulating the
color values of your images.
Bit Depth, Color Space, and Color Correction
By default, Shake works with a color range of 0 to 1 in linear RGB space. Shake allows
you to work at different bit depths, but at every bit depth 0 is considered black and 1
is considered white.
Ordinarily, values above 1 and below 0 are clamped (constrained to a value between 1
and 0) as they’re passed from node to node down the tree of image-processing
functions. This can have a profound effect on the resulting image processing in your
script, with the following caveats:
• Nodes that concatenate are not subject to this clamping.
• Clamping does not occur when you work in float bit depth, because 32-bit (float)
computations preserve values above 1 and below 0.
Working in Different Color Spaces
When working with logarithmic Cineon plates, apply a LogLin node to avoid
unpredictable results. The LogLin node allows you to jump from logarithmic to linear
space, or linear to logarithmic space. For more information, see “The Logarithmic
Cineon File” on page 437.
To apply an effect such as a Blur node in a different color space (for example, to blur
the color difference channels in a YUV image, but not the luminance), apply a
ColorSpace node to the image to convert it to the different color space. Then, add the
effect node—in this example, the Blur node. To return to your original color space, add
another ColorSpace node.
For a practical discussion on using this technique, see Chapter 24, “Keying,” on
page 681.
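The round trip can be sketched in Python. The coefficients below are Rec. 601-style luma with unscaled color differences, for illustration only; they are not necessarily the exact ColorSpace math. In a real tree, the Blur of the u and v channels would sit between the two conversions:

```python
def rgb_to_yuv(r, g, b):
    """Rec. 601-style luma plus unscaled color-difference channels."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, b - y, r - y        # (Y, U = B-Y, V = R-Y)

def yuv_to_rgb(y, u, v):
    """Invert the conversion to return to the original color space."""
    b = y + u
    r = y + v
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

y, u, v = rgb_to_yuv(0.8, 0.4, 0.2)
r2, g2, b2 = yuv_to_rgb(y, u, v)    # round trip recovers the original pixel
```

An effect applied only to u and v leaves the luminance channel y untouched, which is the point of the technique.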
Note: To view the images in this chapter in color, refer to the onscreen PDF version of
this documentation.
Concatenation of Color-Correction Nodes
A powerful aspect of Shake’s color handling is its ability to concatenate many color
corrections. This means that if you have ten concatenating color functions in a row,
Shake internally compiles the functions into a single lookup table, then executes that
table. (Internally to Shake, one node is processed, rather than ten.)
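Conceptually, compiling a chain of corrections into a single lookup table looks like this (a Python sketch of the idea, not Shake internals):

```python
def compile_lut(corrections, size=256):
    """Compose a chain of per-channel color functions, then bake the
    composite into a single lookup table over the 0-1 input range."""
    def chain(v):
        for f in corrections:
            v = f(v)
        return v
    return [chain(i / (size - 1)) for i in range(size)]

def apply_lut(lut, v):
    """One table lookup per pixel value, regardless of chain length."""
    return lut[round(v * (len(lut) - 1))]

# Ten stacked corrections collapse into one pass over the table:
ops = [lambda v: v * 1.1] * 5 + [lambda v: v * 0.9] * 5
lut = compile_lut(ops)
```

However long the chain, each pixel is processed once, and no intermediate result is stored (or clamped) between the functions.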
When you concatenate color-correction nodes, you avoid clamping values over 1 and
under 0 within the concatenating node group. By not clamping, you preserve much more
image data. As a result, concatenation preserves quality and speeds processing time.
Note: Color-correction nodes that process image data in float don’t concatenate
because there is no advantage to doing so. Because the image data is calculated in 32-
bit float space, clipping isn’t an issue. Also, there is no computational advantage to
concatenating nodes that are subject to float calculations.
Which Nodes Concatenate With One Another?
Nodes that concatenate with one another are labeled with the letter “c” (for
concatenation) in the upper-left corner of their buttons in the Tool tabs. (The “c”
doesn’t appear on nodes in the Node View.)
Note: Nodes only concatenate with other nodes that have the same color “c.”
Example 1: Proper Color-Correction Concatenation
The following example illustrates the correct method of concatenating color-correction
nodes. First, a Brightness node (set to a value of 4) is applied to an image. Next, a
second Brightness node (set to a value of .25) is added to the tree. The two nodes
concatenate (.25 * 4 = 1) to multiply the image by 1, resulting in no change to the
image. This is the ideal result.
If you turn on the enhanced Node View (with the pointer in the Node View, press
Control-E), you’ll see that the nodes concatenate—indicated by a green line linking
the affected nodes.
Example 2: Incorrect Color-Correction Concatenation
The next example illustrates the pitfalls of incorrectly combining color correction with
other nodes. It uses the same node setup as in the previous example, but with an
added Blur node, which breaks the concatenation. The result is that two separate
color-correction adjustments are made. Because 8-bit and 16-bit image-processing
paths truncate values above 1 and below 0, clamping the intermediate values
produces unexpectedly poor results.
Prior to the Blur node, all of the values are boosted to 1 when multiplied by the first
Brightness node’s adjustment of 4. After the Blur node, the values are then dropped to a
maximum value of .25 when the second Brightness value of .25 is applied.
As you can see, the result is altogether different than the result in Example 1. This can
be avoided by always making sure that the color-correction nodes in your tree are
properly concatenating.
The following nodes concatenate with one another:
• Add
• Brightness
• Clamp
• Compress
• ContrastRGB (but not ContrastLum)
• DelogC
• Expand
• Fade
• Gamma
• Invert
• LogC
• Lookup
• Mult
• Set
• Solarize
Note: AdjustHSV and LookupHSV only concatenate with each other.
Making Concatenation Visible
When you turn on enhanced Node View, you can see the links between concatenated
nodes. Make sure that showConcatenationLinks is set to enhanced in the
enhancedNodeView subtree of the Globals tab, then turn on the enhanced Node View.
Nodes that are currently concatenating appear in the Node View linked with a green
line. For more information, see “Using the Enhanced Node View” on page 221.
Avoiding Value Clamping Using a Bit Depth of Float
One way to avoid the consequences of broken concatenation is to boost your bit depth
to float with a Bytes node, prior to performing any color correction. This sets up the
image processing path within the node tree to preserve values above 1 and below 0,
instead of clamping them.
Note: Be aware that the float bit depth is more processor-intensive, and can result in
longer render times. For more information, see “Bit Depth” on page 408.
Premultiplied Elements and CG Element Correction
You may sometimes spot problems in the edges of computer-generated images when
applying color-correction nodes. A cardinal rule of image processing in Shake is to
always color correct unpremultiplied images.
Masked Nodes Break Concatenation
If you mask a node, concatenation is broken. To avoid broken concatenation, use a
node tree structure with KeyMix.
In the following example, a computer-generated graphic is composited with a
background image. The addition of a ContrastLum node (with a value of .6) results in
premultiplication issues—manifested as a fringing around the edges of the image.
To eliminate this problem, unpremultiply the graphic prior to color correction by
inserting an MDiv node prior to any color-correction nodes in the tree. Later, at the end
of the chain of color-correction nodes you apply in the tree, make sure you once again
premultiply the graphic by adding an MMult node (this can also be done by turning on
the preMultiply parameter in the Over node).
The following screenshot shows a correctly set up node tree. Mult, Gamma, and
ContrastRGB nodes are inserted between a pair of MDiv and MMult nodes, prior to
compositing the two images with a layering node (the Over node).
Note: In the above example, all three color-correction nodes concatenate properly,
as shown by the green line that is visible when the enhanced Node View is on.
For a more detailed description of premultiplication and its importance in compositing,
see “About Premultiplication and Compositing” on page 421.
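The MDiv / correct / MMult order can be illustrated per pixel. Here a simple gamma adjustment stands in for the color correction, and the function names are illustrative, not Shake's:

```python
def gamma(v, g):
    """A per-channel gamma adjustment (a stand-in for any color correction)."""
    return v ** (1.0 / g)

def correct_premultiplied(r, a, g=0.6):
    """Wrong order: correcting the premultiplied value directly distorts
    semi-transparent edge pixels, which shows up as fringing."""
    return gamma(r, g)

def correct_unpremultiplied(r, a, g=0.6):
    """MDiv -> correct -> MMult: divide out alpha, correct, re-multiply."""
    if a == 0:
        return r
    return gamma(r / a, g) * a

# A white edge pixel at 50% coverage: premultiplied red = .5, alpha = .5.
edge_safe = correct_unpremultiplied(0.5, 0.5)    # the pixel stays white under its alpha
edge_fringed = correct_premultiplied(0.5, 0.5)   # darkened: a visible fringe
```

Interior pixels (alpha = 1) come out the same either way; only the partially transparent edge pixels differ, which is why the artifact appears as edge fringing.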
Color Correction and the Infinite Workspace
The Shake engine applies effects to a potentially infinite canvas, so occasionally you may
encounter an unexpected result when you color correct and pan a small element across
a larger element. This occurs when an Invert, Set, or Add node is applied to an image,
since the previously black pixels outside of the frame are changed to a different color.
In the following example, an artifact of Internet pop culture is recreated using a Text
node. The default black background color is raised to blue with an Add node.
When the image is panned, blue continues to appear in the area that was previously
outside of the frame.
To create a black outline, insert a Crop node before the Pan in the node tree:
To color correct the area outside of the Domain of Definition (DOD, represented by the
green bounding box), use the SetBGColor node.
For more information on the Infinite Workspace, see “Taking Advantage of the Infinite
Workspace” on page 405. For more information on the DOD, see “The Domain of
Definition (DOD)” on page 82.
Using the Color Picker
The Color Picker tab is a centralized interface that lets you assign colors to node
parameters using the ColorWheel, luminance gradient, swatches from a color palette,
or numerically, using a variety of color models. You can also store your own frequently
used color swatches for future use in the Palette.
Many of the nodes described in this chapter use the Color control that appears within
the Parameters tab.
This control corresponds to the Color Picker (clicking the Color control swatch opens
the Color Picker window). Both the Color control in the Parameters tab and the Color
Picker can be used interchangeably.
The Color Picker includes the following controls:
• Color sampling swatches: Choose which color to pick when clicking in the Viewer.
• ColorWheel: Click the ColorWheel and luminance bar to choose a color.
• Palette: Store frequently used colors here.
• ColorValues: Adjust individual color channels using a variety of color models.
• Values: Adjust individual color channels using different numeric representations.
Using Controls in the Color Picker
You can adjust the controls in the Color Picker in the following ways:
To choose a color from the ColorWheel:
m Drag the pointer in the wheel to select a point.
Crosshairs make it easy to spot the precise color that’s chosen, and the four sampling
controls above the ColorWheel reflect the selection.
To change the overall value of the wheel and the selected color:
m Drag in the luminance bar underneath the ColorWheel.
The overall brightness of the ColorWheel changes.
To sample color from the Viewer:
m Drag over the image in the Viewer.
The ColorWheel and color sampling swatches above the ColorWheel automatically
update as you drag in the Viewer.
The color swatches sample pixels from the image in four different ways:
• Current: Samples the exact color value of the last pixel you clicked or dragged over.
• Average: Samples the average color value of all the pixels you drag over.
• Min: Samples the minimum color value of all the pixels you drag over.
• Max: Samples the maximum color value of all the pixels you drag over.
To load a sampled color into a Color control in the Parameters tab:
1 In the Parameters tab, click the Color control for the color parameter you want to adjust.
A yellow outline appears around the edge of the Color control, and the Color Picker
opens.
2 Select a new color in any of the ways described above (using the ColorWheel or
dragging over the image in the Viewer, for example).
Note: If you’re sampling a color from an image that has no color-correction nodes
applied, turn on Sample From Viewer in the Color Picker.
If you’re sampling a color from an image that has already been color-modified (for
example, an image modified by a Mult node), then the real-time update of the color
correction interferes with onscreen selection of color, causing an unwanted feedback
loop. To avoid this, turn on Use Source Buffer instead, and color values are taken from
the original image node, not the currently selected node.
This is especially useful when doing scrubs for keying nodes, since it allows you to pick
color values from the prekeyed image.
Notice that the color sampling swatches above the ColorWheel update as you drag.
3 Click one of the color sampling swatches to load its color into the selected Color
control (in the Parameters tab).
The color sampling swatches reset themselves each time you click in the Viewer. If
you’ve accidentally dragged a region and the Min and Max values are unsatisfactorily
set, simply click in the Viewer again to choose new values in all swatches.
You can also load colors into the Parameters tab using the Palette, located under the
ColorWheel. The Palette can also be used to store colors that you use repeatedly in your
composition.
To select a color from the Palette:
m Click a color swatch.
You can also drag and drop between the Palette swatches and other Color Picker
swatches.
To assign a color to a Palette swatch:
m Drag a color sampling swatch (above the ColorWheel) and drop it into the Palette
swatches area.
The new color appears in the Palette.
To save your own custom color assignments:
m Choose File > Save Interface Settings.
You can also choose or adjust colors numerically in the Color Picker by manipulating
the values of each individual channel.
To read different color channels for a node, do one of the following:
m Open the ColorValues subtree, then open one of the color space subtrees.
Use the sliders to adjust colors based on individual channels from the RGB, HSV, CMYK,
and HLS color space models.
m Open the Values subtree.
Defining Custom Default Palette Colors
If you want to change the default Palette colors that Shake starts with, add the
following declaration to a .h file in the ui directory:
nuiSetColor(1,1,0,0);
nuiSetColor(2,1,0.5,0);
nuiSetColor(3,1,1,0);
etc...
The syntax is:
nuiSetColor(swatchNumber, redValue, greenValue, blueValue);
The first value is the swatch position in the Palette (swatchNumber) from left to right
starting with the top line, and the next three values are the red, green, and blue
values designating the color you want in RGB space.
Use the color channel value fields to enter numeric values or expressions. The numeric
ranges representing each color channel can be changed using the ValueRange button,
making manipulation of color by expressions easier.
The channel slider buttons can also be individually controlled by the same hot keys
used for the Virtual Color Picker. These channel sliders let you adjust colors using
individual channels from each of three different color space representations: RGB, HSV,
and CMYK.
Note: Offset (hot key O) is not included in the channel slider keys.
To animate color values:
m Use the associated Autokey button in the Parameters tab.
Using a Color Control Within the Parameters Tab
The Color controls found in the Parameters tab or Tweaker window have many
powerful shortcuts you can use to directly manipulate individual color channels. In
many instances, using the Color Picker may be unnecessary.
To use the Virtual Color Picker to make specific adjustments:
m Press a channel key on the keyboard, then drag within the Color control to adjust
that channel.
The following chart lists all the keyboard shortcuts for color adjustments within a color
control.
To quickly select a color from a color control, use the Virtual Color Picker.
To access the Virtual Color Picker from any Color control:
m Right-click a color control, then drag to select a color from the Virtual Color Picker.
The Virtual Color Picker is available in the Parameters tab of all nodes that contain Color
controls, or in the Color Picker.
Keyboard Channel Description
R Red Adjusts red channel independently.
G Green Adjusts green channel independently.
B Blue Adjusts blue channel independently.
O Offset Boosts or lowers all color channels relative to one another.
H Hue Adjusts all channels by rotating around the ColorWheel.
S Saturation Adjusts color saturation, according to the HSV model.
V Value Adjusts color “brightness,” according to the HSV model.
T Temperature Adjusts overall color between reds and blues.
C Cyan Adjusts cyan, according to the CMYK colorspace model.
M Magenta Adjusts magenta, according to CMYK.
Y Yellow Adjusts yellow according to CMYK.
L Luminance Adjusts black level, otherwise referred to as luminance.
Customizing the Palette and Color Picker Interface
These commands are placed in your ui.h file. For more information on customizing
Shake, see Chapter 14, “Customizing Shake,” on page 355.
Using the Pixel Analyzer
The Pixel Analyzer tab is an analysis tool to find and compare different color values in
an image. You can examine minimum, average, current, or maximum pixel values on a
selection (that you make), or across an entire image.
Code Description
nuiSetColor(1,1,0,0);
nuiSetColor(2,1,0.5,0);
nuiSetColor(3,1,1,0);
Assigns a color to a Palette swatch; the first number is the assigned box. Values are
in a range of 0 to 1.
nuiPushControlGroup(“Color”);
nuiGroupControl(“MyFunction.red”);
nuiGroupControl(“MyFunction.green”);
nuiGroupControl(“MyFunction.blue”);
nuiPopControlGroup();
nuiPushControlWidget(
“Color”,
nuiConnectColorTriplet(
kRGBToggle,
kCurrentColor,
1
)
);
Assigns a Color Picker to your custom macros. This code creates a subtree named
“Color” that contains the three parameters (red, green, and blue), although these can
be any three parameters. The last function (nuiConnectColorTriplet) selects what color
space the values are returned in, what type of value, and whether to use the source
buffer.
Note: The Pixel Analyzer tab should not be confused with the PixelAnalyzer node, found
in the Other tab. For more information, see “The PixelAnalyzer Node” on page 631.
Using the Pixel Analyzer is very similar to sampling color values with the Color Picker.
When you drag across an image in the Viewer with the pointer, the values update in
the Pixel Analyzer. You can usually use the default Pixel Analyzer settings.
Click the color swatch that you want to examine—Current, Minimum, Average, or
Maximum color value. You can toggle between the different color swatches repeatedly
without having to drag again in the Viewer. The values appear in the value fields below
the color swatches.
Since the Pixel Analyzer keeps the examined pixels in memory, if you switch images in
the same Viewer, the Pixel Analyzer updates its values based on the new image.
Because of this, you can compare images or perform color corrections, as the color
correction constantly updates the Analyzer.
Using the Pixel Analyzer Tab to Set Levels
The following example shows you how the Pixel Analyzer can be used to perform a
similar operation to that of the Auto Levels command in Adobe Photoshop. This method
works by using the Pixel Analyzer to automatically find the lowest and highest values in
each channel of an image. You can then assign these values to an Expand node in order
to push the lowest values to 0 (black), and the highest values to 1 (white). In this
example, you can see the effect clearly in the landscape under the clouds.
To set levels using the Pixel Analyzer:
1 Read in an image using a FileIn node, then attach a Color–Expand node to it.
2 Click the left side of the FileIn node to load the image into the Viewer, then click the
right side of the Expand node to load its parameters into the Parameters tab.
3 Open the Pixel Analyzer tab.
4 Switch to Image mode.
The Current, Minimum, Average, and Maximum values of the image in the Viewer
appear in the color swatches at the top of the window.
Cloud images © 2004 M. Gisborne
5 Drag the Minimum color to the Low Color control of the Expand node in the
Parameters tab.
6 Drag the Maximum color to the High Color control of the Expand node in the
Parameters tab.
The image is now adjusted. Load the Expand node into the Viewer to see the result.
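The underlying remap performed by this setup is simple per-channel math (a Python sketch; the Expand node's actual parameters include separate low and high values per channel):

```python
def expand(v, low, high):
    """Remap a channel value so `low` goes to 0 (black) and `high` to 1 (white)."""
    return (v - low) / (high - low)

# Dragging the Minimum (.2) to lowColor and the Maximum (.9) to highColor:
black = expand(0.2, 0.2, 0.9)   # the darkest analyzed pixel becomes 0
white = expand(0.9, 0.2, 0.9)   # the brightest analyzed pixel becomes 1
```

Everything between the analyzed minimum and maximum is stretched proportionally across the full 0-1 range, which is the Auto Levels effect described above.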
Pixel Analyzer Controls
The Pixel Analyzer has the following controls:
Mode
• Off: Turns off the Pixel Analyzer.
• Pixel: Analyzes the pixels based upon the area scrubbed with the pointer.
• Image: Analyzes the entire image—no scrubbing is necessary.
Accumulate
When you click the Accumulate button, all scrubbed pixels (not just the current pixel)
are considered for Average, Min, and Max calculations—until the Reset button is
clicked. So, Average calculates the average of every scrub since the last Reset, and Min
and Max replace their values if a new minimum or maximum value is scrubbed. If the
Analyzer is on Image mode, this button has no effect.
Reset
Resets the scrubbed buffer to black.
Value Range
Shake numerically describes color as a range of 0 to 1 (0, 0, 0 is black; 1, 1, 1 is white).
However, you can set a different numeric range—for example, 0, 0, 0 as black, and 255,
255, 255 as white.
Hexadecimal
This button toggles the numeric display to hexadecimal values.
Min/Max Basis
The Min/Max Basis buttons set the channel for calculation of the Minimum and
Maximum swatches. Normally, this parameter is set to L (luminance). To determine the
minimum values in only the red channel, toggle the Min/Max Basis to R (red). For
example, one pure red pixel and one pure green pixel are equivalent pixels based on
luminance. However, based on red, the green pixel has a minimum value of 0, and
therefore the Minimum swatch returns a different value.
Custom Entries
You can insert your own functions to return data using the following code. You provide
a label and the function in a ui.h file. The two default plugs are called exp10 and expf:
gui.pixelAnalyzer.customLabel1 = “exp10”;
gui.pixelAnalyzer.customFunc1 = “(int)(1024*log(l/0.18)/log(10))”;
gui.pixelAnalyzer.customLabel2 = “expf”;
gui.pixelAnalyzer.customFunc2 = “log(l/0.18)/log(10)”;
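For reference, the two sample plugs compute photographic exposure relative to 18% gray as a base-10 exponent (with l standing for luminance); in Python the same formulas are:

```python
import math

def expf(l):
    """The "expf" plug: exposure relative to 18% gray as a base-10 exponent."""
    return math.log(l / 0.18) / math.log(10)

def exp10(l):
    """The "exp10" plug: the same quantity scaled by 1024 and truncated."""
    return int(1024 * math.log(l / 0.18) / math.log(10))
```

A luminance of exactly 0.18 therefore reads as 0 in both plugs, and brighter values grow logarithmically from there.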
The PixelAnalyzer Node
The PixelAnalyzer node, located in the Other tab, allows you to examine one or more
areas of an image over a range of frames. The data is then stored as the average,
minimum, and maximum values of each area on a frame-by-frame basis. This data can
then be used by other nodes to perform tasks such as matching colors, or reducing
flickering in a plate. This is done by feeding image data from a PixelAnalyzer node into
expressions within one of Shake’s color-correction nodes.
Using the PixelAnalyzer Node
The workflow used for analyzing color with the PixelAnalyzer node is similar to that of
the Tracker. However, the analyzer does not do any motion tracking itself. It merely
grabs color values at the position of the defined analysis areas.
Note: The analysis areas can be animated, and can also be moved via data from a
Tracker node. Use expressions to assign track data to the areaX and areaY parameters
of the analysis area you want to match to the movement of a tracker.
When using the PixelAnalyzer node, it’s important to make sure that it’s loaded into the
Viewer prior to performing the analysis. Otherwise, the analysis cannot be performed.
To analyze an area:
1 Attach the PixelAnalyzer node to an image. Double-click the PixelAnalyzer node to load
its image into the Viewer and its parameters into the Parameters tab.
2 Position the analysis area box in the Viewer to examine the necessary area of the
image, and adjust its size as necessary.
Note: To animate the box, use the Viewer Autokey button, or use an expression to
assign tracker data to the areaX and areaY parameters of the analysis area. Otherwise,
the box remains stationary.
3 To create several analysis areas, use the Add button in the lower portion of the
parameters.
4 Verify the frame range in the analysisRange parameter.
5 Press the Analyze Forward (or Analyze Backward) button in the Viewer.
The analysis begins. Each analysis area (controlled by the visibility toggle by the
areaName) grabs color values within its area and calculates the average, minimum, and
maximum values for that area.
To delete an analysis area, do one of the following:
m In the Viewer, select the box (click the box), and then click Delete in the
PixelAnalyzer parameters.
m In the PixelAnalyzer parameters, click the areaName value field (highlighted green)
and click Delete.
To save the data of an analysis area:
1 Click the Save button in the lower portion of the PixelAnalyzer parameters.
The “Save pixel analysis data to file” window appears.
2 Select or create the directory to store the saved data, name the file, and click OK.
Using the PixelAnalyzer to Correct Uneven Exposure
The following examples show how you can use the PixelAnalyzer node to obtain image
data that is used to correct the exposure of a shot that dynamically brightens or
darkens. The goal is to even out the image’s exposure so it doesn’t change over time.
These examples illustrate the power of expressions in Shake to automate complex
operations.
Setting Up the PixelAnalyzer Node
Attach a PixelAnalyzer node to the problem image. It will eventually be used as a source
of color values by expressions placed within the parameters of color-correction nodes.
The color-correction nodes are not attached to the PixelAnalyzer node; instead, they’re
branched off of the source image. Three different examples show three different
color-correction nodes in use—different situations may require different approaches,
depending on the image.
Note: To accurately analyze changes in brightness, the PixelAnalyzer node’s analysis
area should be positioned over the brightest area of the image.
Method 1: Using an Add Node
Attach an Add node to the image, then enter the following expression into its red,
green, and blue channels:
(PixelAnalyzer1.area1AverageRed@@1)-PixelAnalyzer1.area1AverageRed
(PixelAnalyzer1.area1AverageGreen@@1)-PixelAnalyzer1.area1AverageGreen
(PixelAnalyzer1.area1AverageBlue@@1)-PixelAnalyzer1.area1AverageBlue
The first part of the expression takes the first frame of the image as the base value
(specified by @@1). The average channel values from all other frames are compared to
frame 1. For every frame, the current channel value is subtracted from that of frame 1.
For example, if at frame 1 the average red value is .5, and at frame 10 the average red
value is .6, the above expression subtracts .1 from frame 10 to arrive at .5 again.
Note: To avoid problems when analyzing images with a lot of noise or grain, use the
PixelAnalyzer node’s Average value parameters.
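The arithmetic of this expression can be sketched in Python. The sample averages below stand in for the per-frame PixelAnalyzer1.area1AverageRed values and are invented for illustration:

```python
# Hypothetical per-frame red averages reported by the analysis (frame 1 first).
red_averages = [0.5, 0.52, 0.55, 0.6]

def add_offsets(averages):
    """Offset added at each frame: (average at frame 1) - (current average)."""
    base = averages[0]
    return [base - avg for avg in averages]

offsets = add_offsets(red_averages)
# Adding each frame's offset back to that frame's average restores the
# frame-1 level, which is exactly what the Add expression does per channel.
```

For the manual's example, frame 1 at .5 and frame 10 at .6 yield an offset of -.1, bringing frame 10 back to .5.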
In one possible scenario, examining the resulting image with the PlotScanLine viewer
script might reveal that the midtones are OK, but that the darks are creeping down.
This might indicate that the change in brightness is not occurring because of addition,
but perhaps is a result of multiplication.
Method 2: Using a Brightness Node
If the Add node didn’t provide satisfactory results, a Brightness node might have a
better effect. Attach a Brightness node, then enter the following expression into the
value parameter:
(PixelAnalyzer1.area1AverageRed@@1)/PixelAnalyzer1.area1AverageRed
The expression uses the same basic approach as in Method 1, except that the color
value from frame 1 is divided by the current frame’s color value. Since Brightness is a
multiplier, this makes an adjustment based on the difference. As a result, if at frame 1
the average red value is .5, and at frame 10 the average red value is .6, the above
expression multiplies frame 10 by .83333 (.5/.6) to arrive at .5 again.
Method 3: Using a Mult Node to Correct All Three Channels
Method 2 assumes uniform variation across all three channels, which is probably
wishful thinking. On the other hand, it’s fast and easy. A more accurate approach might
be to feed similar expressions into the RGB channels of a Mult node.
The following expressions are entered into the red, green, and blue parameters of a
Mult node:
(PixelAnalyzer1.area1AverageRed@@1)/PixelAnalyzer1.area1AverageRed
(PixelAnalyzer1.area1AverageGreen@@1)/PixelAnalyzer1.area1AverageGreen
(PixelAnalyzer1.area1AverageBlue@@1)/PixelAnalyzer1.area1AverageBlue
This adjusts each channel according to the PixelAnalyzer node’s analysis of each
channel of frame 1.
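The per-channel arithmetic can be sketched the same way; the averages below stand in for the PixelAnalyzer's per-frame values and are invented for illustration:

```python
# Hypothetical per-frame channel averages from the analysis area.
averages = {
    "red":   [0.5, 0.55, 0.6],
    "green": [0.4, 0.42, 0.44],
    "blue":  [0.3, 0.33, 0.36],
}

def mult_factors(avgs):
    """Per-frame multiplier: (average at frame 1) / (current average)."""
    base = avgs[0]
    return [base / avg for avg in avgs]

# One factor list per channel, matching the three Mult expressions.
factors = {chan: mult_factors(vals) for chan, vals in averages.items()}
# Multiplying each frame's channel by its factor restores the frame-1 level.
```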
PixelAnalyzer Viewer Shelf Controls
The PixelAnalyzer node has the following Viewer shelf controls:
Parameters
The PixelAnalyzer node has the following parameters:
analysisRange
The frame range over which the analysis is performed.
limitProcessing
When turned on, this button limits image updating to the area inside the analysis
boxes.
areaName
The name of each analysis area you create. The associated parameter names for each
analysis area update whenever the name is changed.
areaAverage
The average value of every pixel within the analysis area over the span of the
analysisRange.
Viewer shelf controls:
• Analyze Forward/Backward: Analyzes to the beginning or end of a clip. Once you’ve
defined the analysis area, click one of these controls to analyze the image.
• Offset From Search Region: Click this button to offset the analysis area from its initial
position.
• Path Display: Toggles the analysis area display.
areaMinimum
The minimum value found within the analysis area over the span of the analysisRange.
areaMaximum
The maximum value found within the analysis area over the span of the analysisRange.
areaWindowParameters subtree
The areaWindowParameters subtree contains parameters that define the size and
location of the region that encompasses each analysis area, including the following:
• areaX, Y: The center of the analysis area. These coordinates define the location of the
analysis area, and are the parameters to animate if you want to move the analysis
area.
• areaWidth, areaHeight: The width and height of the analysis area.
• areaVisible: A slider toggle that makes the analysis area boundary box visible. This
parameter is updated when one of the Analyze controls is clicked. This parameter
corresponds to the Visibility button found next to the area name parameter.
Add, Delete, Save
These three buttons let you add, delete, and save new analysis areas.
Color-Correction Nodes
In general, Shake has three classes of color-correction nodes, which are located in the
Color Tool tab.
Atomic-Level Correctors
Each atomic-level corrector node (Add, Brightness, Clamp, Lookup, and so on) applies a
single basic mathematical function to your color, such as add, multiply, gamma, and
basic lookup curve functions. These functions usually concatenate for speed and
accuracy, and are fast ways to manipulate your data for a wide variety of purposes. In
the Color Tool tab, each node’s icon consists of a graph that represents its function. In
the following illustration, notice the difference in the graph curves for the Clamp,
Compress, and Expand nodes.
Utility Correctors
The utility correctors are nodes that prepare an image for other types of operations.
Typically, these include ColorSpace, ColorX, Lookup, MDiv, MMult, Reorder, Set, SetAlpha,
SetBGColor, and VideoSafe. For more sophisticated functionality, there is a certain
degree of programmability in the ColorX and the Lookup nodes, which allow you to
create expressions that affect the image.
Consolidated Correctors
The consolidated correctors (ColorCorrect, ColorMatch, ColorReplace, HueCurves) are your
primary tools for tasks such as matching skin tones, shadow levels, spill suppression, and
so on. Functions performed by the atomic-level nodes are also performed by these, but
are combined with many other tools which provide more control over the result.
The following table includes general guidelines to help you understand why there are
so many nodes in the Color Tool tab. For example, you can perform an add operation
either with an Add node, or with a ColorCorrect node. In either case, the result is the
same, but for simple operations the Add node may be more straightforward to use,
with the additional benefit that it concatenates with many other color-correction
operators.
The following is a basic list of the color-correction nodes and how they are useful:
• Add: Raises or lowers all colors evenly. This is the only legal color correction in log
space.
• AdjustHSV: Shifts the entire Hue, Saturation, or Value range.
• Brightness: Multiplies the R, G, and B channels by a single value.
• Clamp: Good for clamping off values that go above or below a desired level.
• ColorCorrect: A multi-functional color-correction node that lets you make many
different adjustments to an image. It includes a premultiplied input flag.
• ColorMatch: Matches the lows, midtones, and highlights in one image to those in
another.
• ColorReplace: Replaces one color in an image with another. It also makes a better
chroma keyer than the ChromaKey node.
• ColorSpace: Converts an image into a different color space, such as RGB, HSV, CMY,
and so on.
• ColorX: Lets you apply pixel-based mathematical expressions to an image.
• Compress: Squeezes the range of color in an image up and down, and is particularly
useful for creating fog effects with a DepthKey mask.
• ContrastLum: Adjusts contrast based on luminance, preserving your color levels.
• ContrastRGB: Adjusts the contrast of individual channels, including the alpha channel.
• Expand: Another levels adjuster that raises or lowers your low and high points.
• Fade: Adjusts the opacity of an image. This node works the same as using Mult with
identical values in all four channels.
• Gamma: Adjusts midtone color values while leaving the white and black points of the
image unaltered.
• HueCurves: Isolates and adjusts an image based on its hue. Ideal for spill suppression.
• Invert: Turns black to white and vice versa. Works best on normalized images
between 0 and 1.
• LogLin: Performs logarithmic-to-linear and linear-to-logarithmic color space
conversion.
• Lookup/HLS/HSV: Applies lookup expressions or curve manipulation to your image.
Faster than ColorX for non-pixel-based lookups.
• LookupFile: Pulls a lookup table from a file.
• MDiv: Used to unpremultiply an image by its alpha channel.
• MMult: Used to premultiply an image by its alpha channel.
• Monochrome: Gives you weighted control over desaturating an image to make it
black and white.
• Mult: Multiplies the values in each color channel of an image. The R, G, and B color
channels can be adjusted individually.
• Reorder: Swaps channels within an image. See also Layer–Copy and Layer–
SwitchMatte to copy channels from other images.
• Saturation: Controls the saturation levels. Saturation can either be boosted or
decreased.
• Set: Sets a channel to a constant level, replacing whatever values were previously in
that channel. Channels can be adjusted together or separately.
• SetAlpha: Sets the alpha level to a constant value, replacing whatever values were
previously in that channel. This node also crops the Infinite Workspace.
• SetBGColor: Sets the color outside of the DOD.
• Solarize: A misguided Invert function. Good for Beatles album covers.
• Threshold: Clips the color values in an image. Color channels can be clipped
individually.
• VideoSafe: Clips luminance or saturation values that are above the broadcast-legal
range for video.
Atomic-Level Functions
The term atomic-level is used because each of these nodes applies a single
mathematical operation to the affected image. Because of their simplicity, they are easy
to work with. They are also ideal for use on the command line.
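The single operations behind several of these nodes can be sketched as per-channel functions. This is illustrative Python, not Shake's code; the Compress formula matches the f(x) = x * (hi-lo) + lo expression given later in this chapter, and treating Gamma as x raised to 1/gamma is an assumption consistent with the 1/1.7 inversion trick described under Gamma.

```python
def add(x, v):
    return x + v                       # raise or lower evenly

def mult(x, v):
    return x * v                       # Brightness / Mult

def gamma(x, g):
    return x ** (1.0 / g)              # midtones move; 0 and 1 stay fixed

def invert(x):
    return 1.0 - x

def clamp(x, lo=0.0, hi=1.0):
    return min(max(x, lo), hi)         # cut off out-of-range values

def compress(x, lo, hi):
    return x * (hi - lo) + lo          # squeeze output range into lo..hi

def expand(x, lo, hi):
    return (x - lo) / (hi - lo)        # stretch lo..hi back out to 0..1
```

Note that expand inverts compress for the same lo and hi values.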
Add
The Add node adds color to the R, G, B, A, or Z channel. Specifically, this node adds
color to black areas, including those beyond the image frame, in case you move the
image later. Any correction that occurs outside of the DOD can be corrected with the
SetBGColor node. Shake’s color is described in a range of 0 to 1, so adding -1, -1, -1
makes your image completely black. If you add the fifth value, depth, you effectively
add a Z channel with your add value to the image.
Parameters
This node displays the following controls in the Parameters tab:
Color
A color control that lets you pick a color to add.
alpha
Adds to or subtracts from the alpha channel.
depth
Adds to or subtracts from the Z channel.
Brightness
The Brightness node is simply a multiplier on the RGB channels. It differs from Fade in
that Fade also affects the alpha channel. For individual channel control, use Mult.
You can also use this node to convert your image to a single-channel alpha image by
using a brightness of 0.
Parameters
This node displays the following control in the Parameters tab:
value
A slider you use to specify the value to multiply the image by. This control
simultaneously multiplies the R, G, and B channels.
Clamp
The Clamp node clamps off values above and below a certain range. For example, if
your redHi value is .7, any value above that is set to .7. You can isolate the red, green,
blue, and alpha values.
Parameters
This node displays the following controls in the Parameters tab:
Low Color
Values less than this number are clamped to it. A value of 0 equals no change.
aLo
A low value control for the alpha channel.
High Color
Values greater than this number are clamped to it. A value of 1 equals no change.
aHi
A high value control for the alpha channel.
Compress
The Compress node squeezes the image to fit within the Lo and Hi range you set.
Unlike Clamp, the entire image is modified because it is fit between the two points.
Parameters
This node displays the following controls in the Parameters tab:
Low Color
The new lowest value in the image. A value of 0 equals no change.
aLo
A low value control for the alpha channel.
High Color
The new highest value in the image. A value of 1 equals no change.
aHi
A high value control for the alpha channel.
ContrastLum
The ContrastLum node applies a contrast change on the image, with a smooth falloff on
both the low and high ends. You can also move the center of the contrast curve up and
down (for example, move it down to make your overall image brighter). Note that this
contrast is based on the luminance of the pixel, so it balances red, green, and blue, and
you therefore do not shift your hue. So, an image with strong primary colors looks
different than an image with the same values in a ContrastRGB node, which evaluates
each channel separately.
Also note that a roll-off is built into the contrast node, giving you a smooth falloff at the
ends of the curve. This is controlled by the softClip parameter.
Parameters
This node displays the following controls in the Parameters tab:
value
The contrast value. A higher value means it pushes your RGB values toward 0 and 1. A
value of 0 equals no change. A lower value is low contrast.
center
This is the center of the contrast curve. A lower value is a brighter image.
softClip
This controls the drop-off of the curve. When increased to 1, you have maximum
smoothness of the curve.
ContrastRGB
The ContrastRGB node applies a contrast curve on each channel individually, so you can
tune the channels separately. This differs from the ContrastLum node in that it only
changes the pixel value according to its own channel. For a good example of how the
functions differ, plug a ColorWheel node into both a ContrastLum and ContrastRGB
node. Notice that the ContrastLum node is weighed away from the green values. This
also means the ContrastRGB node runs the risk of shifting the hue as you adjust your
channels.
Parameters
This node displays the following controls in the Parameters tab:
Value
The contrast value. A higher value means it pushes the RGB values toward 0 and 1. A
low value is low contrast. A value of 0 represents no change.
Center
The center of the contrast curve. A lower value makes that channel brighter. A higher
value makes the image darker. Generally, these values are between 0 and 1.
SoftClip
The roll-off value to give a smooth interpolation. A value of 0 equals no roll-off.
Expand
The Expand node squeezes the data between two points on the X axis of a graph of an
image, increasing the amount of pure black and white (on a per-channel basis) in the
image. Compress squeezes them on the Y axis of the image graph.
Parameters
This node displays the following controls in the Parameters tab:
Low Color
Pixels less than or equal to the Lo value go to 0. At 8 or 16 bits per channel, pixels less than
this value are clamped at 0.
aLo
A low color control for the alpha channel.
High Color
Pixels greater than or equal to Hi value go to 1. At 8 or 16 bits per channel, pixels
greater than this value are clamped at 1.
aHi
A high color control for the alpha channel.
Fade
The Fade node multiplies the RGBA channels. It differs from Brightness in that Fade also
affects the alpha channel. For individual channel control, use Mult.
A neat trick is to fade to 0. This effectively deactivates all nodes above the Fade node in
the tree.
Note: Premultiplication is not a concern with the Fade node, since Fade treats the RGBA
channels evenly.
Parameters
This node displays the following control in the Parameters tab:
Value
The brightness factor. Greater than 1 increases brightness; less than 1 darkens it. A
value of 0 is complete black.
Gamma
The Gamma node applies a gamma to your image. A value of 1 equals no change.
Shake’s ability to use expressions can be particularly useful here; for example, to invert
the gamma of 1.7, type “1/1.7” as your gamma value. Typing “.588” isn’t nearly as slick.
Parameters
This node displays the following controls in the Parameters tab:
rGamma
The red gamma value.
gGamma
The green gamma value.
bGamma
The blue gamma value.
aGamma
The alpha gamma value.
Invert
The Invert node inverts the color curve, so white becomes black and black becomes
white. A predominantly yellow image becomes predominantly blue if the red, green,
and blue (RGB) channels are selected in the channels field.
Invert also works on the Z channel, but assumes the Z values are normalized, for example,
between 0 and 1. If this is not the case, the result is unpredictable. If you need to
invert a non-normalized Z channel, use ColorX with a formula similar to the following in
the Z channel:
MaxZRange-z
Parameters
This node displays the following control in the Parameters tab:
channels
The channels you want to invert. You can use r, g, b, a, and/or z. To use multiple
channels, list them out. For example, rgz inverts the red, green, and Z channels.
Monochrome
The Monochrome node turns any image black and white. Unlike the Saturation node, you
can adjust the relative brightness that each channel contributes to the final BW image.
The default values are weighed according to the human eye’s different sensitivities to
red, green, and blue, but you can override these weights to simulate other effects, such
as how various black and white film stocks expose an image.
This node reduces a three-channel image (RGB) to a one-channel image (BW), and a
four-channel image (RGBA) to a two-channel image (BWA). If the three color channels
are identical, it is more efficient to use a Reorder with a value of “rrr,” since only the red
channel is read in. Monochrome reads all three channels in, and therefore has more I/O
activity.
Parameters
This node displays the following control in the Parameters tab:
Weight
The default R, G, and B values are set according to the human eye’s sensitivity to color,
but you can balance the colors differently to push a certain channel. The default values
are:
• R = .3
• G = .59
• B = .11
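The weighted sum itself is simple; a sketch using the documented defaults:

```python
def monochrome(r, g, b, weights=(0.3, 0.59, 0.11)):
    """Weighted desaturation. The defaults are the node's documented values,
    which sum to 1 so a uniform gray maps to itself."""
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

# Pure green reads much brighter than pure blue, matching eye sensitivity:
green_lum = monochrome(0.0, 1.0, 0.0)   # 0.59
blue_lum = monochrome(0.0, 0.0, 1.0)    # 0.11
```

Raising a single weight simulates a film stock that is more sensitive to that channel.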
Mult
The Mult node multiplies the R, G, B, A, or Z channels. To uniformly increase the red,
green, blue, and alpha channels, use the Fade node. To affect red, green, and blue but
leave alpha unchanged, use Brightness.
Parameters
This node displays the following controls in the Parameters tab:
Color
The value by which the incoming R, G, and B channel pixels are multiplied.
alpha
The value by which the incoming alpha channel is multiplied.
depth
The value by which the incoming Z channel is multiplied. Note that, unlike the Add
node, multiplying does not create a Z channel if one does not already exist.
Saturation
The Saturation node changes the saturation of an image. Unlike the Monochrome
node, Saturation increases or decreases saturation while weighing all color
channels equally.
Note: You can also use Monochrome to desaturate an image, giving different weights to
each of the color channels.
Parameters
This node displays the following control in the Parameters tab:
value
A slider that defines the saturation multiplier.
Solarize
The Solarize node is a partial inverse that reverses either the high or the low end of the
image, depending on the hi/lo flag: values above the threshold (“hi,” or 1) or below it
(“lo,” or 0) are reversed. The resulting images are similar to a photographic effect
popular in the 1960s, where the print is exposed to light during development and
results in an image with blended positive and negative value ranges. It gives a metallic
effect to color images.
Parameters
This node displays the following controls in the Parameters tab:
value
The point at which the image is inverted.
thresholdType
Determines whether numbers must be higher or lower than the Value parameter to be
affected by the Solarize operation.
• 0 means “lo”: color values below the threshold value are inverted.
• 1 means “hi”: color values above the threshold value are inverted.
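A minimal sketch of this behavior, assuming the reversal is a plain 1-x inversion (Shake's exact remapping may differ):

```python
def solarize(x, value=0.5, threshold_type=1):
    # threshold_type 1 ("hi"): invert channel values above the threshold;
    # threshold_type 0 ("lo"): invert channel values below it.
    # Using a plain 1-x inversion here is an assumption for illustration.
    if threshold_type == 1:
        return 1.0 - x if x > value else x
    return 1.0 - x if x < value else x
```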
Threshold
The Threshold node cuts channels off at a certain value, turning everything below
that cut-off value to 0. Each channel can have its own separate cut value. By using the
crush parameter, you can boost everything above the cut-off value to 1.
Parameters
This node displays the following controls in the Parameters tab:
Color
Anything below this value goes to black.
alpha
All areas in the alpha channel below this value go to black.
cCrush
If this is set to 1, everything above the cut-off values goes to 1.
softClip
Provides a roll-off value.
Utility Correctors
These tools are more applicable for preparing data for other operations.
ColorSpace
The ColorSpace node converts an image from one color space to another color space.
After the image is converted, you can use color-correction functions that operate in the
new color space logic for interesting effects. For example, if you place a ColorSpace
node to convert from rgb to hls (hue, luminance, saturation) and then apply a Color–
Add node, the Add node’s red parameter shifts the hue rather than the red channel.
The optional r, g, bWeight arguments only affect rgb to hls, or hls to rgb, conversions.
For command-line use, and compatibility with Shake version 2.1 scripts or earlier, use
the dedicated space conversion functions CMYToRGB, HLSToRGB, HSVToRGB, RGBToCMY,
RGBToHLS, RGBToHSV, RGBToYIQ, RGBToYUV, YIQToRGB, and YUVToRGB. These functions
do not have arguments, with one exception: The HLS functions have optional r, g, and
bWeight parameters.
Parameters
This node displays the following controls in the Parameters tab:
inSpace
Selects the incoming color space.
outSpace
Selects the output color space. For example, if you use one ColorSpace, you probably
use rgb as your inSpace, and then something like hsv to convert it to hue/saturation/
value space. After applying your operations, you usually apply a second ColorSpace
node, with hsv as your inSpace and rgb as your outSpace.
luminanceBias
The weighing of the three channels for the luminance calculation in conversions
involving HSL. Luminance differs from value in that luminance calculates brightness
based on the human eye’s perception that green is brighter than an equal value in the
blue channel.
• rWeight
• gWeight
• bWeight
ColorX
The ColorX node modifies each pixel in the image according to an expression that you
supply. ColorX is normally slower than a dedicated, optimized node such as Mult or Add.
Use dedicated operators whenever possible. You can also set up complex rules inside
the node (see below).
For more information on using expressions, see Chapter 31, “Expressions and Scripting,”
on page 935.
Expressions can use the following variables:
• The variables r, g, b, a, and z refer to the value of the original channels (red, green,
blue, alpha, and Z).
• The variables x and y are the coordinates of the pixel.
• The variables width and height are the width and height of the image.
• The variable time is the current frame number (time).
Many operators can be represented by an arithmetic expression, such as reordering,
color correction, gradient generation, or even circle drawing. Note that no spaces
are allowed in the expressions unless you use quotes for explicit grouping.
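To make the per-pixel model concrete, here is a toy stand-in that evaluates an expression string against the variables above. Shake compiles its own expression language; Python's eval is used here purely for illustration.

```python
def colorx(image, expr):
    """image: rows of (r, g, b) tuples; expr: an expression over r, g, b,
    x, y, width, and height. Returns one value per pixel (a single channel)."""
    height = len(image)
    width = len(image[0])
    out = []
    for y, row in enumerate(image):
        out_row = []
        for x, (r, g, b) in enumerate(row):
            env = {"r": r, "g": g, "b": b, "x": x, "y": y,
                   "width": width, "height": height}
            # Evaluate the per-pixel expression with builtins disabled.
            out_row.append(eval(expr, {"__builtins__": {}}, env))
        out.append(out_row)
    return out

# Average of the channels, as in the "(r+g+b)/3" example:
gray = colorx([[(0.3, 0.3, 0.3), (0.6, 0.0, 0.0)]], "(r+g+b)/3")
```

The "x/width" gradient example works the same way, producing a ramp across each row.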
LogLin
The LogLin node is typically used to handle logarithmic film plates, such as Cineon files
from a film scanner, or when writing files out for film recording. It converts the images
from the logarithmic color space to the linear color space for accurate compositing. You
can then use the node at the end of your node tree to convert the images back into
logarithmic space for film recording. You can also use this around nodes that only work
in 8 or 16 bits, functions such as Primatte, Keylight, or other plug-ins from third parties.
Ultimatte is the exception, as it maintains float values. For a full description of this
process, see “The Logarithmic Cineon File” on page 437.
The LogLin color parameters are linked together by default, so gBlack and bBlack
reflect the rBlack value. You can of course adjust these to further color correct your
scanned plates.
The following command-line examples show typical ColorX tasks. (In the interface, each
value field can be filled independently.)
• Reordering: shake bg.iff -colorx b r a g
• Red correction: shake bg.iff -colorx “r*1.2”
• Red and blue correction: shake bg.iff -colorx “r*1.2” g “b*1.5”
• Same expression for red, green, and blue: shake bg.iff -colorx “(r+g+b)/3” -reorder rrr
• X gradient in matte: shake -colorx r g b “x/width”
• Y gradient in matte: shake -colorx r g b “y/height”
• Blue spill removal: shake primatte/woman.iff -colorx r g “b>g?g:b”
• Random noise: shake -colorx rnd(x*y) rnd(2*x*y) rnd(3*x*y)
• Turbulent noise (1 channel): shake -colorx turbulence2d(x,y,20,20)
• Clip alpha if Z is less than 20: shake uboat.iff -colorx r g b “z<20”
• Clip alpha if Z is more than 50: shake uboat.iff -colorx r g b “z>50”
• A smooth alpha gradient from Z units 1 to 70: shake uboat.iff -colorx r g b “(z-1)/70”
Note: This node only does color correction—it does not change your bit depth or your
file type. When Shake imports the Cineon files, typically a 10-bit file, it automatically
promotes the files to 16 bits. This process has nothing to do with the color correction.
The default values are supplied by Kodak—if you apply a LogLin in Shake, you should
get the same visual result as if you plugged in the same numbers into any other
software package’s logarithmic converter. The range of the offset and black and white
points is 0 to 1023, the range of a 10-bit file (8 bit is 0 to 255, 9 bit is 0 to 511). Every 90-
point adjustment of these values is equivalent to a full f-stop of exposure.
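As a hedged sketch, a standard Cineon-style log-to-lin mapping built from these parameters looks like the following. The defaults mirror the rBlack (95), rWhite (685), and rNGamma (0.6) values described below; the exact curve Shake applies (including softclip) is not reproduced here.

```python
def log_to_lin(code, black=95.0, white=685.0, ngamma=0.6):
    """Map a 10-bit log code value (0-1023) to normalized linear light,
    pinning the black point to 0.0 and the white point to 1.0."""
    step = 0.002 / ngamma                               # density per code value
    gain = 1.0 / (1.0 - 10.0 ** ((black - white) * step))
    offset = gain - 1.0
    return gain * 10.0 ** ((code - white) * step) - offset

# With the defaults, 95 -> 0.0 and 685 -> 1.0; code values above the white
# point extend past 1.0 rather than clipping, preserving highlight detail.
```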
Parameters
This node displays the following controls in the Parameters tab:
Conversion
This parameter describes whether you convert from log to linear space, or linear to log
space. 0 is log to lin, 1 is lin to log.
rOffset
This control offsets the red color channel.
gOffset
This control offsets the green color channel.
bOffset
This control offsets the blue color channel.
rBlack
This sets the black point. The default value is 95.
rWhite
This sets the white cutoff point. The default value is 685.
rNGamma
Generally, this number is not touched. The .6 is an average of the response curves, and
may differ from stock to stock and even channel to channel. You can look it up on
Kodak’s website—see products/Film/Motion Picture Film, and then check the
characteristic curves.
rDGamma
The display gamma, according to Kodak, to compensate for the monitor lookup table.
This was set to neutralize the Cineon system’s standard monitor setting. Its inclusion
here is more of a heritage thing. It is highly recommended that you leave it at 1.7.
rSoftclip
The roll-off value on the white point. The default is 0, which gives a linear break.
Increasing this value smooths the curve.
Lookup
The Lookup node performs an arbitrary lookup on your image. It is extremely flexible,
allowing you to mimic most other color-correction nodes, and is generally much faster
than the ColorX node.
For information regarding the curve editor in Lookup, LookupHSV, and LookupHLS, see
Chapter 10, “Parameter Animation and the Curve Editor,” on page 291.
The Lookup is defined as a function f(x), where x represents the input color, ranging
from 0 to 1. As you draw the graph of this function, x is on the X axis, and f(x) is on the
Y axis.
Note: The Lookup node should not be used inside of macros.
Parameters
This node displays the following controls in the Parameters tab:
rExpr
Use this function to change the input red value, always represented by “x.”
gExpr
Use this function to change the input green value, always represented by “x.”
bExpr
Use this function to change the input blue value, always represented by “x.”
aExpr
Use this function to change the input alpha value, always represented by “x.”
Sample Lookup Tables
The following table lists the Lookup equivalents of other Shake color-correction nodes.
The following examples do custom lookups. The last two examples use Shake’s curve
formats, but use the Value mode (the V at the end of the curve name), and input x as
the value. All “keyframes” are between 0 and 1, and can take any value.
When using the interface, this is the default behavior—click the Load Curve button in
the Parameters tab to load the curve into the Curve Editor.
• Brightness: math expression f(x) = x * value; Lookup expression x*1.5
• Invert: math expression f(x) = 1-x; Lookup expression 1-x
• Compress: math expression f(x) = x * (hi-lo) + lo; Lookup expression “(x*.4)+0.3”
(if lo = 0.3 and hi = 0.7)
• Do Nothing: math expression f(x) = x; Lookup expression “x”
• Clipping: Lookup expression x>.5?0:x
• Dampening: Lookup expression x*x
(In the original graphs for these functions, white is the result and gray is the input.)
LookupFile
Use the LookupFile node to apply a lookup table to any image by reading a text file. The
file should consist of an arbitrary number of rows, and each row can have three or four
entries, corresponding to red, green, blue, and possibly alpha. Shake determines the
range of the lookup to apply based on the number of rows in the file—with “black”
always mapping to 0 and “white” mapping to (n-1), where n is the number of lines in
the file. Therefore, if your file contains 256 rows, Shake assumes that your entries are all
normalized to be in the range of 0 (black) to 255 (white). If you have 1024 lines in your
file, then “white” is considered to be a value of 1023. Interpolation between entries is
linear, so lookups with only a few entries may show undesirable artifacts. For example,
the following simple five-line lookup file produces the following lookup curve:
0 0 0 0
.3 .3 .3 .3
1 1 1 1
2 2 2 2
4 4 4 4
Because of this linear interpolation, you may want to instead use the standard Lookup
node with lookups that do not have a large number of points.
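A sketch of the lookup-and-interpolate step, assuming outputs are normalized by n-1 exactly as the inputs are (an illustrative reading of the text, not Shake's code):

```python
def lookup_file(value, rows, channel=0):
    """value: input in 0..1; rows: parsed rows of the lookup file, each a
    list of 3 or 4 floats. Entries are assumed scaled so n-1 means white."""
    n = len(rows)
    pos = value * (n - 1)            # map black..white onto row 0..n-1
    i = int(pos)
    if i >= n - 1:
        return rows[-1][channel] / (n - 1)
    frac = pos - i
    # Linear interpolation between adjacent rows, as described above.
    raw = (1.0 - frac) * rows[i][channel] + frac * rows[i + 1][channel]
    return raw / (n - 1)             # normalize output back to 0..1

# The five-line file from the example:
rows = [[0, 0, 0, 0], [.3, .3, .3, .3], [1, 1, 1, 1],
        [2, 2, 2, 2], [4, 4, 4, 4]]
```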
Two further Lookup examples using Shake’s curve formats (in Value mode):
• Spline lookup: CSplineV(x,0, 0@0, 1@.25, 0@.75, 1@1)
• Linear lookup: LinearV(x,0, 0@0, 1@.25, 0@.75, 1@1)
(In the original graphs, white is the result and gray is the input.)
Parameters
This node displays the following controls in the Parameters tab:
lookupFile
A text field where you enter the path to the lookup file.
channels
The channels to which the lookup operation is applied.
LookupHLS
The LookupHLS node performs exactly like Lookup, except it works on the HLS channels
instead of RGB channels.
Parameters
This node displays the following controls in the Parameters tab:
sExpr
Use this function to change the input value, always represented by “x.”
lExpr
Use this function to change the input value, always represented by “x.”
hExpr
Use this function to change the input value, always represented by “x.”
aExpr
Use this function to change the input value, always represented by “x.”
LookupHSV
The LookupHSV node performs exactly like Lookup, except that it works on the HSV
channels instead of RGB channels.
Note: You cannot clone LookupHSV nodes in the node tree using the Paste Linked
command.
Parameters
This node displays the following controls in the Parameters tab:
vExpr
Use this function to change the input value, always represented by “x.”
sExpr
Use this function to change the input value, always represented by “x.”
hExpr
Use this function to change the input value, always represented by “x.”
aExpr
Use this function to change the input value, always represented by “x.”
MDiv
The MDiv node divides the color channels by the alpha channel.
When you color correct a rendered (premultiplied) image, first apply an MDiv node to
make the image non-premultiplied, perform the color correction, and then add an
MMult node to return the image to its premultiplied state. For more
information on premultiplication, see “About Premultiplication and Compositing” on
page 421.
Parameters
This node displays the following control in the Parameters tab:
ignoreZero
Tells Shake to ignore pixels with an alpha value of 0.
• 0 = Divide entire image.
• 1 = Ignore zero-value pixels.
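The MDiv/color-correct/MMult round trip described above can be sketched per pixel. This is an illustrative Python sketch, not Shake's API; the `mdiv` and `mmult` helper names are invented for the example:

```python
# An illustrative per-pixel sketch of MDiv (un-premultiply) and MMult
# (premultiply); these helpers are invented for the example, not Shake API.

def mdiv(r, g, b, a, ignore_zero=True):
    """Divide color by alpha; with ignoreZero on, zero-alpha pixels pass through."""
    if ignore_zero and a == 0:
        return (r, g, b, a)
    return (r / a, g / a, b / a, a)

def mmult(r, g, b, a):
    """Multiply color by alpha, returning the image to its premultiplied state."""
    return (r * a, g * a, b * a, a)

# Round trip: un-premultiply, color correct, premultiply again.
premult_px = (0.4, 0.2, 0.1, 0.5)
r, g, b, a = mdiv(*premult_px)          # straight color: (0.8, 0.4, 0.2, 0.5)
r, g, b = (c * 1.5 for c in (r, g, b))  # some color correction on straight color
print(mmult(r, g, b, a))
```

Correcting the straight (divided) color and multiplying back avoids the dark fringing you get when color corrections are applied directly to premultiplied edges.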
MMult
The MMult node multiplies the color channels by the matte.
This node is used to premultiply an image. When compositing with the Over node,
Shake expects all images to be premultiplied. Premultiplication is usually automatic
with 3D-rendered images, but not for scanned images. Also, use this node to color
correct a 3D-rendered image. Add an MDiv node to the image, perform your color
corrections, and then add an MMult node into an Over operation. For more information
on premultiplication, see “About Premultiplication and Compositing” on page 421.
Note: You can also choose not to insert an MMult node, and instead enable preMultiply
in the Over node’s parameters.
Parameters
This node displays the following control in the Parameters tab:
ignoreZero
Tells Shake to ignore pixels with an alpha value of 0.
• 0 = Multiply entire image.
• 1 = Ignore zero-value pixels.
Reorder
The Reorder node lets you shuffle channels. The argument to this command specifies
the new order. A channel can be copied to several different channels. The letter “l”
refers to the luminance pseudo-channel, which can be substituted for any of the RGBA
channels. If the new order references a channel that does not exist, Shake creates the
channel. You can use the Z channel as well. For example:
shake -reorder zzzz
places the Z channel into the RGBA channels for viewing.
To copy a channel from another image, use the Copy node.
Parameters
This node displays the following control in the Parameters tab:
channels
Indicates the new channel assignment. You can use any of the following:
• r: Set the pixels of this channel to the values of the red channel.
• g: Set the pixels of this channel to the values of the green channel.
• b: Set the pixels of this channel to the values of the blue channel.
• a: Set the pixels of this channel to the values of the alpha channel.
• z: Set the pixels of this channel to the values of the Z channel.
• l: Set the pixels of this channel to luminance of RGB.
• 0: Set the pixels of this channel to 0.
• 1: Set the pixels of this channel to 1.
• n: Remove this channel from the active channels.
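The channel-assignment letters above amount to a per-pixel shuffle, sketched below in Python as a hedged illustration (not Shake's implementation); the luminance weights are an assumption, and the “n” remove-channel code is omitted:

```python
# An illustrative sketch of the Reorder channel shuffle: each letter in the
# order string selects the source for that output channel. The luminance
# weights are an assumption for the example, and the "n" (remove-channel)
# code is omitted.

def reorder(pixel, order):
    """pixel: dict with r, g, b and optional a, z; order: e.g. 'zzzz' or 'bgra'."""
    r, g, b = pixel["r"], pixel["g"], pixel["b"]
    sources = {
        "r": r, "g": g, "b": b,
        "a": pixel.get("a", 1.0),
        "z": pixel.get("z", 0.0),
        "l": 0.3 * r + 0.59 * g + 0.11 * b,   # illustrative luminance weights
        "0": 0.0,
        "1": 1.0,
    }
    return [sources[c] for c in order]

px = {"r": 0.2, "g": 0.5, "b": 0.9, "a": 1.0, "z": 0.7}
print(reorder(px, "zzzz"))   # the Z channel placed into all four channels
```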
Set
The Set node sets selected channels to a constant value. Alpha or depth channels can
be added to images with Set. For example, -set z .5 adds a Z channel with a value of .5.
Parameters
This node displays the following controls in the Parameters tab:
channels
The channels to be set.
value
The value of the channel.
SetAlpha
The SetAlpha node is simply a macro combining Set and Crop that sets the alpha to 1
by default. It exists because in the Infinite Workspace, color corrections extend beyond
the frame of an image. The Crop in the macro cuts off the effect of the Set node at the
frame, making the image ready for transformations.
To remove the alpha channel from an image (turn an RGBA image into an RGB image),
set the alpha value to 0.
Parameters
This node displays the following control in the Parameters tab:
alpha
From 0 to 1, this is the alpha value of the output image. By default, it is 1.
SetBGColor
The SetBGColor node sets selected channels to the selected color outside of the
Domain of Definition (DOD). For example, if you create an Image–RGrad, the green
DOD box appears around the RGrad image in the Viewer. Everything outside of the
DOD is understood to be black, and therefore does not have to be computed. To
change the area outside of the DOD, attach a SetBGColor to the node and change the
color.
This node is often used when adjusting keys: the keyer may be pulling a bluescreen,
and therefore treats the area outside of the DOD, which is black, as opaque
foreground. If the element is scaled down and composited, you do not see the
background. To correct this, insert a SetBGColor before the keyed element is placed in
the composite, for example, ChromaKey > SetBGColor > Scale > Over.
Parameters
This node displays the following controls in the Parameters tab:
mask
Specifies the channels that are reset.
Color
The new value to set the red, green, and blue channels to.
alpha
The new value to set the alpha channel to.
depth
The new value to set the Z channel to.
VideoSafe
For information on the VideoSafe node, see “VideoSafe” on page 208.
Consolidated Color Correctors
The consolidated color-corrector nodes are more complex than the other nodes. They
are usually inappropriate for use on the command line, and have unique interfaces.
AdjustHSV
The AdjustHSV node takes a specified color, described by its HSV values, and offsets the
color. For example, you can take a red spaceship and turn it blue without affecting the
green alien on it. It works similarly to the ChromaKey node.
To change a color, scrub with the Color Picker to isolate its hue, saturation, and value.
Next, scrub with the target Color Picker. For example, if you have a hue of 0 (red), and
enter a hueOffset of .66, you slide it to blue. The Range, Falloff, and Sharpness sliders
help control how much of a range you capture.
Parameters
This node displays the following controls in the Parameters tab:
sourceColor
These color controls let you select the HSV values of the target color you want to
change.
destinationColor
The color you want to use as the replacement for the color value selected as the
sourceColor.
hueOffset
A value that is added to the hue of the selected destinationColor, thereby changing
the color.
hueRange
The range of hue that is added to the HSV value selected in sourceColor to include a
wider field of values.
hueFalloff
The amount of falloff from the affected amount of hue to the unaffected amount of
hue. A greater hueFalloff value includes more color values at the edges of the
hueRange.
hueSharpness
The drop-off curve of Falloff, creating a smoother or sharper transition between
affected and unaffected regions of the image.
• 0 = linear drop-off
• 1.5 = smooth drop-off
satOffset
This is what is added to the saturation of the selected destinationColor, thereby
changing the intensity of the color.
satRange
The range of saturation that is added to the HSV value selected in sourceColor to
include a wider field of values.
satFalloff
The amount of falloff from the affected amount of saturation to the unaffected amount
of saturation. A greater satFalloff value includes more saturated values at the edges of
the satRange.
satSharpness
The drop-off curve of Falloff, creating a smoother or sharper transition between
affected and unaffected regions of the image.
• 0 = linear drop-off
• 1.5 = smooth drop-off
valOffset
A value that is added to the value of the selected destinationColor.
valRange
The range of value that is added to the HSV value selected in sourceColor to include a
wider field of values.
valFalloff
The amount of falloff from the affected amount of value to the unaffected amount of
value. A greater valFalloff value includes more values at the edges of the valRange.
valSharpness
The drop-off curve of Falloff, creating a smoother or sharper transition between
affected and unaffected regions of the image.
• 0 = linear drop-off
• 1.5 = smooth drop-off
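The hue parameters above can be sketched as a per-pixel weighting: hues within hueRange of the source hue are fully shifted by hueOffset, fading out over hueFalloff. The weighting function below is an illustrative guess in Python, not Shake's exact falloff or sharpness math:

```python
# A rough per-pixel sketch of the hue-offset idea: hues within hueRange of
# the source hue are shifted by hueOffset, fading out over hueFalloff. The
# weighting function is an illustrative guess, not Shake's exact falloff math.

def hue_weight(hue, source_hue, hue_range, hue_falloff):
    """1 inside the range, fading linearly to 0 over the falloff (hues wrap at 1)."""
    d = abs(hue - source_hue)
    d = min(d, 1.0 - d)                       # hue is circular
    if d <= hue_range:
        return 1.0
    if hue_falloff > 0 and d < hue_range + hue_falloff:
        return 1.0 - (d - hue_range) / hue_falloff
    return 0.0

def adjust_hue(hue, source_hue, hue_offset, hue_range, hue_falloff):
    w = hue_weight(hue, source_hue, hue_range, hue_falloff)
    return (hue + w * hue_offset) % 1.0

# The example above: a red hue (0) with a hueOffset of .66 slides to blue.
print(adjust_hue(0.0, source_hue=0.0, hue_offset=0.66,
                 hue_range=0.1, hue_falloff=0.2))
```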
ColorCorrect
The ColorCorrect node combines Add, Mult, Gamma, ContrastRGB, ColorReplace, Invert,
Reorder, and Lookup in one node, and also gives you the ability to tune the image in
only the shadow, midtone, or highlight areas.
You should keep in mind that the ColorCorrect node breaks concatenation with other
color-correction nodes in the tree.
Note: The ColorCorrect node should not be used inside of macros.
The ColorCorrect Subtabs
The following table describes the ColorCorrect Parameter subtabs.
Note: You can only view one subtab at a time.
The Master, Low, Mid, and High Color Controls Tabs
The first four color control tabs (Master, Low, Mid, and High Controls) are identical
except for the portion of the image you are modifying. Each has the same matrix to
Add, Multiply (Gain), and apply a Gamma to the RGB channels, as well as apply contrast
on a per-channel basis (Contrast, Center, SoftClip).
To tune the color with the matrix, do one of the following:
• Numerically enter a value in the RGB value fields.
• Use the slider below each value field.
Subtab          Description
Master          Applies the same correction to the entire image.
Low Controls    Applies the correction primarily to the darkest portion of the image; the correction falls off as the image gets brighter.
Mid Controls    Applies the correction primarily to the middle range of the image.
High Controls   Applies the correction primarily to the highlights of the image; the correction falls off as the image gets darker.
Curves          Manual correction of the image using curves.
Misc            Secondary color correction, as well as invert, reorder, and premultiplication control.
Range Curves    Display of the different image ranges (shadows, midtones, highlights), their control curves, and the final concatenated curve of the color correction.
• Click the Color control, then select a color from the Viewer or the ColorWheel (in the
Color Picker).
• Use the Virtual Color Picker: press the relevant key, R (red), G (green), B (blue), H (hue),
S (saturation), V (value), L (luminance), M (magenta), or T (temperature), and drag left or
right on the parameter line. (You can do this anywhere on the line, not just over the
Color control.)
• To group the R, G, and B sliders, press V (value) and drag left or right.
As an example, press R and drag right over the Add line. This adds to the red channel.
When the color control is bright red, press H and drag left and right. The hue of the
color shifts. This technique only modifies the color that is Added, Multiplied, and so on.
So, dragging a color control while pressing S (saturation) does not decrease the
saturation of the image, but only the saturation of the color that you are adding (or
multiplying) to the image.
The bottom portion of the tab contains buttons to toggle the channels from RGB
display to a different color space model. You can display RGB, HSV, HLS, CMY, and TMV.
For example, if the current image is displayed in the RGB color model, click HSV and the
numbers are converted to HSV space. Notice the Add color does not change—only the
numerical value.
Note: The “TMV” color space is Temperature/Magenta-Cyan/Value.
When the ColorCorrect node is saved into a script, the values are always stored in
RGB space.
Each color picker has an Autokey and View Curves button associated with all three
channels for that parameter. All three channels are treated equally.
Working With Low, Mid, and High Ranges
The following section discusses the differences in working with low, mid, and high color
ranges in the ColorCorrect node. The first image is the original image.
Thanks to Skippingstone for the use of images from their short film Doppelganger.
In the following examples, a Gain (multiply) of 2 is applied to each channel. The first
example multiplies all pixels by 2. The pure blacks stay black, the whites flare out.
However, when the gain is applied to the Low areas (the shadows), although the pure
blacks stay black, the areas just above 0 are raised into the mid range, and this reduces
the apparent contrast. A higher value solarizes the image. In the following images, a
gain of 2 is applied in the Master, Low, Mid, and High tabs.
You can control the range of the image that is considered to be in the shadows,
midtones, and highlights in the Range Curves subtab. This tab displays your final color
lookup operator as a curve, your mask ranges (to turn on the display, click the Ranges
button at the bottom), and controls for the center of the low and high curves. Also, you
can toggle the output from the Normal, corrected image to a display of the Low areas
boosted to 1 and all else black; the Mid areas boosted to 1 and all else black; or the
High areas boosted to 1 and all else black. A colored display is used, rather than a
display based on luminance, since different channels have different values. In the
following illustration, the range viewer controls are shown with Low, Mid, and High selected.
To control the mask areas, turn on the Ranges curve display at the bottom of the Range
Curves tab. The left image below shows the default ranges. A curve of the final lookup
is displayed in this illustration as a yellow line for clarity. Notice that the Low and High
range curves’ (gray curves sloping in from left and right) centers are set at .5. If you
adjust the low or high values, you modify that range, as well as the mid-range curve.
For example, the second image shows what happens when low is set down to .1. Notice
the Low and Mid curves shift left, but the High curve remains unaffected.
The Curves Tab
The Curves tab allows you to apply manual modifications to the lookup curve.
Although this is generally used for manual adjustments, you can also apply functions
using the standard lookup expressions.
Note: To insert a new control point, Shift-click a segment of the curve.
The Misc Tab
The Misc tab contains several functions.
• Invert: Invert uses the formula 1-x, so float values may have odd results.
• reorderChannels: Enter a string to swap or remove your channels as per the standard
Reorder method.
For more information, see “Reorder” on page 657.
• preMultiplied: Enable preMultiplied if your image is premultiplied (typically, an image
from a 3D render). An MDiv is then automatically inserted before the calculations, and
an MMult is automatically added after them.
For more information on premultiplication, see “About Premultiplication and
Compositing” on page 421.
• Color Replace: Use the Color Replace tools for secondary color correction, as per the
ColorReplace node. In this example, the red color of the shutter is selected and
replaced with blue. Sat Falloff is set to .2, Val Range to 0, and Val Falloff to 1.
Order of Calculations
Calculations are made in the following order:
• MDiv (optional)
• ColorReplace
• Invert
• Lookup Curves
• Gamma
• Mult
• Add
• Contrast
• Reorder
• MMult (optional)
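The ordering above can be sketched as a per-channel pipeline. This is a simplified Python illustration, not Shake's implementation: ColorReplace, Invert, Lookup Curves, Contrast, and Reorder are omitted as placeholders, and the x^(1/gamma) convention is an assumption:

```python
# A simplified per-channel sketch of the calculation order listed above.
# ColorReplace, Invert, Lookup Curves, Contrast, and Reorder are omitted,
# and the x ** (1/gamma) convention is an assumption for illustration.

def color_correct(x, a, gamma=1.0, mult=1.0, add=0.0, premultiplied=False):
    if premultiplied and a != 0:
        x = x / a                  # MDiv (optional)
    x = x ** (1.0 / gamma)         # Gamma
    x = x * mult                   # Mult
    x = x + add                    # Add
    if premultiplied:
        x = x * a                  # MMult (optional)
    return x

print(color_correct(0.25, a=1.0, gamma=2.0, mult=2.0, add=0.1))
```

Note how the optional MDiv/MMult pair brackets everything else, so the corrections always act on straight (un-premultiplied) color.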
ColorMatch
The ColorMatch node allows you to take an old set of colors (source color) in an image
and match them to a new set (destination color). You can match the low, middle, and
high end of the image. You can also perform Contrast, Gamma, and Add color
corrections with Gamma as an inverse gamma to preserve highlights.
When you match color and use the Color controls, be aware of where you scrub. If you
color correct an image and then feed it into the composite, you may have to jump
down to view the color-corrected image to get the proper source color; otherwise, you
pull in modified color. This occurs after you have already fed in the destination colors,
since they are linked to the source colors. Therefore, a good workflow is to select all
three source colors first, and then select the destination colors. Another scrubbing
technique is to ignore the node while scrubbing (select the node and press I in the
Node View), then enable the node again when finished.
Parameters
This node displays the following controls in the Parameters tab:
lowSource
The low end of the RGB of the source color.
lowDest
The low end of the RGB destination color.
midSource
The middle of the RGB of the source color.
midDest
The middle of the RGB destination color.
highSource
The high end of the RGB of the source color.
highDest
The high end of the RGB destination color.
Contrast
Contrast values for the three channels.
Gamma
The Gamma values. Note this is an inverse gamma function, so you retain your
highlights as you raise the gamma.
Mult
Multiplies the input image by this value.
Add
Adds color to the input image. Blacks are modified when this is raised.
min
Sets the lower clipping limit for the function.
max
Sets the upper clipping limit for the function.
ColorReplace
The ColorReplace node allows you to isolate a color according to its hue, saturation, and
value, and then replace it with a different color. Other areas of the spectrum remain
unchanged. This is especially useful for spill suppression.
To pull a mask of the affected source color, enable affectAlpha in the ColorReplace
parameters. To better understand the parameters, you can attach the ColorReplace
node to a ColorWheel node to observe the effects graphically.
Parameters
This node displays the following controls in the Parameters tab:
affectAlpha
Toggles whether the alpha color is also adjusted by the color correction. If so, you can
then easily use this as a mask for other operations.
sourceColor
These color controls let you select the target color you want to change.
destinationColor
The color you want to use as the replacement for the color value selected as the
sourceColor.
hueRange
The range on the hue (from 0 to 1) that is affected. A range of .5 affects the entire hue
range.
hueFalloff
This describes the amount of falloff from the affected to the unaffected hue region. A
greater hueFalloff value includes more color values at the edges of the hueRange.
satRange
The range of the saturation from the Source color; 1 is the entire range.
satFalloff
This describes the amount of falloff from the affected amount of saturation to the
unaffected amount of saturation. A greater satFalloff value includes more saturated
values at the edges of the satRange.
valRange
Defines the range of value that is added to the HSV value selected in sourceColor to
include a wider field of values. 1 is the entire range.
valFalloff
This describes the amount of falloff from the affected amount of value to the
unaffected amount of value. A greater valFalloff value includes more values at
the edges of the valRange.
HueCurves
The HueCurves node allows you to perform various color corrections (Add, Saturation,
Brightness) on isolated hues through the use of the Curve Editor. Typically, this tool is
used for spill suppression, but you can also do color corrections once you understand
how it is used.
Note: The HueCurves node should not be used inside of macros.
To color correct with the HueCurves node:
1 Find the hue of the area you want to color correct with the Color Picker. For example, if
you have blue spill, the hue is approximately .66.
2 Load the parameter you want to use into the Curve Editor. For example, to load
saturation into the Curve Editor, click the button to the left of “saturation” in the
parameter list.
When a parameter is loaded, the button is highlighted, and the curve appears in the
Curve Editor.
3 Drag the control point near the hue (.66) down.
The saturation is decreased in that particular hue, turning the pure blues to gray.
By default, all curves have a value of 1 until you modify the value downward.
Additionally, be careful with red-hued targets, as you may have to drag both the first
and last control point on the curve.
Parameters
This node displays the following controls in the Parameters tab:
saturation
Removes saturation from the hue range you identify.
satLimit
Sets the limit for saturation values.
rSuppress
Removes red from the hue area you identify when you drag the control point
downward.
rHue
Adds red to the hue range you identify.
luminance
Removes luminance from your area.
gSuppress
Removes green from the hue area you identify when you drag the control point
downward.
gHue
Adds green to the hue range you identify.
bSuppress
Removes blue from the hue area you identify when you drag the control point
downward.
bHue
Adds blue to the hue range you identify.
Other Nodes for Image Analysis
The PlotScanline and Histogram nodes, found in the Other tool tab, let you analyze
images from the node tree to better understand how the data is being manipulated.
Note: You can also apply a Histogram or PlotScanline using the viewer scripts.
Using the PlotScanline to Understand Color-Correction Functions
To better understand some of the Shake color-correction nodes, use the
Other–PlotScanline node or the PlotScanline viewer script. The PlotScanline node,
located in the Other Tool tab, looks at a single horizontal scanline of an image and
plots the brightness value of a pixel for each X location.
The most basic example of this is shown in the following illustration. The example
begins with a simple horizontal gradient source image that varies linearly from 0 to 1.
The PlotScanline resolution is set to 256 x 256 (for an 8-bit image).
The ramp ranges from black (on the left) to white (on the right). This is reflected in the
graph as a straight line.
When a node such as ContrastLum is inserted above the PlotScanline node, you can
begin to understand the node. In the ContrastLum node, value is set to 1.5 and the
center and softClip parameters are adjusted.
The effect on the ramp is reflected in the plot.
This also works for non-color correctors, and makes it an interesting analysis tool for the
Filter–Grain or Warp–Randomize node.
PlotScanline and Histogram Viewer Scripts
You can also use the PlotScanline and Histogram viewer scripts to observe image
data, but these are applied directly on your image. To load the parameters, right-click
the Viewer Script button, then choose a Load Viewer Script Controls option from the
shortcut menu.
PlotScanline
The PlotScanline node is an analysis tool that examines a line of an image and graphs
the intensity of each channel per X position. It is a great help in determining what a
color-correction node is doing. Although it can be attached to any image for analysis, it is
often attached to a horizontal Ramp to observe the behavior of a color correction.
Switch the Viewer to view the alpha channel (press A in the Viewer), and see the
behavior of the alpha channel.
The following are some examples of using the PlotScanline node.
Example 1
A 256 x 256 8-bit black-and-white Ramp. Since there is a smooth gradation, with the
center at .5, it is a straight line. Moving the center to .75 pushes the center to the right,
making the entire image darker. This is reflected in the PlotScanline node.
Example 2
In this example, some color-correction nodes are inserted and the values are adjusted.
The PlotScanline indicates exactly what data gets clipped, crunched, compressed, or
confused.
Parameters
This node displays the following controls in the Parameters tab:
width
The width of the PlotScanline. You likely want to set the width to 256 on an 8-bit image
to get one-to-one correspondence.
height
The height of the PlotScanline. You likely want to set the height to 256 on an 8-bit
image to get one-to-one correspondence.
line
The Y-line of the image to be analyzed. On a horizontal ramp, this does not matter, as
they are all identical.
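What PlotScanline computes can be sketched in a few lines: for one row (line) of an image, record each channel's intensity per X position. This is a hedged Python illustration; the image layout (rows of RGB tuples) is an assumption for the example:

```python
# A minimal sketch of what PlotScanline computes: for one row (line) of an
# image, record each channel's intensity per X position. The image layout
# (rows of RGB tuples) is an assumption for the example.

def plot_scanline(image, line):
    """image: list of rows, each a list of (r, g, b) tuples; line: row index."""
    row = image[line]
    return {
        "r": [px[0] for px in row],
        "g": [px[1] for px in row],
        "b": [px[2] for px in row],
    }

# A 4-pixel horizontal ramp, identical on all channels:
ramp = [[(x / 3, x / 3, x / 3) for x in range(4)]]
print(plot_scanline(ramp, 0)["r"])   # ramps from 0.0 up to 1.0
```

Feeding a horizontal ramp through a color correction and then through this kind of plot is what turns the correction into a visible transfer curve.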
Histogram
The Histogram node is an analysis tool that examines an image and graphs the
occurrence per channel of each value. The X axis of the Histogram corresponds to the
numerical value of a pixel. The Y axis is the percentage of pixels per channel with that
value. The graph has nothing to do with the input pixel’s original X or Y position in
the image.
Note: The Histogram node should not be used inside of macros.
The following are some examples:
Example 1
A 256 x 256 8-bit black-and-white Ramp. Since there are equal amounts of the entire
range of pixels, the result is a solid white field. The orientation of the ramp has no
bearing on the graph.
Example 2
A 256 x 256 8-bit Color. Since the color is set to (approximately) .75, .5, .25, each channel
exists at only one position in the Histogram.
Example 3
A 256 x 256 8-bit 4-corner Grad. The four corner values are, for red, (1, .5, .5, .5); for
green, (0, 1, .5, .5); and for blue, (.5, 0, 0, 0). This reflects that most of red’s values are
around .5, tapering off toward 1, with no value less than .5. In the Viewer, press R, G, B, or C to
toggle through the different channels.
Example 4
Next, a Brightness of 2 is added between the Grad and the Histogram. The
maxPerChannel parameter is enabled in the Histogram to better see the results. The
result (the image is zoomed in here) is that no odd values remain (all numbers are
multiplied by 2), so there is a gap at every other value. Such regular patterns in a
histogram are a telltale sign of digital alteration.
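The gap pattern Example 4 describes can be checked numerically; the sketch below is a plain Python illustration of the arithmetic, not Shake code:

```python
# A quick numeric check of the observation in Example 4: doubling 8-bit code
# values (with clipping) leaves only even values below the clip point, so
# every other histogram bin is empty.

values = list(range(256))                    # a full 8-bit ramp
doubled = [min(v * 2, 255) for v in values]  # gain of 2, clipped to 255

occupied = set(doubled)
odd_bins = [v for v in range(255) if v % 2 == 1 and v in occupied]
print(odd_bins)   # [] -- no odd bin below the clip value is occupied
```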
Parameters
This node displays the following controls in the Parameters tab:
width
The width of the Histogram. You probably want to set the width to 256 on an 8-bit
image for one-to-one correspondence.
height
The height of the Histogram. You probably want to set the height to 256 on an 8-bit
image for one-to-one correspondence.
ignore
Tells Shake to ignore black or white pixels. For example, if you have a small element on
a large black background, your histogram is skewed toward black. Disable black
consideration to better analyze the image.
• 0 = No ignore.
• 1 = Ignore values of 0.
• 2 = Ignore values of 1.
• 3 = Ignore values of 0 and 1.
maxPerChannel
This determines how the graph channels relate to each other. If you have a large red
component and a small blue component, the blue is very short, making it difficult to
see. Toggle this to view the blue channel in relation to itself, rather than the red.
• 0 = Distribution of values relative to RGB total.
• 1 = Distribution of values relative to its own channel.
24 Keying
Shake provides powerful, industry-standard keying tools
in the Primatte and Keylight nodes, along with additional
keying nodes such as LumaKey and SpillSuppress. When
combined with Shake’s other filtering, masking, and
color-correction nodes, you have detailed control over
every aspect of the keying process.
About Keying and Spill Suppression
The first part of this chapter presents different strategies for pulling keys in Shake.
Keying can be loosely defined as the creation of a new alpha channel based upon the
pixel color (a pure blue pixel, for example), luminance (a very dark pixel), or Z depth (a
pixel 300 Z units away, for example) in an image. Keying is discussed as a separate
process from masking, which can be loosely defined as the creation of an alpha
channel by hand through the use of painting, rotoshapes, or imported alpha masks
from 2D or 3D renders in other software packages. For more information on these
masking techniques, see Chapter 19, “Using Masks.”
If you are not familiar with Primatte and Keylight (Shake’s primary bundled keyers), you
are encouraged to work through the keying lessons in the Shake 4 Tutorials book prior
to reading through this chapter.
32-bit Support in Primatte and Keylight
As of Shake 4, the Primatte and Keylight nodes preserve 32-bit image data.
Pulling a Bluescreen or Greenscreen
In the Key Tool tab, the two primary nodes used to pull bluescreen and greenscreen
keys are Primatte and Keylight. In the Shake 4 Tutorials, there are lessons devoted to
each. Other functions in the Key tab include the ChromaKey, DepthKey, DepthSlice,
LumaKey, and SpillSuppress nodes. These are discussed in the second half of this
chapter. The ColorReplace node, although located in the Color tab, is also considered to
be another key-pulling node.
Note: Although there is a ChromaKey node in the Key tab, it is not particularly useful.
The same model and parameters appear in ColorReplace, but ColorReplace generally
works much better.
Keying With Primatte and Keylight
For lessons on how to use Primatte and Keylight to pull keys, see the Shake 4 Tutorials.
Keying With ColorReplace
ColorReplace is not the best keying tool, but it is a good node for making quick garbage
masks. The following example uses ColorReplace to key the blonde image over the
clouds1 image.
To set up the key example:
1 Click the Image tab and select FileIn.
2 Go to the $HOME/nreal/Tutorial_Media/Tutorial_06/images directory and load the
alex_fg.jpg and clouds1.jpg images. (The images are courtesy of Photron.)
3 Create the following tree:
4 Set the following parameters:
• ColorReplace node: Click the SourceColor control, then scrub on the bluescreen in the
blonde image. Set Replace Color to any other color (it just cannot be the same blue).
Also, turn on affectAlpha so that you pull a key.
• Invert node: Set the channels to “a” instead of rgba.
• Over node: Enable preMultiply.
Because ColorReplace puts white in the SourceColor area of the alpha channel, use the
Invert node to invert the image for the Over node.
The initial settings yield blue fringes. In the ColorReplace parameters, set satFalloff to 1
to correct this. Also, if pure black or pure white pixels start to show transparent, set the
valRange and valFalloff numbers to approximately .2 and .5.
You can see there is some crunchiness, particularly in the hair. This demonstrates why
ColorReplace is usually used to pull a key to mask other operations such as color
correctors, rather than the actual composite. There are examples of using ColorReplace
in the “Blue and Green Spill Suppression” section below.
Combining Keyers
You get the most flexibility when you combine keyers. Both Primatte and Keylight allow
you to input a holdout matte. There are at least three typical ways of combining keys:
• Use the Primatte or Keylight holdout matte input.
• Use the Primatte arithmetic operator.
• Use the Max, IAdd, IMult, Over, Inside, or Outside node.
You can combine keys with the holdout matte input. Typically, you pull a basic key for
soft edges and reflections. You also pull a second key which is very hard, and then
soften it with filters.
In the following example, the first image shows the initial key coming out of the
Keylight node. The reflections are good, but there is some transparency near the seat
belt and steering wheel.
Next, a key is pulled with Primatte on the same bluescreen. With just foreground and
background operators, the reflection is removed and the transparent areas filled in. This
results in a very hard matte.
Next, filters are attached to the Primatte node. A DilateErode is added, and the xPixels
parameter set to 1 (this closes up any holes in the alpha channel). You can also use a
Median filter to do the same thing. A second DilateErode is applied, with the xPixels set
to -5. This eats away at the matte. This filtering process cleans up the matte, making it
more solid. A Blur softens it, and is fed into the HoldOutMatte input (the third input) of
the Keylight node. The result is a solid matte with soft edges.
An Alternative for Making Hard Mattes
Copy your Keylight node and boost the screenRange to .3 to harden the matte.
A Matte Touch-Up Tool: The KeyChew macro, covered in Chapter 32, “The Cookbook,”
is also a good tool for duplicating the DilateErode chain in the above example.
Another way to combine keys applies only to the Primatte node, which features a
useful arithmetic parameter. Normally, when you pull a key in Primatte, the alpha mask
is replaced in the foreground image. When the arithmetic parameter is switched from
replace to add, multiply, or subtract, you can combine the mattes within Primatte. In
the following node tree, an initial key is pulled with ColorReplace, with the affectAlpha
button turned on. The resulting alpha channel is then inverted, without inverting the R,
G, and B channels, using an Invert node. The inverted alpha channel is then combined
with the Primatte-generated alpha channel by setting the Primatte arithmetic
parameter to Add.
The final way to combine keys is to use layer nodes. In the following tree, keys are
pulled using two different nodes and then combined with a Max node, which takes the
greatest pixel value. The elements are combined with a KeyMix node. Note that the
KeyMix does not get an alpha channel, since neither clouds1 nor blonde has an alpha
channel. KeyMix only mixes the two images through the mask you have pulled.Chapter 24 Keying 687
The following example uses a SwitchMatte node to assign the information from the
combined keys to the foreground image. The resulting combined image data is then
composited against the background using an Over node.
Blue and Green Spill Suppression
Once the composite is pulled from the keyers and put back into the KeyMix or Over
node, you can start to work with spill suppression. Blue spill occurs when light is
reflected off of the bluescreen and onto the foreground material.688 Chapter 24 Keying
The following examples use the woman.iff and bg.jpg files in the /Tutorial_08/images
directory. Notice that there is quite a bit of blue spill on the woman’s shirt. In the
following tree, the Primatte output is set to alpha only; a holdout matte is created for
the line under her arm (with a QuickPaint node); and foreground and background
scrubs are added. The composite is done with an Over node that has preMultiply
enabled. Notice the blue spill that remains:
Although it’s tempting to think that it would look better if you switch the Primatte
output to comp and turn off the preMultiply in Over, this isn’t the case. This has the
unanticipated result of turning your blue edges into black edges that are actually more
difficult to remove. Additional techniques to help correct this are discussed below.
To avoid blue spill correction in your keyers, the following examples contain several
sample trees to help you.Chapter 24 Keying 689
Using Color Replace—Method One
This technique is nice because it is fast, but often simply replaces blue spill with a
different color. In the following tree, a ColorReplace node is applied to the foreground,
and replaces the blue with the color of the wall. As always, increase the satFalloff to a
value near1. The head looks great, but spill remains on the shirt, and there is now a
yellow edge around the skirt.
To correct this, drop the ColorReplace valRange to 0 and the valFalloff to approximately
.4. Add a second ColorReplace node to eliminate the blue spill on the shirt. Carefully
select the blue on the shirt, and then replace it with a beige-white color. Also, move all
Range parameters to 0 and all Falloff parameters to approximately .3. The shirt looks
good, but there is still a yellow ring around her skirt.690 Chapter 24 Keying
Using Color Replace—Method Two
A better technique is to use ColorReplace to mask a color correction. Replace
ColorReplace2 with a Monochrome node, then pipe Primatte directly into it. Next, attach
the output of ColorReplace1 as the Monochrome1 mask, and turn on the affectAlpha
parameter in ColorReplace. This process turns off all saturation in the blue areas and
turns those areas gray. This is good since the eye can detect luminance levels better
than saturation levels. The skirt and the shirt now look good.
As an alternative to Monochrome, you can use the AdjustHSV or Saturation node.
SpillSuppress
The SpillSuppress node mathematically evaluates each pixel and compares the blue and
green strength. If the blue is significantly stronger than the green, the blue is
suppressed. Because of the luminance difference between blue and green, the
SpillSuppress node tends to work better on blue spill than green spill. It also tends to
push your images to a yellow color.Chapter 24 Keying 691
HueCurves
The HueCurves node, located in the Color Tool tab, enables you to boost colors or
saturation based on the hue of the pixel you want to affect. HueCurves works by
loading a parameter into the Curve Editor and tuning it—ignoring the value fields in
the node parameters. The X axis is the hue, and the Y axis depends on the parameter
you are using. For the following example, load the saturation curve into the Curve
Editor and grab the control point around .66, since blue has a hue of 66 percent. Drag
the point down to decrease the saturation for the blue pixels.
Edge Treatment
Another typical problem when keying (OK, it is the problem with keying) is edge
treatment.
The following example uses two images from the $HOME/nreal/Tutorial_Media/
Tutorial_06/images directory:
1 In the Image tab, click FileIn.
2 Go to the /Tutorial_06/images directory and read in the pillar1.jpg and clouds1.jpg
images.
These images bring up an important tip for compositors: Supervisors are typically
optimistic about the results possible from using the sky as a bluescreen.
3 In the Node View, select the pillar1 node.
4 Click the Key tab, then click the Keylight node.
The pillar1 node is connected to the Keylight node’s foreground input.
5 Connect the clouds1 node to the Keylight node’s background input.
6 In the Keylight node parameters, click the screenColour control, then scrub the sky near
the horizon.692 Chapter 24 Keying
A black line appears around the pillar.
As mentioned earlier, it is better to composite after pulling your key because it gives
you more flexibility.
7 So, rewire the tree with an Over node. Make sure you turn on preMultiply in the Over
node parameters and set the Keylight output to unPremult.
8 Next, attach a DilateErode node to the Keylight node.
9 In the DilateErode channels parameter, enter “a” (replace rgba) to affect only the alpha,
then set xPixels to -1. Note that the DilateErode node chews into the matte.Chapter 24 Keying 693
The next image illustrates a disabled preMultiply parameter in the Over node (because
the mask/RGB premultiplied relationship is upset). White lines appear around the edges.
This introduces another problem: The electrical power lines have disappeared from the
lower-right corner of the image. Because the details are so fine, the DilateErode node
has chewed the lines away.
The easiest way of correcting this in this example is to use a manually painted mask,
which limits the effect of the DilateErode node.
To restore fine detail to the composite:
1 Attach a QuickPaint node to the Mask input of the DilateErode node.
A mask is attached to the DilateErode.
2 Paint across the electrical wires.
But this only dilates areas where the wires exist—the opposite of what you want. 694 Chapter 24 Keying
3 Open the Mask subtree in the DilateErode parameters, then enable invertMask.
The edges are now dilated everywhere except around the wires area.
For more information on the QuickPaint node, see “About the QuickPaint Node” on
page 579.
Applying Effects to Bluescreen Footage
Problems can occur when you apply effects to keyed footage. For example, suppose
you want to blur the foreground image, but not the background. Your first instinct
would probably be to apply a Blur node to the foreground image, and then key and
composite with Keylight. This is inadvisable. This section presents ways to find and avoid
problems with this workflow.
The following node tree uses the woman.iff and sky.jpg images in the Tutorial_Media
directory. Special thanks to Skippingstone for the use of images from their short film
Doppelganger.Chapter 24 Keying 695
Problem 1: Edge Ringing
When the blur is applied, a blue edge is introduced along the woman’s neck line.
Problem 2: Accidentally Blurring Both Layers
One might be tempted to place the Blur node after the Keylight node in the tree. This
also produces an incorrect result—the background is blurred as well.
Problem 3: Artifacts Introduced by Masking
Lastly, one might be tempted to mask the blur using the key from Keylight. This also
produces an incorrect result.696 Chapter 24 Keying
Filtering Keys: The Correct Way
The problem with the three examples above is that the keying node, in this case
Keylight, is being made to do too much—by having to simultaneously key, suppress
spill, and composite within the same node, there is no good place to apply a filter.
The solution is to pull a key for purposes of creating a mask, but to use other nodes to
perform the actual compositing. In the following example, the Keylight output
parameter can be set to either “comp” or “on Black.” The image and alpha that’s output
from the Keylight node can then be filtered, and the filtered result composited against
the background using a simple Over node.
In general, it’s good practice to use the Primatte and Keylight background inputs as test
composites only, to be able to see how the key is looking while you’re tuning it. Once
you’re done pulling the key, you can rewire the output image into a composite using
an Over or a KeyMix node. This method of working gives you several advantages:
• You can apply filters and effects to the foreground material.
• You can transform the foreground material.
• You can color correct the foreground material.
Keying DV Video
DV footage, which is compressed with a 5:1 ratio as it’s recorded with 4:1:1 (NTSC) or
4:2:0 (PAL) color sampling, is less than ideal as a format for doing any kind of keying.
This is due to compression artifacts that, while invisible during ordinary playback,
become apparent around the edges of your foreground subject when you start to key.
With a high-quality DV camera and good lighting, it’s possible to pull a reasonable key
using DV clips, but you cannot expect the kind of subtleties around the edges of a
keyed subject that you can get with uncompressed or minimally compressed video
(decent) or film (best). For example, while you may be able to preserve smoke,
reflections, or wisps of hair when keying uncompressed footage, with equivalent DV
footage this probably won’t be possible. Chapter 24 Keying 697
On the other hand, if your foreground subject has slicked back hair, a crisp suit, and
there are no translucent areas to worry about, you may be able to pull a perfectly
acceptable key.
The following example presents an impeccably shot bluescreen image, recorded using
a high-quality DV camera.
When you key the image and place it over a background (a red field is used in this
example), the result is marred by blocky, aliased-looking edges all around your foreground
subject. Unfortunately, that’s the 4:1:1 color sampling of DV making itself seen.
Why is this happening? In the RGB color space, each pixel in your picture is represented
with a value of Red, Green, and Blue.
When you shoot video—which uses the YUV (or YCrCb) color space, each pixel is
represented by a luminance value (Y) and two chrominance values: red/green
difference (Cr) and blue/green difference (Cb). This is done because green has a higher
luminance value, and is therefore more likely to display artifacts if too much
information is taken away.
In the video world, 4:4:4 color sampling means that for every four pixels in a row, you
get four pixels each for Y, Cr, Cb. This is equivalent to 8-bit RGB. 4:2:2 means that you
get four Y pixels and two each of Cr/Cb for every four pixels in a row. You get less
information in a smaller package, giving you a little more speed and a usually
acceptable loss of detail.
The DV format’s 4:1:1 color sampling means that you get four Y (luminance) pixels and
a single each of Cr/Cb (red/green and blue/green difference). In other words,
luminance can change at every pixel, but color changes only once every four pixels.
If you look at the keyed DV footage, you see that each of those blocks around the edge
of your subject is four pixels wide. A lot of data is saved. Unfortunately, bluescreen
keyers pull keys from color, not luminance; hence the artifacts.
DV bluescreen Key pulled on DV bluescreen698 Chapter 24 Keying
Although the information in video is transferred from the YUV colorspace into the RGB
colorspace, you can still examine the original YUV channels. Attach a Color–ColorSpace
node to the FileIn and set the outSpace to be YUV. With the pointer over the Viewer,
press the R, G, and B keys to look at the new YUV channels.
As you can now see, the luminance channel gets all of the necessary information, since
the human eye is more susceptible to differences in luminance than hue. But the two
color channels display significant artifacts.
Here are two methods you can use to help you pull better keys from DV footage:
To correct a DV key (method 1):
1 Attach a ColorSpace node to the FileIn node containing the DV footage.
2 Set the outSpace parameter of the ColorSpace node to YUV.
The colors in the image radically change, but don’t worry, this is only temporary.
3 Attach a Blur node to the output of the ColorSpace node.
4 In the Blur node’s channels parameter, blur only the U and V channels by switching the
channels from rgba to gb.
5 Blur the isolated gb channels by adjusting the xPixels parameter by approximately 8,
and adjusting the yPixels parameter independently by a much smaller value,
approximately 1 pixel.
6 Attach a second ColorSpace node to the output of the Blur node, then set the inSpace
parameter to YUV and keep the outSpace parameter set to RGB.
Y (luminance) U (Cr, or r/g difference) V (Cb, or b/g difference)Chapter 24 Keying 699
This converts the image back to RGB space.
The key is greatly improved. In particular, the original blockiness around the edge is
gone. Unfortunately, you’re still missing any fine detail that you might otherwise have
had were the footage shot using a different format.
Note: Using this method when you use straight YUV files, you can bypass the RGB to
YUV conversion by turning off yuvDecode in the FileIn node. Apply the Blur, and then
use one ColorSpace node to convert the image to RGB space.
The next method will use a different image, one with more hair detail that takes a little
more effort to preserve.
Without blur on UV channels With blur on UV channels700 Chapter 24 Keying
To correct a DV key (method two):
1 As in the first method, attach a ColorSpace node to the FileIn node containing the DV
footage, then set the outSpace parameter to YUV.
2 Next, attach a Reorder node to the output of the ColorSpace node. Set the channels
parameter to rrra to reassign the Y channel (the least compressed channel) to all three
channels.
The result will be a sharp grayscale image.
3 Attach a LumaKey node to the output of the Reorder node, and pull a key,
concentrating on the edges of the subject.
Don’t worry if there are holes in the middle of the image. You’re just trying to isolate as
much of the edge of the subject as possible.
4 Now, go back up to the FileIn node at the top of the tree, and branch off another
keying node, such as Keylight.
5 Pull a key, this time concentrating on the interior of the subjects.
In essence, this second key is a holdout matte that you can combine with the LumaKey
you created in step 3.Chapter 24 Keying 701
6 Now, attach the outputs of the LumaKey and the Keylight nodes to a Max node, to
combine both alpha channels into one.
The above screenshots show the results of each individual key, combined into the
single alpha channel below. As you see, the holes in the LumaKey matte are filled by
the Keylight matte, and the loss of detail around the edge of the Keylight matte is
restored by that in the LumaKey matte. These keys can also be combined in many other
ways using different layering nodes and rotoshape masks to better control the
combination.
The LumaKey matte The Keylight matte
The combined LumaKey
and Keylight matte702 Chapter 24 Keying
7 As an optional step, you may find it necessary to insert a DilateErode node between the
Keylight and Max nodes in order to erode the output matte from the Keylight so it
doesn’t interfere with the edge created by the LumaKey.
This produces a mask that you can then recombine with the original foreground image
and a background image using a KeyMix node. You’ll probably have to deal with some
spill suppression, but you can use the same techniques described in “Blue and Green
Spill Suppression” on page 687.
Note: If you’re really detail-oriented, you can combine both of the above methods, in
an effort to preserve more detail from the greenscreen key performed in method two.
Keying Functions
The following section details the keying nodes located in the Key Tool tab. The
ColorReplace node, discussed above, is located in the Color Tool tab. For more
information, see Chapter 23, “Color Correction.”Chapter 24 Keying 703
ChromaKey
The ChromaKey node examines the HSV values of an image and pulls a matte based
upon the parameters. In the interface, you can scrub a color in the Viewer. However,
disable the matteMult parameter before you scrub. The hue, saturation, and value of an
image each has a set of parameters to describe the exact HSV values you are keying, as
a range from that midpoint, a falloff value, and a sharpness value to describe the falloff
curve. This is illustrated in the following image:
The ChromaKey node is not Shake’s strongest feature. It is recommended to use Color–
ColorReplace; select your bluescreen color; choose your destination color (any other
color); then enable affectAlpha.
A typical application is bluescreen or greenscreen removal. You can stack multiple
ChromaKey nodes to extract different ranges. With the arithmetic parameter, you can
choose to add to, subtract from, or replace the current mask.
Parameters
This node displays the following controls in the Parameters tab:
HSVColor
Picks the center value to be pulled on hue, saturation, and value.
hueRange
Plus and minus from the hue specified by the HSVColor parameter.
hueFalloff
Describes the falloff range from hueRange that is picked, with the values ramping
down.
hueSharpness
Describes the falloff curve from hueRange to hueFalloff.
• 0 = linear drop-off
• 1 = smooth drop-off
satRange
Plus and minus from the saturation specified by the HSVColor parameter.
satFalloff
Describes the falloff range from satRange that is picked, with the values ramping down.704 Chapter 24 Keying
satSharpness
Describes the falloff curve from satRange to satFalloff.
• 0 = linear drop-off
• 1 = smooth drop-off
valRange
Plus and minus from the value specified by the HSVColor parameter.
valFalloff
Describes the falloff range from valRange that is picked, with the values ramping down.
valSharpness
Describes the falloff curve from valRange to valFalloff.
• 0 = linear drop-off
• 1 = smooth drop-off
matteMult
Toggle to premultiply the RGB channels by the pulled mask.
• 0 = no premultiply
• 1 = premultiply
arithmetic
This parameter lets you define how the mask is created by the key.
• 0 = replace existing mask
• 1 = add to existing mask
• 2 = subtract from existing mask
DepthKey
The DepthKey node creates a key in the alpha channel based on depth (Z) values.
Values below loVal are set to 0, and values above hiVal are set to 1. Values in between
are ramped. You also have roll-off control, plus a matteMult toggle. If there is no Z
channel in your image, this node does not work.
If the Z channel is not directly imported with the FileIn node, you can copy it over with
the Copy command, using z as your channel.
Note: When you import from Maya, there is a strange -1/distance Z setting. This
basically means that the numbers in DepthKey are impractical. To correct this, go to
www.highend2d.com and download the MayaDepthKey macro. The macro operates
exactly like the normal DepthKey, but does the conversion math for you.
Parameters
This node displays the following controls in the Parameters tab:
loVal
Any pixel below this value (as calculated per its depth) turns black.Chapter 24 Keying 705
hiVal
Any pixel above this value (as calculated by its depth) turns white.
loSmooth
A roll-off factor to provide a smooth drop-off.
hiSmooth
A roll-off factor to provide a smooth drop-off.
matteMult
Toggle to premultiply the RGB channels by the pulled mask.
• 0 = no premultiply
• 1 = premultiply
DepthSlice
Similar to DepthKey, the DepthSlice node creates a slice in the alpha channel based on
Z, as defined by a center point, and a drop-off range.
Parameters
This node displays the following controls in the Parameters tab:
center
The center Z depth from which the slice is measured.
lo
The distance included in the slice away from the center. lo adds distance toward the
camera.
hi
The distance included in the slice away from the center. hi adds thickness away from
the camera.
grad
When enabled (1), there is a gradation from hi to lo. Beyond, the slice is still black.
mirror
When enabled, the effect is mirrored in Z.
matteMult
Toggle to premultiply the RGB channels by the pulled mask.
• 0 = no premultiply
• 1 = premultiply706 Chapter 24 Keying
Keylight
Keylight is an Academy Award-winning keyer from Framestore CFC based in England. It
accurately models the interaction of the bluescreen or greenscreen light with the
foreground elements, and replaces it with light from the new background. With this
approach, blue spill and green spill removal becomes an intrinsic part of the process, and
provides a much more natural look with less tedious trial-and-error work. Soft edges,
such as hair, and out-of-focus edges are pulled quite easily with the Keylight node.
To ensure the best results, try to always pull the key on raw plates. In the Keylight
parameters, there is a colourspace control to indicate if the plate is in log, linear, or
video color space. Therefore, you should not perform color correction on film plates
when feeding the plates into Keylight.
For a hands-on example of using the Keylight node, see Tutorial 5, “Using Keylight,” in
the Shake 4 Tutorials.
Parameters
This node displays the following controls in the Parameters tab:
output
You have the option to do your composite within the Keylight node, but there are other
output options as well should you want to composite the foreground image using
other nodes:
• comp: Renders the final composite, against the assigned background.
• on Black: Renders the foreground objects over black, creating a premultiplied
output.
• on Replace: Renders the foreground objects over the replaceColour. This is a good
mode to test your composite. Choosing a bright color allows you to instantly see if
there are unwanted transparent areas in the foreground subject.
• unpremult: Renders the foreground without premultiplying the matte. Use this mode
when you want to add transformations and color corrections after pulling the key.
You then apply either a MMult node or enable preMultiply in an Over node to create
the final composite.
Float Support in Keylight
The Keylight node now supports the preservation of 32-bit data. As a result, float
images may require different keyer settings than 8-bit images. For example,
highlights in the foreground subject of float images may produce unexpected areas
of translucency due to barely perceptible green or blue casts that are preserved in
float, but which would be clipped in 8 bit or 16 bit. This can be addressed by lowering
the highlightGain parameter (to approximately -0.25) to strengthen weak areas in the
interior of the key.Chapter 24 Keying 707
• status: Displays an image with different colors, each of which indicates what portions
of the foreground image are handled in which way by Keylight. This mode is useful
for helping you to troubleshoot your key.
• Black pixels: Areas that become pure background in the composite.
• Blue pixels: Areas that become spill-corrected foreground.
• Green pixels: A blend of foreground and background pixels.
• Pure green: Mostly foreground and dark green is mostly background.
screenColour
The primary color to be pulled, which is usually blue or green.
Note: Keylight is tuned to the primary colors and is not effective on secondary colors
(cyan, magenta, yellow). If trying to pull these colors, consider switching your image
from RGB to CMY with the ColorSpace node, pull the key, and then switch back to RGB.
screenRange
Defines the range of colors that should be keyed out. The higher the number, the more
of the background screen is removed. A value of 0 gives the smoothest key that retains
the most fine detail; a value of .3 removes all the gray levels, and may result in coarser
edges around the foreground subject.
fgBias
Foreground bias is used to reduce the blue spill on foreground objects. Keylight uses
this color to calculate which shades the screen color passes through as it interacts with
the foreground elements.
For example, blonde hair in front of a bluescreen tends to go through a magenta stage.
Setting the FG Bias to the blonde color ensures the magenta cast is properly
neutralized. This value affects both opacity and spill suppression. Return it to .5, .5, .5 to
effectively deactivate this effect.
Avoid picking strong colors for the FG Bias. Muted shades work much better. Another
way of looking at this parameter is as a way of preserving a foreground color that
might otherwise be neutralized because it’s too close to the key color. For example, a
pale green object, such as a plant, in front of a greenscreen would normally become
slightly transparent, with the background showing through instead of the pale green.
By setting the FG Bias to the pale green, it is preserved in the composite.
Note: Please don’t shoot plants in front of a greenscreen.
fineControl
The parameters in the fineControl subtree are used to make detailed adjustments to
the matte that is created by the Keylight node.
• shadowBalance, midtoneBalance, and highlightBalance: Located within the
fineControl subtree, these parameters help you when the screen area is slightly off
from a completely pure primary color—for example, cyan instead of pure blue. 708 Chapter 24 Keying
The transparency of the foreground is measured by calculating the difference
between the dominant screen color (blue by default, otherwise the value of the
screenColour parameter) and a weighted average of the other two colors (red
and green).
With the example of a cyan screen, there is a greater difference between the blue
and the red than between the blue and the green, since cyan has more green than
red. Setting the balance to 0 forces Keylight to ignore the second-most dominant
color in the screen, which is green in the example. When set to 1, the weakest screen
color (red) is ignored. There are three controls to tune the low, medium, and
highlight ranges.
• shadowGain, midtoneGain, and highlightGain: Located in the fineControl subtree,
these parameters let you increase the gain to make the main matte more
transparent. This tends to tint the edges the opposite of the screen color—for
bluescreens edges become yellow. Decrease the gain to make the main matte more
opaque.
Note: You can lower the highlightGain parameter (to approximately -0.25) to
strengthen weak areas in the interior of a mask that are due to green or blue casts in
the highlights of foreground subjects in float images.
• midTonesAt: Located in the fineControl subtree, this parameter adjusts the effect of
the balance and gain parameters by changing the level of the midtones they use. For
example, If you are working on a dark shot, you may want to set the midtone level to
a dark gray to make the controls differentiate between tones that would otherwise
be considered shadows.
replaceColour
Spill can be replaced by the replaceColour. This occurs only in the opaque areas of the
holdout matte. This is useful with blue areas in the foreground that you want to keep
blue and opaque. The replaceColour is therefore blue.
fgMult
Allows color correction on the foreground element. This exactly mimics the Shake
Brightness node.
fgGamma
Applies a gamma correction to the foreground element. This exactly mimics the Shake
Gamma node.
saturation
Applies a saturation correction to the foreground element. This exactly mimics the
Shake Saturation node.Chapter 24 Keying 709
colourspace
Keylight models the interaction of the blue/green light from the screen with the
foreground elements. For these calculations to work correctly, you need to specify how
pixel values relate to light levels. This is the function of the colourspace menu.
Therefore, with Cineon plates (or other logarithmic files) you have the option to pull
the key with or without a Delog operator before the key pull.
• log: Color spaces are designed so that a constant difference in pixel values
represents a fixed brightness difference. For example, in the Cineon 10-bit file format,
a difference of 90 between two pixels corresponds to one pixel being twice as bright
as the other.
• linear: This color space has the brightness of a pixel proportional to its value. A pixel
at 128 is twice as bright as a pixel at 64, for example.
• video: Color space has a more complicated relationship, but the brightness is
approximately proportional to the pixel value raised to the power of 2.2.
plumbing
The subparameters in the plumbing subtree allow you to adjust how the input images
are used to create the final output image and matte.
• useHoldOutMatte: If the third image input is used for a holdout matte, toggles the
holdout matte on and off.
• holdOutChannel: Specifies which channel of the image is being used as the holdout
matte.
• useGarbageMatte: If the fourth image input is used for a garbage matte, this button
toggles the garbage matte on and off.
• garbageChannel: Specifies which channel of the image is being used as the garbage
matte.
• bgColor: Either pulls a key on the area outside of the frame (0), or asserts the
background as the background color (for example, usually black).
• clipMode: Sets the output resolution of the node—either the foreground image (1)
or the background image (0) resolution.
LumaKey
The LumaKey node creates a key in the alpha channels based on overall luminance.
Values below loVal are set to zero, and values above hiVal are set to 1. Values in
between are ramped. You also have roll-off control, and an mMult toggle.
This is a fast way to place the luminance of an image into the alpha layer, but this
operation can also be done with a Reorder node set to rgbl.
Parameters
This node displays the following controls in the Parameters tab:
loVal
Any pixel below this value (as calculated by its luminance) turns black.710 Chapter 24 Keying
hiVal
Any pixel above this value (as calculated by its luminance) turns white.
loSmooth
A roll-off factor to provide a smooth drop-off.
hiSmooth
A roll-off factor to provide a smooth drop-off.
matteMult
Toggle to premultiply the RGB channels by the pulled mask.
• 0 = no premultiply
• 1 = premultiply
Primatte (Plug-in)
The Primatte plug-in is the latest update of Photron’s Primatte keying software. The
Shake Primatte node allows you to scrub across an image to determine matte areas in
order to pull a key (or alpha channel) for a composite. The plug-in also works on the
RGB channels to suppress spill (the leaking of blue or green color onto the foreground
objects).
Note that Shake scripts pass special data to the Primatte plug-in. This data is encoded,
which means that Primatte must be set up using the graphical interface.
To learn how to maximize the effectiveness of this node by combining it with other
functions, see Tutorial 6, “Using Primatte,” in the Shake 4 Tutorials, as well as this
manual’s section on “Blue and Green Spill Suppression” on page 687.
Supplying the Background Image
Although you can output Primatte with a premultiplied foreground with no
background image, you should supply a background input if possible, as Primatte still
factors in some of this information. If you do not supply this image, black ringing
appears around the edges. If this is impractical, toggle the replaceMode parameter to
use color and supply an appropriate Replace Color.Chapter 24 Keying 711
In Primatte, you assign color to one of four zones by clicking one of the eight large
operator buttons, then scrubbing for a color in the Viewer. These four zones are
arranged around a center point in 3D color space, with each zone situated like a layer of
an onion. The following diagram shows the operator/zone assignments. Note that the
“decolor all” button scales the entire 3D space, and shifts all color either toward or away
from the foreground. Therefore, it does not involve picking a color, only moving a slider.
Parameters
This node displays the following controls in the Parameters tab:
clipMode
Sets the resolution to that of either the foreground (0) or the background (1).
output
You are not obliged to use Primatte for your composite, especially when you need to
make transformations to the foreground object after the matte is pulled. Pull the matte
on the full-resolution image prior to any scaling. The output setting determines what is
changed by Primatte:
• alpha only: Only the matte is affected.
• on black: The foreground image and the matte are changed.
• comp: If there is a second input image, this composites that background in.
• status: Presents an image with different colors, displaying which parts of the image
fall into the four Primatte zones. This mode is useful to help you troubleshoot your
key.
• Black–Zone 1: All background
• Blue–Zone 2: Transparent foreground
• Green–Zone 3: Suppressed foreground
• Red–Zone 4: All foreground
arithmetic
Determines how Primatte affects the foreground matte channel.
• Replace (0): Replaces the fg matte channel completely.
• Subtract (1): Subtracts the Primatte-derived matte from the incoming matte.
• Multiply (2): Multiplies the two mattes together.
• Add (3): Adds the two mattes together.
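Per pixel, the four modes reduce to simple arithmetic between the incoming matte and the Primatte-derived matte. A minimal sketch (clamping to the 0–1 range is an assumption):

```python
def combine_mattes(incoming, primatte, mode):
    """Combine an incoming matte value with the Primatte-derived matte,
    mirroring the four arithmetic modes listed above.  Clamping the result
    to [0, 1] is assumed, not documented."""
    if mode == "replace":
        out = primatte                 # replace the fg matte completely
    elif mode == "subtract":
        out = incoming - primatte      # subtract Primatte's matte
    elif mode == "multiply":
        out = incoming * primatte      # multiply the two mattes
    elif mode == "add":
        out = incoming + primatte      # add the two mattes
    else:
        raise ValueError(mode)
    return max(0.0, min(1.0, out))
```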
processBGColor
This button tells Shake whether or not to consider the pixels outside of the frame.
When disabled (default mode), the area outside of the image is assumed to be 100
percent transparent. When enabled, the area outside of the frame is treated the same
way black pixels are treated by your Primatte scrubs.
gMatteChannel
The channels to be used for the garbage matte. If no garbage matte is assigned, these
parameters have no effect.
hMatteChannel
The channels to be used for the holdout matte. If no holdout matte is assigned, these
parameters have no effect.
replaceMode
When you perform spill suppression through the use of the spillsponge operator or the
fine tuning operator, these suppressed areas are shifted from blue (or whatever your
center value is) toward a different color. There are two modes:
• use image: The default mode. This mode uses colors from either the background
image, or the image connected to the replaceImage input knot, if one has been
assigned.
• use color: Lets you pick a specific color using the ReplaceColor parameter.
ReplaceColor
The color used in the spill suppressed area if the replaceMode parameter is set to
“use color.”Chapter 24 Keying 713
operator
Each button that appears in the group of controls labelled “operator” allows you to
modify the key created by Primatte, using a color you select with the Color control. The
effect of all the operators you click is cumulative, and each operation you perform is
saved in a history of operations that’s accessible via the currentOp slider.
Every time you click an operator, an additional operation is added to the history of
operations represented by the currentOp slider. To modify the currently selected
operation, instead of adding a new one, select an operation using the currentOp slider,
click the color control bearing that operation’s name, then scrub the image to select a
new color range.
The eight operator buttons include the following:
• background: Assigns pixels to the background. Areas of the foreground that you
select with the background operator become 100-percent transparent, with no spill
suppression.
• restore detail: Removes transparency on background material. It is useful for
restoring lost details such as hair. It is the equivalent of detailTrans in the “fine
tuning” subtree, but with no slider.
• fine tuning: When the fine tuning mode is clicked, three additional parameters
appear at the bottom of the Parameters tab.
• spillSponge: When this mode is selected, the pointer motion in the Decolor slider
performs a color adjustment of the sampled color against the background. After
sampling a color region from the image, the more to the right the pointer moves,
the less of the background color component (or spill) is included in that color
region. The more to the left the pointer moves, the closer the color component of
the selected region is to the spill suppress color.
• fgTrans: Adjusts the transparency of the matte against the sampled color. After
sampling a color region from the image, the more to the right the pointer moves,
the more transparent the matte becomes in that color region. This is equivalent to
the “make fg trans” button, but offers more control.
• detailTrans: Determines the transparency of the sampled color when it is close to the
background color. The slider in this mode is useful for restoring the color of pixels
that are faded due to similarity to the background color. If you slide to the left,
picked areas are more opaque. This is equivalent to the “restore detail” button, but
offers more control.
• foreground: When this mode is selected, the sampled pixels within the image
window become 100 percent foreground. The color of the sampled pixels is the same
color as in the original foreground image. The matte is completely white.
• make fg trans: Allows you to select regions within foreground elements in order to
make them more transparent. This operation is used to adjust elements like clouds or
smoke. It is the equivalent of the fgTrans control in fine tuning mode, except that it
features no slider control.
• decolor all: When this mode is selected, the value parameter appears at the bottom
of the Parameters tab.
Adjusting the value parameter shrinks or expands the polyhedron between zone 4
(all foreground) and zone 3 (foreground plus spill suppress). Positive values expand
the shell, effectively shifting across the entire image more color from the foreground
into the suppressed area. Negative values contract the shell, thereby slipping more
values away from the suppressed area into the foreground area. Because the shells
cannot intersect, if you shrink the shell too much, you also crush the smaller interior
shells, causing all values to shift toward the background.
• spill sponge: Affects the color of the foreground, but not the matte. This operation
suppresses the color you pick. “spill sponge” is usually used on spill areas that are
known to be opaque—for example, blue spill on the face or body. If the change is
too drastic, supplement or replace the operation with fine tuning–spillSponge
adjustment.
• matte sponge: Used to restore foreground areas lost during spill suppression. “matte
sponge” only affects the alpha channel.
currentOp
Each operator you use to perform a scrub operation is maintained separately in
memory. The history of all the operations you’ve performed within the Primatte node is
accessible using the currentOp slider. Moving the slider to the left returns you to
any previous scrub operation you’ve performed, allowing you to re-adjust or delete it.
To see which operation you’ve selected, look at the name that appears at the top of the
color control that appears below the slider.
delete op
Click “delete op” to delete the operation that’s currently selected in the currentOp
parameter.
Color Control (initially set to “center”)
This control isn’t dedicated to any one parameter. Instead, the color control lets you
assign a color to whichever operator you’ve clicked in the Primatte node. Additionally,
this control displays the color that’s currently assigned to whichever operator you’ve
selected using the currentOp slider.
When you first assign a Primatte node to an image, this control is set to “center.” It is the
first operation you use when you begin keying with the Primatte node. Scrub pixels in
the background of the image to be keyed to define the starting range of color to be
keyed out. Doing so determines the center of the 3D polyhedron (described above),
and is therefore extremely important. When the center operation is selected, the
multiplier parameter appears at the bottom of the Parameters tab.
This initial pixel scrub that defines the center is always operation 0 in the currentOp
slider. To readjust the center, move the currentOp slider all the way to the left, to
operation 0.
Note: Readjusting the center operation will change the effect of all subsequent
operations you have already performed.
As you click additional operators (for example, the “background,” “foreground,” and
“spill sponge” operations), this color control lets you choose a color from the
foreground image to assign to the currently selected operator.
multiplier
When the color control is set to center, the multiplier slider appears. This parameter lets
you modify the size of the center area in 3D space. Therefore, a higher multiplier value
expands the size of the background space, sucking in more of that color. The result is
more transparent. A value lower than 1 reduces the amount of color sucked out by the
initial background color pick.
evalToEnd
As you adjust the currentOp slider, you may want to see the composite only up to that
point. Turn off Eval to End to view the key using only the operators at or before the
current operation selected by the currentOp slider.
active
Instead of clicking delete op to eliminate an operator from the currentOp slider, you
can disable any operation by turning off the active control. This disables the operation
that’s currently selected by the currentOp slider.
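Taken together, currentOp, "delete op," active, and evalToEnd behave like an ordered, re-evaluated history of operations. A hypothetical model of that bookkeeping (not Primatte's internals):

```python
class OpHistory:
    """Hypothetical model of Primatte's operation history: operation 0 is
    the center pick, each operator click appends an entry, and the key is
    re-evaluated by walking the list in order."""
    def __init__(self):
        self.ops = [("center", None)]   # operation 0: the initial center scrub
        self.active = [True]
        self.current = 0                # the currentOp slider position
    def add(self, name, color):
        self.ops.append((name, color))
        self.active.append(True)
        self.current = len(self.ops) - 1
    def delete_op(self):
        """Like 'delete op': remove the currently selected operation."""
        del self.ops[self.current]
        del self.active[self.current]
        self.current = min(self.current, len(self.ops) - 1)
    def evaluate(self, to=None):
        """Return the operations applied; passing `to` mimics turning off
        evalToEnd (evaluate only up to that operation)."""
        stop = len(self.ops) if to is None else to + 1
        return [op for op, on in zip(self.ops[:stop], self.active[:stop]) if on]
```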
SpillSuppress
The SpillSuppress node suppresses blue or green spill, with controls for gain in the other
color channels. SpillSuppress uses a color-correction algorithm, so the entire image is
modified.
Parameters
This node displays the following controls in the Parameters tab:
rGain
Since the spill suppress darkens the overall image, you can use this control to slightly
compensate for the brightness in the other two channels.
gGain
Since the spill suppress darkens the overall image, you can use this control to slightly
compensate for the brightness in the other two channels.
lumGain
An overall luminance gain.
screenColor
Select color to suppress.
• 0 = Suppress blue
• 1 = Suppress green. gGain then converts to bGain.
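Shake's exact algorithm isn't documented here, but a common spill-suppression approach is to clamp the screen-color channel against its neighboring channel and then compensate with the gain controls. A hedged sketch under that assumption:

```python
def spill_suppress(r, g, b, screen_color=0, r_gain=1.0, g_gain=1.0, lum_gain=1.0):
    """Illustrative spill suppression (screen_color 0 = blue, 1 = green).
    Clamping the screen channel is an assumed technique, not Shake's
    documented math; it darkens the image, which the gain parameters
    then compensate for."""
    if screen_color == 0:              # suppress blue: blue may not exceed green
        b = min(b, g)
        r, g = r * r_gain, g * g_gain
    else:                              # suppress green: green may not exceed blue
        g = min(g, b)
        r, b = r * r_gain, b * g_gain  # gGain acts on blue ("converts to bGain")
    return (r * lum_gain, g * lum_gain, b * lum_gain)
```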
25 Image Tracking, Stabilization, and SmoothCam
Shake provides several methods of tracking, stabilizing,
and smoothing moving subjects in your scripts.
Additional tools are provided to process the keyframed
results of these operations, giving you even more
detailed control.
About Image Tracking Nodes
Shake provides four image tracking and stabilization nodes: Tracker, Stabilize,
MatchMove, and SmoothCam.
• Stabilize: The Stabilize node is used to correct unwanted movement in a shot. You
can also use the Stabilize node for matchmoving using the inverseTransform
parameter. Use of the Stabilize node’s inverseTransform parameter is recommended
rather than use of the MatchMove node, since it gives you the option to composite
later with an Over node. This technique also gives you proper pass-through of
onscreen controls for more intuitive control.
• Tracker: The Tracker node does not transform the input image. It is used only to
generate tracks that can then be referenced by the MatchMove and Stabilize nodes,
or nodes such as Move2D and Pan with standard linking techniques.
• MatchMove: The MatchMove node allows two input images—a Foreground (the first
node input) and a Background (the second node input). By tracking one, two, or four
points on the background, the foreground can be set to match the movement of the
background. Placing a logo on the side of a moving truck is the classic example of a
matchmove. The single advantage the MatchMove node has over the Stabilize node is
that you can immediately test your track results.
Note: You can also attach a tracker to a rotoshape or paint stroke. For more
information, see “Attaching a Tracker to a Paint Stroke” on page 586 and “Attaching
Trackers to Shapes and Points” on page 562.
• SmoothCam: This node differs from the others above in that it doesn’t track small
groups of pixels. Instead, it evaluates the entire frame, using motion analysis to
derive the movement of the camera. Once derived, this node has two modes. It can
smooth the shot, eliminating unwanted jitter while maintaining the general motion
of the camera. It can also lock the shot, stabilizing a subject within the frame that’s
isolated with a mask. This node can affect translation, rotation, zoom, and
perspective, making it more flexible for certain operations than the other tracking
nodes. For more information on using the SmoothCam node, see “The SmoothCam
Node” on page 754.
How a Tracker Works
A tracker works by analyzing an area of pixels over a range of frames in order to “lock
onto” a pattern as it moves across the screen. You specify the “snapshot” of pixels in
one or more reference frames, then Shake proceeds to “track” that snapshot for a
specified duration of time. In Shake, that snapshot is known as a reference pattern, and
its area is defined by the inner box of the onscreen tracker control.
Ideally, the reference pattern should be some easily identifiable detail with high
contrast—this makes it easier to track.
The tracker advances to each subsequent frame, sampling the area inside the search
region, which is represented by the outer box of the onscreen tracker control. The
tracker positions a box the same size as the reference pattern at the pixel in the first
row, at the first column of the search region, and takes a sample. The tracker then
advances to the next pixel (or subpixel) column in the search region and takes a
second sample. For every sample the tracker takes, it assigns a correlation value by
comparing the current sample to the previously designated reference pattern. When all
of the samples have been taken, Shake assigns the new tracking point to the sample
with the highest correlation value. This process is then repeated, every frame, until the
end of the track range has been reached.
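The sampling loop described above is essentially brute-force template matching. A minimal whole-pixel sketch (Shake also steps at subpixel increments and weighs different match spaces; the correlation measure below, based on mean absolute difference, is an assumption):

```python
def best_match(frame, pattern, search_origin, search_size):
    """Slide a reference pattern over a search region and return the offset
    with the highest correlation.  Whole-pixel steps only; the correlation
    here (1 - mean absolute difference) is a simplified, assumed measure,
    where 1.0 is an exact match."""
    ph, pw = len(pattern), len(pattern[0])
    sy, sx = search_origin
    best = (-1.0, None)
    for y in range(sy, sy + search_size[0] - ph + 1):
        for x in range(sx, sx + search_size[1] - pw + 1):
            diff = sum(abs(frame[y + j][x + i] - pattern[j][i])
                       for j in range(ph) for i in range(pw))
            score = 1.0 - diff / (ph * pw)
            if score > best[0]:
                best = (score, (y, x))
    return best   # (correlation, (row, col)) of the new track point
```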
Using referenceBehavior
The referenceBehavior parameter controls if and when the reference pattern is ever
updated. By default, the reference pattern is set to use the start frame (the first frame at
which you start tracking) throughout the entire track, so even the samples within the
last frame of the track are compared to the very first frame. You can change this
behavior to periodically update the reference pattern when various criteria are met,
which can help you to track subjects that change shape due to perspective or motion.
Setting subPixelResolution
The number of samples taken in the search region is determined by the
subPixelResolution parameter. A subPixelResolution of 1 positions the reference pattern
box at every pixel to find a sample. This is not very accurate, because most movement
occurs at the subpixel level—the movement is more subtle than one pixel across, and
therefore is factored in with other objects in the calculation of that pixel’s color. The
next resolution down in subPixelResolution, 1/4, advances the reference pattern box in
.25 pixel increments, and is more accurate.
The example here is a theoretical 1/2 resolution since the pattern is advanced in .5 pixel
increments. Keep in mind that the lower the number, the more samples taken. At 1/4
resolution, it takes 16 times more samples per pixel than at a resolution of 1. At 1/64, it
takes 4096 times more samples per pixel.
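Because the tracker steps in both X and Y, the sample count grows with the square of the inverse step size; a quick check of the figures above:

```python
def samples_per_pixel(sub_pixel_resolution):
    """Samples the tracker takes per pixel of search area when stepping
    every `sub_pixel_resolution` pixels in both X and Y."""
    steps = round(1 / sub_pixel_resolution)
    return steps * steps

assert samples_per_pixel(1) == 1        # whole-pixel steps
assert samples_per_pixel(1/4) == 16     # 16x more than at 1
assert samples_per_pixel(1/64) == 4096  # 4096x more than at 1
```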
For this reason, most trackers don’t handle significant rotational movement very well—
they (Shake’s included) only test for panning changes, not rotational. If they did, they
would have to multiply the amount of panning samples by the amount of degrees for
the number of samples to take, which would be prohibitively costly at this stage. If you
are tracking an object with rotational movement, try using a referenceBehavior set to
update every frame. This means that the reference pattern is updated at every frame,
so you are only comparing a frame with the frame before it, and not the first frame.
Also keep in mind that manual adjustments are a standard solution for many tracking
problems.
This section discusses tracking in depth, including interface features, workflow issues,
and tips on successful tracking and manipulation of tracking data. For specific formats
of each node, see the functions listing at the end of this chapter. For a tutorial on how
to use the Tracker, see Tutorial 7, “Tracking and Stabilization,” in the Shake 4 Tutorials.
Image Tracking Workflow
The following is a general overview of the steps required to generate a track. The steps
are further detailed in subsequent sections.
To generate a track:
1 Apply a motion tracking node to an image.
2 Double-click the tracking node to load its image into the Viewer and its parameters
into the Parameters tab.
Note: If you don’t load the motion tracking node into the Viewer, the track will not be
performed.
3 Play your background clip several times to determine a good tracking point.
4 Make sure that the onscreen controls are visible in the Viewer.
5 Go to the frame that you want to start the track.
6 Position the tracker on the point you want to track, then adjust the reference pattern
and the search region boxes used to identify the desired tracking point.
7 Ensure the trackRange parameter reflects the frame range you want to track.
For FileIn nodes, the default is the range of the clip. For Shake-generated elements, you
must provide a frame range, for example, 1-50.
8 Click the Track Forward or Track Backward button (in the Viewer shelf) to begin
processing.
The track generates animation curves.
9 To stop a track, click the mouse button.
Stabilize Additions
If you are using the Stabilize node, include the following additional steps with the
general steps above.
1 Determine if you need one-, two-, or four-point tracking.
Note: For two- or four-point tracking, select “2 pt” or “4 pt” in the trackType parameters.
2 In the Stabilize node parameters, set applyTransform to active in order to stabilize the
plate.
3 If you are using the Stabilize node for matchmoving, toggle the inverseTransform
parameter from stabilize to match.
MatchMove Additions
If you are using the MatchMove node, include the following additional steps with the
general steps above.
1 Attach the foreground element to the first input of the MatchMove node.
2 Determine if you need one-, two-, or four-point tracking.
3 In the MatchMove parameters, set the outputType to Over (or another compositing
operation).
4 Set the applyTransform parameter to active (to match the motion).
Note: If you are four-point tracking (make sure “4 pt” is enabled in the trackType
parameter), and you want to attach four arbitrary points on the foreground image to
the background, click the BG/FG button in the Viewer shelf to show the foreground.
Next, position the four corners on the foreground element. These represent the corners
that are plugged into the four tracking points. Click the BG/FG button again to show
the background.
5 You may have to set the outputType parameter to Over again (or whichever
compositing mode you selected in step 3).
Adjusting the Onscreen Tracker Controls
The Tracker, Stabilize, and MatchMove nodes share common onscreen interface controls
in the Viewer.
A tracker consists of three onscreen controls:
• Search region: The outer box
• Reference pattern: The inner box
• Track point: The center crosshairs
To move the tracker:
Click a blank area inside of the search region or the track point, then drag.
To resize the tracking region or the reference pattern:
Drag a corner and the boxes uniformly scale in the X or Y axes.
The larger the search region, the slower the track.
To scale the search region non-uniformly:
Drag an edge of the search region.
This is good for “leading” the track point. For example, a bus in a clip moves to the
right. Scale the tracker search region to the right as well, since it doesn’t make sense to
scan to the left of the current track pattern for a pattern match. (This is demonstrated
in Tutorial 7, “Tracking and Stabilization,” in the Shake 4 Tutorials.)
Note: For four-point MatchMove and Stabilize operations, the trackers should be
positioned in a counterclockwise order, starting in the lower-left corner. This ensures
the proper alignment of your element when the transformation is applied.
If the reference pattern you are tracking goes offscreen, or becomes obscured by
another object, Shake’s default behavior is to stop the tracking analysis. You can use the
Offset Track button to reposition the reference pattern in the Viewer. When you restart
the tracking process, Shake will continue its motion analysis based on the new
reference pattern, but will update (and keyframe) the original tracking point.
To offset (move) the onscreen tracker control to an unobstructed area of the
image:
1 Click the Offset Tracker button in the Viewer shelf.
2 Drag the onscreen tracker control (but not the tracking point crosshairs) to a better
position in the Viewer.
3 Click the Track Forward or Track Backward button in the Viewer shelf to restart the
motion analysis.
Shake continues to keyframe the movement of the original tracking point, based on
the movement of the new, offset reference pattern.
Note: When you use the Offset Track function, be sure that the new reference pattern is
in the same perspective plane as the originally tracked feature. If the two features are
not approximately the same distance from the camera, the parallax effect will result in
inaccurate motion tracking.
4 To reset the search area back to the original tracking point, click the Reset Track button.
You can turn off a tracker using the controls in the Parameters tab.
To turn off a specific tracker:
Click the Visibility button located next to the track name in the Parameters tab.
All visible trackers are processed when you click the track button, regardless of whether
they have previously tracked keys. When Visibility is disabled for a tracker, that track is
not processed.
A limitation of the MatchMove node is that it does not handle the pass-through of
upstream onscreen controls, so those controls are in their untransformed state.
Viewer Shelf Controls
The following table describes the Viewer shelf buttons that become available with the
tracking nodes.
Button Description
Track Backward/
Track Forward
Click to start the tracking process. All visible
trackers attempt to lay down new tracking
keyframes. Tracking continues until one of the
trackRange frame limits is reached—the upper
limit if you are tracking forward or the lower limit if
you are tracking backward, or until your correlation
falls below your failureTolerance setting.
Offset Track–Off The track search region and the tracking point are
linked. If you move one, the other follows.
Offset Track–On The track search region and the tracking point are
offset from each other. If your original sample
pattern becomes obscured, offset the search
region to a different area. Your keyframes are still
saved relative to the original spot.
Reset Track Returns an offset track search region back to the
track point.
Next to each track name is a Color Picker and a Visibility button.
To change the color of a tracker, click the color swatch.
Tracking Parameters
All trackers share the following parameters:
Track Display Click to toggle between track display options:
• The left button displays all trackers, curves, and
keyframes.
• The middle button displays just the trackers and
keyframes.
• The right button displays just the trackers.
You can also control visibility of individual trackers
as described below.
FG/BG These buttons appear on the Viewer shelf when
the MatchMove node is active. Click FG/BG to
toggle between adjusting the track points on the
background (BG), or the corners of the foreground
(FG) that are pinned to the track points (when “4
pt” [four-point tracking] is enabled in the trackType
parameter).
Parameter Description
trackRange The trackRange parameter is the potential frame range limit of your track. By
default, the range is set to the clip range. For generated elements such as
RGrad, the default takes a range of 1. You can set new limits using Shake’s
standard range description, for example, 10-30x2. If you stop tracking and start
again, processing starts from the current frame and proceeds to the lower or
upper limit of your trackRange (depending on whether you are tracking
forward or backward).
subPixelResolution The subPixelResolution parameter determines the resolution of your track. The
smaller the number, the more precise the track.
Possible values:
1 Area is sampled at every pixel. Not very accurate or
smooth, but very fast.
1/4 Area is sampled at every .25 pixels (16 times more than
with a sampling of 1).
1/16 Area is sampled at every .0625 pixels (256 times more
than with a sampling of 1).
1/32 Area is sampled at every .03125 pixels (1024 times more
than with a sampling of 1).
1/64 Area is sampled at every .015625 pixels (4096 times more
than with a sampling of 1).
matchSpace The pixels are matched according to the correlation between the selected
color space—luminance, hue, or saturation. When an image has roughly the
same luminance, but contrasting hues, you should switch to hue-based
tracking.
You can also adjust the weight of the color channels in the matchSpace
subtree.
referenceTolerance A tracking correlation of 1 is a perfect score—there is an exact match between
the original reference frame and the sampled area. When the
referenceTolerance is lowered, you accept greater inaccuracy in your track. If
tracked keyframes are between the referenceTolerance and the
failureTolerance, they are highlighted in the Viewer. Also, in some cases,
referenceBehavior is triggered if the tracking correlation is below the
referenceTolerance.
referenceBehavior This behavior dictates the tracking area reference sample. By default, the
reference pattern is taken from the first frame at which the track is started, not
necessarily the first frame of the trackRange. The last two behaviors in the referenceBehavior
list measure the tracking correlation and match it to the referenceTolerance to
decide an action.
use start frame The new samples are compared to
the reference pattern from the first
frame of the track. If you stop tracking
midway, and start again at a later
frame, the later frame is used as the
reference sample.
update every frame The source sample is updated from
the previous frame. This usually
creates an inherent drift in the track,
as tiny errors accumulate. This
method is for movements that have
drastic changes in perspective and
scale.
update from keyframes If you are using a failureBehavior of
“predict location and don’t create
keys” or “don’t predict location,” a
keyframe is not necessarily saved
every frame. In this case, you may
only want to update from the last
frame with a valid keyframe.
update if above reference tolerance This updates the reference sample
from the previous frame if the
correlation is above the
referenceTolerance. The intent is to
update every frame unless you know
the point is obscured. If you use a
predict mode and know there are
obstructions, it keeps the reference
area from updating if the point is
completely obscured.
update if below reference tolerance This updates the reference sample
from the previous frame if the
correlation is below the
referenceTolerance. This basically says,
“If I can’t get a good match, then
resample.” This is excellent for gradual
perspective and scale shifts in the
tracking area.
failureTolerance If the correlation of a track falls below this value, it initiates the failureBehavior.
failureBehavior What occurs when the correlation drops below the failureTolerance:
stop The tracker stops if the correlation is
less than the failureTolerance. You can
also press Esc to manually stop
tracking.
predict location and create key If a failure is detected, then the
tracker predicts the location of the
keyframe based on a vector of the last
two keyframes, and continues
tracking in the new area.
predict location and don’t create key Same as above, but it merely predicts
the new search area and does not
create new keyframes until a high
correlation is obtained. This is
excellent for tracked objects that pass
behind foreground objects.
don’t predict location In this case, the tracker merely sits in
the same spot looking for new
samples. New keyframes are not
created.
use existing key to predict location This allows you to manually create
keyframes along your track path. You
then return to the start frame and
start tracking. The search pattern
starts looking where the preexisting
motion path is.
limitProcessing Creates a Domain of Definition (DOD) around the bounding boxes of all active
trackers. Only that portion of the image is loaded from disk when tracking, so
the track is faster. This has no effect on the final output image.
preProcess Toggles on the preprocessing for the tracking area. This applies a slight blur to
reduce fluctuations due to grain. To control the blur amount, open the
preProcess subtree.
blurAmount The amount of blur applied when preprocessing.
trackNName The name of the track. To change the name, click in the text field and enter the
new name.
trackNX/Y The actual track point in X and Y. Use this to link a parameter to a track point.
trackNCorrelation The correlation value of that key to the original sample. A score of 1 is a
perfect score; 0 is a completely unusable score.
trackNWindow Parameters These multiple parameters control the windowing of the tracking box,
and are not relevant to exported values.
Tracking Shortcut Menu
Right-click in the text field of a trackName to open a shortcut menu with options for
manipulating your tracks.
Menu Item Description
Copy/Paste The standard copy and paste commands.
Load Track Displays a list of all currently existing tracks. Select one, and a copy
of that track is loaded into both the X and Y parameters of the
current tracker.
Link Track Displays a list of all currently existing tracks. A link is made from the
current tracker to the tracker you select in the Select Track pop-up
window. Therefore, if you modify the tracker selected with the Link
command, the current tracker is updated. If you delete the tracker
you linked to, you lose your tracking data.
Average Tracks Displays a list of all tracks. Select up to four tracks that you want to
average together, then click OK. An expression is entered into the
current trackX and Y fields that links back to your averaged tracks.
You cannot choose to average your current track—it is deleted by
this action.
Smooth Track Displays a slider to execute a blur function on your tracking curves;
the smooth value is the number of keyframes considered in the
smoothing operation. To view the curves, open the trackName
subtree and click the Load Curve button to load the parameters
into the Curve Editor. If you use Smoothing and Averaging
operations as your standard workflow, it is recommended that you
generate your tracks first with the Tracker node, and then link the
tracks to a MatchMove or a Stabilize node.
Load Track File Loads a Shake-formatted track file from disk. See the end of the
tracking overview for the format of a track file.
Save Track File Saves a Shake-formatted track file to disk.
Clear Track Resets the current tracker. To reset the entire function, right-click an
empty area of the Parameters tab, then choose Reset All Values
from the shortcut menu.
Strategies for Better Tracking
Unfortunately, tracking is rarely a magic bullet that works perfectly on the first attempt.
This section discusses some strategies to help you get accurate tracks in various
situations.
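Shake reports tracking quality as a correlation score, where 1 is a perfect match and 0 is unusable. Shake's exact metric is not documented here, but normalized cross-correlation is a standard patch-matching score and serves as a mental model (a sketch, not Shake's implementation):

```python
# Sketch: score a candidate patch against a reference patch using
# normalized cross-correlation. A standard matching metric, used here
# only as a mental model -- not Shake's actual algorithm.

def correlation(reference, candidate):
    """Return a score in [-1, 1]; 1.0 means a perfect match."""
    n = len(reference)
    mean_r = sum(reference) / n
    mean_c = sum(candidate) / n
    num = sum((r - mean_r) * (c - mean_c)
              for r, c in zip(reference, candidate))
    den_r = sum((r - mean_r) ** 2 for r in reference) ** 0.5
    den_c = sum((c - mean_c) ** 2 for c in candidate) ** 0.5
    if den_r == 0 or den_c == 0:
        return 0.0  # flat patch: no contrast, nothing to match
    return num / (den_r * den_c)

ref = [0.1, 0.9, 0.2, 0.8]    # high-contrast reference pattern
same = [0.1, 0.9, 0.2, 0.8]   # identical patch -> score 1.0
flat = [0.5, 0.5, 0.5, 0.5]   # flat patch -> score 0.0
```

Note that a flat, low-contrast patch scores 0 against anything, which is one reason the strategies below emphasize high-contrast reference patterns.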
Picking a Good Reference Pattern
The ideal reference pattern is one that doesn’t change perspective, scale, or rotation,
and does not move offscreen or become obscured by other objects. The ideal pattern
also maintains overall brightness or color, is very high contrast, and is distinct from
other patterns in the same neighborhood. Meanwhile, in the real world, we have to
contend with all of these factors in our footage.
In the following example, two possible reference pattern candidates include the corner
at area A, or anywhere along the line near B. Area A is the better choice, since B can be
matched anywhere along the horizontal black line, and probably on the dark line that
is below B. One rule is to avoid similar horizontal or vertical patterns when selecting a
candidate pattern. If you can easily find similar patterns, so can the tracker.
Another potential candidate is a word in the sign. However, as the clip advances, the
text becomes an indecipherable blur. The A and B points also move closer together—
the clip has significant scaling on the X axis and some scaling in Y due to the
perspective shift. Although the overall brightness has dropped, contrast remains
relatively high at the A point. The A point is a good candidate for tracking with
referenceBehavior set to “update if below reference tolerance” or “update every frame.”
The reference sample should also be relatively constant and unique over time.
Flickering lights, for example, are not good reference patterns. If the lights were regular
enough, you could try to set the trackRange to match the flicker, that is, 1-5, 10-15, 20-
25, and so on. Granted, this is awkward. A better solution is to set your failureBehavior
to “predict location and don’t create key.”
The following example shows a track marker placed on a TV screen so the client could
place an image on the TV. The default tracker reference pattern is unnecessarily large.
The only interesting detail is the black cross in the middle. Otherwise, most of the
pattern matches up with most of the rest of the image—green. To adjust for this, limit
the reference pattern to more closely match the black crosshairs.
Picking a Good Search Region
You should also suit your search region to match the clip’s movement and the patterns
near the reference pattern. With the default settings, the bus example clip does not
track the lower-left corner of the sign very well. This is because the middle black
horizontal line can easily be matched with the black line at the very base of the search
region in later frames of the clip. The X axis is squeezed so much that vertical details
disappear. Additionally, since the bus is moving to the right, there is no point in
wasting cycles to the left of the point. Remember, the larger the search region, the
more samples the tracker must take.
The corrected search region, illustrated below, is now high enough to not include the
lower black line, and extends minimally to the left.
Adjusted tracker reference pattern
Default tracker reference pattern
Manually Coax Your Track
Another technique you can use is to manually insert tracking keyframes. For example, if
you have 100 frames to track, you can put in a keyframe every 5 or 10 frames with the
Autokey feature. A helpful trick is to set an increment of 5 or 10 in the Time Bar. Press
the Left Arrow or Right Arrow to jump by the increment amount.
Note: To add a keyframe without moving an onscreen control—for example, to create
a keyframe at frame 20 with the same value as a keyframe at frame 19—turn Autokey
off and then back on.
Once your keyframes are manually entered, return to frame 1, and set the
failureBehavior to “use existing key to predict location.” The tracker searches along the
tracker’s preexisting motion path to find matching patterns.
Identify the Color Channel With the Highest Contrast
The tracker works best with a high-contrast reference pattern. The human eye sees
contrast as represented by value. However, you may sometimes have higher contrast in
saturation or hue, so switch to a different color space with the matchSpace parameter
in the tolerances subtree. A shot may also have a higher contrast in a specific RGB
channel than in the other channels. For example, the blue channel may have a larger
range than the red or green channel. In such a case, apply a Color–Reorder node to the
image, set the channels parameter to bbb, and then perform the track with luminance
selected as the matchSpace.
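The channel comparison described above can be automated: compute a simple contrast measure per channel and pick the largest. A sketch with invented pixel values (the helper name is hypothetical):

```python
# Sketch: find the RGB channel with the highest contrast, measured
# here as (max - min) per channel. Pixel values are illustrative.

def highest_contrast_channel(pixels):
    """pixels: list of (r, g, b) tuples; returns 'r', 'g', or 'b'."""
    ranges = {}
    for i, name in enumerate("rgb"):
        values = [p[i] for p in pixels]
        ranges[name] = max(values) - min(values)
    return max(ranges, key=ranges.get)

# Blue spans 0.05..0.95 while red and green barely vary, so this
# footage is a candidate for the bbb Reorder trick described above.
pixels = [(0.40, 0.50, 0.05), (0.45, 0.52, 0.60), (0.42, 0.51, 0.95)]
```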
Delog Logarithmic Cineon Files Prior to Tracking
Because the logarithmic to linear conversion increases contrast, you may have better
results on linear data than on logarithmic data.
Avoid Reducing Image Quality
Ideally, you should track an image with the most amount of raw data. This means if you
apply a Brightness node set to .5 to your image, you lose half of the color information
that could have been used for tracking. It is far better to track the image before the
Brightness function is applied.
Do Not Track Proxies
Don’t ever track an image with proxy settings enabled. Proxies are bad for two
reasons: First, you filter the image so detail is lost. Second, you automatically throw
data away because of data round-off. For example, if using 1/4 proxy, you automatically
throw away four pixels of data in any direction, which means an 8 x 8 grid of potential
inaccuracy.
Increasing Contrast and Preprocessing the Image
It is often helpful to apply a Monochrome node to an image, and drop the blue channel
out if you have particularly grainy footage. Another good strategy is to activate the
preProcess flag in the tracker. This applies a small blur to the footage to reduce
irregularities due to video or film grain.
In some cases, you may want to modify your images to increase the contrast in the
reference pattern, either with a ContrastLum node or ContrastRGB node. Since you’re
only using this image to generate tracks, you can disable or remove these nodes after
you’ve finished the track.
Tracking Images With Perspective, Scale, or Rotational Shifts
For images with significant change in size and angle, you can try two different
referenceBehaviors, “update if below reference tolerance” or “update every frame.” The
second choice is the more drastic, because you get an inherent accumulation of tiny
errors when you update every frame. Therefore, try “update if below reference
tolerance” first.
Another strategy is to jump to the midpoint frame of the clip and track forward to the
end frame of the clip. Then return to the midpoint frame and track backward to the
beginning of the clip.
A second strategy is to apply two Stabilize nodes. The first Stabilize node can be
considered a rough stabilize. The second Stabilize then works off of the first, and
therefore has a much better chance of finding acceptable patterns. Since the Stabilize
nodes concatenate, no quality is lost.
Tracking Obscured or Off-Frame Points
There are two basic techniques to correct track points that are obscured by moving off
screen or an object passing in front of them.
The first strategy is to use a different failureBehavior, either “predict location and create
key” or “predict location and don’t create key.” The first setting is good for predictable,
linear motion behavior—it continues to lay down keyframes following the vector of the
last two valid keyframes, that is, the two frames before the pattern became obscured. It is
excellent for points that go off screen and never reappear. The second setting is a
better choice for patterns that reappear because it continues to search on a vector, but
only creates a keyframe if it finds another acceptable pattern. You have an even
interpolation between the frame before the pattern was obscured, and the frame after
it is revealed again. These strategies only work when the clip contains nice linear
movement.
The second strategy is to use the Offset Tracker button (in the Viewer shelf). When the
reference pattern becomes obscured, turning on the Offset Tracker button lets you
move the tracking control, picking a new reference pattern and search region in a
different area from the original reference pattern. The offset between the original
reference pattern and the new one is calculated in order to maintain continuity in the
resulting track path.
In the following example, the track is obscured by a lamp post, so the search region
(not the point, just the region) is moved to a nearby pattern and tracking continues
until the original pattern reappears. Even though one region is examined, the points
are saved in another region. The second tracking pattern should travel in the same
direction as your original pattern.
Modifying the Results of a Track
A track can be modified in several ways to massage the data from a less-than-perfect
track. You can manually modify a track in the Viewer or in the Curve Editor, average
tracks together, smooth tracks to remove noise, or remove jitter to smooth out a
camera movement.
Manually Modifying Tracks
To manually adjust a tracking point onscreen, turn on the Autokey button in the
Viewer shelf.
Note: To add a keyframe without moving an onscreen control, for example, to create a
keyframe at frame 20 with the same value as a keyframe at frame 19, turn Autokey off
and then back on.
Use the + and – keys (next to the Delete or Backspace key) to zoom in and out of the
clip. The zoom follows the pointer, so place the pointer on the key point in the Viewer
and zoom in. Press the Home key (near the Page Up and Page Down keys on your
keyboard) or click the Home button in the Viewer shelf to return to normal view.
You can also adjust a tracking curve in the Curve Editor. In the tracking node
parameters, open the trackName subtree and click the Load Curve button to load a
parameter into the Curve Editor. In the following example, track1X (only the X
parameter) is loaded into the Editor.
Averaging Tracks
A common technique is to track forward from the first frame to the last, create a
second track, and track backward from the last frame to the first. These two tracks are
then averaged together to (hopefully) derive a more accurate track. If you plan to use
this method, it is recommended that you use the Tracker node, and then load your
tracks into Stabilize or MatchMove with the Load or Link Track functions (in the
trackName text field shortcut menu).
To average tracks:
1 Apply a Tracker node, and in the bottom of the Tracker parameters, click Add.
2 Click Add again.
There are a total of three trackers.
Note: You could also potentially use tracks from any other Stabilize, MatchMove, or
Tracker node.
3 Create tracks on track1 and track2.
4 Right-click track3, then choose Average Tracks from the shortcut menu.
5 Select Tracker1.track1 and Tracker1.track2 as the first two inputs, respectively, and leave
the last two inputs set to none. The following illustration shows Stabilize1 to remind
you that any tracking node can be a track source.
6 Click OK.
The third track, track3, is in the middle of the first two tracks.
This works by creating an expression in both the track3X and track3Y parameters. The
expression for the X parameter looks like this:
(Tracker1.track2X+Tracker1.track1X)/2
Because these are linked to track1 and track2 on the Tracker1 node, do not delete them.
For more information on linking, see “Linking to Tracking Data” on page 737.
You can average up to four tracks at one time, but you can of course continue to
manipulate your tracks with further functions, including Average Tracks.
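The expression shown above, (Tracker1.track2X+Tracker1.track1X)/2, is evaluated per frame; the same computation is simple to sketch outside Shake (track values here are invented):

```python
# Sketch: average a forward track and a backward track per frame,
# mirroring the (track1X + track2X) / 2 expression that Average
# Tracks writes into the destination tracker. Values are invented.

forward_x = [462.0, 405.0, 350.0]   # tracked first frame -> last
backward_x = [460.0, 407.0, 348.0]  # tracked last frame -> first

averaged_x = [(a + b) / 2 for a, b in zip(forward_x, backward_x)]
```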
Smoothing Track Curves
You can smooth a track with the Smooth Tracks function in the Tracker parameters.
Prior to smoothing the curve, you may want to copy the track (as a backup) to another
tracker with the Load Track function on the second tracker.
To smooth a track curve:
1 Right-click the track you want to smooth, then choose Smooth Tracks from the shortcut
menu.
The Smooth Track window appears.
2 Enter a value (or use the slider) in the smoothValue field.
The default is 5, which means that 5 track points centered on the currently evaluated
point are used to compute the current point’s new, smoothed value. This is a standard
Gaussian (bell-curve type) filter. In other words, if you leave it at 5, when the value of
frame 12 is computed, frames 10, 11, 12, 13, and 14 are considered. If set to 3, it uses
frames 11, 12, and 13.
The larger the smoothValue, the more points are considered (and thus more
calculations done) for every point in the curve. Even values for smoothValue use the
next largest odd number of frames, but the end ones do not contribute as much.
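The windowing described above can be sketched as a weighted average over the smoothValue frames centered on each keyframe. The manual only says a Gaussian-type filter is used, so the bell-shaped weights below are an assumption:

```python
# Sketch of Smooth Track: each keyframe is replaced by a weighted
# average of the smoothValue frames centered on it (5 by default:
# frames n-2..n+2). The bell-shaped weights are an assumption about
# the exact kernel.

def smooth(curve, smooth_value=5):
    half = smooth_value // 2
    weights = ([1.0, 2.0, 4.0, 2.0, 1.0] if smooth_value == 5
               else [1.0] * smooth_value)
    out = []
    for i in range(len(curve)):
        total = weight_sum = 0.0
        for k in range(-half, half + 1):
            j = i + k
            if 0 <= j < len(curve):  # near the ends, fewer frames contribute
                w = weights[k + half]
                total += curve[j] * w
                weight_sum += w
        out.append(total / weight_sum)
    return out

noisy = [10.0, 14.0, 9.0, 15.0, 10.0, 14.0]  # jittery track values
smoothed = smooth(noisy)
```

The smoothed curve spans a much narrower range than the noisy input while following its overall shape.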
As an example, the following is a noisy track curve (before smoothing):
After the track curve is smoothed:
Linking to Tracking Data
Referencing track point data works similarly to referencing any other parameter within
Shake. The twist here is that since you can rename the track point, you can change the
name of the referred parameter. For example, if you have a Tracker node named
Tracker1, and a track point set to its default name “track1”:
• To reference the X track data, use: Tracker1.track1X.
• If you change the name of the track point to “lowerleft,” the reference changes to
Tracker1.lowerleftX. This applies to the Y data as well.
• You can also use Link Track (in the trackName text field shortcut menu) to link one track
to another track, simultaneously linking the X and Y curves.
Removing Jitter on a Camera Move
The following technique is useful when the clip contains a camera move that you want
to preserve, but which has a lot of jitter. You need to stabilize the shot, but only by the
small amount that is the actual jitter. To do this, you can combine the techniques
mentioned above.
Note: The SmoothCam node was specifically developed to smooth out or lock irregular
camera movement within a shot in a much simpler way. For more information, see “The
SmoothCam Node” on page 754.
To remove jitter and preserve the camera move:
1 Track the plate with a Tracker node (for this example, called Tracker1).
2 In the Tracker1 parameters, click Add to create a second tracker in the same node.
3 Load track1 into track2 using Load Track (right-click in the track2 text field).
4 Right-click track2, then choose Smooth Track from the shortcut menu.
5 Enter a smooth value, click Apply, then click Done.
At this point, you have a track and a smoothed version of that track. The following
example shows the Y curves of the two tracks.
6 Create a Stabilize node.
7 In the Stabilize node, expand track1, then enter the following expression in the track1X
and track1Y parameters:
• In track1X, enter: Tracker1.track1X - Tracker1.track2X
• In track1Y, enter: Tracker1.track1Y - Tracker1.track2Y
Thus, you get only the difference between the two curves—the jitter.
Note: This illustration is scaled differently in Y than the above illustrations.
8 In the Stabilize node parameters, set applyTransform to active.
The plate is only panned by the amount of the jitter, and maintains the overall
camera move.
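The expressions in step 7 subtract the smoothed track from the raw track, leaving only the high-frequency jitter; a numeric sketch (values invented):

```python
# Sketch of the jitter-removal expressions: raw track minus smoothed
# track isolates the jitter, which Stabilize then pans out while the
# smoothed camera move is preserved. Values are invented.

raw_y = [100.0, 104.0, 99.0, 106.0]        # Tracker1.track1Y: move + jitter
smoothed_y = [100.0, 102.0, 102.5, 104.0]  # Tracker1.track2Y after smoothing

jitter_y = [r - s for r, s in zip(raw_y, smoothed_y)]  # Stabilize track1Y
```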
Working With Two-Point Tracking
There are several additional options available when working with two-point tracking.
You can choose to pan, scale, and/or rotate the image. When setting the applyScale
and applyRotate parameters, you have three choices: “none,” “live,” and “baked.”
In the applyScale and applyRotate parameters, enable “live” to use the mathematical
calculation of the four curves (track1X, track1Y, track2X, and track2Y)—live mode takes
the track1 and track2 expressions and creates scale or rotational curves you can view in
the Curve Editor.
To convert curves into editable data (generate keyframes), click “baked” in the
applyScale and applyRotate parameters.
Note: Two-point (or four-point) tracking is only available in the MatchMove and
Stabilize nodes (not the Tracker node).
Saving Tracks
Once a track is complete, you can save the track (in the Tracker, MatchMove, or Stabilize
node) for use by another tracking function, a paint stroke, or to a RotoShape node.
Note: Although you cannot attach a saved track file to a paint stroke or RotoShape
node, you can add a tracking node to your script, load the saved track file in the
tracking node, then apply the track to your shape or paint stroke. For more information
on attaching trackers to strokes and shapes, see “Attaching a Tracker to a Paint Stroke”
on page 586 and “Attaching Trackers to Shapes and Points” on page 562.
To save a track file:
1 Create your track using the Tracker, MatchMove, or Stabilize node.
2 In the tracker Parameters tab, right-click the trackName field, then choose Save Track
File from the shortcut menu.
3 In the “Save tracking data to file” window, navigate to the directory in which you want
to save the track file, then enter the track file name.
4 Click OK.
The track file is saved.
To load a track file:
1 Add a Tracker, MatchMove, or Stabilize node.
2 In the tracker Parameters tab, right-click the trackName field, then choose Load Track
File from the shortcut menu.
3 In the “Load tracking data from file” window, navigate to the track you want to load.
4 Click OK.
The track is applied to the tracker.
Tracking File Format
The following is a sample saved track file for use with the Save Track File or Load Track
File command (right-click in the track name text field to access the shortcut menu).
TrackName track1
Frame X Y Correlation
1.00 462.000 210.000 1.000
2.00 405.000 192.000 1.000
etc...
Note: The Save Track File or Load Track File command is different from the Load
Expression command. The Load Expression command is available in the shortcut menu
of any node’s value field and looks for formatted Shake expressions. For example, to
load the above information into a Move2D node, you must load two files, one for the
xPan parameter and one for the yPan parameter. Their formats are similar to the
following:
Linear(0,462@1, 405@2, ...)
and
Linear(0,210@1, 192@2, ...)
Tracking Nodes
The following section includes the tracking functions located in the Transform Tool tab.
For information on other Transform functions, see Chapter 26, “Transformations, Motion
Blur, and AutoAlign,” on page 763.
MatchMove
The MatchMove node is a dedicated tracking node to match a foreground element to a
background element using one-point (panning), two-point (panning, scaling, or
rotation), or four-point (corner pinning) tracking. Unlike the Tracker or Stabilize node,
MatchMove can perform the compositing operation, or you can pass the transformed
foreground out for further modifications (blur, color corrections, and so on) before you
do a composite.
The MatchMove node can generate up to four tracking points, or you can load in other
tracks (created in Shake or on disk). To load another track, right-click in a trackName
text field, then choose Load Track from the shortcut menu.
Parameters
This node displays the following controls in the Parameters tab:
applyTransform
The foreground element is only transformed if applyTransform is active.
trackType
You can do one-point, two-point, or four-point matchmoves. Different options appear
with the different types.
• 1 pt: Pans the foreground element to match the tracking point. You can optionally
turn off the X or Y movement.
• 2 pt: Pops up two additional parameters, with the option of matching scaling and
rotation of the background element (with the matchScale and matchRotate
parameters).
• 4 pt: Performs corner-pin matchmoving on the foreground element. The pan, scale,
and angle parameters disappear. Toggle to FG mode in the Viewer to adjust the 4
points that are pinned to the background. These points appear as sourceNX/YPosition.
applyX
Turning off the applyX parameter prevents the foreground element from following the
horizontal movement of the tracked subject.
applyY
Turning off the applyY parameter prevents the foreground element from following the
vertical movement of the tracked subject.
transform parameters
This parameter contains the following subparameters:
• xFilter, yFilter: The transformation filter used. Different filters can be used for
horizontal and vertical transformations. For more information on the different kinds
of filters that are available and how they work, see “Filters Within Transform Nodes”
on page 862.
• motionBlur: Enables the motion blur for the foreground element. A value of 0 is no
blur; 1 is the high setting. A mid-value is a trade-off between speed and quality. This
value is multiplied by the motionBlur parameter in the Globals tab.
• shutterTiming: A subparameter of motionBlur. Shutter length. 0 is no blur, whereas 1
represents a whole frame of blur. Note that standard camera blur is 180 degrees, or a
value of .5. This value is multiplied by the shutterTiming parameter in the Globals tab.
• shutterOffset: A subparameter of motionBlur. The offset from the current frame that
the blur is calculated. Default is 0; previous frames are less than 0. The Global
parameter shutterOffset is added to this.
• aspectRatio: This parameter inherits the current value of the defaultAspect global
parameter. If you’re working on a nonsquare pixel or anamorphic image, it is
important that this value is set correctly to avoid unwanted distortion in the image.
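As noted for shutterTiming, a value of 1 represents a whole frame of blur and a standard 180-degree camera shutter corresponds to .5, so converting a shutter angle to this parameter is a one-line calculation:

```python
# Sketch: convert a camera shutter angle (in degrees) to Shake's
# shutterTiming value, where 1.0 = a full frame of blur and a
# standard 180-degree shutter = 0.5.
def shutter_timing(shutter_angle_degrees):
    return shutter_angle_degrees / 360.0
```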
refFrame
The reference frame that is used to calculate the null state of the transformation. For
example, scale has a value of 1 and rotate has a value of 0 at the reference frame.
outputType
A pop-up menu that lets you choose the compositing operation used to combine the
foreground element you’re adding to the scene against the background that you’re
tracking. Each menu option follows the standard Shake operator of the same name. To
pass on a tracked foreground without compositing, select Foreground. You can also use
this when modifying the foreground corner points, as the FG/BG button on the Viewer
shelf switches this setting.
clipMode
Selects the output resolution of the node from the Background (1) or the
Foreground (0).
applyScale
Opening the applyScale parameter reveals the scale parameter. In two-point mode, the
toggle control next to the scale slider toggles scaling of the foreground on and off.
Under this parameter is a subparameter that returns the actual scaling and rotation
value used in the transformation. You have the option to view the live calculated curve,
or to bake the curve to create editable data (keyframes).
scale
A slider that determines the calculated scale for two-point matching. The scale at the
refFrame is equal to 1, and all other frames are in reference to that frame.
applyRotate
Opening the applyRotate parameter reveals the rotate parameter. In two-point mode,
the toggle control next to the rotate slider toggles rotation of the foreground on and off.
You have the option to view the live calculated curve, or to bake the curve to create
editable data (keyframes).
rotate
A slider that determines the calculated rotation for two-point matching. The angle at
the refFrame is equal to 0, and all other frames are calculated with reference to that
frame.
subPixelResolution
The resolution of your track. The smaller the number, the more precise and slower your
tracking. Possible values include:
• 1: Area is sampled at every pixel. Not very accurate or smooth, but very fast.
• 1/4: Area is sampled at every .25 pixels (16 times more than with a sampling of 1).
• 1/16: Area is sampled at every .0625 pixels (256 times more than with a sampling of 1).
• 1/32: Area is sampled at every .03125 pixels (1024 times more than with a sampling
of 1).
• 1/64: Area is sampled at every .015625 pixels (4096 times more than with a sampling
of 1).
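The sample-count multipliers listed above follow from the square of the inverse step size, which is why finer subPixelResolution settings are slower:

```python
# Sketch: the number of samples grows with the square of the inverse
# subpixel step, matching the multipliers in the list above.
def samples_relative_to_whole_pixel(step):
    """step: subpixel sampling step, e.g. 0.25 for the 1/4 setting."""
    return round((1.0 / step) ** 2)
```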
tolerances
The tolerances subtree contains subparameters that let you control this node’s
behaviors when the tracking quality goes down.
• matchSpace: The pixels are matched according to the correlation between the
selected color space—luminance, hue, or saturation. When an image has roughly the
same luminance, but contrasting hues, you should switch to hue-based tracking.
You can also adjust the weight of the color channels in the matchSpace subtree:
• redWeight, greenWeight, blueWeight: Three subparameters with sliders let you
weight how closely the tracking operation follows each color channel of the image
being tracked. In general, the color channels with the most contrast for the feature
you’re tracking should be weighed most heavily. Color channels with minimal
contrast for the feature you’re tracking should be de-emphasized.
• referenceTolerance: A tracking correlation of 1 is a perfect score—there is an exact
match between the original reference frame and the sampled area. When the
referenceTolerance is lowered, you accept greater inaccuracy in your track. If tracked
keyframes are between the referenceTolerance and the failureTolerance, they are
highlighted in the Viewer. Also, in some cases, referenceBehavior is triggered if the
tracking correlation is below the referenceTolerance.
• referenceBehavior: A pop-up menu that sets the tracking area reference sample. By
default, the reference pattern is the first frame from which the track is started, not
necessarily the first frame of the trackRange. The last two behaviors in the
referenceBehavior list measure the tracking correlation and match it to the
referenceTolerance to decide an action.
The referenceBehavior pop-up menu contains the following options:
• use start frame: The new samples are compared to the reference pattern from the
first frame of the track. If you stop tracking midway, and start again at a later frame,
the later frame is used as the reference sample.
• update every frame: The source sample is updated from the previous frame. This
usually creates an inherent drift in the track, as tiny errors accumulate. This method
is for movements that have drastic changes in perspective and scale.
• update from keyframes: If you are using a failureBehavior of “predict location and
don’t create keys” or “don’t predict location,” a keyframe is not necessarily saved
every frame. In this case, you may only want to update from the last frame with a
valid keyframe.
• update if above reference tolerance: This option updates the reference sample from
the previous frame if the correlation is above the referenceTolerance. The intent is
to update every frame unless you know the point is obscured. If you use a predict
mode and know there are obstructions, this option keeps the reference area from
updating if the point is completely obscured.
• update if below reference tolerance: This option updates the reference sample from
the previous frame if the correlation is below the referenceTolerance. This option
basically says, “If I can’t get a good match, then resample.” This approach is
excellent for gradual perspective and scale shifts in the tracking area.
• failureTolerance: If the correlation value of the tracker’s analysis falls below this value,
Shake initiates the failureBehavior.
• failureBehavior: A pop-up menu containing the following options:
• stop: The tracker stops if the correlation is less than the failureTolerance. You can
also press Esc to manually stop tracking.
• predict location and create key: If a failure is detected, then the tracker predicts the
location of the keyframe based on a vector of the last two keyframes, and
continues tracking in the new area.
• predict location and don’t create key: Same as above, but it merely predicts the new
search area and does not create new keyframes until a high correlation is obtained.
This is excellent for tracked objects that pass behind foreground objects.
• don’t predict location: In this case, the tracker merely sits in the same spot looking
for new samples. New keyframes are not created.
• use existing key to predict location: This allows you to manually create keyframes
along your track path. You then return to the start frame and start tracking. The
search pattern starts looking where the preexisting motion path is.
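The two predict modes extrapolate along the vector of the last two valid keyframes; a minimal sketch of that linear prediction:

```python
# Sketch: predict the next search location by continuing the vector
# of the last two valid keyframes -- the linear extrapolation the
# "predict location" failure behaviors describe.
def predict_next(prev, last):
    """prev, last: (x, y) keyframes; returns the extrapolated (x, y)."""
    return (2 * last[0] - prev[0], 2 * last[1] - prev[1])
```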
limitProcessing
This button sets a Domain of Definition (DOD) around the bounding boxes of all active
trackers. Only that portion of the image is loaded from disk when tracking, so the track
is faster. This function has no effect on the final output image.
preProcess
This button toggles preprocessing on and off for the tracking area. This applies a slight
blur to reduce fluctuations due to grain. To control the blur amount, open the
preProcess subtree.
• blurAmount: A subparameter of preProcess. Sets the amount of blur applied when
preprocessing.
trackRange
The trackRange parameter is the potential frame range limit of your track. By default,
the range is set to the clip range. For Shake-generated elements such as RGrad, it takes
a range of 1. You can set new limits using Shake’s standard range description, for
example, 10-30x2. If you stop tracking and start again, the process starts from the
current frame until it reaches the lower or upper limit of your trackRange, depending
on whether you are tracking forward or backward.
track1Name, track2Name…
The name of the track. To change the name, click in the text field and type a new name.
The number of trackName parameters corresponds to the number of points you selected in
the trackType parameter. Each trackName parameter contains the following
subparameters:
• track1X: The actual X value of the keyframed track point at that frame. Use this to
link a parameter to a track point. This parameter defaults to the expression width/3.
• track1Y: The actual Y value of the keyframed track point at that frame. Use this to link
a parameter to a track point. This parameter defaults to the expression height/3.
• track1Correlation: The correlation value representing how closely that keyframe
matched the original sample. A score of 1 is a perfect score. 0 is an unusable score.
• track1 Window Parameters: These multiple parameters control the windowing of the
tracking box, and are not relevant to exported values.
• track1CenterX: Determines the horizontal position of the track point. This
parameter defaults to the expression width/3.
• track1CenterY: Determines the vertical position of the track point. This parameter
defaults to the expression height/3.
• track1Visible: This parameter is the same as the visibility button that’s immediately
to the right of the trackName parameter, and toggles visibility of that tracker box
and its keyframes off and on.
• track1Enabled: If trackEnabled is not turned on, that tracker will not be used
during the next track analysis.
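The correlation score described above (1 is a perfect match, 0 is unusable) can be illustrated with a sketch of normalized correlation between a reference patch and a sampled patch. This shows the general technique only; it is an assumption, not Shake's actual implementation:

```python
# Illustrative sketch (not Shake's actual implementation): a normalized
# correlation score clamped to the 0-1 range, where 1 means the sampled
# patch matches the reference pattern exactly.
import math

def correlation(reference, sample):
    """Score how closely `sample` matches `reference` (lists of pixel values)."""
    mr = sum(reference) / len(reference)
    ms = sum(sample) / len(sample)
    num = sum((r - mr) * (s - ms) for r, s in zip(reference, sample))
    den = math.sqrt(sum((r - mr) ** 2 for r in reference) *
                    sum((s - ms) ** 2 for s in sample))
    if den == 0:
        return 0.0
    return max(0.0, num / den)  # clamp negatives; 1.0 is a perfect match
```

An identical patch scores 1.0; an unrelated or inverted patch scores at or near 0.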
Stabilize
The Stabilize node is a dedicated tracking node that locks down an image, removing
problems such as camera shake or gate weave. You can do one-point (panning), two-point (panning, scaling, or rotation), or four-point (corner-pinning) stabilization. Tracks
can be generated in the Stabilize node, or can be read in. To read in a track, right-click
in the trackName text field, then choose Load Track from the shortcut menu.
Parameters
This node displays the following controls in the Parameters tab:
applyTransform
The foreground element is only transformed if applyTransform is active.
inverseTransform
Inverts the transformation. Use this to “unstabilize” the shot. For example, you stabilize
a shot with a Stabilize, apply compositing operations, and then copy the first Stabilize to
the end of the node tree. By inverting the transformation, you return the shot to its
former unstable condition.
trackType
You can do one-point, two-point, or four-point matchmoves. Different options appear
with the different types.
• 1 pt: Pans the foreground element to match the tracking point. You can optionally
turn off the X or Y movement.
• 2 pt: Opens two additional parameters, with the option to match scaling and
rotation of the background element (with the matchScale and matchRotate
parameters).
• 4 pt: Performs cornerpin matchmoving on the foreground element. The pan, scale,
and angle parameters disappear. Four track points are used to make the
transformation.
applyX
Turning off the applyX parameter prevents the foreground element from following the
horizontal movement of the tracked subject.
applyY
Turning off the applyY parameter prevents the foreground element from following the
vertical movement of the tracked subject.
transform parameters
This section contains subparameters that control how the input image is transformed
according to the derived tracking information.
• xFilter, yFilter: A pop-up menu that sets the transformation filter used. Different filters
can be used for horizontal and vertical transformations. For more information on the
different kinds of filters that are available and how they work, see “Filters Within
Transform Nodes” on page 862.
• transformationOrder: A pop-up menu that sets the order that transformations are
executed. The default setting is trsx. This means that transformations are performed
in the following order: translate, rotate, scale, shear.
• motionBlur: Enables the motion blur for the foreground element. A value of 0 is no
blur; 1 is the high setting. A mid-value is a trade-off between speed and quality. This
value is multiplied by the motionBlur parameter in the Parameters tab.
• shutterTiming: A subparameter of motionBlur. Controls shutter length. 0 is no blur,
whereas 1 represents a whole frame of blur. Note that standard camera blur is 180
degrees, or a value of .5. This value is multiplied by the global shutterTiming
parameter.
• shutterOffset: A subparameter of motionBlur. Controls the offset from the current
frame that the blur is calculated. Default is 0; previous frames are less than 0. The
global shutterOffset parameter is added to this.
• aspectRatio: This parameter inherits the current value of the defaultAspect global
parameter. If you’re working on a nonsquare pixel or anamorphic image, it is
important that this value is set correctly to avoid unwanted distortion in the image.
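The effect of transformationOrder can be demonstrated by composing translate, rotate, and scale as matrices: applying the same three operations in a different order produces a different result. This is a conceptual illustration of why the order matters, not Shake's internal math (shear is omitted for brevity):

```python
# Conceptual sketch: composing translate (T), rotate (R), and scale (S)
# as 3x3 affine matrices. "trs" order means the point is translated first,
# so the matrix product is S @ R @ T (matrices apply right to left).
import math

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, x, y):
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

# translate-then-rotate-then-scale versus scale-then-rotate-then-translate:
trs = matmul(scale(2, 2), matmul(rotate(90), translate(10, 0)))
srt = matmul(translate(10, 0), matmul(rotate(90), scale(2, 2)))
```

Applying both to the point (1, 0) gives approximately (0, 22) for the trs order but (10, 2) for the srt order, so the order is not interchangeable.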
refFrame
The reference frame that is used to calculate the null state of the transformation. For
example, scale has a value of 1 and rotate has a value of 0 at the reference frame.
applyScale
A parameter that becomes available when the tracker is set to two-point mode. Three
buttons that allow you to disable the transform (click “none”), view the live calculated
curve (click “live”), or to bake the curve (click “bake”) to create editable data (keyframes).
• scale: Opening the applyScale subtree reveals the scale slider, which determines the
calculated scale for two-point matching. The scale at the refFrame is equal to 1, and
all other frames are in reference to that frame.
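The refFrame normalization described above (scale equals 1 at the reference frame, with every other frame relative to it) can be sketched with a hypothetical helper, not part of Shake:

```python
# Hedged sketch of refFrame normalization: divide each frame's measured
# scale by the scale at the reference frame, so the reference frame
# becomes exactly 1.0 and other frames are relative to it.
def normalize_to_ref(raw_scale, ref_frame):
    """raw_scale: dict of frame -> measured scale; returns frame -> relative scale."""
    ref = raw_scale[ref_frame]
    return {frame: s / ref for frame, s in raw_scale.items()}
```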
applyRotate
A parameter that becomes available when the tracker is set to two-point mode. Three
buttons that allow you to disable the transform (click “none”), view the live calculated
curve (click “live”), or to bake the curve (click “bake”) to create editable data (keyframes).
• rotate: Opening the applyRotate subtree reveals the rotate parameter. A slider
determines the calculated rotation for two-point matching. The angle at the refFrame
is equal to 0, and all other frames are calculated with reference to that frame.
subPixelResolution
The resolution of your track. The smaller the number, the more precise and slower your
tracking analysis. The possible values are:
• 1: Area is sampled at every pixel. Not very accurate or smooth, but very fast.
• 1/4: Area is sampled at every .25 pixels (16 times more than with a sampling of 1).
• 1/16: Area is sampled at every .0625 pixels (256 times more than with a sampling of 1).
• 1/32: Area is sampled at every .03125 pixels (1024 times more than with a sampling
of 1).
• 1/64: Area is sampled at every .015625 pixels (4096 times more than with a sampling
of 1).
Note: Most current computers are fast enough to quickly compute motion tracking
even at 1/64 resolution, so don’t feel the need to be conservative with this setting.
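The sample counts quoted above follow from squaring the per-axis sampling density: halving the sampling step quadruples the number of samples, because the area is sampled in both X and Y. A short sketch verifies the arithmetic:

```python
# Verifying the subPixelResolution sample counts: sampling every 1/n pixel
# in both X and Y multiplies the number of samples per pixel by n squared.
def samples_relative_to_full_pixel(step):
    """step: sampling interval in pixels (1, 0.25, 0.0625, ...)."""
    per_axis = round(1 / step)
    return per_axis ** 2
```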
tolerances
The tolerances subtree contains subparameters that let you control this node’s
behaviors when the tracking quality decreases.
• matchSpace: The pixels are matched according to the correlation between the
selected color space—luminance, hue, or saturation. When an image has roughly the
same luminance, but contrasting hues, you should switch to hue-based tracking.
You can also adjust the weight of the color channels in the matchSpace subtree.
• matchSpace (subtree): Three subparameters with sliders let you weight how closely
the tracking operation follows each color channel of the image being tracked. In
general, the color channels with the most contrast for the feature you’re tracking
should be weighted most heavily. Color channels with minimal contrast for the
feature you’re tracking should be de-emphasized.
• referenceTolerance: A tracking correlation of 1 is a perfect score—there is an exact
match between the original reference frame and the sampled area. When the
referenceTolerance is lowered, you accept greater inaccuracy in your track. If tracked
keyframes are between the referenceTolerance and the failureTolerance, they are
highlighted in the Viewer. Also, in some cases, referenceBehavior is triggered if the
tracking correlation is below the referenceTolerance.
• referenceBehavior: A pop-up menu that sets the tracking area reference sample. By
default, the reference pattern is the first frame at which the track analysis begins, not
necessarily the first frame of the trackRange. The last two behaviors in the
referenceBehavior list measure the tracking correlation and match it to the
referenceTolerance to decide an action.
The referenceBehavior pop-up menu contains the following options:
• use start frame: The new samples are compared to the reference pattern from the
first frame of the track. If you stop tracking midway, and start again at a later frame,
the later frame is used as the reference sample.
• update every frame: The source sample is updated from the previous frame. This
usually creates an inherent drift in the track, as tiny errors accumulate. This method
is for movements that have drastic changes in perspective and scale.
• update from keyframes: If you are using a failureBehavior of “predict location and
don’t create keys” or “don’t predict location,” a keyframe is not necessarily saved at
every frame. In this case, you may only want to update from the last frame with a
valid keyframe.
• update if above reference tolerance: This option updates the reference sample from
the previous frame if the correlation is above the referenceTolerance. The intent is
to update every frame unless you know the point is obscured. If you use a predict
mode and know there are obstructions, this option keeps the reference area from
updating if the point is completely obscured.
• update if below reference tolerance: This option updates the reference sample from
the previous frame if the correlation is below the referenceTolerance. This basically
says, “If I can’t get a good match, then resample.” This option is excellent for
gradual perspective and scale shifts in the tracking area.
• failureTolerance: If the correlation value of the tracker’s analysis falls below the value
in this field, Shake initiates the failureBehavior.
• failureBehavior: A pop-up menu containing the following options:
• stop: The tracker stops if the correlation is less than the failureTolerance. You can
also press Esc to manually stop tracking.
• predict location and create key: If a failure is detected, then the tracker predicts the
location of the keyframe based on a vector of the last two keyframes, then
continues tracking in the new area.
• predict location and don’t create key: Same as above, but this option merely
predicts the new search area and does not create new keyframes until a high
correlation is obtained. This option is excellent for tracked objects that pass behind
foreground objects.
• don’t predict location: In this case, the tracker merely sits in the same spot looking
for new samples. New keyframes are not created.
• use existing key to predict location: This option allows you to manually create
keyframes along your track path. You then return to the start frame and begin
tracking. The search pattern starts looking along the preexisting motion path.
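The "predict location" behaviors above extrapolate the next search position from a vector of the last two keyframes. A simple linear extrapolation captures the general idea (this is an assumption for illustration; Shake's exact prediction method is not documented here):

```python
# Sketch of the "predict location" idea: extrapolate the next search
# position from the vector between the last two keyframes.
# Illustrative only; not Shake's actual prediction code.
def predict_next(keyframes):
    """keyframes: list of (x, y) track positions; returns predicted next (x, y)."""
    if len(keyframes) < 2:
        return keyframes[-1]  # nothing to extrapolate from
    (x1, y1), (x2, y2) = keyframes[-2], keyframes[-1]
    return (x2 + (x2 - x1), y2 + (y2 - y1))
```

With keyframes at (10, 20) and (12, 23), the search would continue at (14, 26), so a point moving steadily behind a foreground obstruction is picked up again on the far side.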
limitProcessing
This button creates a Domain of Definition (DOD) around the bounding boxes of all
active trackers. Only that portion of the image is loaded from disk when tracking, so
the track analysis is faster. This setting has no effect on the final output image.
preProcess
This button turns on preprocessing for the tracking area, applying a slight blur to
reduce fluctuations due to grain. To control the blur amount, open the preProcess
subtree.
• blurAmount: A subparameter of preProcess that sets the amount of blur applied
when preprocessing to improve the quality of tracks in clips with excessive grain
or noise.
trackRange
The trackRange parameter is the potential frame range limit of your track. By default,
the range is set to the clip range. For Shake-generated elements such as RGrad, this
parameter takes a range of 1. You can set new limits using Shake’s standard range
description, for example, 10-30x2. If you stop tracking and start again, the analysis
starts from the current frame until it reaches the lower or upper limit of your
trackRange, depending on whether you are tracking forward or backward.
track1Name, track2Name…
The name of the track. To change the name, click in the text field and type a new name.
The number of tracks corresponds to the number of points you selected in the
trackType parameter. Each trackName parameter contains the following subparameters:
• track1X: The actual X value of the keyframed track point at that frame. Use this to
link a parameter to a track point. This parameter defaults to the expression width/3.
• track1Y: The actual Y value of the keyframed track point at that frame. Use this to link
a parameter to a track point. This parameter defaults to the expression height/3.
• track1Correlation: The correlation value representing how closely that keyframe
matched the original sample. A score of 1 is a perfect score. 0 is an unusable score.
• track1 Window Parameters: These multiple parameters control the windowing of the
tracking box, and are not relevant to exported values.
• track1CenterX: Determines the horizontal position of the track point. This
parameter defaults to the expression width/3.
• track1CenterY: Determines the vertical position of the track point. This parameter
defaults to the expression height/3.
• track1Visible: This parameter is the same as the visibility button that’s immediately
to the right of the trackName parameter, and toggles visibility of that tracker box
and its keyframes off and on.
• track1Enabled: If trackEnabled is not turned on, that tracker will not be used
during the next track analysis.
Tracker
The Tracker node is used only to generate and contain tracking information. Unlike the
MatchMove and Stabilize functions, it has no capability to alter the input image. As
such, the Tracker node is used to create tracks that are referenced either in other
tracking nodes or in non-tracking nodes such as Move2D, Rotate, and so on.
While MatchMove and Stabilize are limited to creating one, two, or four track points, the
Tracker node can hold as many trackers as you want. To add a tracker, click the Add
button at the bottom of the Parameters tab. If your workflow continually uses
Smoothing and Averaging of tracks for application in a Stabilize or MatchMove, you
should probably generate the tracking data using a Tracker node.
With the Tracker node, you can delete trackers, and save the tracks to disk. To save an
individual track, right-click that tracker’s trackName field, then choose Save Track from
the shortcut menu.
Warning: You cannot clone a Tracker node in the Node View using the Paste Linked
command.
Parameters
This node displays the following controls in the Parameters tab:
trackRange
The trackRange parameter is the potential frame range limit of your track. By default,
the range is set to the clip range. For Shake-generated elements such as RGrad, this
parameter takes a range of 1. You can set new limits using Shake’s standard range
description, for example, 10-30x2. If you stop tracking and start again, the analysis
starts from the current frame until it reaches the lower or upper limit of your
trackRange, depending on whether you are tracking forward or backward.
subPixelResolution
The resolution of your track. The smaller the number, the more precise and slower your
tracking analysis. The possible values are:
• 1: Area is sampled at every pixel. Not very accurate or smooth, but very fast.
• 1/4: Area is sampled at every .25 pixels (16 times more than with a sampling of 1).
• 1/16: Area is sampled at every .0625 pixels (256 times more than with a sampling of 1).
• 1/32: Area is sampled at every .03125 pixels (1024 times more than with a sampling
of 1).
• 1/64: Area is sampled at every .015625 pixels (4096 times more than with a sampling
of 1).
matchSpace
The pixels are matched according to the correlation between the selected color
space—luminance, hue, or saturation. When an image has roughly the same
luminance, but contrasting hues, you should switch to hue-based tracking.
referenceTolerance
A tracking correlation of 1 is a perfect score—there is an exact match between the
original reference frame and the sampled area. When the referenceTolerance is
lowered, you accept greater inaccuracy in your track. If tracked keyframes are between
the referenceTolerance and the failureTolerance, they are highlighted in the Viewer.
Also, in some cases, referenceBehavior is triggered if the tracking correlation is below
the referenceTolerance.
referenceBehavior
This behavior dictates the tracking area reference sample. By default, the reference
pattern is the first frame at which the track analysis begins, not necessarily the first
frame of the trackRange. The last two behaviors in the referenceBehavior list measure
the tracking correlation and match it to the referenceTolerance to decide an action.
The referenceBehavior pop-up menu contains the following options:
• use start frame: The new samples are compared to the reference pattern from the
first frame of the track. If you stop tracking midway, and start again at a later frame,
the later frame is used as the reference sample.
• update every frame: The source sample is updated from the previous frame. This
usually creates an inherent drift in the track, as tiny errors accumulate. This method is
for movements that have drastic changes in perspective and scale.
• update from keyframes: If you are using a failureBehavior of “predict location and
don’t create keys” or “don’t predict location,” a keyframe is not necessarily saved
every frame. In this case, you may only want to update from the last frame with a
valid keyframe.
• update if above reference tolerance: This option updates the reference sample from
the previous frame if the correlation is above the referenceTolerance. The intent is to
update every frame unless you know the point is obscured. If you use a predict mode
and know there are obstructions, this option keeps the reference area from updating
if the point is completely obscured.
• update if below reference tolerance: This option updates the reference sample from
the previous frame if the correlation is below the referenceTolerance. This option
basically says, “If I can’t get a good match, then resample.” It is excellent for gradual
perspective and scale shifts in the tracking area.
failureTolerance
If the correlation value of the tracker’s analysis falls below the value in this field, Shake
initiates the failureBehavior.
failureBehavior
A pop-up menu containing the following settings:
• stop: The tracker stops if the correlation is less than the failureTolerance. You can also
press Esc to manually stop tracking.
• predict location and create key: If a failure is detected, then the tracker predicts the
location of the keyframe based on a vector of the last two keyframes, then continues
tracking in the new area.
• predict location and don’t create key: Same as above, but this option merely predicts
the new search area and does not create new keyframes until a high correlation is
obtained. This option is excellent for tracked objects that pass behind foreground
objects.
• don’t predict location: In this case, the tracker merely sits in the same spot looking for
new samples. New keyframes are not created.
• use existing key to predict location: This allows you to manually create keyframes
along your track path. You then return to the start frame and start tracking. The
search pattern starts looking where the preexisting motion path is.
limitProcessing
This button creates a Domain of Definition (DOD) around the bounding boxes of all
active trackers. Only that portion of the image is loaded from disk when tracking, so
the track analysis is faster. This setting has no effect on the final output image.
tolerances
The tolerances subtree contains subparameters that let you control this node’s
behaviors when the tracking quality decreases.
matchSpace
Not to be confused with the matchSpace parameter above—the matchSpace subtree
has three subparameters with sliders that let you weight how closely the tracking
operation follows each color channel of the image being tracked. In general, the color
channels with the most contrast for the feature you’re tracking should be weighted
most heavily. Color channels with minimal contrast for the feature you’re tracking
should be de-emphasized.
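Channel weighting can be illustrated with a minimal sketch, assuming a weighted per-channel difference between a reference pixel and a sampled pixel. The combining formula is an assumption for demonstration, not Shake's implementation:

```python
# Hedged illustration of weighting color channels during matching: channels
# with more contrast for the tracked feature get higher weights, so they
# dominate the match score. The formula here is assumed for demonstration.
def weighted_difference(ref_pixel, sample_pixel, weights):
    """Per-channel absolute difference, combined by normalized weights."""
    total = sum(weights)
    return sum(w * abs(r - s)
               for r, s, w in zip(ref_pixel, sample_pixel, weights)) / total
```

With equal weights, a mismatch in any channel counts toward the score; zeroing a channel's weight makes the match ignore that channel entirely.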
preProcess
This button turns on preprocessing for the tracking area, applying a slight blur to
reduce fluctuations due to grain. To control the blur amount, open the preProcess
subtree.
• blurAmount: A subparameter of preProcess that sets the amount of blur applied
when preprocessing to improve the quality of tracks in clips with excessive grain or
noise.
track1Name, track2Name…
The name of the track, itself a subtree of parameters containing the data for that
particular tracking region. To change the name, click in the text field and enter the new
name. The number of tracks corresponds to the number of tracking regions you’ve
created using the Add and Delete buttons. Each trackName parameter contains the
following subparameters:
• track1X: The actual X value of the keyframed track point at that frame. Use this to
link a parameter to a track point. This parameter defaults to the expression width/3.
• track1Y: The actual Y value of the keyframed track point at that frame. Use this to link
a parameter to a track point. This parameter defaults to the expression height/3.
• track1Correlation: The correlation value representing how closely that keyframe
matched the original sample. A score of 1 is a perfect score. 0 is an unusable score.
• track1 Window Parameters: The parameters within the track1Window Parameters
section define the size and position of the tracker boxes used to perform the motion
tracking analysis. Each of the position and sizing parameters in this section is
transient, meaning they’re not saved with the project. If you close a project, these
parameters return to their defaults when that project is reopened. The following are
the default expressions that each parameter is set to.
• track1Left: width/2-height/30
• track1Right: width/2+height/30
• track1Bottom: height/2-height/30
• track1Top: height/2+height/30
• track1leftSearch: width/2-height/15
• track1RightSearch: width/2+height/15
• track1BottomSearch: height/2-height/15
• track1TopSearch: height/2+height/15
• track1CenterX: width/2
• track1CenterY: height/2
• track1Visible: This parameter is the same as the visibility button that’s immediately
to the right of the trackName parameter.
• track1Enabled: If trackEnabled is not turned on, that tracker will not be used
during the next track analysis.
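Evaluating the default window expressions above for a hypothetical 720 x 486 frame shows how the tracker box and the larger search box are both centered on the image, with sizes proportional to the image height:

```python
# Evaluating the default track1 window expressions for an assumed
# 720 x 486 frame. The dictionary mirrors the parameter names above.
def default_track_window(width, height):
    return {
        "left": width / 2 - height / 30,
        "right": width / 2 + height / 30,
        "bottom": height / 2 - height / 30,
        "top": height / 2 + height / 30,
        "leftSearch": width / 2 - height / 15,
        "rightSearch": width / 2 + height / 15,
        "bottomSearch": height / 2 - height / 15,
        "topSearch": height / 2 + height / 15,
        "centerX": width / 2,
        "centerY": height / 2,
    }
```

Because the search expressions use height/15 rather than height/30, the default search region is twice the width and height of the tracker box.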
Add, Delete, Save, Load
These buttons allow you to create and remove additional tracking regions. You can also
save tracking data, and read it back in using the Save and Load buttons.
CornerPin
The CornerPin node can be used to push the four corners of an image into four
different positions, or to extract four positions and place them into the corners. For
more information on the CornerPin node, see “CornerPin” on page 795.
The SmoothCam Node
This transformation node differs from the other tracking nodes described previously in
this chapter in that it doesn’t focus the track on small groups of pixels. Instead, it
evaluates the entire image at once, using advanced motion analysis techniques to
extract transformation data.
Once this information is derived, this node has two modes. It can smooth the shot,
eliminating unwanted jitter while maintaining the general motion of the camera, or it
can lock the shot, stabilizing the subject. This node can affect translation, rotation,
zoom, and perspective, making it more flexible for certain operations than the other
tracking nodes.
The SmoothCam node is primarily useful for removing unwanted trembling from less
than stable crane or jib arm moves, eliminating teetering from handheld walking shots,
or reducing vibrations in automotive shots. Secondarily, the SmoothCam node can be
used to stabilize shots that may be difficult to lock using the Stabilize node.
As useful as the SmoothCam node is, be aware that motion blur that is present in the
image will remain, even though the subject in the shot is successfully smoothed or
locked. This may or may not affect your approach to the composite.
Important: Interlaced images must be deinterlaced prior to use with the SmoothCam
node. For more information, see “Setting Up Your Script to Use Interlaced Images” on
page 196.
Masking Important Features
The SmoothCam node has two inputs. The first one is for the input image to be
processed. The second input is for an optional matte with which you can isolate a
subject or area that you want the SmoothCam node to ignore while performing its
analysis.
When creating a matte to use with the SmoothCam node, white areas are ignored, and
black areas are analyzed.
Using the SmoothCam Node
The SmoothCam node works similarly to the AutoAlign node, in that the input image
must be analyzed first in order to derive the data that is used to smooth or lock the
image. This data is stored within the Shake script itself, so that each clip needs to be
analyzed only once.
To analyze the media using the SmoothCam node:
1 Attach a SmoothCam node to the image you want to process.
2 Load the SmoothCam node’s parameters into the Parameters tab.
3 If necessary, set the analysisRange to the frame range you want to process.
By default, the analysisRange is set to the number of frames possessed by the media of
the FileIn node to which it is attached.
4 Choose an analysisQuality mode. Start with Normal, which provides excellent results in
most cases.
Note: If, at the end of this process, the result is not what you’d hoped, then you should
try again with the analysisQuality mode set to high. Be aware this will significantly
increase the time it takes to perform this analysis.
5 Click the “analyze” button.
The frames within the frame range are analyzed, and this data is stored within your
script. To stop the analysis at any time, press Esc.
After the analysis has been performed, it’s time to choose the mode with which you
want the SmoothCam node to process the image.
To smooth a shot using the SmoothCam node:
1 Once the analysis has concluded, set steadyMode to smooth.
2 Open the smooth subtree.
3 Adjust the translationSmooth, rotationSmooth, and zoomSmooth sliders to increase or
decrease the amount of smoothing that is attempted. At 0, no smoothing is attempted
in that dimension. At higher values, SmoothCam attempts to smooth a wider range of
variations in the movement of the image from one frame to the next.
• translationSmooth: Smooths the X and Y motion of the shot.
• rotationSmooth: Smooths rotation in the shot.
• zoomSmooth: Smooths a zoom within the shot.
Important: It’s faster, and will yield more accurate results, if you set the smoothing
parameters of dimensions in which you know the camera is not moving to 0. For
example, if you know for a fact that the camera is neither rotating nor zooming, set
rotationSmooth and zoomSmooth to 0. In particular, if the camera is not zooming,
increasing the zoomSmooth value may actually cause unwanted transformations in
the image.
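What the smoothing sliders do can be illustrated with a minimal sketch, assuming a simple windowed moving average: a wider window (more smoothing) averages out frame-to-frame jitter while preserving the overall camera move. SmoothCam's actual filtering is more sophisticated than this:

```python
# Minimal smoothing sketch, assuming a windowed moving average.
# A larger radius corresponds to "more smoothing" of a per-frame curve
# such as X translation; radius 0 leaves the curve unchanged.
def smooth_curve(values, radius):
    """values: per-frame positions; radius: half-width of the averaging window."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - radius)
        hi = min(len(values), i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

A jittery curve such as [0, 10, 0, 10, 0] is pulled toward its local average, reducing the frame-to-frame variation while keeping the curve's overall level.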
To lock and match shots using the SmoothCam node:
1 Once the analysis has concluded, set steadyMode to lock.
2 Open the lockDown subtree.
3 The setting of the inverseTransform parameter depends on what you’re trying to do:
• If you want to lock the analyzed shot itself, leave inverseTransform set to lock.
• If you want to apply the motion from this shot to another image, attach this node to
the other image, then set inverseTransform to match.
4 Turn on the locking parameters for the dimensions of motion you want to affect.
• translateLock: Locks the subject in the X and Y dimensions.
• rotateLock: Locks the rotation of the subject.
• zoomLock: Removes zoom from the shot, maintaining the relative size of the subject.
• perspectiveLock: Locks the perspective of the subject, performing the equivalent of a
four-corner warp to maintain the size and shape of the subject.
Troubleshooting SmoothCam Effects
If the output is unsatisfactory, there are several things you can try to improve the result.
Change the Smoothing Parameters
If you’re trying to smooth the motion in a shot, you should first try changing the
smoothing parameters. This can be accomplished without having to reanalyze the clip.
Reanalyze the Media at a Higher Quality
Next, try changing the analysisQuality to high, and reanalyze the media.
Try Editing the Analysis Data
If neither of the prior solutions helps, try loading the confidence parameter into the
Curve Editor, then look for frames where the confidence parameter falls to 0. If the
image transformation at these frames stands out, you can try loading the translationX,
translationY, rotation, and/or zoom parameters within the motion subtree into the
Curve Editor, then delete any keyframes that create unusual spikes at those frames.
Removing Black Borders Introduced by Smoothing and Locking
When you use the SmoothCam node, the resulting transformations that are made to
the input image either to smooth or lock the shot cause moving black borders to
appear around the edges of the output image. While this is necessary to achieve the
desired effect, you probably don’t want these black borders to appear in the final shot.
There are several ways you can choose to handle this border.
Using the clipMode Parameter
The clipMode parameter provides different ways you can treat the size of the output
frame in order to include or exclude this black region. After you’ve analyzed the input
image and picked the settings necessary to smooth or lock your shot, you can choose
one of three clipMode parameters.
union
This option expands the rendered frame to include the full area within which the input
image is transformed. If you scrub through the SmoothCam’s output with this option
turned on, the image appears to float about within a black frame larger than itself. The
frame size of the output image is larger than that of the input image.
intersection
This option contracts the rendered frame to exclude any black area. The result is a
stable output image that fills the frame, but with a frame size that’s smaller than the
input image.
in
This option maintains the frame size of the input image as that of the output image. The
result is a moving black area that encroaches around the edges of the output image.
You can use one of the above three clipMode options to produce the output image
most useful for your purposes.
Note: Whichever clipMode you use, areas of the image that end up being clipped are
preserved by Shake’s Infinite Workspace, available for future operations.
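The union and intersection modes can be sketched with axis-aligned rectangles, where each rectangle represents the position of the transformed frame at one analyzed frame. The frame positions below are assumed values for illustration:

```python
# Sketch of clipMode geometry with rectangles as (x1, y1, x2, y2):
# "union" covers every per-frame position; "intersection" keeps only the
# area common to all of them (no black border); "in" keeps the input frame.
def rect_union(rects):
    return (min(r[0] for r in rects), min(r[1] for r in rects),
            max(r[2] for r in rects), max(r[3] for r in rects))

def rect_intersection(rects):
    return (max(r[0] for r in rects), max(r[1] for r in rects),
            min(r[2] for r in rects), min(r[3] for r in rects))

# Assumed per-frame positions of a 720x486 frame nudged by stabilization:
frames = [(0, 0, 720, 486), (-4, 2, 716, 488), (3, -5, 723, 481)]
```

Here the union is larger than the input frame and the intersection is smaller, matching the frame-size behavior described above.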
Scaling the Output Image to Fit the Original Frame Size
If you need to output the resulting image at the same size as the original, the quickest
fix is to leave the clipMode parameter set to “in,” and use a Scale node after the
SmoothCam node to enlarge the image to the point where all instances of black
borders fall outside the edge of the frame. The disadvantage of this method is the
resulting softening of the image, depending on how much it must be enlarged.
Painting the Edges Back In
A more time-intensive, but possibly higher-quality solution would be to paint the
missing edge data with a QuickPaint node. A variety of techniques can be employed,
including using the clone brush to sample other parts of the image, or using the Reveal
Brush to paint in parts of a secondary clean-plate image that you create separately.
Warping the Edges
One last suggestion would be to experiment with the different warping nodes, to
stretch the edges of the image to fill any gaps. For example, you can experiment with
the LensWarp node to stretch the edges of the image out. This solution is highly
dependent on the type of image, and may introduce other image artifacts that may or
may not be acceptable.
Parameters in the SmoothCam Node
This node displays the following controls in the Parameters tab:
analysisRange
The range of frames of the input image to analyze. By default, this parameter inherits
the number of frames in the source media file represented by the FileIn node to which
it’s connected, not the timeRange in the Globals tab.
processedRange
This is not a user-editable parameter. It indicates the overall frame range that has been
analyzed.
analysisQuality
Defines the level of detail for the motion analysis. There are two levels of quality,
“normal” and “high.” In most situations, “normal” should produce an excellent result.
Should you notice any artifacts, set this parameter to “high.”
analyze
Click this button to analyze the frame range defined by the analysisRange parameter.
Pressing Esc while Shake is analyzing stops the analysis.
clipMode
The clipMode parameter provides different ways you can treat the size of the output
frame in order to include or exclude black regions introduced by the SmoothCam
node’s transformations. The clipMode parameter has three options:
• union: Expands the rendered frame to include the full area within which the input
image is transformed. If you scrub through the SmoothCam node’s output with this
option turned on, the image appears to float about within a black frame larger than
itself. The frame size of the output image is larger than that of the input image.
• intersection: Contracts the rendered frame to exclude any black area. The result is a
stable output image that fills the frame, but with a frame size that’s smaller than the
input image.
• in: Maintains the frame size of the input image as that of the output image. The
result is a moving black area that encroaches around the edges of the output image.
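The three options amount to choosing a different output rectangle from the per-frame bounding boxes of the transformed image. The following Python sketch is illustrative only (the box coordinates are made-up numbers, and this is not Shake's internal code):

```python
# Illustrative only (not Shake's API): how the three clipMode options choose
# the output frame from the transformed image's per-frame bounding boxes.
def clip_frame(boxes, in_box, mode):
    """boxes: list of (x1, y1, x2, y2) transformed-image bounds, one per frame.
    in_box: the original input frame. Returns the output frame rectangle."""
    if mode == "union":          # grow to contain every transformed frame
        return (min(b[0] for b in boxes), min(b[1] for b in boxes),
                max(b[2] for b in boxes), max(b[3] for b in boxes))
    if mode == "intersection":   # shrink to the area covered in every frame
        return (max(b[0] for b in boxes), max(b[1] for b in boxes),
                min(b[2] for b in boxes), min(b[3] for b in boxes))
    return in_box                # "in": keep the input frame size

boxes = [(-10, -5, 710, 571), (5, 8, 725, 584)]  # hypothetical two frames
print(clip_frame(boxes, (0, 0, 720, 576), "union"))         # (-10, -5, 725, 584)
print(clip_frame(boxes, (0, 0, 720, 576), "intersection"))  # (5, 8, 710, 571)
```

This mirrors the descriptions above: union yields a frame larger than the input, intersection a smaller one, and “in” leaves the frame size unchanged.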
steadyMode
The SmoothCam node has two modes: Smooth and Lock.
• Smooth: This mode smooths the apparent motion of the camera, while allowing the
general movement in the frame to proceed. It’s useful for removing jitter from a
camera movement. When enabled, this mode has three sliders for each of the
dimensions that can be smoothed.
The amount of smoothing can be increased or decreased along a sliding scale.
• translationSmooth: Smooths motion in both the X and Y dimensions.
• rotationSmooth: Smooths image rotation.
• zoomSmooth: Smooths an uneven zoom.
Note: Don’t turn on zoomSmooth unless you’re positive that the image is being
zoomed. Otherwise this parameter will not have the desired effect.
• Lock: This mode attempts to lock the motion of the principal subject in the shot to
eliminate motion. As a result, the background will appear to move around the
subject being tracked.
• inverseTransform: This control lets you invert the effect of this node, so that other
images can be matchmoved using the same motion.
• translateLock: Locks the image in both X and Y dimensions.
• rotateLock: Locks the rotation of the image.
• zoomLock: Locks an image that is being zoomed.
Note: Don’t turn on zoomLock unless you’re absolutely positive that the image is
being dynamically zoomed.
• perspectiveLock: Locks an image experiencing a change in perspective, similar to a
reverse corner-pin.
Motion
The values within the parameters in the motion subtree are not meant to be editable,
or even directly intelligible. They’re available to be loaded into the Curve Editor so that
you can find possible spikes in the analysis data that, along with a low confidence
value, might indicate an error. In this case, you have the option of deleting the
offending keyframe, in order to see if the resulting smoothing or locking operation
becomes more acceptable.
• confidence: Unlike the tracking nodes, the confidence value is restricted to one of
three possible values. You can load the confidence parameter into the Curve Editor to
quickly find problem areas in the analysis. The values are:
• 1: Indicates an analysis in which Shake has high confidence. A keyframe is
generated at these frames.
• 0.5: Indicates an uncertain analysis. Shake also generates a keyframe at these
frames.
• 0: Indicates that Shake has no confidence in the analysis. No keyframe is
generated.
• translationX, translationY: Contains the X and Y translation data.
• rotation: Contains the rotation data.
• zoom: Contains the zoom data.

26 Transformations, Motion Blur, and AutoAlign
Shake’s transformation nodes provide many ways to
geometrically manipulate the position, size, and
orientation of images in your composition. The
parameters within these nodes can also be animated—
either manually or using expressions—to create motion
and accompanying motion-blur effects.
About Transformations
Shake has a wide variety of nodes that can be used to create various kinds of
transformations. Many of the transformation nodes—found in the Transform Tool tab—
perform simple operations, such as pan, rotate, scale, shear, and corner-pin (four-corner
warping). These nodes are all linear operations, and turning on motion blur for these
nodes results in realistic blur for any parts of the image affected by animated transform
parameters.
The Transform tab also contains nodes that perform changes in resolution, including
the Resize, Viewport, Window, and Zoom nodes. These nodes are covered in more detail
in “Controlling Image Resolution” on page 180.
The Move2D and Move3D nodes are the most flexible operators, since they include
most of the parameters contained in the simple transform nodes. The processing
requirements are the same whether you use a Move2D or a Pan node to move an
image, since the Pan operation is simply a macro of the Move2D node. Given this
choice, however, you’ll find that using a single Move2D node for multiple
transformations is more efficient than applying separate Pan, Rotate, and Scale nodes
one after the other, due to the way Shake handles the internal computations.
Because Shake has an Infinite Workspace, effects are not clipped when they move in
and out of frame with a second operation. For more information, see “Taking
Advantage of the Infinite Workspace” on page 405.
Concatenation of Transformations
Many of the transform nodes concatenate, similar to the way color-correction nodes
concatenate. Like color corrections, compatible transform nodes that are connected to
one another are concatenated so that each operation is collapsed into a single
calculation, optimizing both processing time and image quality. You can tell which
transform nodes concatenate by the letter “C” in the upper-left corner of the icon in the
Transform tab. In the screenshot below, you can see that the CameraShake, CornerPin,
and Move3D nodes all concatenate, but the Orient node does not.
Thanks to concatenation, you can apply a Move2D node, a Rotate node, and a CornerPin
node, and Shake resolves all three nodes by collapsing them into a single internal
calculation, thus executing the end result in one operation. Not only is the render
optimized, but the resulting image is of much higher quality (since the image is filtered
one time instead of three times).
For example, if you apply a Rotate node to an image of the earth so that the lit side
faces the sun, and then use a second Rotate node to orbit the earth around the sun,
Shake determines the combined transformation and calculates it in one pass. This is
important to understand because it means you spend (in this case) half of the
processing time, and only filter the image once.
Note: To see the destructive effects of repetitive filtering, try panning an image 20
times in another compositing application.
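Conceptually, each linear transform is a 3 x 3 matrix in homogeneous coordinates, and concatenation multiplies the matrices together before the image is resampled. The sketch below (plain NumPy, not Shake code) shows that the collapsed matrix moves a point exactly as the three separate steps do, while the image itself would only be filtered once:

```python
import numpy as np

# Each linear 2D transform as a 3x3 homogeneous matrix (illustrative only).
def pan(tx, ty):   return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1.0]])
def scale(sx, sy): return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1.0]])
def rotate(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

# Collapse Pan -> Rotate -> Scale into one matrix (applied right to left).
combined = scale(2, 2) @ rotate(90) @ pan(10, 0)

point = np.array([1.0, 0.0, 1.0])  # a pixel position in homogeneous coords
step_by_step = scale(2, 2) @ (rotate(90) @ (pan(10, 0) @ point))
assert np.allclose(combined @ point, step_by_step)  # identical geometry,
# but an image transformed by `combined` is resampled only once.
```

Filtering once with the combined matrix is what gives both the speed and the quality gain described above.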
It is to your advantage to employ transform concatenation whenever possible. As with
color-correction nodes, transform nodes only concatenate when they’re connected
together, so arrange the operations in your node tree wisely. Any other type of node
inserted between other transform nodes breaks the concatenation.
For example, you cannot apply a Rotate node, an Over node, a Blur node, and then a
Move2D node and have the Rotate and the Move2D concatenate. Instead, it’s best to
make sure that the Rotate and Move2D nodes are placed together in the node tree. In
many cases, a simple change like this results in dramatic improvements in the speed
and quality of your images.
See “Creating Motion Blur in Shake” on page 778 for another good example of using
concatenation to your advantage.
The following nodes concatenate with one another:
• CameraShake
• CornerPin
• Move2D
• Move3D
• Pan
• Rotate
• Scale
• Shear
• Stabilize
Making Concatenation Visible
You can set showConcatenationLinks to “enhanced” in the enhancedNodeView subtree
of the Globals tab, then enable the enhanced Node View to see which nodes are
currently concatenating. A green line connects transform nodes that concatenate,
which makes it easy to see where the chain may be broken. For more information, see
“Using the Enhanced Node View” on page 221.
Transform nodes that aren’t concatenated
Transform nodes that are concatenated
Inverting Transformations
The Move2D and CornerPin nodes have an inverseTransform parameter. This aptly
named parameter inverts the effect of the transformation, numerically. For example, a
pan of 100 with inverseTransform activated becomes a pan of -100. The parameters
themselves are not changed, just their effects on the composition.
In the case of Move2D, you can use inverseTransform to turn imported tracking data
into stabilization data. In the case of CornerPin, you can set the four corners to map to
four arbitrary points. The classic example of this is to pin an image onto the front of a
television screen. Using the inverseTransform parameter, you could extract the original
image that appears angled on the television screen, and remap it into a flat image. This
technique is helpful for generating texture maps from photos for 3D renders.
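In matrix terms, inverseTransform applies the inverse of the node's transform matrix. A minimal NumPy illustration (not Shake code) of the "pan of 100 becomes a pan of -100" example:

```python
import numpy as np

def pan(tx, ty):
    # A pan as a 3x3 homogeneous matrix (illustrative only).
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1.0]])

forward = pan(100, 0)
inverse = np.linalg.inv(forward)
print(inverse[0, 2])  # the effective x pan with inverseTransform on: -100

# Applying tracked motion through the inverse subtracts it instead of
# adding it, which is how tracking data becomes stabilization data.
p = forward @ np.array([50.0, 20.0, 1.0])
assert np.allclose(inverse @ p, [50.0, 20.0, 1.0])
```

The CornerPin case is the same idea with a perspective matrix: the inverse maps the four pinned corners back to a flat rectangle.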
Onscreen Controls
Most of the transform nodes have onscreen controls that let you interactively
manipulate your images directly in the Viewer. These controls appear whenever that
node’s parameters are loaded.
Note: Many of these controls are also shared by other nodes with similar functionality.
If an image moves offscreen—that is, beyond the boundaries of the Viewer—its
transform controls remain visible in the Viewer area. When an image undergoes an
extreme transformation and moves “offscreen,” the best way to locate it is to zoom out
in the Viewer until you see its transform controls.
Viewer Shelf Controls
When you use an active node with onscreen controls, additional controls also appear in
the Viewer shelf (a row of buttons that appears directly underneath the Viewer).
Accelerating Viewer Interactivity
There are two fast ways you can speed up Shake’s performance when using onscreen
transform controls to perform transformations:
• To quickly scrub through an animation, set the Update mode (located in the
upper-right corner of the interface) to “release,” then move the playhead. To select
“release” from the Update mode list, click and hold the button labeled “manual” or
“always,” then choose “release.” Because the image does not update until you release
the mouse button, this setting lets you manipulate the controls freely without being
slowed down by constant image processing.
• You can also lower the Global interactiveScale parameter. Doing so dynamically
lowers the resolution of images as they’re being manipulated in Shake. This allows
you to see the changes you’re making to the image as you’re making them, albeit at
lower resolution. When you release the mouse button, the image returns to the
current resolution of the project.
Viewer shelf of the Move2D node
The following table shows the common onscreen control buttons.
Button Description
Onscreen Controls–
Show
Displays the onscreen controls. Click to toggle between Show
and Hide mode.
Onscreen Controls–
Show on Release
Hides onscreen controls while you modify an image. To access
this mode, click and hold the Onscreen Controls button, then
choose this button from the pop-up menu, or right-click the
Onscreen Controls button, then choose this option from the
shortcut menu.
Onscreen Controls–Hide Turns off the onscreen controls. To access this mode, click and
hold the Onscreen Controls button, then choose this button
from the pop-up menu, or right-click the Onscreen Controls
button, then choose this option from the shortcut menu.
Autokey When Autokey is on, a keyframe is automatically created each
time an onscreen control is moved. To enable, click the button,
or right-click it, then choose this option from the shortcut
menu.
To manually add a keyframe without moving the onscreen
controls, click Autokey off and on.
Delete Keyframe Deletes the keyframe at the current frame. This button is useful
because some nodes (such as Move2D) create multiple
keyframes when you modify a single parameter. This button
deletes the keyframes from all associated parameters at the
current frame.
To delete all keyframes for a parameter, right-click the Delete
Keyframe button, then choose Delete All Keys from the
shortcut menu.
Lock Direction–Off Allows dragging of onscreen controls in both the X and Y
directions.
Lock Direction to X Allows dragging of onscreen controls in the X direction only. To
enable, click and hold the Lock Direction button, then choose
this button from the pop-up menu.
Lock Direction to Y Allows dragging of onscreen controls in the Y direction only. To
enable, click and hold the Lock Direction button, then choose
this button from the pop-up menu.
Onscreen Color Control Click this swatch to change the color of the onscreen controls.
Path Display–Path and
Keyframe
Displays the motion path and the keyframe positions in the
Viewer. You can select and move the keyframes onscreen.
Path Display–Keyframe Displays only the keyframe positions in the Viewer. To access
this mode, click and hold the Path Display button, then choose
this button from the pop-up menu.
Path Display–Hide The motion path and keyframes are not displayed in the
Viewer. To access this mode, click and hold the Path Display
button, then choose this button from the pop-up menu.
Transform Controls
The most commonly used transform node is Move2D. The Move2D node combines the
controls of the Rotate, Pan, and Scale nodes into a single operation. These nodes share a
number of onscreen transform controls that enable you to manipulate images directly
in the Viewer.
Pan
The pan controls provide two ways to move the image within the Viewer. Drag
anywhere within the image bounding box to reposition the image freely. Drag the
vertical arrowhead to restrict position changes to the yPan parameter. Drag the
horizontal arrowhead to restrict position changes to the xPan parameter.
You can also press Q or P while dragging anywhere within the Viewer to pan an image
without having to position the pointer directly over it.
Scale
Drag any of the corner controls to scale the entire image up or down, affecting the
xScale and yScale parameters. Drag the top or bottom controls to scale the image
vertically, affecting only the yScale parameter. Drag the left or right controls to scale
the image horizontally, affecting only the xScale parameter.
Drag the center control to move the point about which scaling is performed, affecting
the xCenter and yCenter parameters.
There is an additional method you can use to scale images in the Viewer.
To scale an image in the Viewer without using the scale handles:
1 Select an image.
2 With the pointer positioned over the Viewer, press E or I.
3 When the dimension pointer appears, drag in the direction you want to scale the image.
The colors in the dimension pointer correspond to the pan controls.
When you drag, the dimension pointer turns into an axis arrow, indicating the
dimension in which you are scaling the image: horizontal, vertical, or diagonal.
Rotate
Drag the blue rotate control to rotate the image about the center point, affecting the
angle parameter. Drag the white center control to move the center point itself,
affecting the xCenter and yCenter parameters.
To rotate an image without positioning the pointer directly over it, press W or O, then
drag in the Viewer. The dimension pointer appears, allowing you to rotate the image
in any direction.
Move2D
The Move2D node combines the onscreen transform controls for the Pan, Scale, and
Rotate nodes into a single, all-purpose transformation node.
After an image is rotated with the Move2D node, the horizontal and vertical panning
controls (arrowheads) lock movement to the new orientation of the image.
Move3D
Similar to the Move2D node, the Move3D node adds three colored dimensional angle
controls to control xAngle (red), yAngle (green), and zAngle (blue) parameters. This lets
you simulate 3D transformations with 2D images.
As with the Move2D node, when an image is panned with the Move3D node, the
horizontal and vertical panning controls lock movement to the new orientation of the
image, instead of the absolute orientation of the Viewer frame.
There are keyboard shortcuts (identical to those used within the MultiPlane node) that
you can use to manipulate images without having to position the pointer directly over
them. For more information, see “Transforming Individual Layers” on page 500.
Note: Unlike similar controls available for layers connected to the MultiPlane node, the
constrained pan controls do not move the image forward or back if the yAngle is tilted.
To manipulate images within a more fully featured 3D environment, connect them to a
MultiPlane node, and use the available controls. For more information, see Chapter 18,
“Compositing With the MultiPlane Node,” on page 485.
Crop
This onscreen transform control, available in the Crop node, lets you drag any corner to
crop two sides of an image at once. Drag any outside edge to crop that edge by itself.
This affects the cropLeft, cropBottom, cropRight, and cropTop parameters.
Drag the crosshairs at the center to move the entire crop box (simultaneously affecting
all four crop parameters), while the image remains in place.
CornerPin
This onscreen transform control, available in the CornerPin node, lets you drag any
corner to distort the image relative to that corner. Drag any edge to distort the image by
moving both corners on that side. Drag the crosshairs to reposition the entire image.
Onscreen Controls Across Multiple Transformations
If you apply multiple transformations to an image, all downstream onscreen controls
are transformed along with the image. This lets you accurately visualize that control’s
effect on the image.
Note: This is also true for the onscreen controls of other nodes, like the RGrad and
Text nodes.
In the following example, an RGrad node is connected to a CornerPin node. The
CornerPin node is used to place the RGrad in perspective.
In the screenshot above, the image output by the CornerPin node is loaded in the
Viewer, but the parameters of the RGrad node above it are loaded in the Parameters tab.
As a result, the RGrad controls inherit the perspective shift from the CornerPin node.
Displaying Multiple Versions of the Same Control
If you create a node tree where several versions of the same node are visualized,
transform controls are displayed for each copy of the node. This can be used to your
advantage, to provide you with multiple perspectives of the same control, allowing for
precise manipulation of the image.
In this example, the CornerPin node is composited over the original RGrad node.
As shown in the above image, manipulating the RGrad1 node while viewing the Over1
node results in the display of multiple controls. Changes made with one control modify
both. To break this link, copy the original RGrad node and connect it to the new node.
Important: The MatchMove node is the only exception to this behavior. For nodes
above the MatchMove node, the onscreen controls appear without transformation.
Scaling Images and Changing Resolution
There are two types of scaling functions:
• Functions that scale the size of the image in the frame, but do not change the actual
resolution (Scale, Move2D)
• Functions that scale the size of the image in the frame, and also change the output
resolution (Resize, Zoom, Fit)
Additionally, the Crop, Window, and ViewPort nodes can be used to change the image
resolution by cutting into an image or expanding its borders.
Finally, the SetDOD node crops in an area of interest, called the Domain of
Definition (DOD).
The following tables discuss the differences between the scaling functions.
Node           Changes     Changes      Breaks Infinite  Changes Relative
               Resolution  Pixel Scale  Workspace        Aspect Ratio
Scale, Move2D  No          Yes          No               Yes
Crop           Yes         No           Yes              No
Window         Yes         No           Yes              No
ViewPort       Yes         No           No               No
SetDOD         No          No           Yes              No
Resize         Yes         Yes          No               Yes
Fit            Yes         Yes          Yes              No
Zoom           Yes         Yes          No               Yes
Node           Example Parameters   Notes
(No node)      Resolution =         The unmodified image.
               100 x 100 pixels
Scale, Move2D  xScale = .5          Scale is a subset of the Move2D function.
               yScale = .5          There is no processing speed increase when
                                    using Scale instead of Move2D.
Scale, Move2D  xScale = 1.6
               yScale = 1.6
Crop           -33, -33, 133, 133   When using the default parameters (0, 0,
                                    width, height), Crop breaks the Infinite
                                    Workspace, and resets the color beyond the
                                    frame to black.
Crop           33, 33, 67, 67
Window         -33, -33, 166, 166   Window is identical to Crop, except that
                                    you specify the output resolution in the
                                    third and fourth parameters.
Window         33, 33, 34, 34
ViewPort       33, 33, 67, 67       ViewPort is identical to Crop, except that
                                    it does not cut off the Infinite Workspace,
                                    and is therefore primarily used to set a
                                    resolution.
SetDOD         33, 33, 67, 67       Used to limit the calculation area of the
                                    node tree to within the DOD; considerably
                                    speeds up renders.
Resize         72, 48
Fit            72, 48               Fit and Resize are identical, except that
                                    Fit preserves the pixel aspect ratio,
                                    padding the edges with black.
Zoom           .72, .48             Zoom and Resize are identical, except that
                                    Zoom takes a scaling factor and Resize
                                    takes a resolution for its parameters.
Creating Motion Blur in Shake
Motion blur can be applied to any animated transformation. Each transform node has
its own motion blur settings, so you can fine-tune each node’s effect individually. There
is also a global set of motion blur parameters that adjusts or replaces the existing
values, depending on the parameter.
Note: You can also use the global motion blur parameter to temporarily turn motion
blur off.
You can control the quality of the blur using the motionBlur slider, found in the
Parameters tab of most transform nodes.
• 0 disables motion blur for that operation.
• 0.5 = good quality.
• 1 = excellent quality.
• Higher values may be used for even better quality.
Opening the motionBlur subtree reveals two additional parameters that control the
look of the resulting motion blur:
shutterTiming
This parameter controls the duration of the blur. By default, it goes from the current
frame to halfway toward the next frame, which is a value of .5, equivalent to 180
degrees of camera shutter. Real-world film cameras are generally at 178 degrees.
shutterOffset
This parameter controls the frame at which the shutter opening is considered to start.
This value is 0 by default, so it starts at the current frame and calculates the motion
that occurs up until the next frame. When set to -1, the motion from the previous frame
up to the current frame is calculated.
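Taken together, the two parameters define the slice of time sampled for the blur at each frame. A small illustration follows (a hypothetical helper, not a Shake function), under one plausible reading of how shutterOffset shifts the window:

```python
# Illustrative helper (not part of Shake): the interval of motion sampled
# for blur at frame t, for a given shutterTiming and shutterOffset.
def shutter_interval(t, shutter_timing=0.5, shutter_offset=0.0):
    start = t + shutter_offset              # shutterOffset shifts the window
    return (start, start + shutter_timing)  # shutterTiming sets its length

print(shutter_interval(10))            # (10.0, 10.5): a 180-degree shutter
print(shutter_interval(10, 1.0, -1))   # (9, 10.0): previous frame to current
```

With the defaults, frame 10's blur samples motion from frame 10 to frame 10.5; an offset of -1 moves the whole window back one frame.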
Multiple Elements With Independent Motion Blur
Sometimes you have to place multiple Move2D nodes somewhat counterintuitively in
order to take advantage of transform node concatenation. For example, an Over node
placed between two Move2D nodes breaks concatenation. To get around this, apply
Move2D nodes one after the other, then use the Over node afterwards.
Motion Blur for Concatenated Nodes
When multiple transform nodes are concatenated, the highest blur settings from that
collection of nodes is used for the overall calculation.
In the following example, two elements are composited together to simulate a car
moving forward with spinning wheels: an image of a car body and a single wheel
graphic. The wheel graphic is used twice, once for the back
wheel and once for the front wheel. The colored dots on the wheels will illustrate the
improper and proper arrangements of nodes necessary to produce realistic motion blur
for this simulation.
In the following node tree, a Move2D node (renamed SpinWheel) is animated to rotate
the wheel. The PositionFrontWheel node is a non-animated Move2D that pans a copy of
SpinWheel to the front wheel position. The two wheel nodes are composited together
(RearWheelOverFront), and the result is composited over the car body (WheelsOverCar).
To animate the car forward, a Move2D node (MoveCar1) is applied, which pans the
entire image to the right.
The result is inaccurate when motion blur is applied. This is because the SpinWheel
node applies the blur for the turning wheel, and then three nodes later, the MoveCar
node applies the blur to the already-blurred wheels. Instead of the individual paths of
the color dots on the wheels, the result is a horizontal smear.
The following image shows the WheelsOverCar node immediately before the entire car
is panned. The result is bad because the wheels are blurred without consideration of
the forward momentum added in a later node.
To correct this, it is more efficient to apply three Move2D nodes to pan the three
elements separately, and then composite them together. In the following tree, the
MoveCar1, MoveCar2, and MoveCar3 nodes are identical, and MoveCar2 and MoveCar3
are clones of MoveCar1.
Note: To create a cloned node, copy the node (press Command-C or Control-C) and
clone the node using the Paste Linked command in the Node View shortcut menu (or
press Command-Shift-V or Control-Shift-V).
Because the SpinWheel and MoveCar1 nodes are transformations, these nodes
concatenate. The SpinWheel, PositionFrontWheel, and MoveCar2 nodes also concatenate.
The result is three transformations, the same amount as the previous tree, but with an
accurate blur on the wheels.
Applying Motion Blur to Non-Keyframed Motion
Ordinarily, motion blur only appears when you keyframe a parameter of a transform
node in order to create Shake-generated motion. If you simply read in an image
sequence with a moving subject within the frame, such as a man running along a road,
Shake will not create any motion blur, since nothing is actually animated.
To create motion blur for such non-keyframed elements, you can apply a smear effect
using a Move2D node. This is done by turning on the useReference parameter (at the
bottom of the Parameters tab) and setting referenceFrame (in the useReference
subtree) to “time” (the default).
The following example uses a previously rendered clip of a swinging pendulum.
To add blur to a non-animated object with the useReference parameter:
1 Locate the center of rotation for the pendulum, then type the center values in the
Move2D center field.
2 Approximate the rotation, then animate the angle to match the rotation.
In this example, the angle is -40 at frame 5 and 40 at frame 24.
3 Type the expression “time” into the referenceFrame value field (in the useReference
subtree).
4 Set motionBlur to 1.
When the animation is played back, the entire element swings out of frame due to the
new animation.
5 Enable useReference (set it to 1).
The element remains static, but the blur is still applied as if it were moving.
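One way to picture useReference with referenceFrame set to “time”: the net transform at frame t is the animated transform composed with the inverse of itself at the reference time. That composition is the identity at the reference frame, so nothing moves, yet the transform still changes across the shutter interval, which is what produces the smear. The following NumPy sketch of that idea is illustrative only (not Shake's implementation), using the -40-to-40-degree animation from the example:

```python
import numpy as np

def rot(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

def angle(t):
    # Keyframed animation from the example: -40 at frame 5, 40 at frame 24.
    return -40 + 80 * (t - 5) / 19.0

def effective(t, ref):
    # Animated transform composed with its own inverse at the reference time.
    return rot(angle(t)) @ np.linalg.inv(rot(angle(ref)))

# At the reference frame the net transform is the identity (no movement)...
assert np.allclose(effective(12, 12), np.eye(3), atol=1e-12)
# ...but across a half-frame shutter it still changes, producing the blur.
assert not np.allclose(effective(12.5, 12), np.eye(3))
```

This is why the pendulum stays put while still receiving the blur of its approximated swing.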
For a lesson on this subject, see Tutorial 4, “Working With Expressions,” in the Shake 4
Tutorials book.
The AutoAlign Node
The AutoAlign node is unique among the various transform nodes in that it can take
multiple image inputs and combine them into a single output, similar to a layering
node. This node is designed to take two or three images or image sequences that
overlap, and align them in different ways to create a single, seamless output image.
Unlike similar photographic tools, the AutoAlign node works with both stills and image
sequences. As a result, you could film three side-by-side shots of an expanse of action,
and later turn these into a single, extremely wide-angle background plate. Similarly,
you can set a single still image to align with the same image features in a second shot
with camera motion.
Stitching Images Together
When you connect two or three overlapping images or image sequences to the
AutoAlign node, they’re assembled into a single composite panoramic image. The input
images can be either vertically or horizontally oriented—the AutoAlign node
automatically determines the areas of overlap and aligns them appropriately.
Note: The order in which you connect images to the AutoAlign node doesn’t matter;
their position is automatically determined.
For example, if you connect the following three side-by-side shots of a skyline to the
inputs of an AutoAlign node:
The overlapping areas are analyzed, matched, and the images are repositioned and
manipulated to fit together.
As you can see in the above example, the resulting image may have an irregular border,
depending on the amount and position of overlap, and the warping required to
achieve alignment. If necessary, the border can be straightened with a Crop node.
Aligning Overlapping Images
You can also use this node to align two images that almost completely overlap. An
obvious use of this is to align an image with something in the frame that you want to
remove with a second “clean plate” image. After you’ve aligned the two images, you
can use the aligned output image as a background plate for use in a paint or roto
operation to remove the unwanted subject.
The primary advantage of the AutoAlign node in this scenario is that it allows you to
precisely align a clean plate image to an image sequence in two difficult types of
situations:
• If the clean plate image is from a different source, or has different framing or other
characteristics that make it difficult to align using regular transformations
• If the target image is in motion, and alignment is difficult using regular trackers
For example, if you have the following shot of a man jumping across a bottomless
chasm, with the camera moving dramatically to follow the action, it may be difficult to
line up a still image clean plate.
Moving camera with unwanted safety rigging
Still image clean plate
Using the AutoAlign node to align the second image with the first, you can quickly
match the clean plate still to the moving image sequence, and then output the newly
aligned and animated clean plate for use by a paint or rotoscoping operation to paint
out the safety line.
Blending at the Seams
Two parameters help to automatically manage the seams between images that have
been aligned with the AutoAlign node:
• blendMode automatically adjusts the overlapping border between two aligned
images, intelligently selecting which portions of each image to retain for a seamless
result.
• matchIllum automatically adjusts the brightness of each aligned image so that the
exposure matches.
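The difference between no blending and simple feathering can be sketched numerically. The following Python example is a conceptual illustration only (feather_blend is a hypothetical helper, not Shake's implementation); it feathers the overlapping border between two rows of pixel values, the way a mix-style blend does:

```python
def feather_blend(a, b, overlap):
    """Conceptual mix-style seam blend of two rows of pixel values.

    `a` and `b` are lists of brightness values; the last `overlap`
    samples of `a` cover the same area as the first `overlap` of `b`.
    A linear ramp cross-fades from one image to the other."""
    left = a[:-overlap]
    right = b[overlap:]
    seam = []
    for i in range(overlap):
        t = (i + 0.5) / overlap  # ramp from ~0 to ~1 across the seam
        seam.append((1 - t) * a[len(a) - overlap + i] + t * b[i])
    return left + seam + right

# Two "images" that overlap by 4 samples, with different exposures:
row_a = [0.8] * 8
row_b = [0.6] * 8
print(feather_blend(row_a, row_b, 4))
```

With blending disabled (the none behavior), the seam would instead show an abrupt jump from 0.8 to 0.6.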
AutoAlign Limitations
Successful use of the AutoAlign node is highly dependent on the content of the input
images, and results will vary widely from shot to shot. In general, moving images with
a great deal of perspective and parallax can be difficult for the AutoAlign node to
analyze properly, and may cause unexpected results.
Figure captions: blendMode set to none; blendMode set to smartBlend.
AutoAlign Image Requirements
Although the AutoAlign node is a very flexible tool, it produces the best results with
material that is produced with this node in mind.
• There should be at least a 15-to-20-percent overlap between any two images for the
AutoAlign node to work properly (the amount that’s necessary may vary depending
on the image).
• You may still be able to line up images successfully even when limited regions within
each image don’t match (for example, clouds, waves, pedestrians, or birds), although
such mismatched regions may affect the overall quality of the resulting alignment. If
you can arrange for it in advance, exact matches from simultaneously captured
images are usually preferable.
• The warping performed by the AutoAlign node is designed to align multiple
panoramic images that have been shot by dollying the camera laterally. Multiple
images created by panning the camera from a single position may result in
undesirable curvature in the final result.
• Interlaced images must be deinterlaced prior to analysis.
• When aligning image sequences, the AutoAlign node aligns the input frames in time
according to each image sequence’s timeShift parameter, as represented in the Time
View. Offsetting an image sequence in the Time View offsets which frames will be
aligned with one another.
• When aligning overlapping images, an object or person in one image that is not in
the other may prevent successful analysis if it occupies too large an area.
• Images with an animated width or height cannot be aligned.
Example 1: A Procedure for Simple Image Alignment
This example covers how to set up a panoramic stitching operation using the AutoAlign
node.
To automatically align two or three images:
1 In the Time View, adjust the offsets of the image sequences that you intend to use.
2 Create an AutoAlign node, and attach up to three images to it.
The order in which you attach them doesn’t matter.
3 Load the AutoAlign node’s parameters into the Parameters tab.
4 If necessary, set the analysisRange to the frame range you want to analyze.
5 If you’re analyzing images for the first time, set the mode parameter to precise.
If you’re not satisfied with the result later in the operation, change the mode to robust
and re-analyze the images.
Image Stabilization Within the AutoAlign Node
You must select one of the input images to be used as the positional reference for
AutoAlign’s analysis, using the lockedPlate pop-up menu. If the shot you want to
designate as the lockedPlate is not already steady, the other input images will be
transformed to match the motion in this shot.
6 Click the “analyze” button.
Shake steps through each frame in the image sequences and analyzes each set of
aligned images at every frame in the designated analysisRange.
Note: The analysis can be interrupted at any time by pressing Esc.
7 Once the analysis has concluded, set clipLayer to All in order to expand the Region of
Interest (ROI) to the overall width of the newly generated panorama.
8 Choose an input layer from the lockedPlate pop-up menu.
For purposes of creating a panorama, you should use an image that’s either already
stable, or one that has been locked down using the SmoothCam or Stabilize node.
Note: If the combined image is very wide, it may also be a good idea to set the
centermost image as the lockedPlate.
9 To output the entire panorama as a single image, set outputFrame to All.
10 If the seams between each aligned image are visible, set the blendMode parameter to
smartBlend to make them invisible.
11 If the aligned images have different exposures, turn on matchIllum.
If you can still see differences in color or exposure, you can attach additional
color-correction nodes between the upstream image and the AutoAlign node to make the
necessary adjustments.
Example 2: Aligning a Clean Plate Image With a Moving Shot
This example shows how to align a clean plate still image with an image sequence in
which the camera is moving.
To align a clean plate image with an overlapping image sequence:
1 Connect the clean plate image and the moving shot (in this example, the mountaineer
image) to an AutoAlign node.
The order in which they are connected is not important.
2 Use the clipLayer and lockedPlate pop-up menus to choose the input to which the
clean plate image is connected.
In this example, you’ll be choosing Input1. Leave the mode at the default setting of
Precise.
3 Set the analysisRange to the number of frames you want to align.
This parameter defaults to the maximum number of frames within the longest image
sequence that’s connected to the AutoAlign node.
4 Click “analyze.”
Shake begins the image analysis. The first frame may take longer than the other frames
in the analysis. The analysis should speed up after the first frame. As the analysis is
performed, the playhead steps through the Time Bar frame by frame, creating a
keyframe at each analyzed frame.
Note: The analysis can be interrupted at any time by pressing Esc.
5 Once the analysis has concluded, change blendMode to mix and scrub through the
Time Bar to see how well the resulting alignment works.
In the case of two images that almost completely overlap, the Mix setting results in a
50 percent blend of both images. In the above image, the ripples in the snow appear
to align perfectly.
6 In order to use this result in a paint operation, set the following parameters:
a Set clipLayer to Input2, so that the moving shot defines the resolution of the output
image.
b Set lockedPlate to Input2 as well, so that the clean plate image moves along with the
background.
c Set outputFrame to Input1, and set blendMode to none, so that the only image that
is output is the newly animated and aligned clean plate image.
The resulting output image is a transformed version of the clean plate image that
matches the position of the overlapping features in the mountaineer image.
Figure captions: Original clean plate image; Auto-aligned clean plate image.
7 The auto-aligned clean plate can now be used in paint or rotoscoping operations to
remove unwanted rigging from the actor in the mountaineer shot. For example, you
can connect the original Mountaineer image to the first input of a QuickPaint node, and
the output from the AutoAlign node to the second input.
8 With this setup, you can use the Reveal Brush to paint out the rigging against the
backdrop of the clean plate.
Figure captions: Original image; Reveal Brush exposing auto-aligned image.
AutoAlign Parameters
The AutoAlign node displays the following controls in the Parameters tab:
analyze
Click this button to perform the analysis that is the first step in aligning two or three
images. This analysis creates a preprocessed data set that is used to perform the actual
alignment. Images only have to be analyzed once, and the resulting transform data is
stored inside the Shake script. The other parameters in the node act upon this
transformation data—changing the parameters has no effect on the analysis data.
If, at any time, you’re not satisfied with the results of the analysis, you can change the
mode and re-analyze the analysisRange.
analysisRange
The range of frames you want to process. By default this parameter inherits the
maximum number of frames in the FileIn node at the top of the tree.
processedRange
A non-editable display field that shows the number of frames that have already been
analyzed. When you first create an AutoAlign node, this parameter reads “.”
Opening the processedRange subtree reveals the following subparameters:
• confidence: The confidence value indicates the level of matching accuracy that
resulted from the analysis. Each frame’s confidence is restricted to one of three
possible values. You can load the confidence parameter into the Curve Editor to
quickly find problem areas in the analysis. The values are:
• 1: Indicates an analysis in which Shake has high confidence. A keyframe has been
generated at these frames.
• 0.5: Indicates an uncertain analysis. Despite the uncertainty, Shake has generated a
keyframe at these frames.
• 0: Indicates that Shake has no confidence in the analysis. No keyframe has been
generated at these frames.
• mMatrix, nMatrix: These parameters contain the data accumulated by clicking the
“analyze” button.
mode
Two options—precise and robust—let you change the method used to perform the
analysis. In general, set this mode to precise the first time you analyze the input
images. If the results are not satisfactory, change the mode to robust, then re-analyze.
clipLayer
Lets you choose which image defines the resolution of the output image—Input1,
Input2, Input3, or All, which sets the resolution to the final size of the aligned and
merged images.
lockedPlate
This parameter lets you choose which input image to use as a positional reference. The
other two input images will be stabilized and aligned to match the position of the
lockedPlate. The lockedPlate also affects how the other two images are warped to
match the overall perspective of the final image. If the final image is going to be a
wide-angle shot, it may be a good idea to set the center image as the lockedPlate.
• lockedPlateOffsetX, lockedPlateOffsetY: These subparameters of lockedPlate contain
the transform keyframes used to align the plates. Changing the lockedPlate also
changes the keyframe set displayed by these parameters.
outputFrame
Lets you choose which image is output by the AutoAlign node. The options are
identical to those of the clipLayer parameter.
blendMode
This parameter provides options for automatically blending the seams where the input
images meet. There are three choices:
• none: The seams are not blended.
• mix: Both sides of each seam are simply feathered together.
• smartBlend: The highest quality mode. The best pixels from either side of each seam
are used to patch together a seamless image.
matchIllum
Turning this parameter on automatically adjusts the brightness of the input images so
that the lighting is consistent across the entire output image.
Note: If necessary, you can preprocess images connected to the AutoAlign node with
other color-correction nodes to even out differences in gamma and contrast that aren’t
addressed by the matchIllum parameter.
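A matchIllum-style adjustment can be approximated as a gain that equalizes mean brightness between plates. The following Python sketch is an assumption for illustration only; Shake's actual matchIllum analysis is more sophisticated:

```python
def match_illum(source, target):
    """Naive illumination match (illustrative assumption, not Shake's
    algorithm): scale `source` pixel values by a single gain so that
    their mean brightness equals the mean brightness of `target`."""
    gain = (sum(target) / len(target)) / (sum(source) / len(source))
    return [p * gain for p in source]

dark = [0.2, 0.3, 0.4]    # underexposed plate
bright = [0.4, 0.6, 0.8]  # reference plate
matched = match_illum(dark, bright)
print(matched)
```

Differences in gamma or contrast survive a pure gain adjustment, which is why the note above suggests additional color-correction nodes for those cases.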
The Transform Nodes
In addition to the AutoAlign node, Shake features numerous other transform nodes. The
following section includes information on the transform nodes, which are located in
the Transform Tool tab.
For information on the transform nodes that are used for tracking and stabilization
(MatchMove, Stabilize, and Tracker), see Chapter 25, “Image Tracking, Stabilization, and
SmoothCam,” on page 717.
For information on the transform functions that affect image resolution (Fit, Resize, and
Zoom), see “Nodes That Affect Image Resolution” on page 183.
For information on cropping functions (AddBorders, Crop, Viewport, and Window), see
“Cropping Functions” on page 186.
CameraShake
The CameraShake node is a macro that applies noise functions to the pan values of
Move2D. Because it is a good example of how to use noise, it’s worth examining the
actual macro parameters. A Scale node is usually appended after a CameraShake node
to compensate for the black edges that appear. Because neighboring transformations
concatenate, this does not double-filter your image, so speed and quality are
maintained.
Parameters
This node displays the following controls in the Parameters tab:
xFrequency, yFrequency
The x and y frequency of the shake. Higher numbers create faster jitter.
xAmplitude, yAmplitude
The maximum number of pixels the element is moved by a single camera shake.
seed
When Shake generates a random pattern of values, you need to be able to re-create
the same random pattern a second time for compositing purposes. In other words, you
want to be able to create different random patterns, evaluating each one until you find
the one that works best; once you do, you don’t want that particular random pattern
to change again.
Shake uses the seed value as the basis for generating a random value. Using the same
seed value results in the same random value being generated, so that your image
doesn’t change every time you re-render. Use a single value for a static result, or use
the keyword “time” to create a pattern of random values that changes over time.
For more information on using random numbers in expressions, see “Reference Tables
for Functions, Variables, and Expressions” on page 941.
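The repeatable-pattern behavior described above can be sketched in Python. The shake_offset helper below is hypothetical (it is not part of Shake); it only illustrates why reusing a seed reproduces the same jitter, while seeding with the frame number varies the pattern over time:

```python
import random

def shake_offset(seed, frame):
    """Pseudo-random camera-shake offset for one frame (hypothetical
    illustration). Seeding the generator with the same seed and frame
    reproduces identical jitter on every render, which is why a fixed
    seed value gives a stable result."""
    rng = random.Random(seed * 100003 + frame)  # mix seed and frame into one int
    return rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)

# The same seed always yields the same offsets for a given frame:
assert shake_offset(42, 10) == shake_offset(42, 10)
# Varying the seed with the frame number changes the pattern over time,
# analogous to using the keyword "time" as the seed:
print([shake_offset(42, f) for f in range(3)])
```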
motionBlur
Motion Blur quality level. 0 produces no blur, whereas 1 represents standard
filtering. For more speed, use a value less than 1. This value is multiplied by the global
motionBlur parameter.
• shutterTiming: A subparameter of motionBlur used to specify shutter length. 0 is no
blur, whereas 1 represents a whole frame of blur. Note that standard camera blur is
180 degrees, or a value of .5. This value is multiplied by the global shutterTiming
parameter.
• shutterOffset: A subparameter of motionBlur. This is the offset from the current frame
at which the blur is calculated. Default is 0; previous frames are less than 0.
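The way shutterTiming and shutterOffset position the blur interval can be sketched numerically. The exact placement of the interval relative to the current frame in this sketch is an assumption for illustration, not a documented Shake convention:

```python
def shutter_samples(frame, shutter_timing, shutter_offset, n=5):
    """Time samples across a shutter interval for motion blur
    (illustrative assumption about interval placement).

    shutter_timing=0.5 approximates a standard 180-degree film
    shutter (half a frame of blur); shutter_offset shifts where
    the interval starts relative to the current frame."""
    start = frame + shutter_offset
    return [start + shutter_timing * i / (n - 1) for i in range(n)]

# A 180-degree shutter at frame 10 samples half a frame of motion:
print(shutter_samples(10, 0.5, 0))
```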
CornerPin
The CornerPin node can be used to push the four corners of an image into four
different positions, or to extract four positions and place them into the corners. The first
use is ideal for positioning an image in an onscreen television, for example. The second
mode is handy for extracting texture maps, among other things. The four coordinate
pairs start at the lower-left corner of the image (x0, y0), and work around in a
counterclockwise direction to arrive at the upper-left corner of the image (x3, y3).
To perform an “unpin” (push four points on the image to the four corners of the image),
switch the Update mode to “manual,” position the four points, enable inverseTransform,
and click Update.
If the result is too blurred, lower the anti-aliasing value. For information on four-point
tracking, see Chapter 25, “Image Tracking, Stabilization, and SmoothCam,” on page 717.
Note: You can also use the Move3D node for perspective shift.
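The warp a corner-pin performs is a planar projective transform (a homography), fully determined by the four corner positions. The following pure-Python sketch shows the underlying math, not Shake's code: it solves for the transform that pins a rectangle's corners to four target points, then applies it to a point:

```python
def homography(src, dst):
    """Solve for the 3x3 projective transform H mapping four source
    corners to four destination corners (the math behind a corner-pin).
    src/dst are lists of four (x, y) pairs."""
    # Build the standard 8x8 linear system A*h = b for the eight
    # unknown entries of H (H[2][2] is fixed at 1).
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    # Gaussian elimination with partial pivoting.
    n = 8
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def apply_h(H, x, y):
    """Apply H to a point, including the perspective divide."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Pin the corners of a 100x100 image into a "television screen";
# corners are listed counterclockwise from the lower left, as in
# the x0..x3/y0..y3 parameters:
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(10, 20), (90, 10), (85, 95), (15, 80)]
H = homography(src, dst)
```

Inverting H (the inverseTransform behavior) would instead pull the four arbitrary positions back out to the image corners, the "unpin" case described above.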
Parameters
This node displays the following controls in the Parameters tab:
x0, y0, x1, y1, x2, y2, x3, y3
Eight parameters controlling the position of each corner in the corner-pin operation.
xFilter, yFilter
A pop-up menu that lets you pick which method Shake uses to transform the image.
For more information, see “Filters Within Transform Nodes” on page 862.
inverseTransform
A button that inverts the transform. In this case, it puts the four corners into the four
coordinates (0, or pinning), or pulls the four coordinates to the corners (1, or
unpinning).
antialiasing
Individual antialiasing. A value of 0 produces a sharper, less blurred result.
motionBlur
Motion Blur quality level. 0 is no blur, whereas 1 represents standard filtering. For more
speed, use less than 1. This value is multiplied by the global motionBlur parameter.
• shutterTiming: A subparameter of motionBlur used to specify shutter length. 0 is no
blur, whereas 1 represents a whole frame of blur. Note that standard camera blur is
180 degrees, or a value of .5. This value is multiplied by the global shutterTiming
parameter.
• shutterOffset: A subparameter of motionBlur representing the offset from the current
frame at which the blur is calculated. Default is 0; previous frames are less than 0.
useReference
Used to apply motion blur to previously animated elements. See Tutorial 4,
“Working With Expressions,” in the Shake 4 Tutorials.
Crop
For information on the Crop node, see “Crop” on page 186.
Fit
For information on the Fit node, see “Fit” on page 183.
Flip
The Flip node flips an image upside down. You can also use the Scale or Move2D node
to invert the image by setting the yScale value to -1. There are no parameters for the
Flip node.
Note: Flip (or a yScale of -1) forces the buffering of the image into memory.
Flop
The Flop node flops the image left and right. Unlike the Flip node, this does not buffer
the image into memory. You can also use the Scale or Move2D node to flop the image
by setting the xScale value to -1. There are no parameters for the Flop node.
MatchMove
For information on the MatchMove node, see “MatchMove” on page 740.
Move2D
The Move2D node combines many of the other transform nodes, including Pan, Scale,
Shear, and Rotate. The xCenter and yCenter parameters apply to both scaling and
rotation centers. If you need different centers, you can append a second node (Move2D,
Rotate, or Scale) and switch scaling or rotation to that node. This does not cost you any
process time, because Shake concatenates neighboring transforms into one big
transform. Therefore, you lose neither quality nor calculation time.
Also, Shake’s Infinite Workspace comes into play when you have two transforms
together. (For example, one transform rotates the image, and the second transform
rotates the image back, without clipping the corners.) In terms of workflow, this means
that when you pan small elements that are later composited onto larger-resolution
backgrounds, you do not have to crop them out to create a larger space.
Parameters
This node displays the following controls in the Parameters tab:
xPan, yPan
These values let you move the image horizontally and vertically. 0, 0 represents the
center of the frame, and the xPan and yPan sliders allow adjustments up to plus or
minus the total width and height of the image frame.
angle
Lets you rotate the image around its center point. The slider allows adjustments plus or
minus 360 degrees, but you can enter any value you want into the numeric field.
aspectRatio
This parameter inherits the current value of the defaultAspect global parameter. If
you’re working on a nonsquare pixel or anamorphic image, it is important that this
value is set correctly to avoid unwanted distortion in the image.
xScale, yScale
These parameters let you change the horizontal and vertical scale of the image. By
default, the yScale parameter is locked to the xScale parameter. The sliders let you
adjust the scale of the image from 0 to 4 times the current size, but you can enter any
value into the numeric field.
Note: Entering a negative value into the xScale or yScale numeric field reverses the
image along that axis.
xShear, yShear
These parameters let you shear the image horizontally and vertically. The sliders let you
adjust the shearing of the image anywhere between 0 and 1 (1 creates 45 degrees of
shearing), but you can enter any value you want into the numeric field.
xCenter, yCenter
These parameters let you move the horizontal and vertical position of the center point
around which all transformations occur. For example, moving the center point to the
right changes the point about which the image rotates when you adjust the angle
parameter.
xFilter, yFilter
Lets you pick which method Shake uses to transform the image. For more information,
see “Filters Within Transform Nodes” on page 862.
transformationOrder
The order in which the transform operations are executed:
• t = translate
• r = rotate
• s = scale
• x = shear
By default, this is set to “trsx.”
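The order matters because transforms compose as matrices, and matrix multiplication is not commutative. A minimal Python sketch (a conceptual illustration, not Shake's internal representation) shows that translate-then-rotate and rotate-then-translate land a point in different places:

```python
import math

def translate(tx, ty): return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]
def rotate(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]
def scale(sx, sy): return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def mul(A, B):
    """3x3 matrix product A*B; applied to a point, B acts first."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def point(M, x, y):
    """Apply an affine 3x3 matrix to the point (x, y)."""
    return (M[0][0] * x + M[0][1] * y + M[0][2],
            M[1][0] * x + M[1][1] * y + M[1][2])

# Rotate 90 degrees then translate, versus translate then rotate:
tr = mul(translate(10, 0), rotate(90))  # rotation acts first
rt = mul(rotate(90), translate(10, 0))  # translation acts first
print(point(tr, 1, 0), point(rt, 1, 0))
```

Because the two results differ, changing transformationOrder changes where the image ends up even with identical parameter values.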
inverseTransform
Inverts the transform. This can be used to quickly convert tracking data into
stabilization data.
motionBlur
Motion Blur quality level. 0 is no blur, whereas 1 represents standard filtering. For more
speed, use less than 1. This value is multiplied by the global motionBlur parameter.
• shutterTiming: A subparameter of motionBlur used to specify shutter length. 0 is no
blur, whereas 1 represents a whole frame of blur. Note that standard camera blur is
180 degrees, or a value of .5. This value is multiplied by the global shutterTiming
parameter.
• shutterOffset: A subparameter of motionBlur representing the offset from the current
frame at which the blur is calculated. Default is 0; previous frames are less than 0.
useReference
Determines whether the transform is actually applied to the image. If it isn’t, and you
have animated values, useReference applies motion blur to the image but does not
actually move it. This is good for adding blur to plates. See below for an example.
• 0 = Move image
• 1 = Smear mode; image is not moved
• referenceFrame: A subparameter of useReference used for image stabilization. By
default, this parameter is set to “time,” that is, in reference to itself. However, if
inverseTransform is set to 1, you are stabilizing, with the assumption that any
animation you have applied matches up to the source animation. Setting the
reference frame locks the movement in to a specific frame.
Note: For an example of using the useReference parameter, see “Applying Motion Blur
to Non-Keyframed Motion” on page 781.
Move3D
The Move3D node allows you to create perspective changes by rotating and visually
moving the image in depth. It is similar in behavior to the Move2D node.
Some Move3D parameters, such as zPan, have no effect unless the fieldOfView
parameter is set to a value greater than 0. The angle of the Z axis is controlled, as in
Move2D, by the angle parameter. There is no shear parameter, but you can rotate the
image in X or Y while keeping fieldOfView at 0 to get orthogonal shearing effects.
Parameters
This node displays the following controls in the Parameters tab:
xPan, yPan, zPan
These values let you move the image horizontally, vertically, and in and out along the Z
axis. 0, 0 represents the center of the frame, and the xPan and yPan sliders allow
adjustments up to plus or minus the total width and height of the image frame.
zPan has no effect unless the fieldOfView value is set to greater than 0.
xAngle, yAngle, zAngle
Lets you rotate the image around its center point, along any axis in three-dimensional
space. The slider allows adjustments plus or minus 360 degrees, but you can enter any
value you want into the numeric field.
aspectRatio
This parameter inherits the current value of the defaultAspect global parameter. If
you’re working on a nonsquare pixel or anamorphic image, it is important that this
value is set correctly to avoid unwanted distortion in the image.
xScale, yScale, zScale
These parameters let you change the scale of the image along any axis. By default, the
yScale parameter is locked to the xScale parameter. The sliders let you adjust the scale
of the image from 0 to 4 times the current size, but you can enter any value into the
numeric field.
Note: Entering a negative value into the xScale, yScale, or zScale numeric field reverses
the image along that axis.
xCenter, yCenter, zCenter
These parameters let you change the position of the center point around which all
transformations occur. For example, moving the center point to the right changes the
point about which the image rotates when you adjust the angle parameter.
fieldOfView
Designates the vertical field of view, and affects the appearance of perspective shift.
When this is 0, the view is considered to be orthogonal, with increasing perspective
changes when you combine zPan, angleX, and angleY with a higher fieldOfView.
xFilter, yFilter
Lets you pick which method Shake uses to transform the image. For more information,
see “Filters Within Transform Nodes” on page 862.
transformationOrder
The order in which the transform operations are executed:
• t = translate
• r = rotate
• s = scale
By default, this is set to “trs.”
Note: If you are trying to transfer camera setups from 3D applications, this is the order
transforms are pushed onto the matrix. The r transform can be replaced with yzx
(representing the three dimensions used by the 3D camera) and the resulting
transformationOrder would read tyzxs.
inverseTransform
Inverts the transform. This can be used to quickly convert tracking data into
stabilization data.
motionBlur
Motion Blur quality level. 0 is no blur, whereas 1 represents standard filtering. For more
speed, use less than 1. This value is multiplied by the global motionBlur parameter.
• shutterTiming: A subparameter of motionBlur used to specify shutter length. 0 is no
blur, whereas 1 represents a whole frame of blur. Note that standard camera blur is
180 degrees, or a value of .5. This value is multiplied by the global shutterTiming
parameter.
• shutterOffset: A subparameter of motionBlur representing the offset from the current
frame at which the blur is calculated. Default is 0; previous frames are less than 0.
useReference
Determines whether the transform is actually applied to the image. If it isn’t, and you
have animated values, useReference applies motion blur to the image but does not
actually move it. This is good for adding blur to plates. 0 = Move image. 1 = Smear
mode (image is not moved).
• referenceFrame: A subparameter of useReference used for image stabilization. By
default, this parameter is set to “time,” that is, in reference to itself. However, if
inverseTransform is set to 1, you are stabilizing, with the assumption that any
animation you have applied matches up to the source animation. Setting the
reference frame locks the movement in to a specific frame.
Orient
The Orient node rotates the image by 90-degree increments, resizing the image frame if
necessary. (For example, for one turn on a 300 x 600 frame, it rotates it 90 degrees, and
makes a 600 x 300 frame.) You can also apply a Flip node and Flop node to the image.
To rotate in degrees, use a Rotate node or a Move2D node.
Parameters
This node displays the following controls in the Parameters tab:
nTurn
Number of 90-degree increments the image is rotated, for example:
• 0 = no rotation
• 1 = 90 degrees turn counterclockwise
• 2 = 180 degrees turn counterclockwise
• 3 = 270 degrees turn counterclockwise
• 4 = no rotation
flop
Flops the image left and right around the Y axis.
• 0 = no flop
• 1 = flop
flip
Flips the image up and down around the X axis.
• 0 = no flip
• 1 = flip
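The Orient operations are easy to express on a small row-major grid of sample values. The following Python sketch is a conceptual illustration, not Shake's implementation; note how one 90-degree turn swaps the frame dimensions, just as Orient resizes a 300 x 600 frame to 600 x 300:

```python
def turn90(img):
    """Rotate a row-major grid 90 degrees counterclockwise;
    a 2x3 grid becomes 3x2."""
    h, w = len(img), len(img[0])
    return [[img[r][w - 1 - c] for r in range(h)] for c in range(w)]

def flop(img):
    """Mirror left/right, around the Y axis."""
    return [row[::-1] for row in img]

def flip(img):
    """Mirror top/bottom, around the X axis."""
    return img[::-1]

img = [[1, 2, 3],
       [4, 5, 6]]
print(turn90(img))  # [[3, 6], [2, 5], [1, 4]]
```

Applying turn90 four times returns the original grid, matching nTurn values 0 and 4 both meaning "no rotation."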
Pan
The Pan node pans the image with subpixel precision. To wrap an image around the
frame (for example, anything that moves off the right edge of the frame reappears on
the left), use the Scroll node.
Parameters
This node displays the following controls in the Parameters tab:
xPan, yPan
These values let you move the image horizontally and vertically. 0, 0 represents the
center of the frame, and the xPan and yPan sliders allow adjustments up to plus or
minus the total width and height of the image frame.
motionBlur
Motion Blur quality level. 0 is no blur, whereas 1 represents standard filtering. For more
speed, use less than 1. This value is multiplied by the global motionBlur parameter.
• shutterTiming: A subparameter of motionBlur used to specify shutter length. 0 is no
blur, whereas 1 represents a whole frame of blur. Note that standard camera blur is
180 degrees, or a value of .5. This value is multiplied by the global shutterTiming
parameter.
• shutterOffset: A subparameter of motionBlur representing the offset from the current
frame at which the blur is calculated. Default is 0; previous frames are less than 0.
Resize
For information on the Resize node, see “Resize” on page 184.
Rotate
The Rotate node rotates the image with subpixel precision.
Parameters
This node displays the following controls in the Parameters tab:
angle
Lets you rotate the image around its center point. The slider allows adjustments plus or
minus 360 degrees, but you can enter any value you want into the numeric field.
aspectRatio
This parameter inherits the current value of the defaultAspect global parameter. If
you’re working on a nonsquare pixel or anamorphic image, it is important that this
value is set correctly to avoid unwanted distortion in the image.
xCenter, yCenter
These parameters let you move the horizontal and vertical position of the center point
around which all transformations occur. For example, moving the center point to the
right changes the point about which the image rotates when you adjust the angle
parameter.
motionBlur
Motion Blur quality level. 0 is no blur, whereas 1 represents standard filtering. For more
speed, use less than 1. This value is multiplied by the global motionBlur parameter.
• shutterTiming: A subparameter of motionBlur used to specify shutter length. 0 is no
blur, whereas 1 represents a whole frame of blur. Note that standard camera blur is
180 degrees, or a value of .5. This value is multiplied by the global shutterTiming
parameter.
• shutterOffset: A subparameter of motionBlur representing the offset from the current
frame at which the blur is calculated. Default is 0; previous frames are less than 0.
Scale
The Scale node scales the image with subpixel precision. To rotate and scale an image,
you probably want to use the two independent nodes, Rotate and Scale, rather than a
Move2D node, because you can control the centers independently. Unlike the Resize
node, Scale does not change the image resolution.
If you scale by a negative number on the Y axis, the image is buffered into memory.
You can also use
the Flip and Flop nodes to invert your image.
Parameters
This node displays the following controls in the Parameters tab:
xScale, yScale
These parameters let you change the horizontal and vertical scale of the image. By
default, the yScale parameter is locked to the xScale parameter. The sliders let you
adjust the scale of the image from 0 to 4 times the current size, but you can enter any
value into the numeric field.
Note: Entering a negative value into the xScale or yScale numeric field reverses the
image along that axis.
aspectRatio
This parameter inherits the current value of the defaultAspect global parameter. If
you’re working on a nonsquare pixel or anamorphic image, it is important that this
value is set correctly to avoid unwanted distortion in the image.
xCenter, yCenter
These parameters let you move the horizontal and vertical position of the center point
around which all transformations occur. For example, moving the center point to the
right changes the point about which the image rotates when you adjust the angle
parameter.
motionBlur
Motion Blur quality level. 0 is no blur, whereas 1 represents standard filtering. For more
speed, use less than 1. This value is multiplied by the global motionBlur parameter.
• shutterTiming: A subparameter of motionBlur used to specify shutter length. 0 is no
blur, whereas 1 represents a whole frame of blur. Note that standard camera blur is
180 degrees, or a value of .5. This value is multiplied by the global shutterTiming
parameter.
• shutterOffset: A subparameter of motionBlur representing the offset from the current
frame at which the blur is calculated. Default is 0; previous frames are less than 0.
Scroll
The Scroll node is similar to the Pan node, except that the image wraps around the
frame, and reappears on the opposite side. Because there is no way to track a pixel,
there is no motion blur available in the Scroll node.
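The wrap-around behavior described above amounts to a modulo lookup. A sketch of the idea in Python (ours, not Shake code): pixels pushed off one edge reappear on the opposite side, which is why no single pixel can be tracked for motion blur.

```python
# Illustrative model of the Scroll node's wrapped lookup.

def scroll_lookup(x, y, xScroll, yScroll, width, height):
    """Source coordinate for output pixel (x, y) after a wrapped scroll."""
    return ((x - xScroll) % width, (y - yScroll) % height)

# Scrolling a 720-pixel-wide image 10 pixels right: output pixel (5, 0)
# samples from near the right edge of the source.
print(scroll_lookup(5, 0, 10, 0, 720, 486))  # (715, 0)
```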
Parameters
This node displays the following controls in the Parameters tab:
xScroll, yScroll
The X and Y panning values.
motionBlur
Motion Blur quality level. 0 is no blur, whereas 1 represents standard filtering. For more
speed, use less than 1. This value is multiplied by the global motionBlur parameter.
• shutterTiming: A subparameter of motionBlur used to specify shutter length. 0 is no
blur, whereas 1 represents a whole frame of blur. Note that standard camera blur is
180 degrees, or a value of .5. This value is multiplied by the global shutterTiming
parameter.
• shutterOffset: A subparameter of motionBlur representing the offset from the current
frame at which the blur is calculated. Default is 0; previous frames are less than 0.
SetDOD
The SetDOD node limits the active area, or Domain of Definition (DOD), to a specified
window. If you have a large element with only a small portion of non-black pixels, you
can apply a SetDOD node to isolate that area and speed up all image processing.
Shake automatically assigns a DOD to an .iff image. For example, if you scale down an
image, Shake limits the DOD to the scaled area. This remains in effect for later
composites, but does not affect the current script.
You can procedurally access the DOD of an image with the following variables:
• dod[0] Left edge of DOD
• dod[1] Bottom edge of DOD
• dod[2] Right edge of DOD
• dod[3] Top edge of DOD
For example, calling FileIn1.dod[0] returns the minimum X value of the FileIn1 node’s
DOD.
Parameters
This node displays the following controls in the Parameters tab:
left, right, bottom, top
As their names imply, these parameters let you set the outer boundaries of the DOD.
For more information, see “The Domain of Definition (DOD)” on page 82.
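The bounding-box idea behind a DOD can be sketched as a scan for non-black pixels. This is our own illustration of the concept, not Shake's implementation, and it assumes the right/top edges are exclusive, with the result ordered (left, bottom, right, top) to match dod[0] through dod[3]:

```python
# Illustrative DOD computation: find the bounding box of non-black
# pixels. Assumptions: row 0 is the bottom of the frame (as in Shake's
# coordinate system) and right/top edges are exclusive.

def compute_dod(image):
    """image: rows of pixel values, row 0 at the bottom; 0 means black."""
    xs = [x for row in image for x, v in enumerate(row) if v != 0]
    ys = [y for y, row in enumerate(image) if any(v != 0 for v in row)]
    if not xs:
        return (0, 0, 0, 0)
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 0]]
print(compute_dod(img))  # (1, 1, 3, 3)
```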
Shear
The Shear node skews the image left and right, or up and down. Motion blur can also
be applied.
Parameters
This node displays the following controls in the Parameters tab:
xShear, yShear
These parameters let you shear the image horizontally and vertically. The sliders let you
adjust the shearing of the image anywhere between 0 and 1 (1 creates 45 degrees of
shearing), but you can enter any value you want into the numeric field.
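The "1 creates 45 degrees" relationship can be sketched as a shear matrix: an xShear of s maps (x, y) to (x + s*y, y), so s = 1 displaces each row by its own height, i.e. a 45-degree slant. An illustrative check (our helper, not Shake code, with center handling omitted):

```python
# Minimal sketch of horizontal shear: (x, y) -> (x + s*y, y).
import math

def shear_x(x, y, s):
    """Shear a point horizontally by factor s about the origin."""
    return (x + s * y, y)

# With s = 1, a point 100 pixels up moves 100 pixels across: 45 degrees.
print(shear_x(0, 100, 1.0))           # (100.0, 100)
print(math.degrees(math.atan(1.0)))   # 45.0
```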
xCenter, yCenter
These parameters let you move the horizontal and vertical position of the center point
around which all transformations occur. For example, moving the center point to the
right changes the point about which the image rotates when you adjust the angle
parameter.
motionBlur
Motion Blur quality level. 0 is no blur, whereas 1 represents standard filtering. For more
speed, use less than 1. This value is multiplied by the global motionBlur parameter.
• shutterTiming: A subparameter of motionBlur used to specify shutter length. 0 is no
blur, whereas 1 represents a whole frame of blur. Note that standard camera blur is
180 degrees, or a value of .5. This value is multiplied by the global shutterTiming
parameter.
• shutterOffset: A subparameter of motionBlur representing the offset from the current
frame at which the blur is calculated. Default is 0; previous frames are less than 0.
Stabilize, Tracker
For information on the Stabilize and Tracker nodes, see “Tracking Nodes” on page 740.
Viewport, Window
For information on the Viewport and Window nodes, see “Cropping Functions” on
page 186.
Zoom
For information on the Zoom node, see "Zoom" on page 185.
27 Warping and Morphing Images
Shake provides powerful warping and morphing tools
that are flexible enough to use for a wide variety of
compositing tasks, from creating or correcting lens
distortions, to morphing one subject into another.
About Warps
Shake’s various linear transformation nodes, such as Move2D, Move3D, Rotate, and Scale,
operate on entire images so that each pixel is moved, rotated, or scaled by the same
amount. Even the CornerPin node applies the same transformation to every pixel—an
amount defined by the location of each of the four corners. Shake’s warp nodes,
including iDisplace, Turbulate, and Twirl, and the Warper and Morpher nodes, differ from
linear transformations in that each pixel’s transformation is calculated individually.
Depending on the type and settings of the warp you’re creating, each pixel of an image
can be moved independently of its neighbor. For this reason, warps can be referred to
as “nonlinear transformations.”
The Basic Warp Nodes
Shake’s basic warp nodes are good for deforming one image using another as a guide,
making rippling patterns, randomizing the texture of an image, and other similar
effects. These nodes are quite powerful, but their main strength is in making wholesale
image deformations while requiring a minimum of manual control, using either
mathematical expressions or secondary images to define the warping being done. This
section discusses the basic warp nodes, located in the Warp Tool tab.
DisplaceX
The DisplaceX node is a general purpose warping tool that is similar to WarpX, except
you can access a second warping image to control the distribution of a warp. Any
formula can be entered in the x and y fields for custom warps.
You can also create multiline expressions with this node.
Parameters
This node displays the following controls in the Parameters tab:
overSampling
The actual number of samples per pixel equals this number squared. For better
antialiasing, increase the number in the value field.
xExpr, yExpr
The expression for where the pixel information is pulled. Expressions of x and y return
the same image. Expressions of x+5, y+5 pull the color from the pixel five units up and
to the right of the current pixel. You can access the values of the warping image (the
second one) with r, g, b, a, and z.
Here are some example expressions:
“x*r”
“x+((r-.5)*30*r)”
“x+cos(y/140)*70*g”
“x+r*r*cos(x*y/100)*100”
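The pull semantics described above can be modeled in a few lines of Python. This is our own rough sketch, not Shake code: for every output pixel, the example expression "x+((r-.5)*30*r)" is evaluated using the warp image's red channel as r, and the source is sampled at that position.

```python
# Illustrative model of a DisplaceX-style pull warp. src and warp_r are
# {(x, y): value} maps; coordinates are clamped to the frame for simplicity.

def displace(src, warp_r, width, height):
    out = {}
    for y in range(height):
        for x in range(width):
            r = warp_r[(x, y)]
            sx = x + ((r - 0.5) * 30 * r)   # the example xExpr: x+((r-.5)*30*r)
            sy = y                          # yExpr: y
            sx = min(max(int(round(sx)), 0), width - 1)  # clamp to the frame
            out[(x, y)] = src[(sx, sy)]
    return out

# With r = 0.5 everywhere, the (r-.5) term vanishes and the image is unchanged:
width = height = 4
src = {(x, y): x * 10 + y for x in range(width) for y in range(height)}
warp_r = {(x, y): 0.5 for x in range(width) for y in range(height)}
print(displace(src, warp_r, width, height) == src)  # True
```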
xDelta, yDelta
Sets the maximum distance that any pixel is expected to move, but doesn't actually
move it. Any given pixel in a displaced image may be affected by any pixel within the
Delta distance, so Shake must consider a much greater number of pixels that might
affect the currently rendered pixel, which is slow. However, if you set a Delta value
that is too small, you get errors when your expression tells a pixel to move beyond
that limit. Therefore, you'll want to do some testing to balance speed (with errors)
against accuracy (with drastically slower renders). Our advice: Start small and
increase the size until the errors disappear.
IDisplace
The IDisplace node is a hard-wired version of DisplaceX intended to warp an image
based on the intensity of a second image. The formula used is x-(a*xScale) and
y-(a*yScale).
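The quoted formula can be sketched directly in Python (our illustration, not Shake source): the second image's alpha steers how far each pixel's lookup is offset.

```python
# Illustrative model of IDisplace's lookup: x-(a*xScale), y-(a*yScale),
# where a is the second image's intensity at (x, y).

def idisplace_lookup(x, y, a, xScale, yScale):
    """Source coordinate for output pixel (x, y), given intensity a there."""
    return (x - a * xScale, y - a * yScale)

# Full intensity shifts the lookup by the whole scale; zero leaves it alone.
print(idisplace_lookup(100, 50, 1.0, 20, 10))  # (80.0, 40.0)
print(idisplace_lookup(100, 50, 0.0, 20, 10))  # (100.0, 50.0)
```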
The following image is a checkerboard warped with a QuickShape node. Because the
shape is black and white, with little gray, it is difficult to make out the distortion in the
checkerboard.
As the following image demonstrates, it is often a good idea to insert a blur between a
high-contrast distortion image and the IDisplace node.
This technique combines well with the Relief macro in the “Cookbook” chapter of this
manual.
Parameters
This node displays the following controls in the Parameters tab:
xScale, yScale
The number of pixels by which the foreground image is offset, as scaled by the intensity of the background image.
xDOffset, yDOffset
A panning factor applied to the image. Intensity is usually 0 to 1; 1 is 100 percent of the
x/yScale factor.
xChannel, yChannel
The channel from the background image that is used to distort the foreground image.
xDelta, yDelta
The anticipated distance the pixels will move. If this value is too high, calculations slow
down. If it is too low, black holes will appear in the image.
LensWarp
This node lets you make subtle or large adjustments to an image to either correct for,
or simulate, different types of film and video lenses. As a corrective measure, you can
use this node to remove barrel distortion in an image. You can also use this node to
simulate the lens used in one image, in order to warp another image you’re
compositing over it so that they appear to have been shot using the same lens. This
node affords more appropriate control over the result than the PinCushion node.
There are two ways you can use the LensWarp node.
Manual Adjustments
You can manually adjust the factor and kappa values to create the desired effect. If
you're making a simple correction, this may be the fastest way to proceed.
Automatic Calibration
You can also draw shapes along the edge of any distorted features within the image
that are supposed to be straight, and use the analyze button to automatically correct
the image. This provides the most accurate result, especially for moving images.
To automatically calibrate the LensWarp node:
1 Attach the LensWarp node to an image with wide-angle distortion.
By default, the LensWarp node is in Add Shapes mode.
2 In the Viewer, identify one or more curved features that should be straight, then trace
each curve with an open shape.
Important: Make sure the LensWarp node is turned off in the Parameters tab before
adding calibration shapes.
Each feature should be traced with at least four points. The more curved a feature is,
the more points you should use to trace it. On the other hand, you should rarely
need to use more than ten points per feature.
[Figure: before and after the LensWarp undistort]
To finish drawing an open shape, double-click to draw the last point and end the
shape.
Note: The LensWarp node does not use Bezier curves.
3 If you’re correcting an image sequence, scrub through the frame range to find more
curved features, then trace these as well.
The more features you identify in different areas of the frame, the more accurate the
final result will be.
4 When you’re finished, click the “analyze” button in the LensWarp parameters to calculate
the result.
The analyze state changes to “analysis up to date.”
5 Do one of the following:
• If you’re correcting distortion in an image, set the LensWarp node to Undistort.
• If you’re applying the distortion from this image to another one, detach the LensWarp
node, reattach it to the other image, and set the LensWarp node to Distort.
LensWarp Viewer Shelf Controls
This node has a subset of Shake’s standard shape-drawing tools that you can use to
identify distorted features for automatic calibration.
• Add Shapes: Click this button to add shapes with which to trace distorted features
that you want to correct. You can only draw polygonal shapes with the LensWarp node.
• Edit Shapes: Click this button to edit shapes that you've made.
• Import/Export Shape Data: Lets you import or export shapes.
Parameters
This node displays the following controls in the Parameters tab:
analyze
After you've created one or more shapes to trace distorted features within the image in
the Viewer, clicking analyze automatically sets the factor and kappa parameters to the
values necessary to either correct or match the lens distortion.
analyze state
This parameter indicates whether or not the analysis has been made.
Off, Distort, Undistort
Use this control to toggle the LensWarp node between three modes:
• Off: Disables the effect of the LensWarp node. Make sure the LensWarp node is
turned off before you add calibration shapes.
• Distort: Warps the image to introduce lens curvature to the image. This mode is
useful for matching the lens curvature in another image.
• Undistort: Warps the image to remove lens curvature.
factor
A two-dimensional parameter that lets you adjust the amount of vertical and horizontal
curvature. Opening the subtree lets you edit the X and Y dimensions independently.
kappa
The amount of warping applied to the image. Typical adjustments fall between 0 and
1.5, but higher adjustments can be made if necessary.
Negative values produce opposite effects—if the LensWarp node is set to Distort,
negative kappa values undistort the image. If it is set to Undistort, negative values
distort the image. This enables you to animate changes from one state to another.
center
The center of the warp performed to the image. By default, xCenter is set to width/2,
and yCenter is set to height/2.
Two more Viewer shelf controls for this node:
• Delete Control Point: Select a point and click this button to remove it from the
shape.
• Enable/Disable Shape Transform Control: Lets you show or hide the transform
control at the center of each shape. Hiding these controls prevents you from
accidentally transforming shapes while making adjustments to control points.
overSampling
An integer value that represents the number of samples per pixel that are taken into
account when performing a warp. This parameter is set to 1 by default, which results in
faster rendering times. However, extreme warping effects may introduce aliasing
artifacts that can be reduced or eliminated by increasing this value, up to a maximum
value of 4. Increasing this parameter may cause render times to increase dramatically.
Note: Although the slider is limited to a range of 1 to 4, you can enter larger values into
this parameter’s value field.
PinCushion
The PinCushion node distorts the corners of the image in and out to mimic a particular
type of edge lens distortion. You can push the values below 0 as well.
Parameters
This node displays the following controls in the Parameters tab:
overSampling
The actual number of samples per pixel equals this number squared. For better
antialiasing, increase the number.
xFactor, yFactor
The amount of distortion. Adjusting the xFactor bends the sides of the image, while
adjusting the yFactor bends the top and bottom. Positive values bow the image
outside the frame, and negative values bow the image into the frame.
Randomize
The Randomize node randomizes the position of each pixel within a certain distance. To
randomize the color, create a Rand node, and layer it with your image using an IMult
node.
Parameters
This node displays the following controls in the Parameters tab:
overSampling
The actual number of samples per pixel equals this number squared. For better
antialiasing, increase the number.
seed
When Shake generates a random pattern of values, you need to make sure for
purposes of compositing that you can recreate the same random pattern a second
time. In other words, you want to be able to create different random patterns,
evaluating each one until you find the one that works best, but then you don’t want
that particular random pattern to change again.
Shake uses the seed value as the basis for generating a random value. Using the same
seed value results in the same random value being generated, so that your image
doesn’t change every time you re-render. Use a single value for a static result, or use
the expression “time” to create a pattern of random values that changes over time.
For more information on using random numbers in expressions, see “Reference Tables
for Functions, Variables, and Expressions” on page 941.
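The seed behavior described above can be illustrated with Python's random module (an analogy, not Shake's generator): the same seed always reproduces the same values, while seeding with the frame number, as the "time" expression does, varies the pattern per frame.

```python
# Demonstration of seeded reproducibility, analogous to the seed
# parameter: same seed -> same pattern; new seed -> new pattern.
import random

def noise(seed, n=3):
    """Return n pseudo-random values derived from seed."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

assert noise(42) == noise(42)   # re-rendering doesn't change the result
assert noise(1) != noise(2)     # a new seed gives a new pattern
frame_patterns = [noise(t) for t in range(3)]  # seed = "time": varies per frame
```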
xAmplitude, yAmplitude
The amount of randomization in pixel distance.
xOffset, yOffset
An offset to the random pattern.
Turbulate
The Turbulate node is similar to the Randomize node, except that it passes a continuous
field of noise over the image, rather than just randomly stirring the pixels around. This
is a processor-intensive node.
Parameters
This node displays the following controls in the Parameters tab:
overSampling
The actual number of samples per pixel equals this number squared. For better
antialiasing, increase the number.
detail
The amount of fractal detail. The higher the number, the more iterations of fractal
noise. This can be very processor-intensive.
xNoiseScale, yNoiseScale
The scale of the waves.
xAmplitude, yAmplitude
The amount of randomization in pixel distance.
xOffset, yOffset
Lets you offset the effect on the image.
seed
When Shake generates a random pattern of values, you need to make sure for
purposes of compositing that you can recreate the same random pattern a second
time. In other words, you want to be able to create different random patterns,
evaluating each one until you find the one that works best, but then you don’t want
that particular random pattern to change again.
Shake uses the seed value as the basis for generating a random value. Using the same
seed value results in the same random value being generated, so that your image
doesn’t change every time you re-render. Use a single value for a static result, or use
the expression “time” to create a pattern of random values that changes over time.
For more information on using random numbers in expressions, see “Reference Tables
for Functions, Variables, and Expressions” on page 941.
Twirl
The Twirl node creates a whirlpool-like effect.
Parameters
This node displays the following controls in the Parameters tab:
startAngle
The amount of twirl near the Center of the rotation.
endAngle
The amount of twirl away from the Center.
aspectRatio
This parameter inherits the current value of the defaultAspect global parameter. If
you’re working on a nonsquare pixel or anamorphic image, it is important that this
value is set correctly to avoid unwanted distortion in the image.
xCenter, yCenter
Center of the twirl.
xRadius, yRadius
Radius of the twirl circle.
bias
Controls how much of the twist occurs between the center and the Radius. 0 means
the outer Radius is not rotated; 1 means the center is not rotated.
antialiasing
Anti-aliasing on the effect.
WarpX
The WarpX node is a general-purpose warping tool, similar to ColorX, except that
instead of changing a pixel’s color, you change its position. Any formula can be entered
in the xExpr and yExpr fields for custom warps. You can also create multiline
expressions in this node.
One note of caution: WarpX does not properly set the DOD, so you may need to
manually attach a Transform–SetDOD node following the WarpX node.
The following examples are on a grid. By modifying x and y, you specify from what
pixel the information is pulled. For example, x+5, y+5 shifts the image left and down by
5 pixels.
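The pull semantics can be sketched by evaluating one of the expression pairs below in Python (our own illustration, not Shake code), here the sine-ripple example xExpr = x+3*sin(y/10), yExpr = y+3*sin(x/10): each output pixel fetches its color from the evaluated source position.

```python
# Evaluate the sine-ripple WarpX example for a given output pixel.
import math

def warp_coords(x, y):
    """Source position sampled for output pixel (x, y)."""
    return (x + 3 * math.sin(y / 10), y + 3 * math.sin(x / 10))

# On the y = 0 row the horizontal ripple vanishes; elsewhere the lookup
# drifts by up to 3 pixels in each direction.
print(warp_coords(50, 0)[0])  # 50.0 -- no horizontal shift on this row
```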
Expr Value
xExpr x
yExpr y
Expr Value
xExpr x+3*sin(y/10)
yExpr y+3*sin(x/10)
Expr Value
xExpr float xc=(x-width/2); float yc=(y-height/2);
float r=sqrt(xc*xc+yc*yc); float newr=r*sin(r/100);
width/2+ newr*xc/r
yExpr float xc=(x-width/2); float yc=(y-height/2);
float r=sqrt(xc*xc+yc*yc); float newr=r*sin(r/100);
height/2+ newr*yc/r
Expr Value
xExpr ((x/width-0.5)*sin(3.141592654*y/height)+0.5)*width
yExpr y
Expr Value
xExpr float xc=(x-width/2); float yc=(y-height/2);
float r=sqrt(xc*xc+yc*yc); float a=atan2(yc,xc);
float newA= a+3.141592654/2*r/200; width/2+r*cos(newA)
yExpr float xc=(x-width/2); float yc=(y-height/2);
float r=sqrt(xc*xc+yc*yc); float a=atan2(yc,xc);
float newA= a+3.141592654/2*r/200; height/2+r*sin(newA)
Expr Value
xExpr float xc=(x-width/2); float yc=(y-height/2);
float r=sqrt(xc*xc+yc*yc); float newr=r+3*sin(r/2);
width/2+ newr*xc/r
yExpr float xc=(x-width/2); float yc=(y-height/2);
float r=sqrt(xc*xc+yc*yc); float newr=r+3*sin(r/2);
height/2+ newr*yc/r
Expr Value
xExpr float xc=(x-width/2); float yc=(y-height/2);
float r=sqrt(xc*xc+yc*yc); float a=atan2d(yc,xc);
float newA= a+((int)a)%8-4; width/2+r*cosd(newA)
yExpr float xc=(x-width/2); float yc=(y-height/2);
float r=sqrt(xc*xc+yc*yc); float a=atan2d(yc,xc);
float newA= a+((int)a)%8-4; height/2+r*sind(newA)
Expr Value
xExpr float xc=(x-width/2); float yc=(y-height/2);
float r=sqrt(xc*xc+yc*yc); float newr=r*r/200; width/2+
newr*xc/r
yExpr float xc=(x-width/2); float yc=(y-height/2);
float r=sqrt(xc*xc+yc*yc); float newr=r*r/200;
height/2+ newr*yc/r
Parameters
This node displays the following controls in the Parameters tab:
overSampling
The actual number of samples per pixel equals this number squared. For better
antialiasing, increase the number.
xExpr, yExpr
The expressions that determine where pixel information is pulled from. See the examples above.
xDelta, yDelta
Sets the maximum distance that any pixel is expected to move, but doesn't actually
move it. A given pixel in an image may be affected by any pixel within the Delta
distance, so Shake must consider a much greater number of pixels that might affect
the currently rendered pixel, which is slow. However, if you set a Delta value that is
too small, you get errors when your expression tells a pixel to move beyond that
limit. Therefore, you'll want to do some testing to balance speed (with errors) against
accuracy (with drastically slower renders). It is recommended that you start small and
increase the size until the errors disappear.
The Warper and Morpher Nodes
Shake’s shape-based warping nodes, the Warper and Morpher, let you easily create
specific warping effects using shape tools that are very similar to those used by the
RotoShape node. Using these tools, you can deform parts of an image to conform to
shapes you create in the Viewer.
Warper and Morpher Memory Usage
The Warper and Morpher nodes use a lot of memory when processing high-resolution
images—using four image channels of the full image buffer in float space for each
processing thread. As a result, memory usage may become an issue when warping and
morphing large images with multi-threaded processing enabled. In this situation,
virtual memory usage may noticeably slow processing speed when the maximum
available RAM is used.
For example, if you have 2 GB of RAM in your computer and Shake plus assorted OS
operations use 300 MB, that leaves 1.7 GB of memory for image processing by the
Warper or Morpher node for any given frame. You can calculate the RAM used for a
frame at a given image size using the following formula:
4 * (image width * image height * 4) * (number of threads)
Using this formula yields the following memory usage table:
If you don’t have enough RAM to handle the resolution you’re working at, switch the
maxThread parameter (in the renderControls subtree of the Globals tab) to 1. This
reduces the memory requirements for this operation.
Using Shapes to Warp and Morph
Both the Warper and Morpher nodes allow you to use animated shapes to control and
animate the deformation of an image. Both nodes work using four types of shapes that
you draw. These shapes work together to define which parts of an image will be
deformed to fit into shapes that you define.
Types of Control Shapes
The following simple example clearly shows the different shapes used to create a warp
effect used to bend the thumb down.
[Figure: the thumb-bend example, showing the boundary shape, target shape, connection line, and source shape]
The memory usage table referenced in "Warper and Morpher Memory Usage" above:
Number of Threads   2K Image   4K Image   8K Image
1                   49 MB      195 MB     778 MB
2                   97 MB      389 MB     1.6 GB
The following example shows multiple instances of these same basic shapes employed
to create a more complex effect—that of lava within a still image flowing forward.
• Source Shapes: These are shapes you draw that conform to the subject of the source
image you want to deform. They generally follow well-defined contour lines of the
subject—examples might include the edge of a person’s face, the contours of the
eyes, eyebrows, nose, and mouth, or the outline of an arm or leg. Source shapes are
light blue by default.
• Target Shapes: These are shapes you draw that define the shape you want the
deformed image to conform to. For example, if you want to warp the eyes of a cat to
make them appear to bulge on cue, you would create animated target shapes
defining the new shape of the eyes. Target shapes are dark blue by default.
• Connection Lines: These are automatically created when you connect a source shape
to a target shape, and indicate the correspondence of each point in a source shape
to its destination on the target shape. It is by pairing source shapes with target
shapes that Shake is able to create controlled deformations of an image. Although
four connection lines are automatically created for each source/target shape pair, you
can create additional ones to give you added control over the deformation.
Connection lines are purple by default.
• Boundary Shapes: The Warper and Morpher nodes can sometimes create unwanted
deformations in areas surrounding the parts of the image you intend to manipulate.
Boundary shapes are essentially shapes that are both source and target shapes,
which is how they keep affected pixels from moving. You can use boundary shapes
to minimize the effect of a warp on surrounding parts of an image, either by
excluding whole regions of the source, or by “pinning down” specific areas that you
don’t want to be deformed. You can create as many boundary shapes as necessary,
since it may take more than one to pin down an image completely. Boundary shapes
are orange by default.
• Displaced Target Shapes: These are not shapes you either create or modify directly.
Instead, they’re indicators that show the amount of displacement in that region of
the image, based on either the overallDisplacement parameter, or that shape’s
Displacement parameter if it is being animated independently. These shapes are
designed to help you see what the deformation will be without having to render the
entire image. Displaced target shapes are pink by default.
Note: The colors of each control shape type can be modified in the shapeColors
subtree of the colors subtree of the Globals tab.
For information on drawing shapes in the Warper and Morpher nodes, see “Creating and
Modifying Shapes” on page 830.
Source shapes and target shapes may be drawn separately, or you can duplicate the
source shape you create and modify it to quickly create a target shape. It’s not
necessary for the source and target shapes to have the same number of points, since
the actual path that an animated deformation will follow runs along the connection
lines that appear once you connect a source shape to a target shape.
Controlling a Warp Effect Using Several Shapes
In both the Warper and Morpher nodes, you may create as many source/target shape
pairs as necessary to deform various parts of the subject. Unlike the RotoShape node,
which only allows for the creation of closed shapes, the Warper and Morpher nodes
allow you to create closed shapes, open-ended shapes, and single-point shapes. This
flexibility allows you to create any kind of deformation you need.
In some instances, you can create a more convincing effect using multiple source/
target shape pairs. In the following example, four source/target shape pairs are used to
create the effect of the lava flowing forward.
In this example, a single large shape is used to pull the entire outside shape of the lava
forward. An unbroken shape is usually best for warping the outer boundaries of an
image, but this can sometimes create an unnatural stretching within the subject itself.
By adding individual source/target pairs within the lava image, the individual ripples
and eddies of the lava can be individually animated to achieve a more natural-looking
effect.
For more information about drawing and manipulating shapes, see “Creating and
Modifying Shapes” on page 830.
[Figures: the multi-shape warp with overallDisplacement at 0 and at 1]
Animating Control Shapes
Unless you’re deforming a still image, it will probably be necessary to animate the
source and target shapes you use to fit the motion of the subject you’re deforming. For
example, if you’re creating a warp for an actor who’s moving, you’ll need to animate the
source shape to conform to the outlines of the actor so that they follow his or her
motion. You’ll then need to animate the target outlines to follow the same motion.
Here’s a shortcut that may save you some effort when you create a warp effect using an
animated shape. First, animate the source shape that defines the area of the image you
want to warp. Afterwards, you can duplicate and modify it as necessary to use as the
target shape, without having to reanimate the entire shape.
For more information about keyframing shapes, see “Animating Shapes” on page 557.
Using Motion Tracking to Animate Control Shapes
In addition to manually keyframing source and target shapes, you can attach Stabilize
or Tracker nodes to either source or target shapes to aid you when rotoscoping moving
features. This works identically to the way you attach Stabilize or Tracker nodes to
shapes in the RotoShape node. For more information, see “Attaching Trackers to
Shapes” on page 562.
Controlling Warp and Morph Deformation Using Connection Lines
When you first connect a source shape to a target shape in the Viewer, four connection
lines appear that run from the source to the target shapes. These lines serve two
purposes. First, they show you which segments of a source shape correspond to which
segments of its connected target shape. Second, their angles define the path the pixels
of the image will follow when warping from their original position to the target
position you’ve defined.
The start and end points of connection lines that are connected to the source and
target shapes can be moved by turning on the Edit Connection button, and then
dragging them back and forth along the shapes themselves. Changing the angle of the
lines by moving the in or out point of a connection line independently allows you to
redefine the angle of deformation for all pixels in that area of the warp.
Connection lines can be moved, and even animated, to control the speed and direction
of deformation. Additional connection lines may also be created to give you more
precise control over the deformation itself.
Using Boundary Shapes to Limit Deformation in an Image
The Warper and Morpher nodes both work by pushing and pulling pixels from the
region of an image defined by the source shapes to the region defined by the target
shapes. When part of an image is warped, the surrounding area stretches to
accommodate the change, as if the image is on a sheet of rubber being pushed and
pulled to distort it.
The region affected by the resulting deformation is not limited to the area defined by
the source/target shape pairs. In fact, you’ll notice that a significant area of the image
surrounding each source/target shape pair is also deformed. While there is 100
percent displacement at the actual position of the source and target shapes, the
amount of deformation falls off gradually with distance from the shape pair. This may
result in a warp or morph not only affecting the intended subject, but also the
surrounding background.
This aspect of the Shake warper is useful in that it helps to smooth the transition
between the warped and unwarped parts of your image, resulting in a more realistic
effect. It also means that sometimes it’s not necessary to create as many source/target
shape pairs as you might think—a single shape pair’s area of influence may be enough
to create the effect you want.
On the other hand, there are usually parts of an image that you don’t want warped. For
example, if you’re warping someone’s eyebrows, chances are you don’t want his or her
hair to be distorted as well. You exclude parts of an image from being affected by the
Warper or Morpher node using boundary shapes.
Warp without boundary shape
It’s important to understand that boundary shapes don’t eliminate distortion from the
surrounding image; they minimize it.
It may take more than one boundary shape to completely lock down an image.
Fortunately, you can create as many boundary shapes as necessary to eliminate
unwanted distortion in an image.
Important: Target shapes should never cross boundary shapes. Doing so may create unwanted distortion, and possibly tearing, in the resulting image.
There are many ways you can use boundary shapes to isolate parts of an image from
deformation. One technique is to use a closed shape to surround a pair of source/
target shapes, thus minimizing their effect on the surrounding area of the image.
Warp with boundary shape
For example, if you want to isolate a warping operation to a particular region, you can
create a closed boundary shape to lock off just that area. Sometimes, you may have to
use several concentric rings of boundary shapes to completely lock down an area of
the image. You can also use open boundary shapes to “pin down” specific areas of an
image that you don’t want to be affected by a warping effect. For example, if you were
creating a warp effect to manipulate an animal’s face, you could use a combination of
open and closed shapes and single-point shapes to prevent the eyes and nose from
being affected by the warp you’re applying to the eyebrow area.
Note: By default, the outer edge of the frame is also used as a boundary shape. This behavior can be disabled by turning off the addBorderShape parameter in the Parameters tab for the Warper or Morpher node you’re adjusting, but this may produce unexpected results.
Isolating the Subject of Deformation Prior to Warping or Morphing
Even when you use one or more boundary shapes to pin down areas surrounding a
warp effect, you may find that some of the surrounding image is still affected, however
slightly. For this reason, it may be useful to isolate the subject of the image prior to
using either the Warper or Morpher node. Ideally, the subject of the warp effect was
shot against bluescreen or greenscreen, and can be keyed. If not, you can always
rotoscope the image using a RotoShape node.
In either case, the Warper and Morpher nodes affect the alpha channel of the image
along with the RGB portion, so you can always add either node to the tree after you’ve
isolated your subject by keying or rotoscoping. This way, you can add a clean
background no matter how extreme the warping effect is.
Creating and Modifying Shapes
Many of the shape controls of the Warper and Morpher nodes are identical to those of
the RotoShape node, and all share the same methods for creating tangents, closing
shapes, inserting and deleting points, and so on. If necessary, you can refer to the
RotoShape documentation for more information on creating and modifying shapes.
Warper and Morpher Viewer Shelf Controls
When a Warper or Morpher node is selected, the following buttons appear in the
Viewer shelf.
Button Description
Add Shapes Creates new shapes. Closed shapes are created by
clicking the first shape point you created. Open
shapes and single-point shapes are created by
double-clicking when creating the last point, or by
right-clicking in the Viewer, then choosing Finish
Shape from the shortcut menu.
Edit Shapes Allows you to edit shapes.
Connect Shapes Clicking this button allows you to create a source/target shape pair: first click the shape you want to be the source, then click the target shape you want to link it to.
To define a boundary shape, click this button, then click twice on the shape you want to turn into a boundary shape. This effectively makes a single shape into both a source and target shape.
Edit Connections Once two shapes have been joined with the
Connect Shapes button, the location and angle of
each connection line that links source to target
shapes may be edited by clicking this button. With
this control turned on, select the source or
destination point of a connection line.
Show/Hide Tangents Toggles the Viewer among showing All shape
tangents (the handles that allow you to manipulate
Bezier curves), None, or Pick, which only shows the
shape tangents of individually selected points.
Lock/Unlock Tangents Locks or unlocks all shape tangents in the Viewer. If
locked, shape points may still be moved, but the
tangents defining the angle of curvature remain
locked.
Spline/Linear Mode Toggles selected points to act as either corner points or Bezier points.
Delete Control Point Deletes selected points on a shape.
Key Current Shape/
Key All Shapes
Toggles between two shape keyframing modes. In
All Shapes, all shapes are keyframed whenever any
one shape is modified with Autokey on. In Current
Shape, only the selected shape is keyframed when
Autokey is on.
Enable/Disable Shape
Transform Control
When turned on, this control makes the shape
transform controls for each shape visible in the
Viewer. Each shape can be manipulated as a whole
using this control. When turned off, all shape
transform controls are hidden, and cannot be used.
Visibility Toggles These buttons toggle the visibility of specific types
of shapes in the Viewer. From left to right, they
control:
• Source Shape Visibility
• Target Shape Visibility
• Connection Line Visibility
• Boundary (or lockdown) Shape Visibility
• Unconnected Shape Visibility
• Displaced Target Shape Visibility
Each control affects the visibility of all shapes of that
type in the Viewer. Individual shapes may be made
invisible using controls in the Parameters tab.
However, the Visibility toggles in the Viewer shelf
supersede the Visibility settings in the Parameters tab.
Each setting in the Select Display Image pop-up
menu of the Warper and Morpher Viewer shelf allows
these controls to be toggled independently. For
example, in the Warper, the visibility settings of the
source image can differ from those used by the
warped (target) image.
Lock Shapes These three buttons lock all source, target, and boundary shapes in the Viewer, preventing them from being edited. Each control locks all shapes of that type in the Viewer.
Individual shapes may be locked using controls in the Parameters tab. However, the Shape Lock buttons in the Viewer shelf supersede the Lock buttons in the Parameters tab.
Select Display Image The Source/Warped Image pop-up menu allows you to toggle the Viewer’s display between the unmodified and modified images.
You may quickly jump between views by pressing:
• F9 to view the original source image
• F10 to view the original target image (Morpher only)
• F11 to view the warped image
Drawing and Editing Shapes
The biggest difference between drawing shapes with the RotoShape node and with the Warper and Morpher nodes is that while RotoShape allows you to draw only closed shapes, the Warper and Morpher nodes allow you to create not only closed shapes, but also open shapes and single-point shapes. Open shapes make it very simple to define deformations for visual features like eyebrows, muscle outlines, and other contours that don’t require a closed outline. Single-point shapes allow you to define deformations for small image details, and are also very effective as boundary shapes you can use to pin down parts of the image you don’t want to be affected by nearby source/target shape pairs.
The Warper and Morpher nodes both warp the image using the same shape controls, and the methods used to create and edit shapes in each node are identical.
Drawing New Shapes
Drawing new shapes works the same whether you’re creating a source, target, or boundary shape. In each case, you create a new, unassigned shape first, and then assign its type in a subsequent step. Unassigned shapes appear yellow, by default.
To create a new unassigned shape:
1 Click the right side of a Warper or Morpher node to load its parameters into the Parameters tab, and its controls into the Viewer shelf.
2 In the Viewer shelf, click the Add Shapes button.
3 In the Viewer, begin drawing a shape by clicking anywhere on the image to place a point.
If necessary, zoom in to the image in the Viewer to better trace the features of the subject you want to warp or morph.
4 Continue clicking to add more points to the shape.
• Click once to create a sharply angled point.
• To create a point with tangent controls to make a Bezier curve, drag to one side of
the point until the angled point becomes a curve.
Note: The distance you have to drag before the angled point becomes a curve is
customizable via the rotoTangentCreationRadius parameter in the shapeControls
subtree of the guiControls subtree in the Globals tab. For more information on
customizing Shake’s shape creation tools, see “Customizing Shape Controls” on
page 843.
5 There are three ways you can end shape drawing to create different kinds of shapes:
• To create a single point shape, right-click in the Viewer immediately after creating the
first point, then choose Finish Shape from the shortcut menu.
• To create an open shape, either double-click when creating the last point of the
shape, or right-click and choose Finish Shape from the shortcut menu.
• To create a closed shape, click the first point of the shape you created.
Important: You can only create single-point shapes and open shapes in the Warper and
Morpher nodes. You cannot create these kinds of shapes in the RotoShape node.
Every time you create a new shape, an additional shape parameter appears in the Parameters tab of the corresponding Warper or Morpher node. By default, each new shape parameter is named “shape1Name,” “shape2Name,” and so on, with the middle number incremented for each new shape you draw. These names can be changed to more easily identify the specific parts of the subject you’ve isolated for individual manipulation later.
Editing Shapes
Once you’ve created a shape, there are several ways you can modify it. These
techniques also work for keyframing shapes used for animated warping or morphing
effects. For more information about keyframing shapes, see “Animating Single or
Multiple Shapes” on page 558.
When editing shapes that are close to other shapes, it may be helpful to turn off
Enable/Disable Shape Transform Control in the Viewer shelf, to hide transform controls
from other shapes that may overlap the shape you’re editing. After your source/target
shape pairs have been defined, it may also be helpful to turn off the visibility of shape
types that you don’t need to see. For example, turning off the visibility of all source
shapes while you’re editing their corresponding target shapes will prevent accidental
adjustment of the wrong overlapping points. You can turn different groups of visibility
controls on and off for each setting of the Select Display Image pop-up menu in the
Viewer shelf.
Important: In order to edit Warper or Morpher shapes, it’s important to make sure the
Edit Shapes button is turned on.
To edit a shape:
1 Click the right side of a Warper or Morpher node to load its parameters into the
Parameters tab, and its controls into the Viewer shelf.
2 In the Viewer shelf, click the Edit Shapes button.
3 Select one or more points you want to edit by doing one of the following:
• Click a single point to select it.
• Shift-click additional points to add them to the selection.
• Click in the Viewer and drag a selection box over all the points you want to select.
• Hold the Shift key down and drag to add points to a selection.
• Hold the Command or Control key down, then drag to remove points from the
selection.
• Move the pointer over the edge, or the transform control, of a shape, and press
Control-A or Command-A to select every point on that shape.
4 When the selected points are highlighted, rearrange them as necessary by doing one
of the following:
• To move one or more selected points, drag them where you want them to go.
• To move one or more selected points using that shape’s transform control, press the
Shift key while you drag over the transform control.
Note: Using the transform control without the Shift key pressed modifies the entire
shape, regardless of how many points are selected. For more information on using
the transform control, see page 837.
To add a point to a shape:
1 Click the Edit Shapes button.
2 Shift-click the part of the shape where you want to add a control point.
A new control point appears on the shape where you clicked.
To remove one or more points from a shape:
1 Select the point or points you want to remove.
2 Do one of the following:
• Click the Delete Control Point button in the Viewer shelf.
• Press the Delete key (near the Home and End keys).
Those points disappear, and the shape changes to conform to the remaining points.
To convert linear points to Bezier points, and vice versa:
1 Select the point or points you want to convert.
2 Click the Spline/Linear Mode button to convert linear points to Bezier points, or Bezier
points to linear points.
An optional step is to set the Show/Hide Tangents button to All or Pick in order to view
tangents as they’re created.
To change a curve by editing a point’s tangent handles:
1 Make sure the Show/Hide Tangents button is set to All (to view all tangents) or Pick (to
view only the tangents of points that you select).
2 Make sure the Lock/Unlock Tangents button is set to Unlock.
3 Do one of the following:
• To change the length of one of the tangent handles independently from the other,
while keeping the angle of both handles locked relative to each other, drag a handle
to lengthen or shorten it. You can also rotate both handles around the axis of the
selected point.
• To change the angle of one of the tangent handles relative to the other, along with
its length, press the Command or Control key while dragging a handle around the
axis of the selected point. The selected tangent handle moves, but the opposing
tangent handle remains stationary.
• To keep the angle of both tangent handles at 180 degrees relative to one another, keeping the lengths of each side of the tangent identical, press the Shift key while dragging either of the tangent handles around the axis of the selected point. If you Shift-drag tangent handles that were previously angled, they are reset.
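The way a point’s tangent handles shape the surrounding curve can be illustrated with a standard cubic Bezier segment. This is the generic textbook formulation, not a description of Shake’s internal spline representation: each segment between two shape points is pulled toward the out-tangent of the first point and the in-tangent of the second.

```python
def cubic_bezier(p0, out_handle, in_handle, p1, t):
    """Evaluate a cubic Bezier segment at parameter t in [0, 1].

    p0/p1 are shape points; out_handle/in_handle are their tangent handles.
    """
    u = 1.0 - t
    x = (u**3 * p0[0] + 3 * u**2 * t * out_handle[0]
         + 3 * u * t**2 * in_handle[0] + t**3 * p1[0])
    y = (u**3 * p0[1] + 3 * u**2 * t * out_handle[1]
         + 3 * u * t**2 * in_handle[1] + t**3 * p1[1])
    return (x, y)

# The segment starts at p0 and ends at p1; the handles pull the curve toward them.
print(cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), 0.0))  # → (0.0, 0.0)
print(cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), 1.0))  # → (4.0, 0.0)
```

Lengthening a handle increases its pull on the curve without moving the endpoints, which is why dragging a tangent handle in the Viewer changes the curvature but not the point positions.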
To edit a shape using its transform control:
1 Make sure that Enable/Disable Shape Transform Control is turned on.
When you move, scale, or rotate a shape using its transform control, each
transformation occurs relative to the position of the transform control. To move a
shape’s transform control in order to change the center point about which that shape’s
transformation occurs, press the Command or Control key while dragging the
transform control to a new position.
2 To manipulate the shape, drag one of the transform control’s handles:
• Drag the center of the transform control to move the entire shape in the Viewer. Both
the X and Y handles will highlight to show you’re adjusting the X and Y coordinates
of the shape.
• Drag the diagonal scale handle to resize the shape, maintaining its current aspect
ratio.
• Drag the X handle to resize the shape horizontally, or drag the Y handle to resize the
shape vertically.
• Drag the rotate handle to rotate the shape about the axis of the transform control.
Showing and Hiding Shapes
Individual shapes may be hidden, if necessary, to help you isolate one or more shapes
when making adjustments. Hiding shapes simply makes them invisible. Hiding a shape
has no effect on the resulting warp effect—all source/target shape pairs continue to
warp the image as before.
When a Warper or Morpher node is selected in the Node View, each shape in that node
is labeled in the Viewer. By default, each shape is numbered in the order that it was
created. These names can be customized in that shape’s corresponding parameter in
the Parameters tab. These names help you identify which shapes are which when
you’re changing their individual visibility.
To show or hide an individual shape directly in the Viewer, do one of the
following:
• Right-click anywhere in the Viewer to display the Viewer shortcut menu, then choose the Shape Visibility submenu, and select the label that corresponds to the shape you want to show or hide. Shapes that are checked are shown, while shapes that are unchecked are hidden.
• In the Parameters tab, click the Visibility button of the shape parameter that corresponds to the shape you want to show or hide. These controls are linked to the settings in the Shape Visibility submenu of the Viewer shortcut menu. Changes made to one automatically apply to the other.
You can also show or hide all shapes of a particular type using the Visibility toggles in
the Viewer shelf. Each control affects the visibility of all shapes of that type in the
Viewer. The Visibility toggles supersede the Visibility settings in the Parameters tab.
Each setting in the Select Display Image pop-up menu in the Viewer shelf of the Warper
and Morpher nodes allows these controls to be set independently. For example, in the
Warper, the visibility settings set when displaying the source image can differ from
those set when displaying the warped (target) image.
To show or hide all shapes of a particular type:
• Click the Visibility toggle in the Viewer shelf that corresponds to the shape type you want to hide.
Duplicating Shapes
A fast and easy way to create corresponding target shapes once you’ve drawn a source
shape is to duplicate it, then modify the duplicate. This is especially useful for instances
where the general shape of the target you want to create is similar to the source.
To duplicate a shape:
1 Click the Edit Shapes button to allow you to select shapes in the Viewer.
2 Move the pointer over the edge, or the transform control, of the shape you want to
duplicate, then right-click and choose one of the following commands from the
shortcut menu:
• Choose Duplicate Shape to simply duplicate the shape.
• Choose Duplicate and Connect Shape (or press Control-D or Command-D) to
duplicate the shape and automatically connect the duplicate to the source shape
you clicked.
Note: After using the Duplicate and Connect Shape command, locking or hiding the source shape immediately ensures that you won’t accidentally modify it when making changes to the new duplicate.
Copying Shapes From a RotoShape Node
You can copy shapes from a RotoShape node and paste them into a Warper or Morpher
node for use as a source, target, or boundary shape. This is especially useful in cases
where you’ve already isolated the subject using a RotoShape node.
Important: If you copy a shape with a soft edge from a RotoShape node, only the main
center shape is pasted into a Warper or Morpher node. The soft edges are not used.
To copy a shape from a RotoShape node:
1 With the pointer over the transform control of the shape you want to copy in the
Viewer, do one of the following:
• Right-click, then choose Copy Shape from the shortcut menu.
• Press Command-C or Control-C.
Note: To copy all currently visible shapes in a node, Control-click or right-click in the
Viewer, then choose Copy Visible Shapes from the shortcut menu.
Important: When copying a shape, the pointer must be directly over the shape you
intend to copy. Otherwise, you may not copy the correct shape.
2 Select the Warper or Morpher node into which you want to paste the shape.
3 Do one of the following:
• Right-click in the Viewer, then choose Paste Shapes from the shortcut menu.
• Press Command-V or Control-V.
The pasted shape appears just like any other newly created closed shape, and you can
modify or duplicate it as necessary.
Connecting Source and Target Shapes
To create the actual warp or morph effect, you need to connect each source shape
you’ve created to a corresponding target shape. You can do so by either drawing two
shapes separately and connecting them afterwards, or by drawing the source shape
and duplicating it to use it as a starting point for the target. (Two shortcut menu
commands—Duplicate Shape, and Duplicate and Connect Shape—make this easy.)
Regardless of your intended use for the shapes you’ve created, until they’re connected
to one another, they remain unassigned.
To connect a separately drawn source shape to a target shape:
1 Click the Connect Shapes button.
2 Click a source shape.
3 Immediately click the target shape you want to connect to the source shape.
After the shapes are connected, the source shape appears with a light blue path, and
the target shape appears with a dark blue path, indicating that the connection has
been made. Purple connection lines appear between the source and target shapes to
show which parts of each shape are connected.
In the Parameters tab of the corresponding Warper or Morpher node, an additional connection parameter appears for the connection you established. By default, each new connection parameter is named “connection1Name,” “connection2Name,” and so on, with the middle number incremented as each new connection is created. These names can be changed to more easily identify each connection for individual manipulation later on.
To disconnect a source shape from a target shape:
• In the Parameters tab of the corresponding Warper or Morpher node, click the Delete button of the connection parameter you want to break.
After disconnecting a source/target shape pair, both shapes become unassigned, and
turn yellow by default.
For more information about the parameters of the Warper node, see “Parameters in the
Warper Node” on page 846. For more information about the parameters of the Morpher
node, see “Additional Controls and Parameters in the Morpher Node” on page 855.
Modifying Connection Lines
Once you’ve connected a pair of source/target shapes, connection lines appear to show
the deformation path that pixels in the source shape will follow to conform to the
target shape. These connection lines can be moved to change the path and alter the
look of the warp or morph effect. You can also add more connection lines to increase
the amount of control you have over the warp or morph effect.
To move the start or end point of a connection line independently:
1 Click the Edit Connections button.
2 Drag a connection point to another location on the shape. The connection point’s
movement is restricted to the contour of the shape.
To move the entire connection line at once:
1 Click the Edit Connections button.
2 Drag a bounding box or Shift-click each point to select both the start and end points of
the connection line you want to move.
3 With both points selected, dragging one of them will move both at the same time.
Both ends of the connection line are restricted to moving along the contours of the
source and target shapes, and you can’t move a connection point past another
connection point.
To add more connection lines to a source/target shape pair:
1 Click the Edit Connections button.
2 Shift-click either a source or target shape at the location where you want a new
connection line to be created.
A new connection line is immediately created where you clicked. The other end of the
new connection line is placed at the closest point of the corresponding shape in the pair.
Locking Source and Target Shapes
Once you’ve created one or more source or target shapes, you can lock them
individually in a Warper or Morpher node’s Parameters tab, or all together using the
Lock Shapes buttons in the Viewer shelf. This is useful if you’re modifying source and
target shapes that are very close together, and you want to make changes to one
without accidentally moving the other.
To lock all source and/or target shapes in the currently selected node, do one
of the following:
• Click the Lock Source Shapes button to lock all source shapes.
• Click the Lock Target Shapes button to lock all target shapes.
• Click the Lock Boundary Shapes button to lock all boundary shapes.
You can also lock individual source and target shapes using the lock button to the left
of each shape parameter in that node’s Parameters tab. However, the Lock Shapes
buttons in the Viewer shelf always supersede these individual shape-locking parameter
controls. See “Parameters in the Warper Node” on page 846 for more information.
Defining Boundary Shapes
You can use any open or closed shape or single-point shape as a boundary shape to
pin down areas of the image you don’t want to be warped, or to exclude whole areas
of the image from being affected by the source/target shape pairs you’ve created. You
can create as many boundary shapes as you need to lock areas of the image you don’t
want to be warped.
Since boundary shapes are essentially shapes that are both source and target shapes
simultaneously, you also define them using the Connect Shapes button.
To make an unassigned shape into a boundary shape:
1 Select the Warper or Morpher node you’re working on, then create a new shape
outlining the region of the image you want to lock down.
2 To turn this shape into a boundary shape, do one of the following:
• Click the Connect Shapes button, then click twice on the shape you’ve created.
• Right-click the shape, then choose Set as Boundary Shape from the shortcut menu.
The shape turns orange by default to indicate that it’s now a boundary shape, and a
new connection parameter appears in the Parameters tab of the Warper or Morpher
node. By default, each connection parameter that defines a boundary shape is named
after the shape it corresponds to. For example, if the original shape was named
“shape3,” the connection parameter that defines it as a boundary shape is named
“shape3_boundary.”
Turning Boundary Shapes Into Unassigned Shapes
Once you’ve turned a shape into a boundary shape, the only way to turn it back into an
unassigned shape is to delete the connection parameter that corresponds to it in the
Parameters tab of the Warper or Morpher node, using that parameter’s Delete button.
Customizing Shape Controls
Several parameters in the shapeColors subtree of the colors subtree, and in the shapeControls subtree of the guiControls subtree of the Globals tab, allow you to customize the color of shapes and the behavior of shape controls in the Viewer.
Shape Colors
By default, the paths of source shapes are light blue; paths of target shapes are dark
blue; paths of connection lines are purple; paths of boundary shapes are orange; and
paths of unassigned shapes are yellow. These colors can all be changed using the
following parameters in the shapeColors section of the colors subtree in the Globals tab.
Parameter Shape Type Default Color
ShapeColor Unassigned shapes Yellow
sourceColor Source shapes Light Blue
targetColor Target shapes Dark Blue
connectionColor Connection lines Purple
boundaryColor Boundary shapes Orange
lockedColor Locked shapes Gray
displacedColor Displaced Target shapes Pink
To change the default color of a shape color parameter:
1 Click the color swatch of the shape color parameter you want to change.
2 Use the Color Picker to select a new color to use for that shape type.
All shapes of that type are now displayed with the new default color you selected.
Shape Editing Controls
Various behaviors for selecting points, creating Bezier curves, and adjusting each shape’s transform control can be customized in the shapeControls subtree of the guiControls subtree of the Globals tab. You can modify how each of these controls works to better suit your working style or input method—for example, whether you use a graphics tablet or mouse.
Each parameter has a slider that adjusts the control’s behavior.
rotoAutoControlScale
An option which, when enabled, increases the size of the transform controls of shapes, based on the vertical resolution of the image to which the shape is assigned. This makes it easier to manipulate a shape’s transform control even when the image is scaled down by a large ratio.
rotoControlScale
A slider that allows you to change the default size of all transform controls in the Viewer.
You can also resize every transform control appearing in the Viewer by holding down the Command or Control key while dragging the handles of any transform control in the Viewer.
rotoTransformIncrement
This parameter allows you to adjust the sensitivity of shape transform controls. When
this parameter is set to lower values, transform handles move more slowly when
dragged, allowing more detailed control. At higher values, transform handles move
more quickly when dragged. A slider lets you choose from a range of 1-6. The default
value is 5, which matches the transform control sensitivity of previous versions of Shake.
rotoPickRadius
This parameter lets you select individual points on a shape that fall within a user-
definable region around the pointer. This allows you to easily select points that are near
the pointer that may be hard to select by clicking directly. A slider allows you to define
how far the pointer may be from a point to select it, in pixels.
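The effect of a pick radius can be sketched as a simple hit test. The function below is an illustration of the concept only, not Shake’s implementation: a click selects the nearest point whose distance from the pointer falls within the radius, and nothing otherwise.

```python
import math

def pick_point(pointer, points, pick_radius):
    """Return the index of the nearest point within pick_radius of the pointer, or None."""
    best, best_dist = None, pick_radius
    for i, (x, y) in enumerate(points):
        d = math.hypot(x - pointer[0], y - pointer[1])
        if d <= best_dist:
            best, best_dist = i, d
    return best

points = [(10, 10), (40, 40), (12, 9)]
print(pick_point((11, 10), points, pick_radius=5))  # → 0 (the nearest point within 5 px)
```

Raising the radius makes clustered points easier to grab at the cost of precision, which is the trade-off the slider exposes.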
rotoTangentCreationRadius
This parameter lets you define the distance you must drag the pointer when drawing a
shape point to turn it into a Bezier curve. Using this control, you can make it easier to
create curves when drawing shapes of different sizes. For example, you could increase
the distance you must drag to avoid accidentally creating Bezier curves, or you can
decrease the distance you must drag to make it easier to create Bezier curves when
drawing short shape segments.
Using the Warper Node
The Warper node is useful for creating targeted deformations to alter the shape of a
subject in an image. Examples might include making someone’s nose grow, making an
animal’s eyes widen in surprise, or causing a bump to grow on someone’s forehead. The
Warper can be used to make a static change to a subject, or it can be animated to
cause dynamic changes in the subject’s shape.
Parameters in the Warper Node
A simple example of a Warper node used to warp an image with a single pair of source/
target shapes would appear with the following parameters. (For Warper nodes with
more source/target shape pairs defined, there will be more shapeName and
connectionName parameters listed.)
displayImage
A pop-up menu that allows you to choose whether the Viewer displays the source
image or the warped image. The warped image will be neither displayed nor rendered
if this pop-up menu isn’t set to Warped Image.
overallDisplacement
Defines the amount of displacement that is applied to all source/target shape pairs
simultaneously. A value of 0 applies no displacement, 0.5 applies displacement halfway
between the source and target shapes, and 1 applies the maximum displacement to
match the target shape. It is also possible to set this parameter to a value greater than
1, although this results in an overlapping displacement which may not be desirable.
Note: When creating a warp effect, you may achieve a more realistic or organic effect if
you adjust the displacement of each individual source/target shape pair separately,
rather than relying on this single control to animate the displacement of every shape
pair in the node.
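In effect, a displacement value linearly interpolates each source control point toward its corresponding target point. A minimal sketch of that relationship (plain Python, not Shake code; the point lists are made up for illustration):

```python
def displace(source_pts, target_pts, displacement):
    """Interpolate each source point toward its target point.

    A displacement of 0 leaves the source shape unchanged, 1 matches the
    target shape, and values greater than 1 overshoot past the target.
    """
    return [
        (sx + (tx - sx) * displacement, sy + (ty - sy) * displacement)
        for (sx, sy), (tx, ty) in zip(source_pts, target_pts)
    ]

source = [(100.0, 200.0), (150.0, 210.0)]
target = [(100.0, 240.0), (150.0, 260.0)]

print(displace(source, target, 0.0))  # source shape unchanged
print(displace(source, target, 0.5))  # halfway: [(100.0, 220.0), (150.0, 235.0)]
print(displace(source, target, 1.0))  # fully matches the target shape
```

Animating each pair's own connectionDisplacement amounts to giving each pair its own independent displacement value in this interpolation.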
In the Parameters tab, the general Warper parameters appear above the individual shape and connection parameters. Each shape parameter has Enable/Disable, Visibility, and Lock controls, a Name field, and a Delete button.
addBorderShape
A button that allows you to use the border of the image as a control shape to limit the
warping effect. By default this parameter is turned on, and is the recommended setting
for most cases. Turning this control off results in each source/target pair having a
considerably more exaggerated effect on the image, and may necessitate the use of
additional boundary shapes to control the effect.
overSampling
An integer value that represents the number of samples per pixel that are taken into
account when performing a warp. This parameter is set to 1 by default, which results in
faster rendering times. However, extreme warping effects may introduce aliasing
artifacts that can be reduced or eliminated by increasing this value, up to a maximum
value of 4. Increasing this parameter may cause render times to increase dramatically.
Note: Although the slider is limited to a range of 1 to 4, you can enter larger values into
this parameter’s value field.
• dodPad: A subparameter of overSampling. This slider lets you pad the DOD around
the image affected by the Warper node by 0 to 100 pixels. The Warper node tries to
automatically calculate a new DOD for the affected image, but in certain instances
the resulting DOD may be too small. In these instances, the dodPad parameter lets
you expand an incorrectly calculated DOD to avoid clipping.
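Conceptually, oversampling evaluates the image at several evenly spaced subpixel positions and averages the results, which is why hard edges smooth out as the value increases. A rough sketch of the idea (plain Python; `sample_fn` and the hard-edge "image" are illustrative stand-ins, not Shake's actual sampling code):

```python
def oversample(sample_fn, x, y, samples_per_axis=1):
    """Average sample_fn over an evenly spaced subpixel grid.

    samples_per_axis=1 takes a single sample at the pixel center (fastest);
    higher values reduce aliasing at a steep cost in render time.
    """
    n = samples_per_axis
    total = 0.0
    for i in range(n):
        for j in range(n):
            # Offsets place the samples evenly inside the pixel.
            sx = x + (i + 0.5) / n - 0.5
            sy = y + (j + 0.5) / n - 0.5
            total += sample_fn(sx, sy)
    return total / (n * n)

# A hypothetical image: a hard vertical edge at x = 10.
edge = lambda x, y: 1.0 if x >= 10 else 0.0

print(oversample(edge, 10, 0, samples_per_axis=1))  # 1.0 (hard, aliased edge)
print(oversample(edge, 10, 0, samples_per_axis=2))  # 0.5 (edge softened)
```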
shape1Name
In this example, the shape1Name parameter represents the source shape. Additional
controls in all shape parameters allow you to turn the shape on or off, make the shape
itself visible or invisible in the Viewer, lock the shape to prevent any further changes to
it, or delete the shape.
shape2Name
In this example, the shape2Name parameter represents the target shape. Each target
shape has a corresponding shapeName parameter.
connection1Name
Connection parameters represent both connection lines that connect source shapes to
target shapes, and boundary shapes that you’ve defined. Deleting a connection
parameter deletes either the corresponding connection line, or turns a boundary shape
back into an unassigned shape.
• connection1Displacement: This subparameter of the connection1Name parameter
defines the amount of displacement that is applied to the source/target shape pair
defined by the connection1Name parameter. Each source/target shape pair has its
own corresponding connectionDisplacement parameter, allowing you to animate
each warp independently for a more organic, natural look. By default, each
connectionDisplacement parameter is linked to the overallDisplacement parameter,
so that they all animate together.
A Warper Node Example
The Warper node is extremely flexible, and can be used for a wide variety of image
distortion or manipulation tasks. In this example, we’ll use the Warper to change a dog’s
facial features.
To warp an image:
1 Attach the Warper node to an image.
2 First, draw and, if necessary, animate your source shapes (see “Drawing New Shapes” on
page 832).
These shapes define the parts of the subject you want to warp.
3 When you’re ready to finish your shape, do one of the following:
• Click the first point if you want to create a closed shape.
• Double-click when creating the last point to create an open shape.
• To create a single-point shape, immediately after creating your first point, right-click
and choose Finish Shape from the shortcut menu.
To add additional source shapes to define additional warp areas, click the Add Shape
button. Each shape you create using the Warper node is yellow, indicating that it’s
unassigned and does not yet have any effect on the image.
Unassigned shapes defining the area to warp
Next, you need to create a corresponding target shape for each source shape you
created. Target shapes define the contour of deformation to which pixels identified by
each source shape are moved.
4 Create target shapes using the same shape-drawing techniques used in step 2.
Note: Another technique you can use to create a target shape quickly is to duplicate
the source shape by right-clicking it and choosing Duplicate Shape from the shortcut
menu. You can also choose Duplicate and Connect Shape (or press Control-D or
Command-D), in which case you can skip step 5. If you’re using either of these options,
you may want to animate the original source shape first, as the copied shape inherits
the animation.
As you create target shapes for each source shape, they remain yellow to indicate that
they’re still unassigned, and have no effect on the image.
To create the actual warping effect, you have to connect the shapes you created a pair
at a time.
5 To do this, use the following steps:
a Click the Connect Shapes button.
b Click a source shape that conforms to the actual position of the first feature you
identified.
c Immediately click the corresponding target shape you created.
Unassigned shapes are yellow.
After this second click, the source/target shape pair is defined, the shape colors change,
and a connectionName parameter appears in the Parameters tab. Because the
overallDisplacement parameter defaults to 1, the effect is immediately seen (see
“Connecting Source and Target Shapes” on page 840).
Once connected, source shapes become light blue, target shapes become dark blue,
and the connection lines between them become purple. These colors can be
customized, if necessary. For more information on customizing shape colors in the
Viewer, see “Customizing Shape Controls” on page 843.
6 If necessary, click the Edit Connections button, then drag in the Viewer to adjust the
position of the connection lines running between the source and target shapes.
7 Drag the source and target connection points and slide them along the shape to
change the angle of deformation in order to create the effect you need.
Once connected, source shapes are light blue and target shapes are dark blue.
In this example, the connection lines are straightened in the eyes (see “Modifying
Connection Lines” on page 841).
8 If necessary, adjust the amount of warp by modifying the overallDisplacement
parameter in the Parameters tab.
You can also adjust the displacement of each source/target shape pair individually
using the connectionDisplacement parameter in that pair’s connectionName
parameter.
A value of 0 in the overallDisplacement parameter results in an unwarped image. A
value of 0.5 produces a warp that’s halfway between the source and target shapes, and
a value of 1 results in a warp that completely conforms to the target shape.
To see the actual warp effect, choose Warped Image from the Select Display Image
pop-up menu in the Viewer shelf (or press the F11 key).
Final adjusted source/target shape pairs with modified connection points
Note: In addition to viewing the actual warp effect, you can view the position of the
displacement targets, as defined by the overallDisplacement and
connectionDisplacement parameters, by turning on the Displaced Target Shape
Visibility button in the Viewer shelf. These indicators are designed to help you see what
the deformation will be without having to render the entire image. Displaced target
shapes are pink by default.
This example displays a characteristic of the Warper—it works as if the image is made
of a sheet of rubber and you’re actually pushing the pixels of the image around,
stretching the surrounding image. In the above image, the pixels of the eyebrow are
moved up because they lie directly on the path of the source shape. You’ll also notice
that the right edge of the eyebrow appears to stretch back to the original position of
the eyebrow. This is because the pixels surrounding the eyebrow are stretching to fill in
the areas of the image where the eyebrow used to be. If the effect is not what you
want, modify the source shape to redefine the area of the image being manipulated.
You’ll notice that, in addition to the eye and eyebrow being warped, a significant area
of the face surrounding the source/target shape pair is also affected, and the top of the
head is pushed upward. To limit the warping effect to the region immediately
surrounding the eyes and eyebrows, create one or more boundary shapes.
9 First, choose Source Image from the Select Display Image pop-up menu in the Viewer
shelf. This allows you to draw your boundary shapes to match features in the original
image.
10 Next, draw more shapes identifying the areas of the source image you want to lock
down.
Pink displaced target shapes indicate the value of the displacement parameters.
Boundary shapes can be either open, closed, or single-point shapes, depending on
how much of the image you want to lock down. In this instance, you want to exclude
the entire image from the warp effect except for the eye, eyebrow, and surrounding
region, so a closed shape is drawn surrounding this area.
11 Right-click the shape you just created, then choose Set as Boundary Shape from the
shortcut menu.
This effectively sets this shape to be both a source and target shape, which pins that
area of the image down.
Now, you’ll probably need to make adjustments to refine the effect you’re trying to
achieve. It will probably be helpful to use the Visibility and Lock buttons to assist you
when manipulating the shapes, so that you don’t accidentally adjust the wrong points
when two shapes overlap. Use the controls in the Viewer shelf to change the visibility
and locking of all shapes of a given type simultaneously, or set the visibility and locking
of each shape individually in the Parameters tab.
Using the Parameters tab controls for each individual shape, you may rename the
shape, enable it, toggle its visibility, lock the curve so it is visible but can’t be modified,
or delete it. Additional parameters also appear for each connection line and boundary
shape definition you’ve set up.
12 To create an animated effect, keyframe the overallDisplacement parameter to animate
every source/target shape pair simultaneously.
Boundary shapes are orange.
You can also individually animate the displacement caused by each source/target
shape pair you’ve defined. To do so, open the connectionName subtree to reveal the
connectionDisplacement parameter. Animating specific parameters can create a more
organic-looking effect.
Using the Morpher Node
The Morpher node blends two images together to create the effect of one subject
changing shape to turn into another. The Morpher node does this by performing two
warping operations, one on the source image to warp it into the shape of the target
image, and another warping operation to warp the target image from the shape of the
source back to its own shape. Once both warping operations match the shapes of the
source and target images to one another, a built-in cross-fade dissolves from the first
warp to the second, providing the illusion that the first image is changing into the
second.
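The process described above can be summarized in a few lines. In this sketch (plain Python, purely conceptual; the warp callables and flat pixel lists stand in for Shake's actual image engine), a morph frame is two warps followed by a cross-fade:

```python
def morph(source, target, warp_src, warp_tgt, t):
    """One frame of a morph at progress t in [0, 1].

    warp_src(img, t): deforms the source toward the target's shape
    (t = 0 leaves it unwarped, t = 1 fully matches the target shape).
    warp_tgt(img, t): deforms the target so that at t = 0 it matches the
    source's shape and at t = 1 it is back to its own shape.
    Both callables and the flat pixel lists are illustrative stand-ins.
    """
    a = warp_src(source, t)
    b = warp_tgt(target, t)
    # Built-in cross-fade from the first warp to the second.
    return [(1.0 - t) * pa + t * pb for pa, pb in zip(a, b)]

# With identity warps, the morph reduces to a plain cross-dissolve.
identity = lambda img, t: img
print(morph([0.0, 0.0], [1.0, 1.0], identity, identity, 0.5))  # [0.5, 0.5]
```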
Tips For Successful Morphing
Successful morphs benefit from planning ahead during the shoot. Ideally, the positions
of the source and target images match relatively well. If they need adjustment, resizing,
or repositioning to help them match better, you can make these adjustments in your
node tree prior to adding the Morpher node.
If the source and target subjects are moving, their movements should match one
another so that the warping targets you set for both can line up properly. If the
motions line up but the timing is off, you can select the offending clip’s FileIn node and
use the Timing tab parameters to remap its speed so that the motion lines up. For
more information, see “Retiming” on page 117.
Before and after the morph effect
Because morphing warps images the same way the Warper node does, it is essential to
isolate the subjects you’re morphing prior to adding the Morpher node. This way, the
background won’t change as the source image morphs into the target, nor will the
warp being applied to the subject of each image affect the background incorrectly.
Additional Controls and Parameters in the Morpher Node
Most of the Morpher node’s controls are identical to those of the Warper. For
instructions on how to use specific functions, consult the Warper node, above. The
Morpher node does have some additional parameters and controls.
Additional Viewing Modes in the Viewer Shelf
In addition to the Source Image and Target Image options in the Select Display Image
pop-up menu of the Viewer shelf, the Morpher provides five additional Viewer modes:
• Morphed Image: Shows the actual morph effect being created. This image is a
combination of the source warped and target warped images being dissolved
together. This is the end result of the Morpher node.
• Source Warped Image: Displays the warp effect being applied to the source image.
• Target Warped Image: Displays the warp effect being applied to the target image.
• Dissolve Mask: A grayscale image generated by the dissolve settings for each of the
shapes. Because the dissolve settings for each individual connectionName parameter
are linked to the overallDissolve parameter, this option displays a solid screen where:
• Black represents an overallDissolve value of 0, showing only the source image.
• White represents an overallDissolve value of 1, showing only the target image.
If you’ve animated the individual connection_Dissolve parameters (in the
connection_Name subtree) separately, different regions of the mask can show different levels of gray.
• Dissolved Image: Displays the dissolve between the source and target images,
without any warping being applied. Appears as a simple cross-dissolve.
Additional Parameters in the Morpher Node
This node displays the following additional controls in the Parameters tab.
overallDissolve
Controls whether the color of the morphed image is taken from the source image or
the target image. 0 represents 100 percent source image, .5 results in a blend between
both, and 1 represents 100 percent target image.
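In other words, the parameter drives a simple linear blend, which can be sketched as (plain Python, illustrative only):

```python
def dissolve(source_px, target_px, overall_dissolve):
    """Blend source and target pixel values; 0 = source, 1 = target."""
    return (1.0 - overall_dissolve) * source_px + overall_dissolve * target_px

print(dissolve(0.2, 0.8, 0.0))  # 0.2 (100 percent source)
print(dissolve(0.2, 0.8, 0.5))  # 0.5 (even blend of both)
print(dissolve(0.2, 0.8, 1.0))  # 0.8 (100 percent target)
```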
connectionDissolve
Each source/target shape pair has a corresponding connection_Name parameter.
Nested within each connectionName parameter is a pair of connection_Displacement
and connection_Dissolve parameters that allow you to independently adjust the
displacement and dissolve of each part of the morph. By default, each
connection_Dissolve parameter is linked to the overallDissolve parameter, so that they
all animate together.
How to Morph Two Images
1 In the Node View, attach a Morpher node to two images.
This example creates the effect of the man’s face turning into that of the woman. The
image of the man is the source, connected to the morpher1.Source input. The
transformed image of the woman is connected to morpher1.Target input.
If the images need to be manipulated to make them line up, do this first.
2 In this example, the image of the woman is transformed with a Move2D node, to
position and rotate it to more approximately fit the image of the man.
This is essential to creating a smooth morphing effect.
Source image and target image
If it is necessary to isolate the subject of the source and target images, you may want to
insert RotoShape or keying nodes prior to the Morpher node.
3 Move the Time Bar playhead to the first frame of the clip where you want the morph
effect to take place, then choose Source Image from the Select Display Image pop-up menu.
4 Click the Add Shapes button in the Viewer shelf, then draw shapes as necessary to
match the features of the subject.
If necessary, animate your shapes to follow the animation.
Note: You can quickly jump to the source image by pressing F9.
The more features you identify with source shapes, the more detailed the morphing
effect will be. You should remember that warping affects the entire region of the image
surrounding each source/target shape pair. While it’s important to create shapes for
each significant feature of the image, you don’t have to go overboard creating shapes
for every tiny feature of the subject image, unless it will enhance the effect you’re
trying to achieve.
When picking features to manipulate, keep in mind that the source shapes you define
are pushing and pulling the corresponding image features into the shape of the target.
Pick features that have a direct path to similar features in the target image, if at all
possible, to avoid unwanted artifacts in the image.
5 Once you’ve created all the source shapes you think you need, set the Viewer to display
the target image (using the Select Display Image pop-up menu).
Note: You can quickly jump to the target image by pressing F10.
New unassigned shapes defining the source features
6 To create a set of target shapes to connect to the source shapes you created in step 4,
do one of the following:
• The easiest method is to right-click each source shape, then choose Duplicate and
Connect from the shortcut menu (or press Control-D or Command-D). Afterward, you
can hide all your source shapes by turning off the Source Shape Visibility toggle in
the Viewer shelf to avoid accidentally moving them while you adjust the target
shapes to line up with the appropriate features of the target image.
Note: It may also help to turn off Enable/Disable Shape Transform Control in the
Viewer shelf, to avoid accidentally dragging transform controls that overlap the
shape you’re trying to adjust.
• Manually draw more shapes over features in the target image that correspond to the
features you identified in the source image. When you’re done, you should make sure
that you’ve drawn a target shape to correspond to every source shape.
7 Next, connect each source/target shape pair together using the Connect Shapes
button in the Viewer shelf (see “Connecting Source and Target Shapes” on page 840).
Shapes immediately after using the Duplicate and Connect Shapes command
Target shapes have been adjusted to fit the target image. Connection lines have been added for additional adjustments.
When readjusting the target shapes you’ve created, the sheer number of shapes
needed to create your morphing effect may make the Viewer a little crowded, making
it difficult to adjust individual shapes. You may find it’s easier if you hide every shape
except the one you want to work on. You can hide all of the target shapes by right-clicking in the Viewer, then choosing Shape Visibility > Hide All Shapes from the
shortcut menu. Afterwards, right-click again, then choose the name of the first target
shape you want to edit from the shortcut menu. The shape reappears in the Viewer,
ready for editing. Continue hiding and showing individual shapes as necessary until
you’ve adjusted them all.
8 If necessary, animate the target shapes you’ve just created to match any motion in the
target image.
9 Adjust the overallDisplacement parameter to control how much warp is applied to
push the pixels from the source shapes to the target shapes you’ve defined.
Note: The overallDisplacement parameter operates on all source/target shape pairs in
the node simultaneously.
10 To animate the morphing effect, keyframe the overallDisplacement parameter. To see
the morphing effect in the Viewer as you adjust the overallDisplacement slider, you
must choose Morphed Image in the Select Display Image pop-up menu in the Viewer
shelf. You can also set the Viewer to display the morphed image by pressing F11.
The morphed image with overallDisplacement and overallDissolve of 0.5
To add a new keyframe, move the playhead to a frame where you want to make an
adjustment, click the Autokey button for the overallDisplacement parameter in the
Parameters tab, then adjust the overallDisplacement slider.
A value of 0 in both the overallDisplacement and overallDissolve parameters results in
an unmorphed source image. A value of .5 produces a morph that’s halfway between
the source and target images, and a value of 1 results in the end of the morph—the
final target image.
While you adjust these parameters, enable the Show Displaced Target Shapes button in
the Viewer shelf to see the actual position of the displacement targets as defined by
the overallDisplacement and connection_Displacement parameters. These indicators
are designed to help you see what the deformation will look like, without having to
render the entire image. Displaced target shapes are pink by default.
Note: As with the Warper node, you can adjust the individual displacement of each
source/target shape pair using the connection_Displacement and connection_Dissolve
parameters nested within each connection_Name parameter in the Parameters tab.
Keyframing these parameters with separate timings creates a more sophisticated-looking effect than if you simply animated the overallDisplacement parameter.
11 If you want, you can keyframe the overallDissolve parameter separately from the
overallDisplacement parameter to create different effects. Adjustments to the
overallDissolve parameter control how the source image fades into the target image—
this works exactly like a Mix node.
Note: By default, the overallDissolve parameter is linked to the overallDisplacement
parameter, so keyframing one will automatically keyframe the other to the same value.
Keyframing the overallDissolve parameter will break this link.
12 Test the resulting effect to see how well it works. If you see problems, toggle the Viewer
between the source warped image and target warped image to see how successfully
the source and target images are matching. Viewing each image independently makes
it easier to spot unwanted artifacts stemming from poorly placed or inadequate
numbers of source/target shape pairs.
If you see problems, either adjust the position and shape of the source and target
shapes as necessary, or identify additional features to create source/target shape pairs
in order to increase the amount of control you have over the effect.
28 Filters
The filter nodes in Shake not only enable simple image
manipulation—they also provide numerous ways to
modify alpha channel data, allowing you to create useful
images for masking functions.
About Filters
While color corrections change the value of an individual pixel according to a
mathematical equation (for example, *2, -.5, and so on), filters calculate the new value
of a pixel by examining its neighbors, and passing it through what is called a spatial
filter. Classic filter functions are blur, image sharpen, emboss, and median. You can also
create your own unique filters, of any resolution, with the Convolve node. Spatial filters
are also applied when a geometrically transformed image is resampled—for example,
following a Scale or Rotate node.
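The distinction can be sketched in a few lines (plain Python, illustrative only; a 1D row of pixels stands in for a full image):

```python
def color_correct(row, gain):
    """Pointwise: each output pixel depends only on the same input pixel."""
    return [p * gain for p in row]

def box_filter(row, radius=1):
    """Spatial: each output pixel averages its neighborhood (edges clamped)."""
    out = []
    for i in range(len(row)):
        lo = max(0, i - radius)
        hi = min(len(row), i + radius + 1)
        window = row[lo:hi]
        out.append(sum(window) / len(window))
    return out

row = [0.0, 0.0, 1.0, 0.0, 0.0]
print(color_correct(row, 2))  # [0.0, 0.0, 2.0, 0.0, 0.0]: spike stays put
print(box_filter(row))        # the spike spreads over its neighbors
```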
Masking Filters
When you use an image to control the amount of filtering, you should use the multi-input image-based filters such as IBlur, ISharpen, and IRBlur, rather than simply masking the
effect. Using image-based filter nodes instead of mask inputs slows down
processing, but the resulting effect is of higher quality.
In the following example, a Text node is attached to a Blur node. A Ramp node attached
to the Blur mask input acts as a control image to modify the blur effect.
The result, merely a blend between sharp and blurred elements, is not very
compelling. (Note that the Ramp node has a default alpha value of 1 for both ends; you
should change the alpha1 value to 0.)
To get a better result, use the dedicated IBlur node instead, with the Ramp node as the
second input image, rather than a mask input.
Filters Within Transform Nodes
Filter operations aren’t limited simply to blurs, emboss effects, and other convolution
operations assigned to filter nodes. Filter parameters are also available in many
transform nodes.
Most filter and transform nodes allow you to use one of many different filtering
processes to perform transforms, blurs, and convolve effects. Which filter you choose
can make a big difference in the quality of the resulting effect, especially when
animated.
To maximize the quality of scaling in Shake, the “default” filter setting in transform
nodes actually switches between two different filters—the mitchell filter for nodes that
increase image scale, and the sinc filter for nodes that decrease image scale. A panel of
film professionals watched several versions of a shot that had been processed with
different filters projected onto a cinema screen. They decided that the mitchell and sinc
filters provided the best quality. Other filters, such as the box filter (the closest
operation to what is more commonly referred to as “bilinear”), may give subjectively
superior results in some cases (particularly with static images) but tend to handle high-frequency image data poorly.
To further maximize the quality of transforms, some nodes in Shake (such as CornerPin)
let you use separate filter operations for horizontal and vertical transforms. Keep in mind
that the default filter option uses mitchell for scaling up, and sinc for scaling down.
Applying Separate Filters to X and Y Dimensions
If a node does not already have separate filtering options for X and Y transforms (such
as the Resize node), you can set up independent filtering for each dimension in two
steps. For example, apply one Resize node, set the filter parameter to the desired filter
method, then adjust the xSize parameter while leaving the ySize parameter
untouched. Next, apply a second Resize node immediately after the first. Set a
different filter method, then adjust its ySize parameter. Because both nodes
concatenate, the two steps are combined into a single operation, so there is no extra cost in speed or quality.
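The two-node technique works because each Resize pass resamples a single axis. A conceptual sketch (plain Python with linear interpolation standing in for whichever filter methods you choose; this is not Shake's implementation):

```python
def resample_row(row, new_len):
    """1D linear resample of a list of pixel values (one axis of a Resize)."""
    if new_len == 1:
        return [row[0]]
    scale = (len(row) - 1) / (new_len - 1)
    out = []
    for i in range(new_len):
        x = i * scale
        lo = int(x)
        hi = min(lo + 1, len(row) - 1)
        frac = x - lo
        out.append(row[lo] * (1 - frac) + row[hi] * frac)
    return out

def resize(image, new_w, new_h, x_pass=resample_row, y_pass=resample_row):
    """Resample X first, then Y, like two chained Resize nodes.

    x_pass and y_pass may be different resamplers, mirroring a different
    filter method per axis.
    """
    rows = [x_pass(r, new_w) for r in image]                            # first node: xSize
    cols = [y_pass([r[c] for r in rows], new_h) for c in range(new_w)]  # second node: ySize
    return [[cols[c][r] for c in range(new_w)] for r in range(new_h)]

img = [[0.0, 1.0],
       [1.0, 0.0]]
print(resize(img, 3, 3))  # the new center pixel interpolates to 0.5
```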
Note: The subpixel parameter in the Resize node affects the way fractional resize is
performed. If your resize value is not an integer (for example, 512 zooming to 1024 is
an integer by a factor of 2; zooming 512 to 1023 is not an integer, because it is a factor
of 1.998), you have subpixel sampling. For Resize, if the new width or height is not an
integer (either because it was set that way, or because of a proxy scale), you have a
choice to snap to the closest integer (0 = subpixel off) or generate transparent pixels
for the last row and column (1 = subpixel on).
Filter Description
box Computationally fast, and gives a “boxy” look. Default size is 1 x 1.
default By default, mitchell is used to resize up, and sinc to resize down.
dirac Dirac and impulse are the same. Default size is 0 x 0.
gauss Gaussian lacks sharpness, but avoids ringing and aliasing well.
Default size is 5 x 5.
impulse Fast but lower quality. Default size is 0 x 0.
lanczos Similar to the sinc filter, but with less sharpness and ringing.
Default size is 6 x 6.
mitchell This is the default filter when scaling up. A good balance between
sharpness and ringing, and so a good choice for scaling up. Default
size is 4 x 4. This is also known as a high-quality Catmull-Rom filter.
quad Like triangle, but more blur with fewer artifacts. Default size is 3 x 3.
sinc This is the default filter when scaling down. Keeps small details
when scaling down, with good antialiasing. Ringing problems make it a
questionable choice for scaling up. Default size is 4 x 4. It can also
deliver negative values, which can be interesting when working at
float bit depth.
triangle Not highest quality, but fine for displaying a scaled image on your
screen. Default size is 2 x 2.
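Several of these filters have simple closed-form kernels. Here is a hedged sketch of three of them in plain Python (these are the standard definitions; Shake's internal implementations may differ in windowing and normalization). Note that a Lanczos window of a = 3 matches the 6 x 6 default size listed above:

```python
import math

def box(x):
    """Constant over a 1-pixel-wide window."""
    return 1.0 if abs(x) <= 0.5 else 0.0

def triangle(x):
    """Linear falloff over a 2-pixel-wide window (bilinear when separable)."""
    return max(0.0, 1.0 - abs(x))

def lanczos(x, a=3):
    """Windowed sinc: sinc-like sharpness with reduced ringing."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

print(box(0.0), triangle(0.0), lanczos(0.0))  # every kernel peaks at 1.0
print(round(triangle(0.5), 3))                # 0.5: halfway down the ramp
```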
The Filter Nodes
The following sections describe each filter node, and include parameters, defaults, and
examples.
ApplyFilter
The ApplyFilter node applies a blur effect like the Blur node, but additionally allows you
to choose separate filters for the X and Y dimensions. You can then scale the default
base range (in pixels) of the predefined filters. For instance, if the default number of
pixels sampled on either side of the base pixel is 3 pixels, an xScale of 2 increases that
range to 6 pixels.
You can change the filter type in the much faster Blur node. The ApplyFilter node exists
only to allow absolute compatibility with images generated by other software
packages.
Note that dirac and impulse filters have no effect with ApplyFilter.
Parameters
This node displays the following controls in the Parameters tab:
xFilter, yFilter
See “Filters Within Transform Nodes” on page 862.
xScale, yScale
The amount of filtering in pixels.
spread
Tells Shake whether or not to apply the blur to areas outside of the frame. A button to
the right of the parameter name lets you set the mode.
• 0 = Compute “In Frame Only.”
• 1 = Compute “Outside Frame.”
Because of the Infinite Workspace, it is sometimes handy to compute outside of the
frame as well, for example, if the Blur is placed after a Scale command. Note that if
nothing is outside of the frame (black), you see a black edge.
Blur
The Blur node blurs the image. This is a Gaussian blur (by default), but you can change
the filter for both X and Y. Use this node instead of the similar, but slower, ApplyFilter
node.
Shake’s Blur is one of the few nodes that can deactivate the Infinite Workspace; its
“spread” parameter gives you the choice of blurring pixels inside or outside of the image
boundaries. If your final image appears clipped and you aren’t sure why (for example,
you haven’t applied any Crop commands), go back and check the Blur node spread
parameter. Toggle the spread parameter to Outside Frame (1), and the clipping should
disappear.
For more information on the Infinite Workspace, see “Taking Advantage of the Infinite
Workspace” on page 405.
Parameters
This node displays the following controls in the Parameters tab:
xPixels, yPixels
The amount of blur in pixels; for example, entering a value of 200 blurs 200
pixels to either side of the current pixel. By default, yPixels is linked to xPixels.
spread
Tells Shake whether or not to consider outside of the frame. A button to the right of
the parameter name lets you set the mode.
• 0 = Compute “In Frame Only.”
• 1 = Compute “Outside Frame.”
Because of the Infinite Workspace, it is sometimes handy to compute outside of the
frame as well, for example, if the Blur is placed after a Scale command. Note that if
nothing is outside of the frame (black), you see a black edge.
xFilter, yFilter
Lets you pick which method Shake uses to transform the image. For more information,
see “Filters Within Transform Nodes” on page 862.
channels
Lets you set which channels Shake should blur. You can choose one or all of the red,
green, blue, or alpha channels. The default is “rgba.”
Convolve
The Convolve node allows you to define your own custom filter using a convolution
kernel. Standard filters are available in your include/nreal.h file, and appear in the kernel
pop-up menu in the Parameters tab. The included kernels define the following filters:
• blur3x3: 3 x 3-pixel blur
• blur5x5: 5 x 5-pixel blur
• sharpen
• edge3x3: 3 x 3-pixel edge detection
• edge5x5: 5 x 5-pixel edge detection
• laplace: edge detection
• smoothedge: another type of edge detection
• sobelH: horizontal embossing
• sobelV: vertical embossing
• BabuV: another vertical edge detection
• BabuH: another horizontal edge detection
You can use these convolution matrixes as is, or as models to create your own matrixes.
Creating Custom Convolution Kernels
Convolution kernels consist of properly formatted data, which is used by the Convolve
node to produce the desired image processing effect. This data is included by default
in the include/nreal.h file, but may also be placed in other files in the same way you
would add a macro. For example, you could add convolution kernels to include/startup/
my_file.h.
A convolution kernel must have the following information. Each parameter should be
followed by a comma:
• The kernel begins with the following declaration:
DefKernel (
• The first parameter is the kernel’s name, enclosed in quotes.
• The second parameter is the size of the kernel matrix—in this case, 5 x 5.
• The third parameter is the gain factor. Each number is multiplied by this number, so 1
results in no change.
• The fourth parameter is the offset, which is added to the result before it is clamped/
quantized. 0 results in no offset.
• Next, the kernel matrix is entered as a comma-delimited list. In the case of a 5 x 5
matrix, this list would take the form of five lines of five comma-delimited values.
• The final line should be the end parenthesis and a semicolon, to end the declaration.
Here’s an example of a properly formatted kernel:
DefKernel(
“edge5x5”,
5, 5,
1,
0,
-1, -1, -1, -1, -1,
-1, -1, -1, -1, -1,
-1, -1, 24, -1, -1,
-1, -1, -1, -1, -1,
-1, -1, -1, -1, -1
);
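To illustrate how a kernel like this operates on a single pixel, here is a minimal Python sketch (not Shake code). It assumes the order of operations described above: multiply the neighborhood by the kernel, apply the gain, add the offset, then clamp. The function name and the 0-1 clamp range are illustrative assumptions.

```python
def convolve_pixel(region, kernel, gain=1.0, offset=0.0):
    """Apply a flat (row-major) kernel to an equally sized pixel region."""
    total = sum(k * p for k, p in zip(kernel, region))
    value = total * gain + offset
    return min(max(value, 0.0), 1.0)  # clamp the result to the 0..1 range

# The edge5x5 kernel from the example above: 24 in the center, -1 elsewhere.
edge5x5 = [-1.0] * 25
edge5x5[12] = 24.0

# On a flat region the positive center cancels the negative surround,
# so an edge-detection kernel returns 0.
flat = [0.5] * 25
print(convolve_pixel(flat, edge5x5))  # → 0.0
```

Because the kernel weights sum to -1 + 24 + ... = -1, uniform areas go to black while abrupt transitions survive, which is what makes this matrix an edge detector.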
Parameters
This node displays the following controls in the Parameters tab:
channels
Lets you set which channels Shake should blur. You can choose one or all of the red,
green, blue, or alpha channels. The default is “rgba.”
kernel
A pop-up menu that allows you to select any of the kernels included in the include/
nreal.h file, or wherever else you may have added convolution kernels of your own.
percent
A slider that lets you mix the modified image and the original image together to
produce a blend of both. By default, this parameter is set to 100 percent, resulting in
100 percent of the modified image being output.
absolute
Some filters return negative values. When absolute is enabled, these values are inverted
to produce positive values.
Defocus
The Defocus blur node is a more accurate model of the blurring that occurs through an
out-of-focus real-world camera lens. It flares out the high points, resulting in a circular,
hexagonal, or octagonal shape around the highlights.
Parameters
This node displays the following controls in the Parameters tab:
xPixels, yPixels
Defines the kernel size for the defocus, in pixels, which produces the general size of the
defocused flaring in the image. Values lower than 3 have no effect. Progressively higher
values are more processor-intensive to render.
channels
Lets you set which channels Shake should defocus. You can choose one or all of the
red, green, blue, or alpha channels. The default is “rgba.”
Figure comparison: the original image, a real camera defocus, the image blurred with a
normal Gaussian blur in Shake, and the image blurred with the Defocus node in Shake.
percent
A slider that lets you mix the modified image and the original image together to
produce a blend of both. By default, this parameter is set to 100 percent, resulting in
100 percent of the modified image being output.
shape
A pop-up menu that lets you choose the shape of the flaring in the resulting image.
The fast modes give you low quality but process quickly. The circle, square, hexagon,
and octagon give you a superior look, but are significantly more processor-intensive to
render. The options are:
• fast gaussian
• fast box
• circle
• square
• hexagon
• octagon
boostPoint
The image value where the superwhite boosting starts. If boostPoint is set to .75, RGB
values above .75 are boosted to increase flaring effects. A high value generally
decreases flare areas, since fewer candidate pixels are flared. By default this parameter
is set to .95.
superWhite
Max value to boost image to. A value of 1 in the original will be boosted to this value.
By boosting this parameter, you increase the brightness of the flare area. Values around
50 yield a very strong boost.
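One plausible reading of how boostPoint and superWhite interact can be sketched in Python. This is an illustration of the parameter descriptions only, not Shake's documented math; the linear remapping and the function name are assumptions.

```python
def boost(value, boost_point=0.95, super_white=20.0):
    """Assumed highlight boost: values above boost_point are remapped
    linearly so that an input of 1.0 reaches super_white."""
    if value <= boost_point:
        return value  # values at or below boostPoint pass through unchanged
    # Remap the (boost_point, 1.0] range onto (boost_point, super_white].
    t = (value - boost_point) / (1.0 - boost_point)
    return boost_point + t * (super_white - boost_point)

print(boost(0.5))   # → 0.5: below boostPoint, unchanged
print(boost(1.0))   # → 20.0: full white is boosted to superWhite
```

With a higher boostPoint, fewer pixels qualify for boosting, which matches the note above that raising boostPoint generally decreases the flared areas.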
DilateErode
The DilateErode node isolates each channel and cuts or adds pixels to the edge of that
channel. For example, to chew into your mask, set your channels to “a,” then set the
xPixels and yPixels value to -1.
By default, this node only affects whole pixels. Subpixel values are ignored, even when
they are set within the pixels parameters. To dilate or erode at the subpixel level, turn
on the “soften” button. Note that the soften parameter significantly slows the node. If
you use the soften feature, use low values for xPixels and yPixels.
To avoid affecting an image when using DilateErode to modify alpha channel data,
enter “a” as the channel, then apply a Color–MMult node to multiply the RGB by the
modified alpha.
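Conceptually, whole-pixel dilation and erosion behave like maximum and minimum filters over a neighborhood: positive pixel counts grow an edge, negative counts eat into it. The following Python sketch shows the idea in one dimension; it is an illustration only, not Shake's implementation.

```python
def dilate_erode(row, pixels):
    """1D dilate (pixels > 0) or erode (pixels < 0) of one channel row."""
    if pixels == 0:
        return row[:]
    radius = abs(pixels)
    op = max if pixels > 0 else min  # dilate grows edges, erode shrinks them
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(op(row[lo:hi]))
    return out

mask = [0, 0, 1, 1, 1, 0, 0]
print(dilate_erode(mask, 1))   # → [0, 1, 1, 1, 1, 1, 0]  edge grows
print(dilate_erode(mask, -1))  # → [0, 0, 0, 1, 0, 0, 0]  edge shrinks
```

Setting pixels to -1 on an alpha mask, as in the example above, corresponds to the erode branch: each edge pixel is replaced by the darkest value in its neighborhood.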
Parameters
This node displays the following controls in the Parameters tab:
channels
Lets you set which channels Shake should blur. You can choose one or all of the red,
green, blue, or alpha channels. The default is “rgba.”
xPixels, yPixels
The number of pixels added (dilate) or taken from (erode) an edge. Positive values add
to the edge; negative values eat away at the edge.
borders
This parameter determines whether Shake considers or ignores the border pixels at the
edge of the image.
soften
This parameter lets you turn softening on or off. When this parameter is disabled,
DilateErode affects only whole pixels. When enabled, DilateErode can dilate or erode at a
subpixel level.
sharpness
The sharpness factor for the softening. A value of 0 gives a smooth gradation, whereas
2 gives you a sharp cutoff.
EdgeDetect
The EdgeDetect node is great for pulling out and massaging edges. You can control
what is detected, the strength of the edge, and the ability to expand or soften the
edge.
Parameters
This node displays the following controls in the Parameters tab:
strength
Applies a gain to the filter. Lower strength values eliminate detail in favor of the
strongest outlines, whereas higher strength values reveal more details. The slider lets
you choose a value from 0 to 10. Practically speaking, 10 is the highest useful value, but
you can enter higher values into the number field if you need to.
threshold
This parameter lets you further reduce the amount of detail in the resulting image.
Pixels below the threshold turn black. The range is 0 to 1.
binaryOutput
When binaryOutput is turned on, all pixels falling under the threshold parameter are
made black, and all pixels falling above the threshold parameter are made white. The
resulting image is only black and white, with no grayscale values.
directionFilter
Enables an effect similar to Emboss.
directionFilterAngle
This parameter changes the lighting angle when the directionFilter parameter is
turned on.
despeckle
Similar to a median filter, this parameter removes isolated pixels up to the despeckle
radius (in pixels), and can be useful for eliminating noise from the resulting image.
xBlur, yBlur
Blurs the resulting image after the edge detection has been performed. By default, the
yBlur parameter is linked to the xBlur.
xDilateErode, yDilateErode
Applies the DilateErode effect to the first filter pass. By default, the yDilateErode
parameter is linked to the xDilateErode.
rgbEdges
Inherits color from the input image instead of just black and white, and applies it to the
final output image.
addEdgesToImage
Mixes the resulting image from the edge detection node to the original input image, to
create a blend of both.
method
A pop-up menu that lets you select which edge detection method to use. Your
choices are:
• Sobel
• Babu
• Fast
Note: Babu is an algorithm that is extremely slow and cannot be used on large (2K or
larger) plates. It is maintained for compatibility purposes.
babuSteps
The steps performed when using the Babu filter. More steps result in a higher quality
image, but are more processor-intensive to render.
bytes
Three buttons let you choose the output bit depth. You can choose among 8-bit, 16-bit,
and float.
Emboss
With the Emboss node, you control the gain and light direction to simulate a raised
texture over an image.
Note: The Emboss node converts your image to a BWA image (since there is no color
information).
If you use extreme gain, you may start to see terracing on your image. To correct this,
insert a Bytes node before the Emboss node, and boost your image to 2 bytes per
channel. You can get interesting patterns with a Bytes node set to 2 bytes, followed by a
Blur node, and then the Emboss node.
The elevation is set to 30 because it makes the median gray value 0.5.
Parameters
This node displays the following controls in the Parameters tab:
gain
The amount of emboss. Higher values result in a more pronounced effect.
azimuth
The apparent direction from which light is shining. 0 and 360 simulate light shining
from the right side of the image, 90 is from the top, and so on.
elevation
This is the “height” of the embossed image. 0 is parallel to the image; 90 is the same
axis as a line from your nose to the image.
FilmGrain
Use the FilmGrain node to apply grain that corresponds to real film grain to an element.
Grain is typically added to still or CG images so the images more closely match the
inherent noisiness of film plates.
You can choose to apply a preset film stock, sample grain from an existing image, or
create your own grain by adjusting the sliders.
To sample grain from an image:
1 Attach a Filter–FilmGrain node to the element to which you want to add grain.
2 Ensure that you are not in Proxy mode.
3 Load the image to be sampled into the Viewer.
Note: This should not be the image created by the FilmGrain node itself but, rather, one
generated prior to it in the node tree.
Warning: You cannot clone a FilmGrain node in the Node View using the Paste Linked
command.
4 Ensure that the FilmGrain parameters are still active.
5 In the Viewer, drag to create boxes in the areas you want to sample.
Note: The sampled areas should be very flat without detail that may disrupt the grain
analysis. Small elements are perceived as grain detail, so the best sample areas are
featureless walls, exposure cards, and so on.
You can sample as many areas as you want.
6 To undo a sample, do one of the following:
• To undo a box drawing, click the Undo Last Region button in the Viewer shelf.
• To remove all boxes and start over, click the Reset the Regions button in the Viewer
shelf.
7 Once the boxes are set, click the Analyze Grain button in the Viewer shelf.
The parameters in the FilmGrain node are set to match the plate.
Parameters
This node displays the following controls in the Parameters tab:
intensity
The intensity of the grain. Values are between 0 and 2.
grainSize
Size of the grain, between 0 and 2.
aspectRatio
This parameter inherits the current value of the defaultAspect global parameter. If
you’re working on a nonsquare pixel or anamorphic image, it is important that this
value is set correctly to avoid unwanted distortion in the image.
seed
The random generation seed. Set this to a constant value to freeze the grain. By default,
this is set to the time expression. When Shake generates a random pattern of values,
you need to make sure for purposes of compositing that you can recreate the same
random pattern a second time. In other words, you want to be able to create different
random patterns, evaluating each one until you find the one that works best, but then
you don’t want that particular random pattern to change again.
Shake uses the seed value as the basis for generating a random value. Using the same
seed value results in the same random value being generated, so that your image
doesn’t change every time you re-render. Use a single value for a static result, or use
the expression “time” to create a pattern of random values that changes over time.
For more information on using random numbers in expressions, see “Reference Tables
for Functions, Variables, and Expressions” on page 941.
filmStock
A pop-up menu that allows you to select from one of many preset film stocks. You can
also apply custom values by selecting Custom. Accepted inputs are:
• Custom
• Eastman 5245
• Eastman 5247
• Eastman 5248ac
• Eastman 5248nc
• Eastman 5287
• Eastman 5293ac
• Eastman 5293nc
• Eastman 5296
• Eastman 5298
• Kodak 5246ac
• Kodak 5246nc
• Kodak 5274ac
• Kodak 5274nc
• Kodak 5277
• Kodak 5279
Note: “ac” indicates a stock with aperture correction. “nc” indicates no aperture
correction.
stdDev
This value is multiplied by the amount parameter. A higher value indicates more
variation in the grain, making it more apparent. This parameter defaults to 0.5.
The rStdDev, gStdDev, and bStdDev subparameters let you control the variation within
each individual color channel. By default, these three parameters are locked together.
softness
Blurs the edges of the grain that’s introduced. This parameter defaults to 1.2. The
rSoftness, gSoftness, and bSoftness subparameters let you control the grain softness of
each individual color channel.
filmResponse
Determines the extent to which the grain inherits its color from the input image
instead of simply black and white. Progressively higher positive values result in the
grain matching the color of the input image more closely. Progressively higher negative
values result in the color of the grain becoming somewhat muted. This parameter
defaults to 0. The rFilmResponse, gFilmResponse, and bFilmResponse subparameters let
you customize the filmResponse of each individual color channel.
colorCorr
This parameter specifies the apparent saturation of the grain in units that represent the
color-correlation value measured by statistical analysis of a particular film sample.
The value represents how closely the grain patterns in each channel overlap. This
means that negative color-correlation values decrease the amount of overlap, which
increases the apparent saturation of the grain, while positive values decrease the
apparent saturation.
Grain
The Grain node adds grain to an image. It is used to simulate film grain for 3D-rendered
elements. The Grain node is not as accurate as the newer FilmGrain node.
This node gives you complete per-channel control over grain size, density, softness, and
so on. The controls are explained below, with visual examples after the parameter list.
Note: If you have an RGB-only image, no grain is applied when obeyAlpha is enabled,
because the alpha value is 0.
In general, when a parameter is followed by the same parameter on a per-channel
basis, the first one acts as a multiplier on the channel parameters. For example, a
density of .5 multiplies rDensity, gDensity, and bDensity by 0.5.
Parameters
This node displays the following controls in the Parameters tab:
size
The overall size of the grain. “size” is a multiplier on the rSize, gSize, and bSize
subparameters. You can have sizes less than 1.
density
The density of the grain. 1 is maximum density. “density” is a multiplier on rDensity,
gDensity, and bDensity.
seed
The random seed for the grain. When Shake generates a random pattern of values, you
need to make sure for purposes of compositing that you can recreate the same random
pattern a second time. In other words, you want to be able to create different random
patterns, evaluating each one until you find the one that works best, but then you
don’t want that particular random pattern to change again.
Shake uses the seed value as the basis for generating a random value. Using the same
seed value results in the same random value being generated, so that your image
doesn’t change every time you re-render. Use a single value for a static result, or use
the expression “time” to create a pattern of random values that changes over time.
For more information on using random numbers in expressions, see “Reference Tables
for Functions, Variables, and Expressions” on page 941.
filmResponse
The valid range is theoretically infinite, but practically is -1 and 1. -1 is typical of
standard film, with grain applied to the entire range except the brightest whites, and
black is the most affected. 1 is the inverse of that, withholding grain from the darks,
with most grain on the whites. Default is -1.
lGain, rGain, gGain, bGain
The overall intensity of the grain. lGain (luminance) applies to all three channels
equally, while rGain, gGain, and bGain apply only to each particular color channel.
Each of these parameters has subparameters for adjusting its bias, LowOffset, and
HighOffset values.
• bias: Shifts the grain level higher or lower in intensity.
• LowOffset: From the midpoint of the Gain, squeezes the darker grain areas. This is a
useful adjustment when the highlights are good and you want to modify only the
level of the darker grain.
• HighOffset: From the midpoint of the Gain, squeezes the brighter grain areas.
aspect
The grain aspect ratio. An aspect of 2 stretches it twice as wide.
softness
A value above 0 softens the grain. A value below 0 sharpens the grain. The global
softness is an additive factor, so it is added to the values of the per-channel parameters.
obeyAlpha
When enabled, grain is applied to the image through its alpha channel. When disabled,
grain is applied to the entire image, and the alpha is ignored. Useful for applying grain
to premultiplied CG images without contaminating the background black.
Grain Example
In the following example, the first node tree consists of a Ramp node and a PlotScanline
node. The PlotScanline node is added to analyze the image.
Because the ramp is black to white, a linear line appears in the plot scanline from left to
right when no grain is applied.
When a Grain node is inserted between the Ramp node and the PlotScanline node,
noise is introduced that disturbs the line. There is more noise (grain) near the lower,
dark area of the plot scanline. This is due to the filmResponse of -1, which concentrates
grain on the lower areas.
The next image is the result of increasing the lGain (or rGain, gGain, and bGain on a
per-channel basis), and increasing the range of the grain. This results in making the
diagonal line both lighter and darker simultaneously.
The following two images show the result of modifying the Bias and Gain. In the first
image, the rBias is lowered (looking only at the red channel) to -.5. The grain shifts
downward from the diagonal line, making it darker. This affects both the lighter grain
(the dots above the original diagonal line) and the darker grain (the dots below the
original diagonal line). To adjust only the lighter or darker grains, use the offset sliders.
The second image shows a rLowOffset of .5, squeezing only the darker grains, leaving
only lighter grains.
In the following images, the left image shows the standard softness. The middle image
has a softness of 10, and the right image has a softness of -10.
IBlur
The IBlur node blurs the image, with the amount of blur set by a second control image.
Maximum blur occurs in the white areas of the second image, and no blur occurs in the
black areas.
In the following example, the first node tree uses a Ramp that is approximately one-half
of the height of the image as a mask for a blur. Some bad blending occurs in the lower
portion of the image.
In the second node tree, a Ramp is used as a second input into IBlur, and the result is a
nice blend of the blurred and non-blurred areas.
The above script is saved as doc/html/cook/iblur_example.shk.
Note: The IBlur node is much slower than the normal Blur node.
Parameters
This node displays the following controls in the Parameters tab:
xPixels, yPixels
The amount of blur as described in pixels, for example, 200 blurs 200 pixels to either
side of the current pixel. By default, yPixels is linked to xPixels.
spread
Tells Shake whether or not to consider areas outside of the frame. A button to the right
of the parameter name lets you set the mode.
• 0 = Compute “In Frame Only.”
• 1 = Compute “Outside Frame.”
Because of the Infinite Workspace, it is sometimes handy to compute outside of the
frame as well, for example, if the Blur is placed after a Scale command. Note that if
nothing is outside of the frame (black), you see a black edge.
xFilter, yFilter
Lets you pick which method Shake uses to transform the image. For more information,
see “Filters Within Transform Nodes” on page 862.
steps
The number of steps. The intensity of the control image is divided into X zones, with X
equal to the steps value.
stepBlend
Controls the blending between the regions. If you set this parameter to 0, each step
has a constant blur value. If the setting is 1, there is a continuous blend between the
different regions.
controlChannel
The channel of the controlling image to use to control the amount of blur.
channels
The channels of the input image to blur.
invert
Inverts the controlChannel.
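The steps and stepBlend parameters described above can be sketched in Python. This is an illustration of the parameter descriptions only; the exact quantization formula is an assumption, not Shake's actual code.

```python
def stepped_control(value, steps, step_blend):
    """Quantize a 0-1 control value into `steps` zones, then mix between
    the zone's constant value (step_blend=0) and the raw continuous
    value (step_blend=1)."""
    zone = min(int(value * steps), steps - 1)
    quantized = zone / (steps - 1) if steps > 1 else 0.0
    return quantized * (1.0 - step_blend) + value * step_blend

print(stepped_control(0.49, steps=2, step_blend=0.0))  # → 0.0: snaps to zone 0
print(stepped_control(0.49, steps=2, step_blend=1.0))  # → 0.49: fully continuous
```

With stepBlend at 0, every pixel in a zone receives the same blur amount, producing visible bands; raising it toward 1 dissolves the bands into a smooth gradient.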
IDefocus
The IDefocus node is the Defocus node with a second image input to control the size of
the defocused flaring. For more information, see “Defocus” on page 868.
xPixels, yPixels
Defines the kernel size for the defocus effect, in pixels, which produces the general size
of the defocused flaring in the image. Values lower than 3 have no effect. Progressively
higher values are more processor-intensive to render.
channels
Lets you set which channels Shake should blur. You can choose one or all of the red,
green, blue, or alpha channels. The default is “rgba.”
percent
A slider that lets you mix the modified image and the original image together to
produce a blend of both. By default, this parameter is set to 100 percent, resulting in
100 percent of the modified image being output.
shape
A pop-up menu that lets you choose the shape of the flaring in the resulting image.
The fast modes give you low quality but process quickly. The circle, square, hexagon,
and octagon give you a superior look, but are significantly more processor-intensive to
render. The options are:
• fast gaussian
• fast box
• circle
• square
• hexagon
• octagon
boostPoint
The image value where the superwhite boosting starts. If boostPoint is set to .75, RGB
values above .75 are boosted to increase flaring effects. A high value generally
decreases flare areas, since fewer candidate pixels are flared. By default this is set to .95.
superWhite
Max value to boost image to. A value of 1 in the original will be boosted to this value.
By boosting this value, you increase the brightness of the flare area. Values around 50
yield a very strong boost.
steps
The number of steps. The intensity of the control image is divided into X zones, with X
equal to the steps value.
stepBlend
Controls the blending between the regions. If you set this parameter to 0, each step
has a constant blur value. If the setting is 1, there is a continuous blend between the
different regions.
controlChannel
The channel of the controlling image to use to control the amount of blur.
channels
The channels of the input image to defocus.
invert
Inverts the controlChannel.
IDilateErode
The IDilateErode node isolates each channel and cuts or adds pixels to the edge of that
channel. For example, to chew into your mask, set your channels to “a,” and then set
the xPixels and yPixels values to -1. By default, the node works on whole pixels. To
work at the subpixel level, enable “soften.” Note that the soften parameter significantly
slows the node. If you use the soften feature, use low values for xPixels and yPixels.
Parameters
This node displays the following controls in the Parameters tab:
channels
Lets you set which channels Shake should blur. You can choose one or all of the red,
green, blue, or alpha channels. The default is “rgba.”
xPixels, yPixels
The number of pixels added or taken from an edge. Positive values add to the edge;
negative values eat away at the edge.
borders
This parameter determines whether Shake considers or ignores the border pixels at the
edge of the image.
soften
This parameter lets you turn softening on or off. When enabled, this node works at the
subpixel level and becomes considerably more processor-intensive with high xPixels
and yPixels values.
sharpness
The sharpness factor for the softening. A value of 0 gives a smooth gradation, whereas
2 gives you a sharp cutoff.
steps
The number of steps. The intensity of the control image is divided into X zones, with X
equal to the steps value.
stepBlend
Controls the blending between the regions. If you set this parameter to 0, each step
has a constant value. If the setting is 1, there is a continuous blend between the
different regions.
controlChannel
The channel of the controlling image to use to control the amount of the effect.
invert
Inverts the controlChannel.
IRBlur
The IRBlur node is an image-based version of the RBlur node, using the alpha mask of a
second image (by default) to control the amount of radial blurring on an image. This is
useful for faking motion blur effects.
In this example, the foreground objects (cubes) are rendered on a beach in a 3D
software package with the depth information. The 3D image is composited over a
background photo of the beach. The DepthKey node was used to extract a matte of the
depth supplied by the 3D render. The matte is then fed into the second image of the
IRBlur to give a zooming effect.
Parameters
This node displays the following controls in the Parameters tab:
xCenter, yCenter
The center point of the blur. By default, these parameters are set to width/2, height/2.
iRadius
The distance from the center that contains the blur sample area.
oRadius
The outer edge for the blur area.
aspectRatio
This parameter inherits the current value of the defaultAspect global parameter. If
you’re working on a nonsquare pixel or anamorphic image, it is important that this
value is set correctly to avoid unwanted distortion in the image.
damp
A gamma value on the blur.
amplitude
The overall amount of blur. This number can also be negative, to blur away from the
viewer, rather than toward the viewer.
blurQuality
The number of samples. A quality of 1, the maximum, uses 64 samples.
mirror
Considers points beyond the center area if your amplitude is high enough when
enabled.
steps
The number of steps. The intensity of the control image is divided into X zones, with X
equal to the steps value.
stepBlend
Controls the blending between the regions. If you set this parameter to 0, each step
has a constant blur value. If the setting is 1, there is a continuous blend between the
different regions.
controlChannel
The channel of the controlling image to use to control the amount of blur.
channels
The channels of the input image to blur.
invert
Inverts the controlChannel.
ISharpen
The ISharpen node is identical to the Sharpen node, except it uses a second control
image to set the amount of sharpening across the first image. Un-sharp masking is
used for the sharpening filter. This process blurs the image slightly, takes the difference
between the blurred result and the input image, and adds that back over the input
image. High values of xPixels and yPixels (for example, greater than 3 percent of the image
size) return undesirable results. You can also use the sharpen filter in the Convolve
node, but ringing may result.
For an example of this process, see “IBlur” on page 879.
Parameters
This node displays the following controls in the Parameters tab:
percent
The amount of blend between the non-sharpened image (0) and the sharpened image
(100).
xPixels, yPixels
The amount of sharpening as described in pixels.
steps
The number of steps. The intensity of the control image is divided into X zones, with X
equal to the steps value.
stepBlend
Controls the blending between the regions. If you set this parameter to 0, each step
has a constant blur value. If the setting is 1, there is a continuous blend between the
different regions.
controlChannel
The channel of the controlling image to use to control the amount of blur.
channels
The channels of the input image to sharpen.
invert
Inverts the controlChannel.
Median
The Median node applies a 3 x 3 median filter. It is good for the removal of individual
dots and noise.
Parameters
This node displays the following controls in the Parameters tab:
channels
Lets you set which channels the Median filter should affect. You can choose one or all
of the red, green, blue, or alpha channels. The default is “rgba.”
threshold
A pixel is changed only when the change falls below or above this threshold value, as
determined by thresholdType.
thresholdType
Defines whether the change must be below (0) or above (1) the threshold to be allowed.
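The effect of a 3 x 3 median filter can be sketched in a few lines of Python. This is illustrative only; border and threshold handling are omitted.

```python
def median3x3(img):
    """Replace each interior pixel with the median of its 3 x 3 neighborhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # borders are left unchanged here
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hood = sorted(img[y + dy][x + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = hood[4]  # the 5th of 9 sorted values
    return out

noisy = [[0, 0, 0],
         [0, 1, 0],   # a single isolated "hot" pixel
         [0, 0, 0]]
print(median3x3(noisy)[1][1])  # → 0: the isolated dot is removed
```

Because an isolated dot is outvoted by its eight neighbors, the median removes it while leaving larger structures intact, which is why the node works well for dust and noise removal.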
PercentBlur
The PercentBlur node blurs the image according to a percentage factor, rather than a
pixel size. Therefore, 100 percent blurs with the entire image in the blur calculation.
Parameters
This node displays the following controls in the Parameters tab:
xPercent, yPercent
Percent of the image taken into consideration for the blur. 100 = the entire image, or a
flat color. Default is 0.
spread
Tells Shake whether or not to consider areas outside of the frame. A button to the right of
the parameter name lets you set the mode.
• 0 = Compute “In Frame Only.”
• 1 = Compute “Outside Frame.”
Because of the Infinite Workspace, it is sometimes handy to compute outside of the
frame as well, for example, if the Blur is placed after a Scale command. Note that if
nothing is outside of the frame (black), you see a black edge.
channels
Lets you set which channels Shake should blur. You can choose one or all of the red,
green, blue, or alpha channels. The default is “rgba.”
Pixelize
The Pixelize node filters the images into square blocks, giving a mosaic look by
averaging all of the pixels within the block. It produces an effect similar to the Blur
node with the xFilter and yFilter parameters both set to impulse.
Parameters
This node displays the following control in the Parameters tab:
xPixels, yPixels
The block size, in pixels, to be filtered.
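The block-averaging behavior can be sketched as follows. One dimension is enough to show the idea; Shake averages 2D blocks of xPixels x yPixels.

```python
def pixelize_row(row, block):
    """Replace each block of pixels with the average of its values."""
    out = []
    for start in range(0, len(row), block):
        chunk = row[start:start + block]
        avg = sum(chunk) / len(chunk)
        out.extend([avg] * len(chunk))  # fill the block with its average
    return out

print(pixelize_row([0.0, 1.0, 0.0, 1.0], block=2))  # → [0.5, 0.5, 0.5, 0.5]
```

Each block becomes a flat tile of its mean value, which is what produces the mosaic look.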
RBlur
The RBlur node blurs the image radially from a center point, creating flare effects.
Although this is a processor-intensive process, it is extremely accurate. The entire image
is blurred, based on the range from the inner radius to the outer radius. A pixel outside
of the oRadius distance is blurred by the oRadius amount. A pixel inside of the iRadius
amount is not blurred. A pixel between iRadius and oRadius is blurred by a percentage
of its position between the two radius parameters. You can also use the mirror
parameter to blur beyond the center point, as well as use a negative amplitude to
sample pixels away from the center rather than toward the center.
A point outside of the oRadius is blurred by the amount of oRadius. This is modified by
amplitude. If amplitude is negative, the blur is calculated outwards. If mirror is turned
on, the blur extends past the center point.
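The radius falloff described above can be sketched in Python: pixels inside iRadius get no blur, pixels outside oRadius get the full amount, and pixels in between are blurred by their fractional position between the two radii. This illustrates the description only, not Shake's code; the function name is an assumption.

```python
import math

def rblur_amount(x, y, x_center, y_center, i_radius, o_radius):
    """Fraction (0-1) of the maximum radial blur applied at a pixel."""
    dist = math.hypot(x - x_center, y - y_center)
    if dist <= i_radius:
        return 0.0  # inside iRadius: no blur
    if dist >= o_radius:
        return 1.0  # outside oRadius: full blur
    return (dist - i_radius) / (o_radius - i_radius)

# A pixel halfway between the radii receives 50 percent of the maximum blur.
print(rblur_amount(150, 0, 0, 0, i_radius=100, o_radius=200))  # → 0.5
```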
Parameters
This node displays the following controls in the Parameters tab:
xCenter, yCenter
The center point of the blur. By default, these parameters are set to width/2, height/2.
iRadius
The distance from the center that contains the blur sample area.
oRadius
The outer edge for the blur area.
aspectRatio
This parameter inherits the current value of the defaultAspect global parameter. If
you’re working on a nonsquare pixel or anamorphic image, it is important that this
value is set correctly to avoid unwanted distortion in the image.
damp
A gamma value on the blur.
amplitude
The overall amount of blur. This number can also be negative, to blur away from the
viewer, rather than toward the viewer.
quality
The number of samples. A quality of 1, the maximum, uses 64 samples.
mirror
Considers points beyond the center area if your amplitude is high enough when
enabled.
seed
When Shake generates a random pattern of values, you need to be able to recreate the
same random pattern a second time for compositing purposes. In other words, you
want to be able to create different random patterns, evaluating each one until you find
the one that works best; once you do, however, you don’t want that particular random
pattern to change again.
Shake uses the seed value as the basis for generating a random value. Using the same
seed value results in the same random value being generated, so that your image
doesn’t change every time you re-render. Use a single value for a static result, or use
the expression “time” to create a pattern of random values that changes over time.
For more information on using random numbers in expressions, see “Reference Tables
for Functions, Variables, and Expressions” on page 941.
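The reproducibility that the seed parameter provides can be illustrated with Python’s random generator standing in for Shake’s; the pattern function here is hypothetical:

```python
import random

def pattern(seed, n=3):
    # A seeded generator: the same seed always reproduces the same values.
    rng = random.Random(seed)
    return [round(rng.random(), 3) for _ in range(n)]

# The same seed yields the same pattern on every re-render...
assert pattern(42) == pattern(42)
# ...while a different (for example, time-based) seed yields a new pattern.
assert pattern(1) != pattern(2)
```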
Sharpen
The Sharpen node uses un-sharp masking. This process blurs the image slightly,
takes the difference between the blurred result and the input image, and adds that
difference back over the input image. High values of xPixels and yPixels (for example,
greater than 3 percent of the image size) return undesirable results. You can also use
the sharpen filter in the Convolve node, but ringing may result.
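The un-sharp masking process described above can be sketched in Python on a one-dimensional row of pixels. This illustrates the technique, not Shake’s implementation; the box blur stands in for Shake’s actual blur:

```python
def box_blur(row, radius=1):
    # Slight box blur on a 1-D row of pixel values.
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def unsharp(row, percent=100):
    # Blur slightly, take the difference from the input,
    # and add that difference back over the input.
    blurred = box_blur(row)
    mix = percent / 100.0
    return [p + mix * (p - b) for p, b in zip(row, blurred)]

edge = [0, 0, 0, 1, 1, 1]
print(unsharp(edge))  # overshoot on both sides of the edge, hence possible ringing
```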
Parameters
This node displays the following controls in the Parameters tab:
percent
The amount of mixing between the sharpened and the original image. With a value of
100, none of the original image is present.
xPixels, yPixels
The amount of sharpening, in pixels.
channels
The channels of the input image to sharpen.
ZBlur
The ZBlur node is a dedicated version of IBlur used to simulate depth-of-field blurring
based on the Z channel. You can set a center point of focus (focusCenter) and the
range of drop-off to the maximum amount of blur. If you do not have a Z channel, you
can copy a channel from another image. Note that if you are working in 8 bit or 16 bit,
your Z values range between 0 and 1. For example, your focusCenter could not be set
to 10; it must fall somewhere between 0 and 1.
The ZBlur node works best with gradual changes of Z, for example, looking down a
hallway, represented in the following images.
In cases where foreground objects overlap blurred background objects, ringing results
on the background. It is recommended that you separate the foreground from the
background into two separate elements. For example, do two renders if you are
generating elements in 3D.
The following example is from a 3D render. When the image is passed directly into a
ZBlur node, ringing appears around the background letters of the text. Instead, the
circle is separated into foreground and background elements in the 3D render. These
are then blurred and composited in Shake.
Parameters
This node displays the following controls in the Parameters tab:
amount
The maximum amount of blur.
near
The distance toward the camera at which maximum blur occurs.
far
The distance away from the camera at which maximum blur occurs.
focusCenter
The distance from the camera at which there is no blur. By default, this is set to
(far-near)/2+near.
focusRange
The distance away from the focusCenter, both toward and away from the camera, that
remains unblurred.
steps
The number of steps that the total range is divided into.
stepBlend
The mixing of the different steps. 0 is no mixing, which is good for getting a feel for
your step ranges. 1 is complete, linear blending.
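Taken together, the parameter descriptions above suggest a depth-to-blur mapping along these lines. This Python sketch is a hedged reading of the documentation, with an assumed linear ramp; Shake’s actual falloff may differ:

```python
def zblur_amount(z, amount, near, far, focusCenter=None, focusRange=0.0):
    # No blur within focusRange of focusCenter; maximum blur ("amount")
    # at the near and far distances; assumed linear ramp in between.
    if focusCenter is None:
        focusCenter = (far - near) / 2 + near  # the documented default
    d = abs(z - focusCenter) - focusRange
    if d <= 0:
        return 0.0
    if z > focusCenter:
        span = far - focusCenter - focusRange
    else:
        span = focusCenter - near - focusRange
    return amount * min(d / span, 1.0)

# With near=0 and far=1, the default focusCenter is 0.5; a pixel at
# either extreme of the depth range receives the full blur amount.
print(zblur_amount(0.0, 10, 0, 1), zblur_amount(0.5, 10, 0, 1))  # 10.0 0.0
```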
ZDefocus
The ZDefocus node is identical to the Defocus blurring node, except you can create a
realistic depth of field by using the image’s Z channel. The Z channel usually comes
from a Z-depth 3D render, but of course can also be placed with Shake’s
channel-swapping tools (Reorder and Copy).
Like ZBlur, this node works best when there is no abrupt overlapping of foreground
objects over background. The foreground elements work fine, but ringing occurs on
the background. If you are developing a shot of this nature, try to split your foreground
into a separate element.
Parameters
This node displays the following controls in the Parameters tab:
xPixels, yPixels
Defines the kernel size for the defocus, in pixels, which produces the general size of the
defocused flaring in the image. Values lower than 3 have no effect. Progressively higher
values are more processor-intensive to render.
channels
Lets you set which channels Shake should defocus. You can choose one or all of the
red, green, blue, or alpha channels. The default is “rgba.”
percent
A slider that lets you mix the modified image and the original image together to
produce a blend of both. By default, this parameter is set to 100 percent, resulting in
100 percent of the modified image being output.
shape
A pop-up menu that lets you choose the shape of the flaring in the resulting image.
The fast modes give you low quality but process quickly. The circle, square, hexagon,
and octagon give you a superior look, but are significantly more processor-intensive to
render. The options are:
• fast gaussian
• fast box
• circle
• square
• hexagon
• octagon
boostPoint
The image value where the superwhite boosting starts. If boostPoint is set to .75, RGB
values above .75 are boosted to increase flaring effects. A high value generally
decreases flare areas, since fewer candidate pixels are flared. By default this is set to .95.
superWhite
The maximum value to boost the image to. A value of 1 in the original is boosted to
this value. By raising this, you increase the brightness of the flare area. Values around
50 yield a very strong boost.
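One plausible reading of how boostPoint and superWhite interact is sketched below. The rescaling curve is an assumption for illustration, not Shake’s documented formula:

```python
def boost(value, boostPoint=0.95, superWhite=1.0):
    # Values at or below boostPoint pass through; values above are
    # rescaled so that an original value of 1.0 reaches superWhite.
    if value <= boostPoint:
        return value
    scale = (superWhite - boostPoint) / (1.0 - boostPoint)
    return boostPoint + (value - boostPoint) * scale

# With superWhite raised to 50, a pixel at 1.0 is boosted to 50,
# strengthening the flare; a pixel below boostPoint is untouched.
print(boost(0.5, 0.95, 50.0))  # 0.5
```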
near
The distance toward the camera at which maximum blur occurs.
far
The distance away from the camera at which maximum blur occurs.
focusCenter
The distance from the camera at which there is no blur. By default, this is set to
(far-near)/2+near.
focusRange
The distance away from the focusCenter, both toward and away from the camera, that
remains unblurred.
steps
The number of steps that the total range is divided into.
stepBlend
The mixing of the different steps. 0 is no mixing, which is good for getting a feel for
your step ranges. 1 is complete, linear blending.
Part III: Optimizing, Macros, and
Scripting
This section covers advanced techniques and tips that
allow you to streamline your workflow in Shake.
Chapter 29 Optimizing and Troubleshooting Your Scripts
Chapter 30 Installing and Creating Macros
Chapter 31 Expressions and Scripting
Chapter 32 The Cookbook
29 Optimizing and Troubleshooting
Your Scripts
This chapter provides tips and techniques for optimizing
your Shake scripts, to maximize image quality and
minimize render times. Additional information on
troubleshooting frequently encountered issues is also
provided.
Optimization
This section contains information about how to improve your scripts—maximizing
image quality and processing efficiency.
Use Only the Color Channels You Need
In Shake, you can combine images that use different channels. For example, you can
composite a two-channel image over a four-channel image. Shake is optimized to work
on a per-channel basis—a one-channel image usually calculates about three times
faster than a three-channel image.
For this reason, if you read in masks that have been generated by a different
application, it’s a good idea to turn them into one-channel images (using a
Monochrome node) to save on disk space and processing time. If, later in the node tree,
you apply an operation that changes channel information, Shake automatically adds
back the necessary channels.
For example, if you place a Monochrome or an Emboss node after an RGB image, that
image becomes a BW image at that point, speeding the processing of subsequent
nodes. If you later composite the image over an RGB image, or change its color (for
example, Mult with values of 1.01, 1, 1), it becomes an RGB image again.
Image Conversion Prior to Shake Import
Shake is “channel agnostic”—you can pipe any channel image into any other. When
you generate or save mask images, you can save the images using a format that
supports one-channel images (RLA or IFF, for example) to reduce disk space and
network activity. You can quickly strip channels out using the command line:
To strip out the RGB channels, leaving the alpha:
m Enter the following command-line function:
shake mymask.#.tif -bri 0 -fo mynewmask.#.iff
To strip out the alpha channel, forcing the RGB to a single monochrome channel:
m Enter the following command-line function:
shake mymask.#.tif -mono -setAlpha 0 -fo mynewmask.#.iff
Note: Shake automatically optimizes itself to read only the channel it needs.
To save out a BW image as RGB for compatibility with other applications:
m Do one of the following:
• Set the RGB output in the FileOut.
• Use the command line function -forcergb:
shake myBWimage.#.iff -forcergb -fo myRGBimage.#.iff
Concatenating Color-Correction and Transform Nodes
Many nodes in the Color and Transform tabs concatenate with nodes of the same type
(that is, Color nodes with Color nodes, and Transform nodes with Transform nodes).
Concatenation compiles several connected nodes into one render pass, preserving
quality and decreasing processing time. Nodes that concatenate are marked with a “C.”
To take advantage of this feature, try not to mask or insert non-concatenating nodes
between two or more concatenating nodes. In the following example, the second tree
is more efficient because the color-correction and transform nodes have been grouped
together, allowing them to concatenate. The effect of the second tree is identical to
that of the first, but it’s more computationally efficient.
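A toy model shows why concatenation preserves quality: applying two gains as separate quantized passes loses data that a single combined pass retains. The 8-bit rounding here is illustrative; Shake’s per-pass precision depends on the script’s bit depth:

```python
def bright(image, gain):
    # One standalone render pass: apply a gain, then quantize to 8 bit.
    return [min(255, round(p * gain)) for p in image]

image = [10, 37, 118, 200]
gain_up, gain_down = 0.3, 1 / 0.3

# Two separate passes: the intermediate quantization loses data.
two_passes = bright(bright(image, gain_up), gain_down)

# Concatenated: one pass with the combined gain preserves the original.
one_pass = bright(image, gain_up * gain_down)

print(two_passes)  # some values no longer match the original
print(one_pass)    # identical to the original
```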
Pre-Rendering Segments of Your Node Tree
If you’ve finished making adjustments to a particular segment of your node tree and
don’t anticipate making any further changes, you can pre-render it to save time. This
can be an especially important timesaver when you have extremely processor-intensive
operations occurring at earlier points of expansive node trees.
Prime candidates for pre-rendering include:
• Images that aren’t animated, but have filtering and color correction applied
• Looping series of frames with processor-intensive adjustments being made to them
• Early branches of a node tree that no longer need to be adjusted
Because of their arrangement,
these nodes cannot
concatenate.
Rearranging these nodes
allows them to concatenate,
but the effect is unchanged.
In the following example, one of the balloon images has a color correction, Defocus
operation, and a Move2D node with a high motion-blur setting.
The Defocus and motion-blur settings are processor-intensive, so once the first balloon
image’s settings have been finalized, that portion of the node tree above the Over1
node at the top can be rendered with a FileOut as a self-contained file. The rendered
output can be read back into the script with a FileIn, and that image inserted into the
Over1 node in place of the original branch of the node tree. The original branch can be
preserved, off to the side, in case you want to make any future adjustments.
Use the SetDOD Node to Reduce Rendering Time
This node limits the portion of the image that the renderer needs to consider, and can
also quickly mask off a large portion of the image. SetDOD optimizes memory, I/O
activity, and render times. In this example, even though the only interesting portion of
the image is the ball in the middle, Shake inefficiently has to consider the entire image.
To limit the area Shake must consider, apply a Transform–SetDOD node to optimize the
render.
Problems With Premultiplication
Most noticeable problems with premultiplication fall into two basic categories:
• Unwanted fringing appears around keyed or masked subjects.
• Color-correction or filtering nodes affect parts of an image they’re not supposed to.
To resolve these issues, you must follow two specific rules about premultiplication.
The Unbreakable Rules of Premultiplication
If you don’t read the full explanation of the mathematics of premultiplication in “About
Premultiplication and Compositing” on page 421, here are the two rules you must
always follow when creating a composition in Shake:
• Rule Number 1: Always color correct unpremultiplied images. To unpremultiply an
image, use an MDiv node.
• Rule Number 2: Always filter and transform premultiplied images. To premultiply an
image, use an MMult node.
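The effect of breaking Rule Number 1 can be shown numerically for a single semitransparent edge pixel. This is a Python illustration of the arithmetic; the divide and multiply by alpha correspond to what MDiv and MMult do:

```python
def gamma(v, g):
    # A simple brightening color correction.
    return v ** (1.0 / g)

# A 50%-transparent edge pixel of a pure-white foreground element.
color, alpha = 1.0, 0.5
premult = color * alpha  # 0.5

# Rule 1 violated: correcting the premultiplied value brightens the
# edge relative to the matte, producing a fringe.
wrong = gamma(premult, 2.0)  # ~0.707, no longer equal to color * alpha

# Rule 1 followed: unpremultiply (MDiv), correct, then premultiply (MMult).
right = gamma(premult / alpha, 2.0) * alpha  # 0.5, edge stays consistent
```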
Combine Image and Alpha Channels Prior to Filtering
If you need to mask, rotoscope, key, or otherwise add an alpha channel to an image,
make sure you do it in such a way that the result is premultiplied prior to adding
filtering nodes.
Unwanted Gamma Shifts During FileIn and FileOut
Shake and Final Cut Pro display and process the gamma of QuickTime movies and RGB
image files differently.
Shake makes no automatic changes to the gamma of QuickTime or RGB image files
and sequences. Users must make sure that their monitor is properly calibrated for
their production environment, and that the viewer lookup parameters are set to the
values required for images to display properly in the Shake Viewer. In particular, the
default viewerGamma value is 1, which leaves the gamma of images displayed in the
Viewer unchanged.
Final Cut Pro, on the other hand, makes some assumptions about the gamma of
QuickTime and RGB image files that are imported into a project. The gamma of
imported QuickTime and RGB image files is treated differently in sequences set to
render in 8-bit or 10-bit YUV.
Note: While it is possible to recalibrate Apple displays via the Display Calibrator
Assistant in Displays preferences, users should leave the gamma of their monitors at
the 1.8 Standard Gamma setting when working in Final Cut Pro. ColorSync settings are
not used by either Shake or Final Cut Pro for automatic color calibration or
compensation of any kind.
Gamma in QuickTime Movies
When importing a QuickTime movie created with Shake into Final Cut Pro, users may
notice a difference in the displayed gamma of the image. This is because Final Cut Pro
automatically lowers the gamma of sequences playing in the Canvas on your
computer’s display. The gamma of QuickTime images remains untouched when the
sequence is output to video or rendered as a QuickTime movie.
Solution
You can load Shake’s viewer lookup controls into the Parameters tab, then change the
viewerGamma parameter to .818 to preview how your composition will look in the Final
Cut Pro Canvas. This only changes how your image is displayed in the Shake Viewer,
and does nothing to change the gamma of the script’s final rendered image.
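The .818 value appears to correspond to the ratio of the Canvas’s 1.8 display gamma to the assumed 2.2 QuickTime gamma; this is an inference from the surrounding text rather than a documented derivation:

```python
# Ratio of Final Cut Pro's 1.8 display gamma to the 2.2 QuickTime gamma
# (an inference, not a documented formula).
viewer_gamma = round(1.8 / 2.2, 3)
print(viewer_gamma)  # 0.818
```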
What causes this?
Final Cut Pro assumes that QuickTime movies for codecs that support the YUV color
space (including DV, DVCPRO 50, and the 8- and 10-bit Uncompressed 4:2:2 codecs) are
created with a gamma of 2.2. This is generally true of movies captured from both NTSC
and PAL sources. When you eventually output the sequence to video, or render it as a
QuickTime movie, the gamma of the output is identical to that of the original, unless
you’ve added color-correction filters of your own.
However, during playback on your computer’s monitor, Final Cut Pro automatically lowers
the gamma of a sequence playing in the Canvas to 1.8 for display purposes. This is to
approximate the way it will look when displayed on a broadcast monitor. This onscreen
compensation does not change the actual gamma of the clips in your sequence.
Gamma in RGB Image Files and Sequences
When importing a still image file or sequence from Shake into Final Cut Pro, the
gamma may be incorrectly boosted when the sequence is output to video or rendered
as a QuickTime movie.
Solution
Convert image sequences to QuickTime movies using a FileOut node in Shake for Mac
OS X, prior to importing them into Final Cut Pro. This makes them easier to import,
and also ensures that their gamma won’t be changed. For the highest quality, use
either the Uncompressed 8- or 10-bit 4:2:2 codec when performing this conversion,
depending on the bit depth of the source image files. QuickTime Player is not
recommended for this operation, as it may perform an unwanted bit-depth
conversion with greater than 8-bit images.
What causes this?
Final Cut Pro assumes that all RGB image files are created with a gamma of 1.8. When
RGB image files are imported into Final Cut Pro and edited into a sequence set to 8- or
10-bit YUV rendering, the gamma is automatically boosted to 2.2 in an attempt to
match the other video files in your project. This boosted gamma is then used when the
sequence is output to video or rendered as a QuickTime movie.
During playback on your computer’s monitor, Final Cut Pro lowers the gamma of the
sequence playing in the Canvas to 1.8 for display purposes. This is to approximate the
way it will look when displayed on a broadcast monitor. The still image clips in your
sequence are still boosted when the sequence is output to video or rendered as a
QuickTime movie.
Important: QuickTime movies compressed using the Animation codec (which only
supports the RGB color space) are also assumed to have been created with a gamma of
1.8. As a result, these clips are also boosted to 2.2 when edited into a sequence set to 8-
or 10-bit YUV rendering.
Note: For more information on setting the rendering options of a sequence in the
Video Processing tab of the Sequence Settings dialog, refer to the Final Cut Pro User
Manual.
Avoiding Bad Habits
The following are common problems that should generally be avoided when you’re
putting together scripts from scratch.
Don’t Mask Layer Nodes
The side-input masks for layer nodes should not be used, as they behave
counterintuitively and will not produce the result you might expect. If you want to
mask a layering node, mask the input nodes instead, or use the KeyMix node.
Don’t Reorder Images Before Masking
A Reorder node applied to an image before it is fed into a mask pops up a lot in client
scripts. This is unnecessary, because mask inputs, as well as the SwitchMatte and
KeyMix nodes, have channel selectors. There is probably no computational difference,
but it is one more node in the tree.
Don’t Mask Concatenating Nodes
Masking a node breaks concatenation. This is bad: it slows your render, decreases
quality, adds possible ringing on the mask edges, and forces multiple mask mixes.
Instead, feed the tree into a KeyMix node.
Don’t Apply the Same Mask to Multiple Successive Nodes
Even if the nodes do not normally concatenate but appear one after the other along
the same branch of the node tree, you get cleaner edges by using a KeyMix node. In
the example below, a circular mask is applied to three filter effects. Each filter works on
the previous node, so problems appear on the edges. The solution is again to use a
KeyMix node, which yields a faster render (the mask is not mixed multiple times) and a
clean edge.
30 Installing and Creating Macros
If there’s a particular image-processing tree you’ve
created that you would like to save for future use, you
can turn it into a macro. Macros act like other nodes
within Shake, except that you create them yourself using
Shake’s other nodes as the initial building blocks. You can
then add your own expressions, scripting, and user
interface elements to extend their functionality.
How to Install Macros
This section covers how to install macros on your computer, whether they’re macros
you’ve created, macros provided by your facility, or macros downloaded from the web.
When you’re provided with a macro, you may receive as many as three files:
• The macro itself
• A custom UI (user interface) file for that macro
• A custom icon
Each of these files needs to be installed into a special directory for Shake to find and
load it successfully.
Where to Install Macros
Macros all have a .h file extension, and are located in:
$HOME/nreal/include/startup
If this directory does not already exist, you’ll need to create a series of nested
directories with those exact names. For example, if you were to install the AutoDOD.h
macro from the Cookbook, a common location is:
/Users/myAccount/nreal/include/startup/AutoDOD.h
This is referred to as the startup directory, and is used for both .h preference files, and
for the installation of macros.
Where to Install Custom Interface Settings
The macro only contains the code necessary to perform the actual function. A custom
UI file allows that macro to display controls for manipulating that macro’s parameters.
The accepted convention is that a macro’s UI file name ends with the letters “UI.” This
makes it easier for people to whom you give your macro to know what files go where.
As an example, for the AutoDOD.h macro, the accompanying UI file should be named
AutoDODUI.h.
Custom UI files belong in a ui directory in the following location:
$HOME/nreal/include/startup/ui
Following the previous example, a common location would be:
/Users/myAccount/nreal/include/startup/ui/AutoDODUI.h
This is referred to as the ui directory, or the startup/ui directory. Files inside it are
referred to as ui .h files.
Files that change additional default settings or add extra controls should be located in
the templates directory, which is always within a ui directory:
/Users/myAccount/nreal/include/startup/ui/templates/defaultfilter.h
Where to Install Icons
If you’re really slick, you can create your own icons to represent your new macro in the
Tool tabs. Icons generally end in .nri.
Icons for custom macros are placed within an icons directory. This can be located
alongside the include directory in the following location:
$HOME/nreal/icons/
Following the previous example, a common location would be:
/Users/myAccount/nreal/icons/Layer.CopyDOD.nri
Installing Macros Within a Script
You can also place macros inside of a script itself by copying and pasting it with a text
editor. This guarantees that your macro is found by Shake when rendering that script
on any machine. However, when you do this, you do not have access to any
interface-building functions. Also, if you load the script back into the interface and save
it out again, the macros are lost.
Preference File Load Order
Sometimes, macros have to be loaded in a specific order. This is mainly true if one
macro uses another macro to perform its task. If you need to explicitly control the order
in which macros are loaded, this can be accomplished in a variety of ways.
To explicitly control macro load order, do one of the following:
m Add an include statement at the beginning of the file. For example, if macros.h relies on
common.h being loaded first, start macros.h with:
#include <common.h>
m Put all the files you want to load in a directory (for example, include/myMacros) and
create a .h file in startup that contains only include statements, such as:
#include <myMacros/macro1.h>
#include <myMacros/macro2.h>
#include <myMacros/macro3.h>
Include files are never loaded twice, so it is okay if two .h files contain the same
#include statement.
Creating Macros—The Basics
The MacroMaker is an interactive tool to help you create new nodes by combining
previously existing nodes. Shake’s scripting language is essentially C-like, so you have
access to an entire programming language to manipulate your scripts. Because the
MacroMaker cannot account for all possibilities, consider the MacroMaker a tool to help
you get started in using Shake’s language. Use the MacroMaker to build the initial files,
and then modify these files to customize your macros.
Note: For online portions of this section, choose Help > Customizing Shake.
For more information, such as about macro structure, common errors when making
macros, and several examples, see “Creating Macros—In Depth” on page 914. For a
tutorial on making macros, see Tutorial 8, “Working With Macros,” in the Shake 4
Tutorials.
Opening Scripts That Use Uninstalled Macros
If you open a Shake script that contains macros that you do not have on your system,
you have the option to load the script using a substitute node, or not to load the
script at all. For more information, see Chapter 2, “Setting a Script’s Global
Parameters,” on page 91.
Creating the Node Structure
First, create the node structure for the function (what you want to occur in the node).
This can be very simple or very complex. To help illustrate macro building, the
following example creates a function that randomly scales, pans, and rotates your
image, similar to CameraShake, but with more moving parameters.
Important: The QuickPaint, ColorCorrect, HueCurves, and Lookup nodes should not be
used inside of macros.
To create the node structure:
1 Click the Image Tool tab, then click the Text command.
The Text node is added to the Node View.
2 Click the Transform Tool tab, then add a Move2D node.
3 Enter the following parameters:
Parameter Value
xPan (turbulence(time,2)-.5)*20
yPan (turbulence(time+100,2)-.5)*20
angle (turbulence(time+200,2)-.5)*10
xScale turbulence(time+300,2)/2+.75
In these examples, the turbulence function generates continual noise between 0 and 1
(see Chapter 31, “Expressions and Scripting,” on page 935). Since you do not want all
fluctuations to have the same pattern, offset the seed each time you call the function
(time, time+100, time+200, time+300). Then, simple math is used to get the values into
an appropriate range. For example, in angle, .5 is subtracted to adjust the value to a
range of -.5 to .5. The result is then multiplied by 10, yielding a range between -5 and 5
degrees of rotation.
4 Drag the playhead in the Time Bar to test the animation.
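The range arithmetic described above can be checked with plain numbers. Shake’s turbulence function itself is not reproduced here; only its 0-to-1 output range matters:

```python
def scaled(t, offset, scale):
    # turbulence() yields values from 0 to 1; subtracting an offset and
    # multiplying reshapes that range.
    return (t - offset) * scale

# angle: (turbulence(...) - .5) * 10 spans -5 to 5 degrees.
print(scaled(0.0, 0.5, 10), scaled(1.0, 0.5, 10))  # -5.0 5.0

# xScale: turbulence(...) / 2 + .75 spans 0.75 to 1.25.
print(0.0 / 2 + 0.75, 1.0 / 2 + 0.75)  # 0.75 1.25
```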
Making a Macro
Since the above steps are tedious to manually recreate, create a macro.
To create a macro:
1 In the node tree created above, select the Move2D node, and press Shift-M (or
right-click and choose Macro > Make Macro from the shortcut menu).
The MacroMaker is launched.
In the top portion of the Shake MacroMaker window, you specify the file name, the
save location, and the tab where the node appears.
2 In the Macro Name field, enter RandomMove.
3 In the Macro Toolbox field, enter Transform.
Note: Case sensitivity is always important.
4 Leave the “Store macro in” selection at the default “User directory” setting.
5 Leave the “Macro’s output is set” pop-up menu set to Move2D1 (there are no other
choices in this example).
Setting Value
Macro Name The name of the macro you create. It is also the name of the file
you save (see below).
Macro Toolbox The Tool tab that stores the macro. If the tab does not exist, it creates
a new one.
The lower portion of the Shake MacroMaker window lists all of the nodes and the
parameters that can be exposed in the created node. For example, since the image is
fed into a Move2D node, the image In parameter is enabled.
Since you already have most of the parameters needed in this example, you only want
to expose motionBlur, shutterTiming, and shutterOffset values.
6 Click the V buttons for motionBlur, shutterTiming, and shutterOffset.
Store Macro in • User directory: Saves the macro in your $HOME/nreal/include/
startup as MacroName.h and a second ui file in $HOME/nreal/
include/startup/ui as MacroNameUI.h.
• Shake Directory: Saves the macro in the Shake distribution
directory, as /include/startup/MacroName.h and
/include/startup/ui/MacroNameUI.h.
For more information on these directories and their functions, see
“Creating and Saving .h Preference Files” on page 355.
Macro’s output is Presents a list of all nodes that are included in the macro (just one
for the Move2D example). Select the node to pass out of the macro.
Shake usually does a correct guess for this node if you have only
one terminating branch.
Parameter Value
Parameter name The name of the slider (in the case of float and int values) or the
knot input (in the case of image inputs). Arbitrarily renaming your
parameters is not recommended (such as eggHead), as you have
the benefit of Shake’s default interface behaviors (pop-up menus,
subtrees, color pickers, on/off switches, and so on) if they have the
same name.
Default value All current values of the nodes are fed into value fields. To set a
default value for exposed parameters (that is, parameters with the
visible Status light on), set the value here.
Status Enable Status to expose this parameter. If you expose an image, it
adds an input to the top of the node. If you expose anything else, it
gives you a slider or value field in the Parameters tab.
Minimum For slider-based parameters, sets the lower slider limit.
Maximum For slider-based parameters, sets the upper slider limit.
Granularity Sets how much the slider jumps between values.
7 Click OK.
The new push-button node appears in the Transform tab.
8 Add the new node to the Node View.
In the RandomMove parameters, only the motionBlur settings are available, and have
automatically collapsed into a subtree (due to the default Shake behavior).
To modify the macro:
1 If you placed your macro in your User Directory, go into your $HOME directory, and
then into the nreal/include/startup subdirectory.
In the startup directory, a new file called RandomMove.h appears.
2 Open the RandomMove.h file in a text editor:
image RandomMove(
image In=0,
float motionBlur=0,
float shutterTiming=0.5,
float shutterOffset=0
)
{
Move2D1 = Move2D(
In,
(turbulence(time,2)-.5)*20,
(turbulence(time+100,2)-.5)*20,
(turbulence(time+200,2)-.5)*10,
1,
turbulence(time+300,2)/2+.75,
xScale,
0, 0,
width/2, height/2,
"default", xFilter,
"trsx", 0,
motionBlur,
shutterTiming,
shutterOffset,
0
);
return Move2D1;
}
Edit the macro in this file. The parameters motionBlur, shutterTiming, and shutterOffset
are declared in the first few lines and assigned their default values. The image
input In is also assigned a value of 0, so there is no expected image input when the
macro is created.
image RandomMove(
image In=0,
float motionBlur=0,
float shutterTiming=0.5,
float shutterOffset=0
)
...
3 Modify the behavior of the macro. Each turbulence function has a frequency of 2, for
example, turbulence(time,2). To create a new parameter to modify the frequency, add
the following line (in bold):
image RandomMove(
image In=0,
float frequency = 2,
float motionBlur=0,
float shutterTiming=0.5,
float shutterOffset=0
)
...
4 Now that you have a new parameter, you must substitute it into the macro body
wherever you want that value to apply. Insert the variable into the macro body:
image RandomMove(
image In=0,
float frequency = 2,
float motionBlur=0,
float shutterTiming=0.5,
float shutterOffset=0
)
{
Move2D1 = Move2D(
In,
(turbulence(time,frequency)-.5)*20,
(turbulence(time+100,frequency)-.5)*20,
(turbulence(time+200,frequency)-.5)*10,
1,
turbulence(time+300,frequency)/2+.75,
xScale,
0, 0,
...
5 Save the file.
6 To see the results of the modification, restart Shake.
Modifying the Macro Interface
The macro file in the startup directory merely creates the function. The interface is built
by Shake each time it is launched. Therefore, the MacroMaker also creates a second file
in the startup/ui subdirectory that creates a button and sets slider ranges for the node.
For example, if you created the new frequency slider in the above example, you may
have noticed that the slider only goes from 0 to 1. You can modify this in the ui file.
To modify the macro interface:
1 In the text editor, open the RandomMoveUI.h file (created in the above example) in the
nreal/include/startup/ui subdirectory.
nuiPushMenu("Tools");
nuiPushToolBox("Transform");
nuiToolBoxItem("@RandomMove",RandomMove());
nuiPopToolBox();
nuiPopMenu();
nuiDefSlider("RandomMove.motionBlur",0,1,0,0,0);
nuiDefSlider("RandomMove.shutterTiming",0,2,0,0,0.01);
nuiDefSlider("RandomMove.shutterOffset",-1,1,0,0,0.01);
The first line opens the Tool tabs. The second line opens the Transform tab. To place the
macro in a different tab, change the word “Transform” to a different name. The third
line creates the button. The first occurrence of the word “RandomMove” is preceded by
an @ sign, indicating that there is no icon for the button (this use of @ is not related to
Shake’s unpadded frame wildcard), so the text “RandomMove” is printed on the button
instead. The second occurrence of the word “RandomMove” is the function that is called
when you click the button. Because default arguments are already supplied in the
macro, you do not need to supply any arguments here. If you did supply arguments,
they would be placed between the parentheses: (). The last line says, in effect, “For the
RandomMove function, set the slider range for the shutterOffset parameter between
-1 and 1, with some extra granularity settings.” Since this sets the slider range, you can
copy it and adapt it for the new frequency parameter.
2 Copy the last line of code and paste a copy below it. Change the word “shutterOffset”
to “frequency,” and adjust the values (based on the line in bold below):
...
nuiDefSlider("RandomMove.shutterTiming",0,2,0,0,0.01);
nuiDefSlider("RandomMove.shutterOffset",-1,1,0,0,0.01);
nuiDefSlider("RandomMove.frequency",0,10,0,0,0.01);
To add an icon to the button:
1 Create a 75 x 40 pixel image.
2 Save the image as TabName.Name.nri to your $HOME/nreal/icons directory. In the above
example, it would be called Transform.RandomMove.nri.
Note: You must strip out the alpha channel of the image. You can do this in Shake with
a SetAlpha node set to a value of 0. Set the FileOut to your $HOME/nreal/icons directory,
with the name TabName.Name.nri, and render the file.
3 In the RandomMoveUI.h file, remove the @ sign on line 3, and save the text file.
4 Restart Shake.
The RandomMove node appears with the icon.
Note: In all cases of the above code, case sensitivity is important. If the macro name is
in all capital letters, then all occurrences of the name must also be in all capital letters.
The same restriction applies to parameter and icon names.
For more information on customizing Shake, see Chapter 14, “Customizing Shake.”
Creating Macros—In Depth
This section discusses additional information, such as basic macro structure, common
errors when creating macros, and sample macros.
The macro files can be saved into the script that uses them, or can be saved in a file
with a .h file extension. These files are saved in your startup directory, located either in
/include/startup, $HOME/nreal/include/startup, or a startup directory under a
directory specified by the environment variable $NR_INCLUDE_PATH. For more
information, see “Setting Preferences and Customizing Shake” on page 355.
Basic Macro Structure
The following is what a macro looks like:
dataType MacroName(
dataType parameterName=defaultParameterValue,
dataType parameterName=defaultParameterValue,
...
)
{
Macro Body
return variable;
}
For example, the following function, called “angle,” calculates the angle between two
points, returns a float, using the atan2d function. (For more information, see the “Trig
Functions (in degrees)” section of the table in “Reference Tables for Functions, Variables,
and Expressions” on page 941.)
Notice that default parameter values are optional. Because this particular function
omits them, all four values must be supplied when you use it:
float angle(
float x1,
float y1,
float x2,
float y2
)
{
return atan2d(y2-y1,x2-x1);
}
Because the macro is so simple (one function), there is no Macro Body per se; the
function is attached to the return statement, which indicates what is spit out of the
macro. To use this function, use something like the following:
myAngle = angle(0,0,100,100);
which returns 45.0.
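The same computation can be checked outside Shake. This Python sketch uses math.atan2 and converts the result to degrees, mimicking Shake's atan2d:

```python
import math

def angle(x1, y1, x2, y2):
    """Angle in degrees between two points, like the Shake angle macro."""
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

my_angle = angle(0, 0, 100, 100)  # 45.0
```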
The following is an example of an image function. It adds that “vaseline-on-the-lens”
effect. The following represents the node tree, with a split-screen example of the effect.
The LumaKey node is used to extract only the highlights. The highlights are blurred,
and then applied back on the original image with the Screen node, which is nice for
glows and reflections. The Mix node is used to control how much of the original image
shows through. The example image shows the original image on the left, and the
macro results on the right.
The following are the nodes reformatted as a macro. The macro parameters are bold.
image SoftGlow(
image In=0,
float blur=0,
float lowClip=.3,
float hiClip=.9,
float percent=100
)
{
LumaKey1 = LumaKey(In, lowClip, hiClip, 0, 0, 1);
Blur1 = Blur(LumaKey1, blur, xPixels, 0, "gauss", xFilter,
"rgba");
Screen1 = Screen(In, Blur1, 1);
Mix1 = Mix(In, Screen1, 1, percent, "rgba");
return Mix1;
}
The macro name is SoftGlow, and it outputs an image (line 1). The parameters are
expecting one image input named In (line 2). This gives you one input at the top of the
node. If you want two image inputs, there would be two image parameters, and so on.
The other values are all float, controlling the macro settings (lines 3-6). These are
assembled in the macro body, and the return statement indicates that Mix1 is output.
The lowClip and hiClip parameters determine the levels of the highlights.
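For intuition about why Screen works well for glows, here is a numeric sketch of the conventional screen-blend formula, 1-(1-a)(1-b) (assumed here to match Shake's Screen node, since this is the standard definition of the operation): bright values reinforce each other, while black leaves the base image untouched.

```python
def screen(a, b):
    """Standard screen blend for normalized (0-1) channel values."""
    return 1.0 - (1.0 - a) * (1.0 - b)

# A blurred highlight over a midtone brightens it; black changes nothing.
lifted = screen(0.5, 0.5)     # 0.75
unchanged = screen(0.5, 0.0)  # 0.5
```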
If you save this into a startup .h file, for example, $HOME/nreal/include/startup/
SoftGlow.h, it is immediately available on the command line. You can also find a copy of
this called SoftGlow.h in doc/html/cook/macros.
Photo courtesy of Photron
Type:
shake -help softglow
to return:
-softglow [blur] [lowClip] [hiClip] [percent]
Loading Image Macros Into the Interface
When you start the Shake interface, the macros do not appear in the interface. A
separate file and set of functions are required to load the macros in the interface. These
ui functions are saved in a subdirectory of startup called ui. The following ui code
creates a button in the Filter tab with no icon that is labeled “SoftGlow,” and calls up
the SoftGlow function when clicked. Save it in $HOME/nreal/include/startup/ui or
$NR_INCLUDE_PATH/startup/ui. You can also find a copy of this in doc/html/cook/
macros called SoftGlowUI.h.
nuiPushMenu("Tools");
nuiPushToolBox("Filter");
nuiToolBoxItem("@SoftGlow",SoftGlow());
nuiPopToolBox();
nuiPopMenu();
The @ sign indicates that no icon is associated with the button. If you have an icon,
omit the @ sign (do not add the tab prefix or image type extension) and save an icon
image, 75 x 40 pixels in size, in your icons directory (/icons or $HOME/nreal/
icons or $NR_ICON_PATH), naming it Filter.SoftGlow.nri. To see this process listed step-by-step, see Tutorial 8, “Working With Macros,” in the Shake 4 Tutorials.
The images for the icons have the following characteristics:
• 75 x 40 pixels
• No alpha channel
• Named as TabName.FunctionName.nri
When calling the icon in the ui.h file, omit the TabName. and .nri.
• Font: 1-point Arial
• Saved in /icons or $HOME/nreal/icons or $NR_ICON_PATH
File Name Versus Macro Name
Names of files have nothing to do with names of macros. Only the function name is
important when called in the script. You can also have multiple macros per file.
Typical Errors When Creating Macros
The following table contains a list of typical errors in macro creation. When diagnosing
a macro, first run the macro in the command line: shake -help myFunction. If nothing
appears, there is a problem with your startup .h file. If it works fine, move on to the
interface, and read any error messages in the Console tab. The Console tab is your
number-one diagnostic tool.
Error Behavior: In the command line, the function doesn’t appear when you type the
following: shake -help myFunctionName
Probable Cause:
• The file is not saved in a startup directory. See above.
• The file does not have a .h file extension, or it has an improper extension.
• The name of the macro (the second word in the file) is not the same as what you
have called in the help command.

Error Behavior: Function does not appear in the interface.
Probable Cause:
• The ui file is not saved in a ui directory. See above.
• The ui file does not have a .h file extension, or it has a .txt extension.
• The name of the macro (the second word in the file) is not the same as what you
have called in the ui file, possibly because of capitalization errors.

Error Behavior: In the interface, the button appears without an icon.
Probable Cause: You have not removed the @ sign in your ui.h file. Otherwise, follow
Tutorial 8, “Working With Macros,” in the Shake 4 Tutorials.

Error Behavior: The icon appears as a dimple.
Probable Cause: The icon cannot be found.
• Make sure the icon is saved in an icons directory. See above.
• The icon should be named TabName.FunctionName.nri.
• The ui code should say:
nuiToolBoxItem("FunctionName",Function());
NOT
nuiToolBoxItem("TabName.FunctionName.nri",Function());
• Check capitalization.

Error Behavior: The icon is fine, but nothing happens when you click it.
Probable Cause:
• Check for capitalization errors.
• Check that the correct function is called in the ui file and that the function exists.
(For example, type shake -help functionname in the command line.)
• Check that default arguments have been specified.

Setting Default Values for Macros
There are two places to set default values for your macros. The first is in the startup .h
file when you declare the parameters. The following is from the example above:
image SoftGlow(
image In=0,
float blur=0,
float lowClip=.3,
float hiClip=.9,
float percent=100
)
...
Each of these has a default value assigned. Note that the image has 0, which indicates
“no input.”
These values are applied to both the command-line and graphical user interface
defaults. If you do not supply a default argument, you must enter a value when you call
the function. It is therefore recommended that you enter defaults.
The second location is in the ui.h file when the function is called. To override the
startup defaults, enter your own in the ui.h file. Normally, you have something like the
following in your ui.h file:
nuiPushMenu("Tools");
nuiPushToolBox("Filter");
nuiToolBoxItem("@SoftGlow",SoftGlow());
nuiPopToolBox();
nuiPopMenu();
This sets new default values just for the interface:
...
nuiToolBoxItem("@SoftGlow",SoftGlow(0,100,.5,1,95));
...
Attaching Parameter Widgets
If you take a look at “Using Parameters Controls Within Macros” on page 379, and
experiment with different nodes in the interface, you see that you can attach many
behaviors to parameters. This section takes a raw macro and shows different examples
of behaviors that change the parameters’ sliders into more efficient widgets.
Changing Default Settings
Most of Shake’s functions (third-party development excepted) are stored in two files
in /include in nreal.h and nrui.h. nreal.h is the complete list of all functions
and settings. The nrui.h file builds the interface. You can modify these files to suit your
needs, but it is strongly recommended that you make a backup of these files before
you begin. Errors in the file may result in unexpected problems with Shake.
These files are also an excellent source for examples.
The macro that is created in the following example is called VidResize. It takes an image
of any size and resizes it to video resolution. There are slider controls to specify NTSC or
PAL (vidFormat), to maintain the aspect ratio (keepAspect), and, if you do keep the
aspect ratio, to set the color of the exposed background area. The following image represents
the original node tree.
To create the VidResize macro:
1 Quit Shake.
2 To load the macro, copy the VidResize.h file from doc/html/cook/macros to your $HOME/
nreal/include/startup directory.
3 Copy the VidResizeUI.h file from doc/html/cook/macros into your $HOME/nreal/include/
startup/ui directory.
4 Start the Shake interface.
5 Create a Grad node with the following parameter settings:
• Set the width to 400.
• Set the height to 200.
6 Attach the Transform–VidResize node and test each slider. To have any effect, the
keepAspect parameter needs to jump to 2.
image VidResize(
image In=0,
int keepAspect = 1, //keeps aspect ratio or not
int vidFormat = 0, //This selects NTSC (0) or PAL (1) res
float bgRed = 0, //if keeping aspect ratio, determines
float bgGreen = 0, //the bg color
float bgBlue = 0
)
{
curve int yRes = vidFormat == 0 ? 486 : 576;
Fit1 = Fit(In, 720, yRes, "default", xFilter, 1);
Resize1 = Resize(In, 720, yRes, "default", 0);
SetBGColor1 = SetBGColor(Fit1, "rgbaz",
bgRed, bgGreen, bgBlue, 0, 0
);
Select1 = Select(keepAspect, Resize1, SetBGColor1, 0, 0);
return Select1;
}
The ui file looks like this:
nuiPushMenu("Tools");
nuiPushToolBox("Transform");
nuiToolBoxItem("@VidResize",VidResize());
nuiPopToolBox();
nuiPopMenu();
The only tricky portions in the macro so far are the Select function, and the logic to
choose the height. When the branch of Select equals 1, the first image input is passed
through. When it equals 2, the second branch is fed through. The keepAspect
parameter is connected to Select to choose the branch you want. The other tricky
portion is the height logic, the first line of the macro body. It declares an internal
variable (it cannot be seen outside the macro) named yRes. It tests vidFormat to see if it
equals 0. If so, it sets yRes to equal 486, the height of an NTSC image. Otherwise, it sets
yRes to 576, the standard PAL image height.
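The height-selection logic is just a conditional expression. In Python terms (a sketch of the logic, not Shake syntax):

```python
def video_height(vid_format):
    """0 selects NTSC (486 lines), anything else PAL (576 lines),
    mirroring: curve int yRes = vidFormat == 0 ? 486 : 576;
    """
    return 486 if vid_format == 0 else 576
```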
Setting Slider Ranges
The first inconvenience about the macro, when used in the interface, is that the
keepAspect slider goes from 0 to 1. The values you want are 1 or 2. Since you do not
want to have to manually enter the format, it is easier to set the slider range from 1 to
2. This is done in the ui file VidResizeUI.h. To find the format, see “Using Parameters
Controls Within Macros” on page 379. If you are reading this document electronically,
you can copy and paste the command from the document into the text file.
To set the slider range:
1 Load the ui/VidResizeUI.h file into the text editor.
2 Add the nuiDefSlider function to set a slider range:
nuiPushMenu("Tools");
nuiPushToolBox("Transform");
nuiToolBoxItem("@VidResize",VidResize());
nuiPopToolBox();
nuiPopMenu();
nuiDefSlider("VidResize.keepAspect", 1, 2);
The word “VidResize” in VidResize.keepAspect indicates that you only want to modify
the keepAspect slider in the VidResize function. If you do not specify this (for example,
if you just have nuiDefSlider("keepAspect", 1, 2);), Shake tries to apply that slider range
to all parameters named keepAspect, no matter what function they are in.
3 Save VidResizeUI.h and start Shake again.
4 Create the VidResize node (there is no need to test it on an image).
The keepAspect slider now goes from 1 to 2.
Creating an On/Off Button
Rather than using a value (1 or 2) to indicate what the slider does, you can create an
on/off button. When the button is on, you want to maintain the aspect ratio. When off,
you do not want to maintain the aspect ratio. Again, the format information can be
found in “Using Parameters Controls Within Macros” on page 379. If you are reading
this document electronically, you can copy and paste the command.
To create an on/off button:
1 Replace the slider range function in the VidResizeUI.h file:
nuiPushMenu("Tools");
nuiPushToolBox("Transform");
nuiToolBoxItem("@VidResize",VidResize());
nuiPopToolBox();
nuiPopMenu();
nuxDefExprToggle("VidResize.keepAspect");
The problem here is that it always returns a value of 0 or 1—0 is off and 1 is on. You
want values of 1 or 2 because that is what the macro is counting on. Therefore, you
must edit the macro.
2 In the startup file VidResize.h, add 1 to the test of keepAspect in the Select function:
...
SetBGColor1 = SetBGColor(Fit1, "rgbaz",
bgRed, bgGreen, bgBlue, 0, 0
);
Select1 = Select(keepAspect+1, Resize1, SetBGColor1, 0, 0);
return Select1;
}
3 Save the file and start Shake.
Inappropriate Behavior in All the Wrong Places
If you start to see controls on parameters that you have not created, or if you see
other functions that have odd behaviors, make sure you have specified what function
receives the control. If you set:
nuiDefSlider("depth", 1, 2);
anytime Shake sees a parameter named “depth” (for example, if somebody makes a
macro that sets bit depth to a certain value), it takes a range of 1 to 2. Therefore, ensure
that you preface the parameter with a function name:
nuiDefSlider("MyFunction.depth", 1, 2);
The keepAspect parameter has an on/off button.
Attaching Color Pickers and Subtrees
Using a slider to select a color is not nearly as impressive to your arch foes as using the
Color Picker, so attach bgRed, bgGreen, and bgBlue to a color control to interactively
pick your color. For the format information, see “Using Parameters Controls Within
Macros” on page 379.
Since this is an interface change, edit the ui VidResizeUI.h file:
nuiPushMenu("Tools");
nuiPushToolBox("Transform");
nuiToolBoxItem("@VidResize",VidResize());
nuiPopToolBox();
nuiPopMenu();
nuxDefExprToggle("VidResize.keepAspect");
nuiPushControlGroup("VidResize.Background Color");
nuiGroupControl("VidResize.bgRed");
nuiGroupControl("VidResize.bgGreen");
nuiGroupControl("VidResize.bgBlue");
nuiPopControlGroup();
nuiPushControlWidget("VidResize.Background Color",
nuiConnectColorTriplet(kRGBToggle,kCurrentColor,1)
);
This file is interesting because the first five new lines are the code to make a subtree.
Even if you do not add the last lines to attach the color control, the three color sliders
are still organized in a subtree named Background Color.
The last three lines attach the color picker to the subtree called Background Color, and
set the picker to select an RGB (as opposed to HSV, HLS, or CMY) color control that
uses the Current (as opposed to the Average, Maximum, or Minimum scrub) color.
The Color control is added to the interface.
Attaching Button Toggles
Next, attach a button to toggle the vidFormat. Since the 0 and 1 settings are not very
intuitive for the video format selection, create buttons labeled “NTSC” and “PAL.” The
following examples show two ways to attach a button toggle.
To attach a button toggle:
1 Create a directory called icons/ux in your $HOME/nreal directory:
mkdir -p $HOME/nreal/icons/ux
2 Copy all of the icons that begin with vr_ from doc/html/cook/macro_icons to your
$HOME/nreal/icons/ux directory. There are a total of eight files.
When clicked, the first button toggles between PAL (vr_pal.off.nri) and NTSC
(vr_ntsc.off.nri). Two additional buttons, vr_pal.off.focus.nri and vr_ntsc.off.focus.nri,
indicate when the pointer is over the button. These are called focus buttons. To view
the code, see “Using Parameters Controls Within Macros” on page 379.
3 To add the toggle button function, edit the VidResizeUI.h file:
nuiPushMenu("Tools");
nuiPushToolBox("Transform");
nuiToolBoxItem("@VidResize",VidResize());
nuiPopToolBox();
nuiPopMenu();
nuxDefExprToggle("VidResize.keepAspect");
nuiPushControlGroup("VidResize.Background Color");
nuiGroupControl("VidResize.bgRed");
nuiGroupControl("VidResize.bgGreen");
nuiGroupControl("VidResize.bgBlue");
nuiPopControlGroup();
nuiPushControlWidget("VidResize.Background Color",
nuiConnectColorTriplet(kRGBToggle,kCurrentColor,1)
);
nuxDefExprToggle("VidResize.vidFormat",
"ux/vr_ntsc.off.nri|ux/vr_ntsc.off.focus.nri",
"ux/vr_pal.off.nri|ux/vr_pal.off.focus.nri"
);
The new lines list the normal button, followed by the focus button. The icons directory is
automatically scanned, but notice you have specified the ux subdirectory. The value
returned is always 0 for the first entry, 1 for the next entry, 2 for the third entry, and so on.
You can have as many entries as you want. Each button click moves you to the next choice.
4 Save the file and start Shake again.
5 Create the VidResize node again.
The vidFormat parameter has a PAL/NTSC toggle.
The button created in the above steps is a single button that toggles through multiple
choices. This is fine for binary (on/off) functions, but less elegant for multiple choice
toggles. In the next example, create radio buttons. Radio buttons are similar to toggle
buttons, except that you simultaneously see all available buttons. This code lists only
one button name. Shake automatically assumes there is an on, on.focus, off, and
off.focus version of each button in the directory you specify. If you copied the vr_
buttons earlier, you indeed have all of these buttons. The code, which can also be found
in “Using Parameters Controls Within Macros” on page 379, looks like the following.
6 Again, edit the VidResizeUI.h file:
nuiPushMenu("Tools");
nuiPushToolBox("Transform");
nuiToolBoxItem("@VidResize",VidResize());
nuiPopToolBox();
nuiPopMenu();
nuxDefExprToggle("VidResize.keepAspect");
nuiPushControlGroup("VidResize.Background Color");
nuiGroupControl("VidResize.bgRed");
nuiGroupControl("VidResize.bgGreen");
nuiGroupControl("VidResize.bgBlue");
nuiPopControlGroup();
nuiPushControlWidget("VidResize.Background Color",
nuiConnectColorTriplet(kRGBToggle,kCurrentColor,1)
);
nuxDefRadioBtnControl(
"VidResize.vidFormat", 1, 1, 0,
"0|ux/vr_ntsc",
"1|ux/vr_pal"
);
The number immediately to the left of each icon listing, for example, the 0 in
“0|ux/vr_ntsc”, is the value returned when that button is clicked. The nice thing about
radio buttons is that they can return strings, floats, or ints. For example, the channel
parameter in the KeyMix node selects the channel to do the masking, with R, G, B, or A
returned, all strings. Because your VidResize macro only understands 0 or 1 to be
meaningful, use 0 and 1 as your return values.
7 Save the text file and start Shake again.
The radio buttons appear for the vidFormat parameter.
Attaching Pop-Up Menus
Although the radio buttons are pretty cool, another frequent option is a pop-up menu.
The pop-up menu is good when there are more choices than space in a standard
Parameters tab. The only catch is that they return strings (words), not numbers, so you
have to add some logic in your macro to interpret the strings. The following is the ui
code, which can also be found in “Using Parameters Controls Within Macros” on
page 379.
Making Radio or Toggle Buttons
There is an unofficial function to create these buttons. That’s right, you can use a
macro to help make your macros.
In doc/html/cook/macros, copy radiobutton.h and relief.h to your startup directory.
In the command line, type something like the following:
shake -radio "NTSC size" 79 vr_ntsc $HOME/nreal/icons/ux -t 1-4 -v
This creates four buttons named vr_ntsc.on.nri, vr_ntsc.on.focus.nri, vr_ntsc.off.nri, and
vr_ntsc.off.focus.nri in $HOME/nreal/icons/ux. Each button is 79 pixels wide and each
one says “NTSC size.” You must do this in the command line because at the moment
FileOut does not work in a macro unless it’s run from the command line, and frames 1-4
create the four different files.
To attach a pop-up menu:
1 Add the following code to the ui file to create a pop-up menu:
nuiPushMenu("Tools");
nuiPushToolBox("Transform");
nuiToolBoxItem("@VidResize",VidResize());
nuiPopToolBox();
nuiPopMenu();
nuxDefExprToggle("VidResize.keepAspect");
nuiPushControlGroup("VidResize.Background Color");
nuiGroupControl("VidResize.bgRed");
nuiGroupControl("VidResize.bgGreen");
nuiGroupControl("VidResize.bgBlue");
nuiPopControlGroup();
nuiPushControlWidget("VidResize.Background Color",
nuiConnectColorTriplet(kRGBToggle,kCurrentColor,1)
);
nuxDefMultiChoice("VidResize.vidFormat", "NTSC|PAL");
You can add as many entries as you want, and each entry is separated by the “|”
symbol. There is a problem with pop-up menus, however. As mentioned above, pop-up
menus only return the word you entered. Therefore, you must change the startup
macro file. In VidResize.h in the startup directory, change vidFormat to be a string
instead of an integer, giving it a default value of either NTSC or PAL, in quotes. You
then use an if/else statement in the macro body to evaluate vidFormat and assign the
proper yRes value.
2 Edit the VidResize.h file in startup to change the vidFormat logic to deal with strings
instead of integers:
image VidResize(
image In=0,
int keepAspect=1, //keeps aspect ratio or not
string vidFormat="NTSC", //This selects NTSC or PAL res
float bgRed=0, //if keeping aspect ratio, determines
float bgGreen=0, //the bg color
float bgBlue=0
)
{
if (vidFormat=="NTSC"){
yRes=486;
} else {
yRes=576;
}
// curve int yRes = vidFormat==0?486:576;
Fit1 = Fit(In, 720, yRes, "default", xFilter, 1);
Resize1 = Resize(In, 720, yRes, "default", 0);
SetBGColor1 = SetBGColor(Fit1, "rgbaz",
bgRed, bgGreen, bgBlue, 0, 0
);
Select1 = Select(keepAspect+1, Resize1, SetBGColor1, 0, 0);
return Select1;
}
3 Save the file and start Shake again.
The pop-up menu appears in the parameters.
You can alternatively avoid the use of the if/else statement and use something similar
to what you had before:
curve int yRes = vidFormat=="NTSC"?486:576;
The reason the if/else is used is that you usually have multiple entries on your pop-up
menu, and the conditional expression gets a bit unwieldy, so it’s better to create a long
if/else statement. It’s unwieldy and long, like that run-on sentence.
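To see why a long if/else chain (or a lookup) stays tidier than a nested conditional as the pop-up gains entries, here is the same string-to-height mapping sketched in Python; a dictionary lookup replaces the chain entirely:

```python
HEIGHTS = {"NTSC": 486, "PAL": 576}

def y_res(vid_format):
    """Map a pop-up menu string to an image height.

    Unknown strings fall back to PAL, matching the macro's else branch.
    """
    return HEIGHTS.get(vid_format, 576)
```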
Standard Script Commands and Variables
The following tables include the standard script commands and variables.
Standard Script Commands
Each entry shows the script control, its command-line equivalent in brackets, and a
description.
SetTimeRange("1-100"); [-t 1-100]
The time range, for example, frames 1-100. Note this is always in quotes.
SetFieldRendering(1); [-fldr 1]
Field Rendering.
• 0 = off
• 1 = odd - PAL
• 2 = even - NTSC
SetFps(24); [-fps 24]
Frames per second.
SetMotionBlur(1, 1, 0); [-motion 1 1, -shutter 1 1]
Your three global motion blur parameters.
SetQuality(1); [-fast 0 or 1]
Low (0) or high (1) quality.
SetProxyFilter("default"); [-proxyfilter]
The default filter, taken from “Filters Within Transform Nodes” on page 862.
SetProxyScale(1,1); [-proxyscale 1 1]
Proxy scale and ratio for the script. See Chapter 4, “Using Proxies,” on page 137.
SetPixelScale(1,1); [-pixelscale 1 1]
Pixel scale and ratio for the script. See Chapter 4, “Using Proxies,” on page 137.
SetDefaultWidth(720);
SetDefaultHeight(480);
SetDefaultAspect(1);
SetDefaultBytes(1);
SetDefaultViewerAspect(1);
If a node goes to black, or if you create an image node such as RGrad, it takes this
resolution by default.
SetFormat("Custom");
Sets your defaults automatically for resolution and aspect from the precreated list of
formats. These formats are stored in your startup .h file in the following format:
DefFormatType("Name", width, height, aspectRatio, framesPerSecond, fieldRendering);
Variables
width: Returns the width of the current node.
height: Returns the height of the current node.
bytes: Returns the bit depth in bytes, either 1, 2, or 4.
in, outPoint: Returns the in and out frames of the clip in time.
time: The current frame number.
width/2: The center of the image in X.
height/2: The center of the image in Y.
dod[0], dod[1], dod[2], dod[3]: The left, bottom, right, and top edges of the Domain of
Definition (DOD) of the current image.
parameterName: The value of a parameter within the same node.
NodeName.parameterName: The value of a parameter in a separate node named
NodeName.
Macro Examples
The following are several examples of macros. Many of these macros can be found in
Chapter 32, “The Cookbook,” on page 963, but they are not installed by default.
Parameter Testing: AutoFit
This macro resizes an image to fit a specified size along one dimension. For example,
suppose the images in a document are 300, 250, or 166 pixels wide. No matter what
size the screen snapshot, you can quickly resize it to one of the standard sizes and
easily keep the aspect ratio.
shake uboat.iff -autofit 166 w
This calculates an image that is 166 x 125 pixels in size. It is not necessary to calculate
the height on your own.
Here is the code:
image AutoFit(image img=0, int size=166, int sizeIs=1)
{
curve w=0;
curve h=1;
return Resize(img,
sizeIs ? size*width/height :size ,
sizeIs ? size : size*height/width
);
}
The first line creates the parameters. Note that sizeIs is expecting an integer. The
next two lines assign a value to both w and h. This way, the user can type in 0 or 1 to
calculate width or height, or enter w or h, which is easier to remember. Since Shake
assigns values to w and h, it is able to determine what axis to calculate. The Resize
function uses embedded if/then statements to test the sizeIs parameter. Also, you do
not change the value of w or h during processing, so they are not declared as the curve
int type of data.
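The aspect arithmetic inside the Resize call can be sketched in Python (a hypothetical helper mirroring the macro's conditional expressions, not Shake code):

```python
def autofit(width, height, size, size_is_height):
    """Return (new_width, new_height) with one axis fixed to `size`
    and the other scaled to preserve the aspect ratio, like the
    ternary expressions inside the AutoFit macro's Resize call."""
    if size_is_height:
        return (round(size * width / height), size)
    return (size, round(size * height / width))

# A 400 x 200 image fit to a width of 166 keeps its 2:1 aspect.
dims = autofit(400, 200, 166, False)  # (166, 83)
```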
Text Manipulation I: RandomLetter
This function generates a random letter up to a certain frame, at which point a static
letter appears. You can select the color, switch frame, position, and size of the letter, as
well as the font. This function uses the stringf function to pick a random
letter, and assigns it to the variable rdLetter. This is pumped into the Text function,
which uses a conditional expression to evaluate if the current time is before the
staticFrame time (set to frame 10 by default).
image RandomLetter(
int width=720,
int height=486,
int bytes=1,
const char * letter="A",
int staticFrame=10,
float seed=time,
const char * font="Courier",
float xFontScale=100,
float yFontScale=100,
float xPos=width/2,
float yPos=height/2,
float red=1,
float green=1,
float blue=1,
float alpha=1
)
{
curve string rdLetter = stringf("%c",
'A'+(int)floor(rnd1d(seed,time)*26));
return Text(
width, height, bytes,
time < staticFrame?"{rdLetter}":"{letter}",
font,
xFontScale, yFontScale, 1, xPos, yPos,
0, 2, 2, red, green, blue, alpha, 0, 0, 0, 45
);
}
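Assuming rnd1d returns a value in the 0-1 range (an assumption made here for illustration), the letter-picking expression maps that value onto the 26 uppercase letters. The same mapping in Python:

```python
import math

def random_letter(noise):
    """Map a noise value in [0, 1) to 'A'..'Z', mirroring
    'A' + (int)floor(rnd1d(seed, time) * 26) in the macro."""
    return chr(ord("A") + int(math.floor(noise * 26)))
```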
Text Manipulation II: RadioButton
This is an excerpt from the RadioButton function that is used to generate radio buttons
for the interface. The radio button code requires four icons to support it: name.on.nri,
name.on.focus.nri, name.off.nri, and name.off.focus.nri. Since it is tedious to write out
four separate files, automatically change the file extensions over time with this excerpt
from the macro:
image RadioButton(
const char *text="linear",
...
string fileName=text,
string filePath="/Documents/Shake/icons/ux/radio/",
int branch=time,
...
)
{
curve string fileState =
branch == 1 ? ".on.nri" :
branch == 2 ? ".on.focus.nri" :
branch == 3 ? ".off.nri" : ".off.focus.nri";
curve string fileOutName = filePath + "/" +
fileName + fileState;
...
return FileOut(Saturation1, fileOutName);
Since you typically calculate this with a command such as:
shake -radio "HelloWorld" -t 1-4
it generates HelloWorld.on.nri at frame 1, HelloWorld.on.focus.nri at frame 2,
HelloWorld.off.nri at frame 3, and HelloWorld.off.focus.nri at frame 4.
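The frame-to-suffix selection in the excerpt can be sketched in Python:

```python
def file_state(branch):
    """Pick an icon-file suffix from the frame number, mirroring the
    chained conditional in the RadioButton macro."""
    if branch == 1:
        return ".on.nri"
    if branch == 2:
        return ".on.focus.nri"
    if branch == 3:
        return ".off.nri"
    return ".off.focus.nri"

# Rendering frames 1-4 produces the four icon-file names.
names = ["vr_ntsc" + file_state(frame) for frame in (1, 2, 3, 4)]
```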
Text Manipulation III: A Banner
This little trick takes a string of letters and prints it, one letter at a time. It declares a
variable within the string section of a Text node:
Text1 = Text(720, 486, 1,
{{ string logo = "My Logo Here";
stringf("%c", logo[(int) clamp(time-1, 0, strlen(logo))])
}},
"Courier", 100, xFontScale, 1, width/2, height/2,
0, 2, 2, 1, 1, 1, 1, 0, 0, 0, 45);
This uses strlen to determine the length of the string and extract the letter that
corresponds to the current frame.
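The same clamp-and-index logic can be sketched in Python (an illustration, not Shake code). The one difference is that C's logo[strlen(logo)] yields the NUL terminator, which prints as nothing, so the sketch returns an empty string past the end:

```python
def clamp(x, lo, hi):
    # Same behavior as Shake's clamp(x, lo, hi).
    return max(lo, min(x, hi))

def banner_letter(logo, time):
    # Mirror logo[(int) clamp(time-1, 0, strlen(logo))]:
    # frame 1 prints the first letter, frame 2 the second, and so on.
    i = int(clamp(time - 1, 0, len(logo)))
    return logo[i] if i < len(logo) else ""  # past the end: NUL in C

print(banner_letter("My Logo Here", 1))  # -> M
print(banner_letter("My Logo Here", 2))  # -> y
```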
Text Manipulation IV: Text With a Loop to Make a Clock Face
This eager little example lays out a clock face. A for loop is used to count from 1 to 12,
and prints the number into a Text function with a stringf function. (You should just be
able to print the value of count with {count}, but it doesn’t work. Go figure.) The cosd
and sind functions are also used to calculate the position of the number on the clock
face. Keep in mind that zero degrees in Shake points east, as it does in all Cartesian
math. The falloffRadius serves no purpose in the function except to complete the
onscreen control set to give a center and a radius widget:
image Clock(
image In=0,
float xCenter=width/2,
float yCenter=height/2,
float radius=150,
float falloffRadius=0
)
{
NumComp=Black(In.width,In.height,1);
for (int count=1;count<=12;++count)
{
NumComp = Over(
Text(In.width, In.height, 1, {{ stringf("%d",count) }},
"Courier", radius*.25, xFontScale, 1,
cosd(60+(count-1)*-30)*radius+xCenter,
sind(60+(count-1)*-30)*radius+yCenter,
0, 2, 2, 1, 1, 1, 1, 0, 0, 0, 45),
NumComp
);
}
return Over(NumComp,In);
}
Text Manipulation V: Extracting Part of a String
This function can be used to extract the file name from a FileOut or FileIn node so you
can print it on a slate. Use it in a Text function.
const char *getBaseFilename(const char *fileName)
{
extern const char * strrchr(const char *, char);
const char *baseName = strrchr(fileName, '/');
return baseName ? baseName+1 : fileName;
}
To use it, place a line in the text parameter of a Text function, such as:
{getBaseFilename(FileIn1.imageName)}
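A Python sketch of the same idea (an illustration, not part of Shake): take everything after the last slash, or the whole string if there is none:

```python
def get_base_filename(file_name):
    # Equivalent to the strrchr-based helper above:
    # baseName ? baseName + 1 : fileName.
    pos = file_name.rfind("/")
    return file_name[pos + 1:] if pos != -1 else file_name

print(get_base_filename("/shots/sc12/truck.iff"))  # -> truck.iff
print(get_base_filename("truck.iff"))              # -> truck.iff
```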
Tiling Example: TileExample
This allows you to take an image and tile it into rows and columns, similar to the Tile
node in the Other tab. However, this one also randomly moves each tile, as well as scales
the tiles down. The random movement is generated with the turbulence function (see
Chapter 31, “Expressions and Scripting,” on page 935). Because of this, it is less efficient
than the Tile function, which can be viewed in the /include/nreal.h file.
image TileExample(
image In=0,
int xTile=3,
int yTile=3,
float angle=0,
float xScale=1,
float yScale=1,
float random=30,
float frequency=10
)
{
result = Black(In.width, In.height, In.bytes);
curve float xFactor=(In.width/xTile)*.5;
curve float yFactor=(In.height/yTile)*.5;
for (int rows=1; rows<=yTile; ++rows){
for (int cols=1; cols<=xTile; ++cols){
Move2D1 = Move2D(In,
-width/2+xFactor+((cols-1)*xFactor*2)+
(turbulence(time+(cols+1)*(rows+1),frequency)-.5)*random,
-height/2+yFactor+((rows-1)*yFactor*2)+
(turbulence(time+(cols+1)/(rows+1),frequency)-.5)*random,
angle, 1,
1.0/xTile*xScale, 1.0/yTile*yScale
);
result=Over(Move2D1,result);
}
}
return result;
}
31 Expressions and Scripting
One of the more powerful aspects of Shake is its ability to
use a wide variety of expressions and script code directly
within any parameter of the application.
What’s in This Chapter
This chapter covers a variety of advanced topics relating to expressions and scripting
directly within Shake.
The following topics are covered:
• “Linking Parameters” on page 935.
• “Variables” on page 937.
• “Expressions” on page 939.
• “Reference Tables for Functions, Variables, and Expressions” on page 941.
• “Using Signal Generators Within Expressions” on page 947.
• “Script Manual” on page 951.
Linking Parameters
By linking variables and parameters, you can access information in any node and use it
in a different node.
To link to a parameter that’s within the same node:
Type the parameter name into a parameter field, then press Return (or Enter).
For example, the Move2D node links the yScale parameter as equal to the xScale
parameter by default. To show the expression editor, enter any letter into the value
field, then press Return. A plus sign appears next to the parameter. Click the plus sign
(+) to expand the parameter and enter your expression.
To link to a parameter within a different node:
Enter the node name, followed by a period, followed by the parameter name. For
example:
node.parameter
In the following example, the value parameter is linked to another value parameter
from the Fade1 node, and multiplied by 2.
You can declare a variable anywhere within a script. However, to make the variable
available in the interface, you must precede it with the curve declaration:
curve parameter_name = expression;
For example, use the following to declare the my_val variable for the above example:
curve my_val = 1;
To clone a node in the Node View:
1 Copy a node.
2 Do one of the following:
• Right-click in the Node View, then choose Edit > Paste Linked from the shortcut
menu.
• Press Shift-Command-V or Shift-Control-V.
This links all parameters in the cloned node to those of the original. The cloned
parameter is named with the original node’s name, followed by “_clone” and a number
if there is more than one clone in the node tree.
Important: When you modify any parameter in a cloned node, its link to the original
node is broken, so be careful when making adjustments with onscreen controls.
Viewing Links in the Node View
To help you make sense of what’s happening in the node tree, you can view the links
connecting one node to another. Links are indicated for cloned nodes, as well as for
nodes that use expressions referencing a parameter within another node.
To view the links between nodes in the Node View, do one of the following:
• Right-click in the Node View, then choose Enhanced Node View from the shortcut
menu.
• Press Control-E.
The links appear as lines with arrows pointing to the node that’s being linked to.
Linking to a Parameter at a Different Frame
Ordinarily, links to another parameter produce that parameter’s value at the current
frame. If the parameter is animated and you want to obtain a value at a specific frame,
or at a frame that’s offset from the current time, you can use the following syntax to
view a value at a different frame:
parameterName@@time
In the above syntax, time could be substituted with a specific frame number, or with an
expression that calculates an offset from the time variable that references the current
position of the playhead. For example, the following expression obtains the value from
the yPan parameter of the Move2D1 node at the previous frame:
Move2D1.yPan@@(time-1)
Variables
You can declare your own variables, as shown above. However, each image node also
carries other information about that node, using the width, height, and bytes variables.
When you refer to these, they work exactly the same as above.
For example, the default center of rotation on Move2D is set to:
width/2
This places the center at the midpoint of the current image.
When referring to a variable from a different node, place the node name before the
variable:
node_name.width
In some cases, problems may occur. For example, if you set a Resize node's size
equal to width/2, you could potentially cause a loop, because you would be
changing the width based on a value that itself reads the width. To solve this,
Shake always refers to the input width, height, and bit depth when you use these
variables inside of that node. Therefore, width/2 takes the incoming width and
divides it by 2. When you refer to the width, height, and bit depth of a
different node, you use that node's output values.
At any time, you can also use the variable time, which refers to the current frame
number. For example, enter cos(time/5)*50 in the angle parameter value field in a
Rotate node for a nice “rocking” motion.
Creating and Using Local Variables
You can create additional parameters of your own, with extra sliders and text entry
fields, in order to build more complex expressions with interactive input.
To create a local variable within a node in your script:
1 Pick the node you want to add a local variable to, and open its parameters into the
Parameters tab.
2 Right-click anywhere within the Parameters tab (not directly on a field), then choose
Create Local Variable from the shortcut menu.
3 When the Local Variable Parameters window appears, define the settings that variable
will be created with.
• Variable name: The name for that variable that appears in the Parameters tab.
• Variable type: Whether the variable is a float (decimal places of precision), a string
(alpha-numeric text), or an integer (whole numbers).
Warning: When renaming a node, don’t use a name that’s also used by a local
variable within that node.
• Slider Low Val: For float and int variables, the lowest value the slider will represent.
• Slider Hi Val: For float and int variables, the highest value the slider will represent.
4 When you’re done, click OK to create the variable and go back to your project,
Cancel to close the window without creating a new variable, or Next to continue
creating new variables.
New local variables that you create appear within a subtree at the bottom of the other
node parameters. This subtree appears only after at least one new variable has been
created, and is named localParameters.
Once created, your own local variables can be used and referenced by expressions just
like any other parameter in Shake.
To remove a local variable:
1 Right-click anywhere within the Parameters tab (not directly on a field), then choose
Delete Local Variable from the shortcut menu.
2 When the Delete local variable window appears, choose a local variable to delete from
the pop-up menu, then click OK.
For a tutorial on using local variables in expressions, see Tutorial 4, “Working With
Expressions,” in the Shake 4 Tutorials.
Expressions
The following section provides examples of using expressions (which can help out the
lazy compositor by doing your work for you). In any parameter, you can combine any
value with a math expression, trigonometry function, an animated curve, a variable, or
even a conditional expression.
For example, as mentioned above, the center of an image can be found by using:
xCenter = width/2
yCenter = height/2
These take the per-image width and height variables and divide them by 2.
You can type an expression in any field. Some nodes, such as ColorX, WarpX, and TimeX,
even support locally declared variables. For more information and a list of examples,
see “ColorX” on page 647.
If you are using the command-line method, you may have to enclose your expressions
in quotes to avoid problems with the operating system reading the command. For
example, don’t use:
shake my_image.iff -rot 45*6
Instead, use:
shake my_image.iff -rot "45*6"
Precedence
The above operators are listed in order of precedence—the order Shake evaluates each
operator, left to right. If this is difficult to keep up with (and it is), make liberal use of
parentheses to force the order of evaluation. For instance:
a = 1 + 2 * 4 -2
This expression does “2*4” first, since the “*” has precedence over “+” and “-”,
which gives you “a=1+8-2.” Then from left to right, Shake does “1+8,” giving
“a=9-2,” finally resulting in “a=7.” To add and subtract before multiplying, use
parentheses to control the evaluation.
a = (1 + 2) * (4 - 2)
This results in “a=3*2” or “a=6.”
Note: In any expression, parentheses have the highest precedence.
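Any C-like language follows the same precedence rules, so the arithmetic above is easy to check outside Shake. A quick Python sketch (an illustration, not Shake code):

```python
# "*" binds tighter than "+" and "-", so 2*4 is evaluated first.
a = 1 + 2 * 4 - 2
# Parentheses force the addition and subtraction to happen first.
b = (1 + 2) * (4 - 2)
print(a, b)  # -> 7 6
```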
Examples Explanation
1/2.2 1 divided by 2.2. Gives you the inverse of 2.2 gamma.
2*Linear(0,0@1,200@20) Multiplies the value of an animated curve by 2.
2*my_curve Multiplies a variable by 2.
sqrt(my_curve-my_center)/2 Subtracts my_center from my_curve, takes the result square root,
and then divides by 2.
time>20?1:0 If time is greater than 20, then the parameter is 1, otherwise it
equals 0.
cos(time/5)*50 Gives a smooth ping-pong between -50 and 50.
Reference Tables for Functions, Variables, and Expressions
All of the math functions available in Shake can be found in the include/nreal.h file. You
can declare your own functions in your own .h file.
To set an expression on a string (text) parameter, you need to add a : (colon) at the
start of the expression; otherwise, it is treated as text rather than compiled and
evaluated.
The following table shows variables that are carried by each node.
Arithmetic Operators Definition
* Multiply
/ Divide
+ Add
- Subtract
Relational Operators Definition
< Less than
> Greater than
<= Less than or equal to
>= Greater than or equal to
== Equal to
!= Not equal to
Logical Operators Definition
&& And
|| Or
! Not
Conditional Expression Definition
expr1?expr2:expr3 If expr1 is true (non-zero), then do expr2, else do expr3.
Global Variables Definition
time Current frame number.
Image Variables Definition
parameterName Value of parameterName from inside of that node.
nodeName.parameterName Value of parameterName in nodeName from outside of that node.
parameterName@@time Allows you to access a value at a different frame. For example:
Blur1.xPixel@@(time-3) looks at the value from 3 frames earlier.
The following table shows channel variables used in nodes such as ColorX, LayerX,
Reorder, etc. Check the documentation for specific support of any variable.
bytes The number of bytes in that image. This takes the input bit depth
when called from inside of the node, and the output bit depth
when called from outside of the node.
width Width of the image. Takes the input width when called from inside
of the node, and the output width when called from outside of the
node.
height Height of the image. Takes the input height when called from
inside of the node, and the output height when called from
outside of the node.
_curImageName Returns the name of the actual file being used for the current
frame. Useful when plugged into a Text node:
{FileIn1._curImageName}
dod[0], dod[1], dod[2], dod[3] The variable for the Domain of Definition (DOD): xMin, yMin, xMax,
yMax, respectively.
In-Node Variables Definition
nr, ng, nb, na, nz New red, green, blue, alpha, Z channel.
r, g, b, a, z Original red, green, blue, alpha, Z channel.
l Luminance channel for Reorder.
n Null channel. Strips out the alpha in Reorder when used like
this: rgbn
r2, g2, b2, a2, z2 Second image’s channel for LayerX.
Math Functions Definition
abs(x) Integer absolute value. abs(-4) = 4. Be careful, as this returns an
integer, not a float. Use fabs for float.
biasedGain(value, gain, bias) Gives a ContrastLum-like curve that gives roll-off between two
values.
cbrt(x) Cubic root. cbrt(8) = 2
ceil(x) Truncates to next integer. ceil(5.3) = 6
clamp(x, lo, hi) Clamps x to between lo and hi.
clamp(1.5,0,1) = 1
exp(x) Natural exponent. exp(0) = 1
fabs(x) Float absolute value. fabs(-4.1) = 4.1
floor(x) Truncates to next lowest integer. floor(5.8) = 5
fmod(x,y) Float modulus. Returns the remainder in float. fmod(11.45,3) = 2.45, for
example (3x3+2.45 = 11.45)
log(x) Natural log. log(1) = 0
log10(x) Returns base 10 log. log10(10) = 1
M_PI A variable set to pi at 20 decimal places.
max(a,b) Returns maximum between a and b.
max(5,10) = 10
max3(a,b,c) Returns maximum between a, b, and c.
max3(5,2,4) = 5
min(a,b) Returns minimum between a and b.
min(5,10) = 5
min3(a,b,c) Returns minimum between a, b, and c.
min3(5,2,4) = 2
a%b Modulus. 27%20 = 7
pow(x,y) Returns x to the y power. pow(2,4) = 16
round(x) Rounds number off. Values below x.5 are rounded to x, values
equal to or above x.5 are rounded to x+1. round(4.3) = 4
sqrt(x) Square root. sqrt(9) = 3
Noise Functions These are ideal for WarpX and ColorX.
noise(seed) 1-dimensional cubic spline interpolation of noise.
noise2d(seed,seed) 2d noise.
noise3d(seed,seed,seed) 3d noise.
noise4d(seed,seed,seed,seed) 4d noise.
lnoise(seed) 1d linear interpolation of noise.
lnoise2d(seed,seed) 2d noise.
lnoise3d(seed,seed,seed) 3d noise.
lnoise4d(seed,seed,seed,seed) 4d noise.
fnoise(x,xScale) 1d fractal noise based on noise().
fnoise2d(x,y,xScale,yScale)
fnoise3d(x, y, z, xScale, yScale,
zScale)
turbulence(x, xScale) A cheaper, rougher version of fnoise().
turbulence2d(x, y, xScale, yScale) Continuous 2d noise.
turbulence3d(x, y, z, xScale,
yScale, zScale)
Continuous 3d noise.
rnd(seed) Hash-based pseudo-random numbers. Non-hash based RNG (like
rand() or drand48()) should not be used in Shake because they
cannot be reproduced from one machine to another. Also, even on
the same machine, repeated evaluations of the same node at the
same time would produce different results.
rnd1d(seed, seed) 1d random value.
rnd2d(seed,seed,seed) 2d random value.
rnd3d(seed,seed,seed,seed) 3d random value.
rnd4d(seed,seed,seed,
seed,seed)
4d random value.
Trig Functions (in radians) Definition
M_PI A variable set to pi at 20 decimal places.
acos(A) Arc cosine in radians.
asin(A) Arc sine.
atan(A) Arc tangent.
atan2(y,x) Returns the angle a (in radians) such that sin(a) = y and cos(a) = x.
cos(A) Cosine.
sin(A) Sine.
Trig Functions (in degrees) Definition
Welcome back to trigonometry! For those of you who may have
forgotten, here is a helpful chart for some commonly used
equations.
acosd(A) Arc cosine in degrees.
asind(A) Arc sine in degrees.
atand(A) Arc tangent in degrees.
atan2d(y,x) Returns the angle a (in degrees) such that sin(a) = y and cos(a) = x.
cosd(A) Cosine in degrees.
distance(x1,y1,x2,y2) Calculates the distance between two points, (x1,y1) and (x2, y2).
sind(A) Sine in degrees.
tand(A) Tangent in degrees.
Curve Functions
The curve functions with implicit time (Linear, CSpline, and so on.) all assume that time
is the first argument, so the following statements are identical:
LinearV(time,0,1@1,20@20)
Linear(0,1@1,20@20)
You can, however, adjust the time value explicitly with the V version of each curve type.
For more information on spline types, see “More About Splines” on page 316.
These are the cycle type codes:
• 0 = KeepValue
• 1 = KeepSlope
• 2 = RepeatValue
• 3 = MirrorValue
• 4 = OffsetValue
String Functions Definition
stringf( "xyz", ...) Since you basically can write books on this, here is an example.
Otherwise, it is recommended to purchase a book on C. There are
also several examples in Chapter 30, “Installing and Creating
Macros,” on page 905. This example takes the scriptName
parameter and uses the system function echo to print it:
extern "C" int system(const char*);
const char *z= stringf("echo %s",scriptName);
system(z);
printf( "xyz", ...)
strlen("mystring") Returns the length of the string.
strsub(
const char *string,
int offset,
int length) Extracts a string from another string.
Curve Functions Definition
biasedGain(x,gain,bias) Gives a smoothly-ramped interpolation between 0 and 1, similar to
Shake’s contrast curve. Gain increases the contrast, and bias offsets
the center.
Linear(cycle,
value@key1,
value@key2,
...)
Linear interpolation from value at keyframe 1 to value at keyframe
2, and so on.
LinearV(time_value, cycle,
value@key1,
value@key2,
...)
Linear interpolation from value at keyframe 1 to value at keyframe
2, and so on.
The following expressions provide functions for curve analysis.
CSpline(cycle,
value@key1,
value@key2,
...)
Cardinal-spline interpolation, also known as Catmull-Rom splines.
CSplineV(time_value, cycle,
value@key1,
value@key2,
...)
Cardinal-spline interpolation, also known as Catmull-Rom splines.
JSpline(cycle,
value@key1,
value@key2,
...)
Jeffress-spline interpolation.
JSplineV(time_value, cycle,
value@key1,
value@key2,
...)
Jeffress-spline interpolation.
NSpline(cycle,
value@key1,
value@key2,
...)
Natural-spline interpolation.
NSplineV(time_value, cycle,
value@key1, value@key2,...)
Natural-spline interpolation.
Hermite(cycle,
[value,tangent1,tangent2]@key1,
[value,tangent1,tangent2]@key2,
...)
Hermite-spline interpolation.
HermiteV(time_value, cycle,
[value,tangent1,tangent2]@key1,
[value,tangent1,tangent2]@key2,
...)
Hermite-spline interpolation.
Curve Functions Definition
Function Definition
getCurveMinMax(int minOrMax,
int begFrame, int endFrame,
float curveCurrentValue, const
char *curveName);
Float. Returns the min or max value of the specified curve (plug)
over the specified frame range. If minOrMax is set to 0, then it
returns the min value. If minOrMax is set to 1, then it returns the
max value.
getCurveAvg(int begFrame, int
endFrame, float
curveCurrentValue, const char
*curveName);
Float. Returns the average value of the specified curve over the
specified frame range.
Using Signal Generators Within Expressions
This section illustrates the use of the various signal generators that are available for
Shake expressions. They can be used to create either predictable or random patterns of
values, and mathematically customized to adjust their offset, frequency, and amplitude.
Signal Generators
The following noise and trig functions all generate changing values over time. To
animate a parameter using these functions, you supply the variable time for the
function to operate upon.
Note: You can copy and paste most of the expressions found in this section into your
own scripts for easy use.
cos(time) sin(time)
noise(time) lnoise(time)
fnoise(time,1) turbulence(time,1)
fnoise() and turbulence() have additional frequency factors to the noise.
Offsetting a Generator Function
To offset a function’s starting value, add a value to time.
fnoise(time,2) fnoise(time,5)
turbulence(time,2) turbulence(time,5)
cos(time) cos(time+10)
Changing the Frequency of a Generator Function
To change a function’s frequency, multiply or divide time by a value. The exceptions
are the noise functions fnoise() and turbulence()—both of which have frequency
controls of their own (values are not modified below 1, so you may still have to
modify time).
Frequency and Continuous Versus Discontinuous Noise
The functions noise, lnoise, fnoise, and turbulence are continuous noise generators,
meaning you can draw a continuous smooth curve between the values. The following
two expressions provide examples:
As a result, you can safely predict that noise(1.05) is near and probably between these
two values (in fact, it equals .5358).
The rnd() function creates discontinuous noise. The following two expressions provide
examples:
cos(time) cos(time/3)
noise(time) noise(time/2)
Function Result
noise(1) Is equal to .554
noise(1.1) Is equal to .5182
Function Result
rnd(1) Is equal to .4612
rnd(1.1) Is equal to .4536
You might have guessed that rnd(1.05) is between those, but it in fact equals .0174, not
.458. This is why it is called discontinuous noise. Examining the neighboring values
does not help you to arrive at a safe guess for the in-between values. For this reason,
frequency changes have no practical effect on the curve.
Setting Ranges for Expressions
The noise generators return values between 0 and 1. sin() and cos() return values
between -1 and 1. To adjust the range, use addition and multiplication. For example,
suppose you want to spit out values between 100 and 400. To make a noise generator
do that, subtract the low value from the high value. This is your multiplier. Then add the
lower value of your range. Thus, you have:
noise(time)*300+100
As cos and sin return values between -1 and 1, you have to offset the output by 1 and
multiply by half of the difference between the two:
(sin(time)+1)*150+100
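The remapping is ordinary arithmetic, so it can be checked outside Shake. A Python sketch (an illustration, not Shake code) of both range adjustments:

```python
import math

def remap_noise(n):
    # noise() returns [0, 1]; scale by (400 - 100) and offset by 100.
    return n * 300 + 100

def remap_cos(t):
    # cos() returns [-1, 1]; shift to [0, 2], then scale by half of 300.
    return (math.cos(t) + 1) * 150 + 100

print(remap_noise(0.0), remap_noise(1.0))  # -> 100.0 400.0
```

Either way, the extremes of the generator land exactly on 100 and 400.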
Modifying Noise
You can use other functions like ceil(), floor(), or round() to help you break a curve into
steps. Ceiling pushes a number to the next highest integer, floor drops it to the next
lowest, and round rounds off the value.
rnd(time)
noise(time) noise(time)*300+100
In this example, to break a noise() function into 5 steps between 0 and 1, multiply the
value by 6 (float values of 0 to 6), knock off the decimal places with a floor() function
(returning values of 0, 1, 2, 3, 4, 5), and then divide by 5, returning values of 0, .2, .4, .6,
.8, and 1.
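The stepping trick is easy to verify in isolation. A Python sketch (an illustration, not Shake code) of floor(noise(time)*6)/5:

```python
import math

def stepped(n):
    # Quantize a value in [0, 1) to the levels 0, .2, .4, .6, .8, 1.
    # (At exactly n = 1 the expression would give 1.2, but the noise
    # generators stay below 1 in practice.)
    return math.floor(n * 6) / 5

print(stepped(0.0), stepped(0.5), stepped(0.99))  # -> 0.0 0.6 1.0
```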
Another helpful expression is modulus, written as a%b. This divides a by b and returns
only the remainder. This is helpful to fit an infinite range of values into a repeating limit.
A good application of modulus is if you have an angle of any potential amount (for
example, 4599) but you need to fit it into a range of 0 to 360. You would use
angle%360, which equals 279.
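You can confirm the angle example directly with a one-line check (shown here in Python, not Shake code):

```python
# Fold an arbitrary angle into the 0-360 range with modulus.
angle = 4599
print(angle % 360)  # -> 279
```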
Script Manual
When Shake saves a script, it creates a file that essentially uses the C programming
language. This makes the product open and flexible, as you can add your own
functions, or use programming structures to procedurally perform what would
otherwise be tedious operations. You can also quickly make minor modifications that
might be cumbersome to do in the Shake interface (for example, changing a FileIn to
read BG2.iff instead of BG1.rla). This section of the guide explains the basic principles of
this scripting language, and how they can be manipulated to create your own macros.
The examples in “Attaching Parameter Widgets” on page 919, provide step-by-step user
interface scripting examples.
noise(time) floor(noise(time)*6)/5
time%10
See Tutorial 8, “Working With Macros,” in the Shake 4 Tutorials for information on
making macros interactively or in a script.
Scripting Controls
To generate a test script, go to the Tutorial_Misc/truck/ directory within the
Tutorial_Media directory. Enter the following command to create a script and save it as
start.shk:
shake truck.iff -outside sign_mask.iff -over bg.iff -savescript
start.shk
The truck is composited over the background, with the sign mask as a holdout mask,
and a script named start.shk is saved. The composite itself is unimportant, but the tree
structure is important. The following image shows how the script appears in the
interface.
To test the script result from the command line, type the following:
shake -script start.shk
The image of the truck is composited over the street image. The following is the code
for the script itself:
SetTimeRange("1");
SetFieldRendering(0);
SetFps(24);
SetMotionBlur(1, 1, 0);
SetQuality(1);
SetUseProxy("Base");
SetProxyFilter("default");
SetPixelScale(1, 1);
SetUseProxyOnMissing(1);
SetDefaultWidth(720);
SetDefaultHeight(486);
SetDefaultBytes(1);
SetDefaultAspect(1);
SetDefaultViewerAspect(1);
SetTimecodeMode("24 FPS");
SetDisplayThumbnails(1);
SetThumbSize(15);
SetThumbSizeRelative(0);
SetThumbAlphaBlend(1);
// Input nodes
bg = SFileIn("bg.iff", "Auto", 0, 0);
sign_mask = SFileIn("sign_mask.iff", "Auto", 0, 0);
truck = SFileIn("truck.iff", "Auto", 0, 0);
// Processing nodes
Outside1 = Outside(truck, sign_mask, 1);
Over1 = Over(Outside1, bg, 1, 0, 0);
The first section contains controls for the script. The controls are all optional or can be
overridden on the command line. The following sections discuss the body of the script,
the sections listed under the “Input nodes” and “Processing nodes” comments.
Variables and Data Types
Shake assigns types of data to a variable name. A variable works as a sort of shopping
cart to carry around your information to be used again somewhere else. The variables
in the following code (excerpted from above) are bold.
The comment lines (lines starting with //) are omitted:
bg = SFileIn("bg.iff", "Auto", 0, 0);
sign_mask = SFileIn("sign_mask.iff", "Auto", 0, 0);
truck = SFileIn("truck.iff", "Auto", 0, 0);
Outside1 = Outside(truck, sign_mask, 1);
Over1 = Over(Outside1, bg, 1, 0, 0);
The above code assigns three variables (bg, sign_mask, truck) to three SFileIn nodes.
These are connected to an Outside node and then to an Over node, which are also
assigned variable names, Outside1 and Over1.
A good reason to use a script is that you can quickly set the path for the images by
using copy and paste functions in a text editor.
Why “SFileIn” and Not “FileIn” Node?
The SFileIn node is an improvement on the older FileIn node. The interface button
FileIn is linked to SFileIn, but the older name is retained. The SFileIn node allows extra
subfunctions for timing to be attached.
Two Ways to Load a Script Into the Interface:
• Save the script and load it into the interface with the Terminal command shake
start.shk, or click the Load button in the interface.
• Copy the script as text from the HTML browser/text editor and paste it into the
Node View (press Command-V or Control-V). When you paste it into the interface, it
points out that the filepaths are local, and probably not correct in terms of the
location of the images and where the interface expects to find the images.
Therefore, browse to the directory that contains the images, then click OK.
Because the above script was created with a local filepath (in the
Tutorial_Media/truck directory), the images can only be found if the script is
run from the truck directory. For example, a local directory for the truck image might be:
/myMachine/documents/truck/truck.iff
In order to run the script from any location—so the images can be found regardless of
the location of the saved script—the absolute path of the images is required. The
following is an example of an absolute directory for the same image (truck):
/Server02/VolumeX/Scene12/truck/truck.iff
To reset the filepaths to an absolute path:
1 To determine the absolute path of the images, go into the directory into which you
copied the tutorial media (using the command line) and type the Present Working
Directory command:
pwd
2 Enter the filepath before the image names to make the path absolute. Shake does not
prompt you to find the absolute path each time you paste it into the interface.
Changes are in bold.
bg = SFileIn("/Server02/VolumeX/Scene12/truck/
bg.iff", "Auto", 0, 0);
sign_mask = SFileIn("/Server02/VolumeX/Scene12/truck/
sign_mask.iff", "Auto", 0, 0);
truck = SFileIn("/Server02/VolumeX/Scene12/truck/
truck.iff", "Auto", 0, 0);
Outside1 = Outside(truck, sign_mask, 1);
Over1 = Over(Outside1, bg, 1, 0, 0);
The left side is the variable name. You can change the variable name, but you must
change the names wherever they appear.
3 Change the variable names:
Gilligan = SFileIn("/Server02/VolumeX/Scene12/truck/
bg.iff", "Auto", 0, 0);
Skipper = SFileIn("/Server02/VolumeX/Scene12/truck/
sign_mask.iff", "Auto", 0, 0);
Lovey = SFileIn("/Server02/VolumeX/Scene12/truck/
truck.iff", "Auto", 0, 0);
Thurston = Outside(Lovey, Skipper, 1);
Ginger = Over(Thurston, Gilligan, 1, 0, 0);
Therefore, the left side of the above lines is the variable that you assign a value. The
right side is the value, usually coming from a function such as SFileIn, Outside, or Over.
The function is called by its name, followed by its arguments between parentheses. The
arguments, called parameters, are very specifically organized and always specify the
same thing. For example, the third parameter of Outside is 1—a numeric code to
determine whether you take the foreground or background resolution (see “Outside”
on page 466). The line ends with a semicolon.
4 Feed the modified script back into the Shake interface.
The following steps illustrate how variables can help you. Insert the Mult
color-correction node to change the truck color. The script format for Mult (in “Mult” on
page 644) looks like the following:
image Mult(
image In,
float red,
float green,
float blue,
float alpha,
float depth
);
To add the Mult node, assign a variable name (here, MaryAnn) and then the parameters.
Since Lovey is the variable name of the truck, it is fed in as the image. The numbers turn
the truck yellow. Since Thurston (the Outside node) previously loaded the Lovey node,
switch it to MaryAnn, the new Mult node.
Note: The premultiplied state of the truck image is ignored for this example.
5 Insert the Mult node into the script:
Gilligan = SFileIn("/Server02/VolumeX/Scene12/truck/bg.iff", "Auto", 0, 0);
Skipper = SFileIn("/Server02/VolumeX/Scene12/truck/sign_mask.iff", "Auto", 0, 0);
Lovey = SFileIn("/Server02/VolumeX/Scene12/truck/truck.iff", "Auto", 0, 0);
MaryAnn = Mult(Lovey, 1, 1, .2);
Thurston = Outside(MaryAnn, Skipper, 1);
Ginger = Over(Thurston, Gilligan, 1, 0, 0);
The result appears as follows in the interface:
Only four parameters are entered for the Mult node—the alpha and depth parameters
are omitted. Therefore, the alpha and depth parameters default to a value of 1. You can
also see that Mult looks for two types of data: an image (labeled In) and floats (labeled
red, green, blue, and so on). A float is any number that contains a decimal place, for
example, 1.2, 10.00, .00001, or 100,000.00. The following table lists the five types of data.
Data Type                 Description
image                     An image.
float                     A number with a decimal place, such as .001, 1.01, or 10,000.00.
int                       Integer. A number with no decimal place, such as 0, 1, or 10,000.
string                    “this is a string”
curve float or curve int  A special case of float or int that designates that the number
                          can be changed during script execution or when it is tuned in
                          the interface.
Shake is usually smart enough to convert an integer into a float, so when the following
is entered:
MaryAnn = Mult(Lovey, 1, 1, .2);
Shake does not freak out when you enter two integers (1, 1) for the red and green
multipliers. However, this is an infrequent case concerning the conversion of floats and
integers; you cannot put in a string or an image for those arguments. For example:
MaryAnn = Mult(Lovey, "1", Gilligan, .2);
does not work because you are drastically mixing your data types (plugging an image
into a float). There is one exception to this: entering 0 as an image input, which
indicates that you do not want to designate any image input.
So far, you have only assigned variables to image types. In this next example, create a
float variable so you can multiply the truck image by the same amount on the red,
green, and blue channels. Call this variable mulVal.
Note: For these examples, you must save the script and use Load or Reload Script—the
Copy/Paste does not work properly.
To create a float variable to multiply an image by equal amounts in the red,
green, and blue channels:
1 Create a script variable and plug it into the Mult node:
float mulVal = .6;
Gilligan = SFileIn("/Server02/VolumeX/Scene12/truck/bg.iff", "Auto", 0, 0);
Skipper = SFileIn("/Server02/VolumeX/Scene12/truck/sign_mask.iff", "Auto", 0, 0);
Lovey = SFileIn("/Server02/VolumeX/Scene12/truck/truck.iff", "Auto", 0, 0);
MaryAnn = Mult(Lovey, mulVal, mulVal, mulVal);
Thurston = Outside(MaryAnn, Skipper, 1);
Ginger = Over(Thurston, Gilligan, 1, 0, 0);
Mult now takes .6 for its red, green, and blue parameters. To change all three, modify
mulVal on the first line of the script and execute the script again. This is swell. However,
here is a problem. The variable mulVal is only applied when you load the script—Shake
does not retain it for later use. When the script is loaded (with Load Script, not Copy/
Paste) into the interface, the Mult parameters all read .6, not mulVal. There is no place
in the interface where you can find and modify the mulVal parameter and have Mult
pick up the change. If you are immediately executing the script, this is not a problem. If
you are loading the script and are going to interactively edit it, this is a problem. You
therefore must declare mulVal as a curve float, a special type of float that tells Shake to
wait until frame calculation to resolve the variable.
2 Convert the mulVal variable to a changeable curve type:
curve float mulVal = .6;
Gilligan = SFileIn("/Server02/VolumeX/Scene12/truck/bg.iff", "Auto", 0, 0);
Skipper = SFileIn("/Server02/VolumeX/Scene12/truck/sign_mask.iff", "Auto", 0, 0);
Lovey = SFileIn("/Server02/VolumeX/Scene12/truck/truck.iff", "Auto", 0, 0);
MaryAnn = Mult(Lovey, mulVal, mulVal, mulVal);
Thurston = Outside(MaryAnn, Skipper, 1);
Ginger = Over(Thurston, Gilligan, 1, 0, 0);
To Designate That a Function Has No Image Input
Place a 0 in the image position. This example has no image input for the background
image, the second argument:
MaryAnn = Over(Thurston, 0);
3 Load the script into the interface with Load Script.
4 Open the localParameters subtree in the Globals tab to reveal the mulVal slider.
In short, if you load a script to be modified in the interface, you probably want to
declare your variables as a curve type of data. If you are going to execute the script on
the command line and the value is not going to change (that is, it is not animated), you
can omit the curve data type. There are some examples in the following sections.
Functions
Shake is a collection of functions. Some functions modify images, such as Blur. Some
return a number, like distance() (calculates the distance between two points) or cosd() (a
cosine function in degrees, not radians). Other functions build the interface, such as
nuiToolBoxItem. The nuiToolBoxItem function loads a button into the interface and
attaches a function to the button. When you call a function, you assign its result to a variable.
The following example reads the angle parameter for the Rotate1 node, and returns the
cosine in degrees, assigning it to the variable named myCos:
curve float myCos = cosd(Rotate1.angle);
You can also use functions inside of other functions. In this example, the cosd function
is inside of a Rotate function:
Rotate1 = Rotate(FileIn1, cosd(45));
When you place a function inside of another, it is called a “nested function.” In both
cases, you call the function, and then enclose its parameters in parentheses. When you
assign a variable to a function, terminate the line with a semicolon, as shown in both of
the examples above. Everything in between the parentheses can be formatted
however you want, so the following two examples are identical:
Mult1 = Mult(FileIn1, 1,1,1);
and
Mult1 = Mult(
FileIn1,
1,
1,
1
);
Script Comments
To temporarily comment out lines in a script, use the following symbols:
# This line is commented out
// This line is also commented out
This is not commented out //But this is
/*
All of
these lines
are
commented out
*/
Conditional Statements
As in C, you can use conditional statements in your scripts. These are particularly
useful in macros, where a conditional parameter is input by the user. Keep in mind that
the conditional statements require you to work in the script and cannot be built with
interface tools.
Function Formats
So, where are all of these functions and how do you find their formats? Typically, they
are organized by the type of data they manipulate.
Here is a handy way to get a function format: Create the node(s) in the interface,
select and copy the node(s) (press Command-C or Control-C), and paste the nodes
into a text editor.
For more information:
• For image functions, see their relative chapters. For example, for information on the
Rand function, see “Rand” on page 602.
• For mathematical functions, see “Expressions” on page 939.
• For Shake settings, see “Setting Preferences and Customizing Shake” on page 355.
• For interface building functions, see “Setting Preferences and Customizing Shake”
on page 355.
• Examples abound in /include/nreal.h and /include/nrui.h, as
well as in Chapter 32, “The Cookbook,” on page 963.
Conditional Expression
To simply switch a parameter, use the in-parameter conditional expression
expr1?expr2:expr3, which reads as, “if expr1 is true, use expr2; otherwise, use expr3.” For
example, time>10?0:1 reads as, “if time is greater than 10, then set the value to 0;
otherwise, set it to 1.”
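Applied to a node parameter, such an expression might look like the following sketch (the node names FileIn1 and Mult1 are hypothetical, reusing the Mult syntax from the previous chapter):

```
// Hypothetical: the red multiplier is 1 up to frame 10 and 0 after,
// so the red channel drops out partway through the clip.
Mult1 = Mult(FileIn1, time>10?0:1, 1, 1, 1);
```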
In the interface, you can also use the Select node to switch between any number of
input nodes. (For more information on the Select node, see “Select” on page 471.) This
strategy allows you to stay within the interface without resorting to the script.
However, the difference is that with the conditional statements listed below, Shake
builds only the part of the script that it needs, whereas Select has the overhead of all of
its input nodes.
Finally, ++i or --i is supported for increments. i++ is as well, but returns a warning
message.
If/Else
This is used to express decisions. If the test expression is true, then the first statement is
executed. If false (and an “else” part exists), then the second statement is executed. If
“else” is left off, Shake does nothing and moves on.
if (expression evaluates to non-zero) {
do_this
} else if (another expression evaluates to non-zero) {
do_this
} else if (yet another expression evaluates to non-zero) {
do_this
} else {
do_this
}
or just
if (expression evaluates to non-zero) {
do_this
} else {
do_this
}
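For example, a macro might branch on a user parameter with an if/else. The following is a hypothetical sketch in Shake's C-like macro syntax (the macro name, its parameters, and the use of Blur are illustrative only, not from a shipping macro):

```
// Hypothetical macro: blurs the input only when the user
// sets useBlur to a non-zero value.
image SoftenIf(
    image In,
    int useBlur,
    float amount
)
{
    if (useBlur) {
        return Blur(In, amount, amount);
    } else {
        return In;
    }
}
```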
For
Normally, the three expressions of the “for” loop are an initialization (a=1), a relational
test (a<10), and an increment (a++): for (a=1; a<10; a++). So “a” is set to 1 and tested to
see if it is less than 10; if that is true, the statement is performed and “a” is incremented
by 1. The test is checked again, and this cycle continues until “a” is found to be not less
than 10 (untrue).
for (initialize; test; increment)
{
do_this
}
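For example, a generic loop in the same C-like syntax (the variable names are illustrative):

```
int a;
float total = 0;
// Sums the numbers 1 through 9 into total.
for (a = 1; a < 10; a++)
{
    total = total + a;
}
// total ends up as 45
```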
While
If the given expression is true or non-zero, the following statements are performed, and
the expression is reevaluated. The cycle continues until the expression becomes false.
while (this_expression_is_true)
{
do_this
}
If there is no initialization or reinitialization, “while” often makes more sense than “for.”
Do/While
This variation of “while” is different in that it tests at the bottom of the loop, so the
statement body is done at least once. In the “while” above, the test may be false the
first time, and so nothing is done. Note that unlike all of the other statements, this one
does need a semicolon at the end.
do {
do_this
} while (this_expression_is_true);
32 The Cookbook
“The Cookbook” contains tips and techniques for Shake
that don’t fit neatly into other categories.
Cookbook Summary
The Cookbook contains a wide variety of techniques that have been accumulated over
the years, and is provided to give you some shortcuts and ideas for different
approaches to different kinds of shots. None of the methods covered in this chapter are
intended to be the only way to perform these tasks—Shake’s very nature makes it
possible to accomplish the same things in many different ways.
Coloring Tips
The following topics cover different ways for performing color correction in Shake.
Tinting
The following five techniques demonstrate the versatility of the color-correction nodes.
Each one applies a tint to an image, meaning the midtones are pushed toward a
certain color while leaving the blacks and whites alone. None of these techniques has
any particular superiority over the others—they just illustrate several ways to do the
same thing.
The following images are courtesy of Skippingstone from their short film Doppelganger.
• Brightness + Mult: These nodes concatenate in the following node tree. This setup
does not work well if you use a pure color in a single Mult node (in this case, pure
blue with a value of 0,0,1) because the zeroes drop the midtones out completely on
the red and green channels. The Brightness node is set to approximately 3, helping to
maintain the red and green channels when the blue multiplier brings them back
down (.3, .3, .8 in Mult1).
• Monochrome + Brightness + Mult: This is identical, except you get a purer blue color
since you have made a monochrome image before applying the color correction.
Note that the Monochrome node does not concatenate with the Brightness node.
• Lookup: This example uses a Lookup curve, setting the midpoints of the curve to .3,
.32, .8.
The curve looks like the following (the curves are Linear to mimic a Tint function from
a different package):
• ColorMatch: This is similar in theory to the Lookup node, as it allows you to push the
lows, mids, and highs. However, the internal math helps reduce solarization (hills and
peaks in the curves), so you maintain a little bit more of the input color.
You can get interesting adjustments of your values if you adjust the source points on
the ColorMatch. (For example, hold down O and drag left on the highSource color
control.)
• ColorCorrect: This is an Add on the Mid areas using -.2, -2, .5.
• Using Mix: You may end up with several nodes to achieve a particular color
correction, which can be awkward to tune. A convenient way to quickly adjust the
result is to mix it back into the input image with a Layer–Mix node. Naturally, it is
always faster to process by adjusting the original color nodes, but using a Mix may
make it easier to keep a handle on things.
Filtering Tips
This section illustrates creative uses of Shake’s filtering nodes.
Volumetric Lighting
This script can be found in doc/html/cook/scripts/volumetric.shk. This simple hack gives
you fake volumetric lighting by using Filter–RBlur. One of the key principles is that RBlur
is dog-slow, so it is better, if you can, to apply a radial blur on a low-resolution element
and then scale it up. To get the volume effect, subtract one RBlur from another. You
should also drop your quality down (to .25, for example) when testing.
In this tree, RBlur3 is generating the main cone of light. Move2D1 is used to scale it up,
saving you a little rendering time. CornerPin2 is used to generate the “shadow” element
and place it on the ground in perspective. RBlur1 is at full resolution to maintain the
crispness of the rays.
Keying Tips
This section covers keying techniques.
Keying Clouds
This script can be found in doc/html/cook/scripts/clouds.shk. The images used are the
moon.iff and sky.iff files found in the Tutorial_Media directory.
This is one potential technique for keying clouds, but may also be useful for flames. It
discusses several approaches. This script puts the moon behind the clouds:
You might think a color-based key would work, but in this image nearly everything is a
shade of blue, so that won’t work the way you want.
Another tactic would be to use a LumaKey node. In this attempt, the color and position
of the moon are adjusted using a Compress node, an Other–AddShadow node (to add
the white glow), and a Move2D node. A key is
pulled on the clouds with LumaKey, and a Layer–Atop is used to layer the moon only
where there is alpha in the background. This displays two typical problems with this
sort of keying—there’s a black edge, and there is not complete opacity in the thick part
of the clouds. This is not what you want.
In this next attempt, there are three main branches. The first, identical to the second
attempt, manipulates the moon. The second branch, terminating in a KeyMix node,
works on the RGB of the clouds. The KeyMix mixes a darkened cloud image with a
brighter cloud image through the RGrad node, giving it the “glow” through the clouds.
The third branch works on the key of the clouds. This is divided into two sub-branches,
one of which pulls a soft key on the clouds, and the second of which pulls a hard key to
act as a garbage mask.
A luminance key is used here. The blue channel has less contrast than the red channel,
so first insert a Color–Monochrome node, then boost the rWeight up and the g- and
bWeight down before you key.
A Color–LookupHLS node manipulates the luminance, then switches that into the alpha
with Color–Reorder (a macro begging to happen) to key the deep part of the clouds in
the lower-right corner. The curve looks like the following:
The LumaKey gets the edges of the cloud, and is enhanced by a DilateErode, and then is
subtracted from the soft key.
You still get black edges, so sprinkle Filter–DilateErode nodes liberally. For the nodes
attached to ISub, the first chews into the edge, and the second DilateErode softens it by
activating the soften parameter.
As a final touch, the x/yCenter of RGrad is linked to the Move2D x/yPan, adding an
offset for the center of the moon, Move2D1.xPan+155, Move2D1.yPan+160. You can
modify the position of the moon and the glow on the clouds follows.
Vignette
This script can be found in doc/html/cook/scripts/vignette.shk.
This example keys out the night sky, turns the scene to daylight, and adds a vignette to
the borders.
This tree involves a lot of masking. Remember, rotoscoping is your friend. (Just not a
very fun friend.)
The first branch, down to KeyMix1, pulls a key on the sky. The first Keylight pulls a hard
key to be fed into the second Keylight. This is the core key for the bridge. This is
keymixed with a Primatte-based key for the blue area through RotoShape1.
Once the key is pulled, a Monochrome is applied and composited over a Ramp. This is
then tinted with ColorMatch. A good trick is to drag the color controls while holding T
down—this is for temperature and can be used to warm up or cool down a color.
The image is then defocused with a mask. This result is masked by the node Inside1
with a square mask (Blur2) to get the black frame.
Layering Tips
The following examples illustrate tips for layering.
Bleeding Background Color Into the Foreground
This script can be found in doc/html/cook/scripts/edgelight.shk.
This script, which has an exaggerated effect for purposes of illustration, helps blend in
some of the background color onto the foreground material. Note this is one
variation—there are several ways to do this.
Normal composite; composite with exaggerated rim
• Keylight: Extracts an unpremultiplied plate.
• Reorder: Places the alpha into the RGB.
• Bytes: Boosts it up to 16 bits—you are sure to get banding on the Blur+Emboss in
8 bits.
• Emboss: Extracts a sense of lighting direction. The elevation is set to 0.
• Blur2: If you do not blur the background, it makes her look transparent.
• IMult1: Blends the color into the “lit” areas. Note it is assumed that Blur2 has an alpha
of 1. If not, you must insert a Color–SetAlpha.
• Screen: You can also use an IAdd.
Background Flare
This script can be found in doc/html/cook/scripts/backlight2.shk.
This script translates D.W. Kim’s fine backlighting page from http://highend3d.com into
Shake terms. There are other ways to do this.
The script begins with a Color–Reorder that puts the alpha into the RGB channels. It
then Filter–Blurs it, inverts that, removes the alpha channel with a Color–SetAlpha set to
0, and then is added back onto the plate. The Color–MDiv is used because the IAdd
disrupts the premultiplication status. The Color–Gamma can be used to tune the
intensity of the flare. Finally, preMultiply is activated in the Layer–Over.
Transform Tips
This section covers advanced techniques for transforming images.
Spiral Down
This animates something in a spiral pattern, running in ever-smaller (or larger)
concentric rings. First, right-click in the parameters area, choose Create Local Variable
from the shortcut menu, then name the local variable “mul.”
shortcut menu, then name the local variable “mul.”
You control the speed toward the center and the direction by animating the mul value.
To slow down the degrees in each frame, multiply time by a second localParameter, for
example, freq:
sin(time*freq)*mul
If you animate freq from near 0 to 1, it looks something like the following:
The path does not show up by default, as there are no curves.
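Assuming the mul and freq local variables described above, the pair of pan expressions might look like the following sketch (pairing sine on xPan with cosine on yPan is what traces the rings; any offset toward the image center is omitted here):

```
// Hypothetical spiral expressions for the pan parameters.
// Animating mul toward 0 pulls the motion into the center;
// animating freq changes how quickly the rings are traced.
xPan: sin(time*freq)*mul
yPan: cos(time*freq)*mul
```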
To burn the path in:
1 Set timeRange in the Globals tab (1-50, for example).
2 Right-click over the xPan expression, then choose Save Expression from the shortcut
menu.
3 In the lower-right corner of the Browser is a setting for Auto or Raw. Set it to Raw.
4 Enter a name for your curve, for example, “curveX.txt.”
5 Do the same thing for yPan, saving it as “curveY.txt.”
6 Create a second Pan node.
7 Right-click, then select Load Expression on the xPan parameter.
8 Set your format in the Browser as Raw.
9 Do the same for yPan.
Auto Orient
This script can be found in doc/html/cook/scripts/autoorient.shk.
This script demonstrates how to set up a transform so that an element automatically
rotates according to the tangent of the move. Although you can create this effect with
one node, it is better to use two so that you can continue to modify the position
without accidentally eliminating the expression to do the rotation. If you want to
animate or modify the rotation, insert a second Rotate node—the transforms
concatenate.
The rotation is determined by obtaining the position just before and just after the
current frame. This is done by using the @@ time notation: (xPan@@time-.5) returns the
xPan position half a frame earlier. These coordinates are then fed into the atan2d
function, which returns the angle between two points.
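Putting the pieces together, the angle expression might look like the following sketch (the atan2d argument order and the exact parenthesization of the @@ notation are assumptions):

```
// Hypothetical auto-orient expression for the Rotate angle parameter:
// the direction of travel is measured from the pan position half a
// frame before the current frame to half a frame after it.
atan2d( yPan@@(time+.5) - yPan@@(time-.5),
        xPan@@(time+.5) - xPan@@(time-.5) )
```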
Creating Depth With Fog
The uboat.iff image can be found in the Tutorial_Media directory. The background is a
simple Ramp. The trick is to apply depth-cueing to the U-boat:
The first composite, without any fog, is not so stellar:
The second approach is to use Key–DepthKey (distance of 45) to pull a transparency
key. Not so good, as you get artifacts along the edges:
This more complex approach uses the DepthKey as a mask for a color correction, in this
case, Compress, which has identical hi and lo colors. A KeyMix is used to get the
concatenation with Mult and Compress. The Mult is used to tint the boat green;
Compress gives the feeling of depth:
Text Treatments
The following series of scripts plays with text treatments, and is stored as doc/html/
cook/scripts/car_ad_text.shk, numbers 1 through 8.
Script 1
• Blur1: Only yPixels is set.
• Mult1: A blue color is applied.
• Blur2: A small x/yBlur to help the text glow white.
• As with every script here, Transform–CameraShake is your friend.
Script 2
This script depends on Ramp2D, a macro stored in doc/html/cook/macros. The script will
not load without this macro loaded into your $HOME/nreal/include/startup directory. It
creates an animated mask traveling across the text, driving both an IDilateErode and an
IBlur node. The IDilateErode has a value of -4 for X and Y. The Solarize is used to boost up
the middle of the Ramp2D and to drop the outer ends to black. As in Script 1, only the
yPixels blur is set in IBlur1.
Here is the Ramp2D:
Script 3
This uses the noise() function to randomize the xCenter of the RGrads. The text is then
held Inside of these two animated shapes, and a process similar to Script 1 is applied:
• RGrad1 xCenter expression: noise(time/4)*Text1.width
• RGrad2 xCenter expression: noise(time/4+100)*Text1.width
By adding 100 to the noise function’s seed, you do not have an overlap in the
animation. You can also change time/4 to increase or decrease the frequency, that is,
time/10 or time/2. See “Expressions” on page 939.
Script 4
This script uses a heavily motion-blurred CameraShake (frequency is set to 6, amplitude
is set to 50 and 100 for x and y). This then drives an expanding IDilateErode and IBlur to
create an interesting interaction with the text.
Script 5
The same CameraShake from Script 4 is used to feed an IDisplace. The Blur helps soften
the warping image.
Script 6
This is the same as Script 5, except the Screen is replaced with the Relief macro, found in
doc/html/cook/macros. The script does not open without the Relief macro. The affect
parameter of Relief is set to image2.
Script 7
This is the same as Script 6, but the result is fed back over the motion-blurred text.
Script 8
This script is driven off of the position of the RGrad. The center of the RBlur is set to the
center of the RGrad, and the right parameter of SetDOD1 is set to RGrad1.xCenter+10,
which unmasks the text as the RGrad moves to the right.
Installing and Using Cookbook Macros
All Shake macros can be found installed on your hard drive in the Shake directory,
inside the doc/html/cook/macros directory. These macros can also be found in the
Documentation/Cookbook Extras/macros folder of the installation disc. Each macro
consists of a .h file containing the actual macro, an optional user interface (UI) file, and
an even more optional icon (.nri) file. To be installed, each of these files must be copied
to the appropriate location on your hard drive for Shake to see them.
Note: Not every macro will have .h, UI, and .nri files. Some macros only have a .h file,
while most usually have both .h and UI files.
To install a macro:
1 If there isn’t already a $HOME/nreal/include/startup directory on your hard drive, create
this directory structure.
If necessary, create additional $HOME/nreal/include/startup/ui and $HOME/nreal/icon
directories as well.
2 Place the .h file of the macro you want to install into the $HOME/nreal/include/startup
directory.
3 Place the files containing a UI into the $HOME/nreal/include/startup/ui directory.
4 If the macro has one or more icon files (which end with .nri), place them in the
$HOME/nreal/icon directory.
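Sketched as shell commands, the steps above look like the following (“MyMacro” is a hypothetical macro name; only the directory creation is meant to be run verbatim):

```shell
# Create the directories Shake searches for macros and icons
# (mkdir -p is safe to run even if they already exist).
mkdir -p "$HOME/nreal/include/startup/ui"
mkdir -p "$HOME/nreal/icon"

# Then copy each macro's files into place, for example:
# cp MyMacro.h    "$HOME/nreal/include/startup/"
# cp MyMacroUI.h  "$HOME/nreal/include/startup/ui/"
# cp MyMacro.nri  "$HOME/nreal/icon/"
```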
Command-Line Macros
The following macros are designed to make some quick fixes. They are available for use
from the command line.
FrameFill Macro
This is used in an emergency to fill in a missing frame when you have to go to film in,
say, two minutes. It takes the frames next to the missing frame and averages them
together.
For example:
shake -framefill bus2.#.jpg -t 41, 45
UnPin Macro
This is used to extract a texture map from an image. List out the four corner points
(lower left first, counterclockwise). You can also set the antialiasing. This is a macro of
the CornerPin node with inverseTransform activated. You usually use two Terminals to
do this, one to test your coordinates, the second to test the command. You can get the
coordinates by scrubbing the image—the coordinates appear in the title bar.
For example:
shake bus2.0040.jpg -unpin 91 170 417 154 418 2 42 94 274
Image Macros
The following macros add Shake-generated image nodes to the Image tab.
Flock Macro
The following bird clip can be found in doc/html/cook/macros/Flock/bird. This takes a
cycling clip and propagates copies of it offset in time, position, and scaling. There is a
clip of birds provided as an example that you can use, royalty free.
You can use the box to roughly position the birds. The birds are also positioned with
two extra movements, a gentle cosine function and some random noise. The freqX and
freqY parameters control the frequency of the cosine movement (an up and down
wave). The vibrationX and vibrationY parameters, along with the vibFreqX and vibFreqY
parameters, control the random movement.
Manga Macro
This is an example of using NGLRender.
The second image has a Twirl applied with a very low antiAliasing value. Many kudos to
Tom Tsuchiya of VPJ in Tokyo for information on Concentration Lines (“Syuuchyuu-sen”).
Rain Macro
This macro can be used to generate rain to throw into a background. The rain is divided
into three sheets, fg, mid, and bg. The lighting controls affect the height of the sheets.
By using a low value, you can get a fake feeling of depth. Sort of.
Ramp2D Macro
This uses the NGLRender drawing routines to draw a polygon on the screen to give you
a ramp between any two points. The wedgeSize parameter should be brought down if
you start to see artifacts along the edges.
RandomLetter Macro
This generates a random letter up until the staticFrame number, at which point it
becomes a letter of your choosing.
Slate Macro
This generates a slate giving information on the show, animator, frame range, and so
on. Although you can use it to generate a frame, typically you attach it to the end of
your script before the FileOut. If you create the FileOut first, you can link the Slate to
print the FileOut file. You must precede it with a : (colon), for example:
:MyFileOut.imageName
The slate appears up to the markFrame parameter, which is 0 by default. It also loads
the script name and the frame range into the slate, adds the current date, and gives
you the option to stamp the current frame onto the output images.
Color Macros
The following macros give you additional ways to manipulate color in your scripts.
AEPreMult Macro
This macro is intended to be used on an image that has a solid non-black background
color which is still considered “premultiplied.” By applying this function, you turn the
background color to black. This may not work with all images.
ColorGrade Macro
This macro allows you to pick three color levels from a source image (typically shadows,
midtones, and highlights) and match them to a target image. This is similar to
ColorMatch, except it does not have the special math to protect it from solarizing. As a
consequence, it is more accurate.
In this example by Richard Liukis for his short film, Taste It All, the plates were scanned
with an unfortunately strong green cast. The shots were telecined down to video, color
corrected with DaVinci, and edited. This same color correction needed to be applied to
the film plates, so the video version and the logarithmic plates were both read in to
Shake. A Color–LogLin was applied to the 2K plates, followed by a ColorGrade. You can
see the green is even more pronounced in the default LogLin. The ColorGrade was used
to match to the telecined version and then a second LogLin was applied to return it to
log space.
Input log 2K plate; linearized 2K plate; telecined reference footage; color-graded plate
This example has a slightly high saturation, a slight blue cast, and punchier whites (but
then again, 30 seconds were spent on it). Note that because the tree is made of three
concatenating color corrections, it was not necessary to convert up to float bit depth
before the LogLin.
Deflicker Macro
This macro is helpful for reducing the flicker on an image. The macro takes two
inputs: The first input is your reference frame, which should be a single frame from the
clip you want to affect; the second input is the sequence you want to remove flickering
from (a bluescreen image in this example). To use the macro, place the crop box on a
representative area whose content does not change—over a portion of sky, for
example, rather than on the moving traffic below.
The first frame is usually a still reference frame. The second input is the flickering
sequence. Position the box while looking at Input1 (SingleFrame in this example).
Temp Macro
This macro slides the midtones to warmer or cooler colors. Color-temperature-cool, not
Fonzie-cool.
Original Warmer Cooler
Relief Macro
In the following, Example 1 has affect set to image1; the other two are set to image2.
Key Macros
These macros help you further manipulate keyed images.
AlphaClamp Macro
This macro clamps the alpha channel to 0 and 1. This is helpful for float images and
when pulling LumaKeys, as you can arrive at negative values that can have unwanted
effects in your RGB channels. This macro is unnecessary for 8- or 16-bit images.
Example 1: affect = image1
Example 2: affect = image2
Example 3: Relief + IDisplace
DeSpill Macro
The HueCurves node has a problem with processing float values. This macro mimics
Keylight’s spill suppression, allowing you to pick a spill color.
KeyChew Macro
This is intended to give a more natural chewing or expansion of the matte edge than
the result from DilateErode. This macro works only on the alpha channel. It also
eliminates any mid-range alpha areas (reflections and shadows).
Note: This clamps your alpha channel to between 0 and 1, so be careful with float
images. It maintains your float RGB channels properly.
Transform Macros
The following macros give you additional options when transforming images.
AutoFit Macro
This macro resizes an element when you know only one dimension of the output and
have to figure out the second one. You can use it on the command line. The second
parameter determines what the first parameter specifies: provide w or 0 for width, or
h or 1 for height:
shake woman.sgi -autofit 2048 w
Chewing in, DilateErode / Chewing in, KeyChew
Expanding out, DilateErode / Expanding out, KeyChew
This resizes the woman.sgi image from the Tutorial_Media directory to 2048 x 1383,
maintaining its aspect ratio.
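The proportional resize the macro performs is simple arithmetic. As a sketch of that math in plain shell (using a hypothetical source size of 1481 x 1000, not necessarily woman.sgi's actual dimensions), the unknown dimension is the known one scaled by the source aspect ratio:

```shell
# Hypothetical source size -- not necessarily woman.sgi's actual dimensions.
src_w=1481; src_h=1000; target_w=2048

# Scale the height by the same factor as the width, rounding to whole pixels.
new_h=$(awk -v w="$src_w" -v h="$src_h" -v tw="$target_w" \
  'BEGIN { printf "%.0f", h * tw / w }')

echo "${target_w} x ${new_h}"   # prints: 2048 x 1383
```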
PreTrack Macro
This macro helps when tracking noisy film plates. It drops the blue channel out, which
tends to have the thickest grain, and also applies a slight blur to the footage. Once the
track has been done, disable or remove the PreTrack node.
Note: The Tracker nodes have the ability to add blur while tracking.
RotateFit Macro
This macro resizes the frame to fit the new boundaries of the image. This version uses
math to figure out the boundary, but you can also put the variables dod[0] - dod[3] into
a Crop node.
Warping With the SpeedBump Macro
This macro creates a nifty bump with a shadow on your title.
Utility Macros
The following macros let you address Maya's Z-depth output and float elements used
with the Screen node, and let you manipulate the DOD.
MayaZ Depth Macro
The MayaZ macro converts the -1/z values that Maya outputs for the Z channel.
ScreenFloat Macro
This macro lets you use the Screen function with float elements. You have to set the
high value. If you use the normal float, you get negative artifacts, both aesthetically
and numerically.
CopyDOD Macro
This macro copies, adds, or intersects the DOD between two images, and lets you set
the background color.
Candy Macro
With this macro, the drop shadow appears only on the background image's alpha
plane. It just seemed like a good idea at the time. If you do not want that behavior,
disable shadow generation with the useShadow parameter and use the normal
AddShadow or DropShadow node.
MakeNodeIcon Macro
This macro is used to make the icons for the function tabs. Typically, you insert an
image that is 700 x 300 pixels and the macro fits everything inside. Have fun.
AltIcon Macro
This macro is used to make the alternative icons. You insert a roughly 250 x 250 image
into the macro. It requires the Relief macro to be installed. It also calls on the font Arial.
If you do not have this font, search for the word in the macro file and substitute an
appropriate font.
VLUTButton Macro
This is the macro used to make the VLUT buttons. It requires the ViewerButton.h,
WallPaper.h, and RoundButton.h files to be installed. None of these need the UI files,
just the .h startup files. Additionally, Roundbutton.h makes a call to the
round_button.iff file, also included in the doc/html/cook/macros/VLUTbutton directory.
Place the image somewhere and point the macro file to that new location inside of
the Roundbutton macro.
Finally, it calls on the Arial font. If you do not have this font, search for the word and
substitute an appropriate font.
The default arguments are:
text = "vlut {time}"
and
focus = 0 (off)
RadioButton Macro
This macro is used to create those swell radio buttons. You typically use this only in
command-line mode, as it does some automatic output file naming for you. The first
step is to create and specify a directory where you want to place the icons. This is
typically $HOME/nreal/icons/ux/radio, as they tend to pile up, but you can place them
anywhere.
Open the radiobutton.h file and look for the filePath declaration, line 6 below:
image RadioButton(
const char *text="linear",
// lengths are 74, 37, 53
int length = 74,
string fileName=text,
string filePath="$HOME/nreal/icons/ux/radio/",
int branch=time,
...
Make sure that directory exists. The standard image lengths are 37, 53, and 74 pixels
long. The shortest you can practically do is 19.
Finally, it calls on the font Arial. If you do not have this font, search for the word and
substitute an appropriate font.
The parameters are:
• “text”: You can type it, or if you want more than one word, enclose it in “quotation
marks.”
• length: Output pixel width. Standard lengths are 37, 53, and 74 pixels.
• fileName: "text" by default. The macro appends .on.nri, .on.focus.nri, .off.nri, and
.off.focus.nri for frames 1, 2, 3, and 4.
• filePath: Redirect the output directory without editing the file.
• xScale: Scale the text on the X axis if you have to squeeze the letters a bit. The
default is 1.
• zoom: Creates a button 19 pixels high by default, but you can scale it up. The default
value is .25.
• branch: Specifies the state you want for the macro. Normally you do not have to touch
this; you just change the frame number, to which this parameter is set.
• 1 = on
• 2 = on + focus
• 3 = off
• 4 = off + focus
So, an example:
shake -radiob "Not A Dufus" 53 NotADufus -t 1-4 -v
This creates four files, NotADufus.on.nri, NotADufus.on.focus.nri, NotADufus.off.nri, and
NotADufus.off.focus.nri, that are all 53 pixels wide and say “Not A Dufus” with different
states of being illuminated and focused.
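The naming scheme can be sketched in plain shell (the macro handles this internally; this is only an illustration of the suffixes it appends to the fileName argument):

```shell
# Print the four state filenames for a button named "NotADufus".
name=NotADufus
for suffix in on.nri on.focus.nri off.nri off.focus.nri; do
  echo "${name}.${suffix}"
done
```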
Wallpaper Macro
This helps with button icons, but it is also an interesting way to quickly generate
animated backgrounds. It takes one vertical line from the input image and copies it
across the image. By animating the line, you get a continuous-noise generation (of sorts).
Wedge Macro
This command helps pull an exposure wedge on logarithmic files. You pick an initial
exposure setting, and then how far you want the wedging bracket to step (22 points,
45 points, 90 points, and so on). It then goes through 48 steps of color, brightness, and
contrast. You can do this on the command line:
shake mycineonframe.cin -wedge -t 1-48 -fo mywedge.#.cin
or
shake mycineonframe.cin -wedge -10 15 -9 -t 1-48 -fo mywedge.#.cin
Using Environment Variables for Projects
You can set up projects using environment variables to better manage your different
shots. Environment variables point Shake to the right directories without too much
difficulty even if you move your project to different drives or machines. What is an
environment variable? It is a word that is known by the computer to have a certain
value. For example, the environment variable $HOME (environment variables are
recognized by the $ in front of the word) is /Users/mylogin on Mac OS X. For Linux, it is
typically /usr/people/mylogin. In both Mac OS X and Linux, you have variables such as
$TEMP and $TMP to point to directories where temporary files are stored. Software can
simply be coded to dump temporary data into $TEMP, rather than have to find a
specific directory.
You can use these in Shake by setting a variable for your project and its directory
location. For example, you have a project on //MyMachine/BigDrive/MonsterFilm/Shot8. If
you set a variable, for example, $myproj, to point to that directory, Shake can always open
to that directory. If you later move the project to //MyBackUpMachine/OtherBigDrive/
MonsterFilm/Shot8, you do not have to go into each Shake script and change your FileIn/
Out paths—just change the environment variable before you run Shake.
To set an environment variable in the Terminal, open either your .tcshrc, your .aliases, or
another file that is read when you open a shell. Enter a line similar to the following:
setenv myproj /Documents/shot1
Once you have saved the file, type:
source .tcshrc
You have now set your environment variable. All shell windows you have already
opened, and any open applications, must be relaunched to read the new variable.
For variables you have set using the environment.plist file, you have to log out and
log in again.
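The setenv syntax above is specific to tcsh. If your login shell is bash or zsh instead, the equivalent line (in ~/.bash_profile or ~/.zshrc, for example) uses export; a quick sketch:

```shell
# bash/zsh equivalent of: setenv myproj /Documents/shot1
export myproj=/Documents/shot1

# Exported variables are visible to child processes such as Shake:
sh -c 'echo "$myproj"'   # prints: /Documents/shot1
```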
For the Shake portion:
1 Go to your $HOME/nreal/include/startup/ui directory and create a text file called paths.h.
Enter the following lines. Note the / at the end of the second and third entries:
nuiFileBrowserAddFavorite("$myproj");
gui.fileBrowser.lastImageDir = "$myproj/";
gui.fileBrowser.lastScriptDir = "$myproj/";
To Test Your Environment Variable
There is a simple way to test if your environment variable exists. In a Terminal, type
echo $myproj
and the proper value should be returned.
2 Create a directory that $myproj points to, that is, if you set it to /Documents/shot1, then
create /Documents/shot1.
3 When you relaunch Shake, it should launch to $myproj. In the Directories pull-down
menu, you also see $myproj listed.
4 When you read an image from this location, Shake keeps the environment variable
($myproj) in the path. Therefore, whenever you move the entire project, reset the
environment variable to point to the new path.
5 If you batch-render your project, the background machines must also understand the
environment variable.
You can take further advantage of environment variables and projects by adding your
own startup directory into the project area. Specify a shot-specific cache directory,
assuming you have disk space to burn. This is only useful if you are working on several
shots at once, as it keeps cached files around on a per-shot basis. Be extremely careful
with this, however, because you can end up with gigs of data that you do not need if
you do not clean up after yourself. Kind of like real life.
To set per-project settings for Shake:
1 As an example, in your project directory, create startup/ui directories:
/usr/shot1/startup/ui
2 In your startup directory, place a file to relocate your cache. Create a text file called
cache.h and add these lines, obviously changing MyShotName:
diskCache.cacheLocation = "/var/tmp/Shake/MyShotName";
diskCache.cacheSize = 500;
The second line indicates the size in MB of your cache. Set it to something appropriate
according to how much disk space you have to spare. Shake automatically creates that
directory the first time it needs it. Remember to remove this cache directory when
cleaning up your project.
3 To have per-shot macros or settings like Formats, add them into the startup and ui
directories.
4 Return to set another environment variable, following the steps outlined above. You
want to set NR_INCLUDE_PATH to your project directory:
setenv NR_INCLUDE_PATH $myproj

Appendix A
Keyboard Shortcuts and Hot Keys
Keyboard Shortcuts in Shake
In some instances, the keyboard shortcuts vary on different platforms. In the following
tables, the Mac OS X commands appear first, followed by the equivalent Linux
commands.
Note: In many instances, both platform options work on Mac OS X.
General Application Commands
The following keyboard shortcuts are for general project management.
Command Description
Command-N or Control-N Create New script.
Command-O or Control-O Open script.
Command-S or Control-S Save script.
Shift-Command-S or Shift-Control-S  Save script as.
Shift-Command-O or Shift-Control-O  Recover script.
Command-Z or Control-Z Undo, by default up to 100 steps.
Command-Y or Control-Y Redo your steps unless you have changed values after several
undos.
Command-F or Control-F Find nodes. Opens the Select Nodes by Name window, which lets
you select nodes that match criteria in a search string.
Command-X or Control-X Cut.
Command-C or Control-C Copy.
Command-V or Control-V Paste.
Navigating in Time
The following keyboard shortcuts let you move the playhead backward and forward in
time while you work.
General Windowing Keyboard Shortcuts and Modifiers
The following keyboard shortcuts and modifiers are available for any region in Shake.
Command Description
Forward Arrow Move forward one frame.
Back Arrow Move back one frame.
. Begin forward playback.
Up Arrow Jump to next keyframe.
Down Arrow Jump to previous keyframe.
Home Fit the current time range into the Time Bar.
Shift-. Begin cached playback.
T Toggle the Time Bar between timecode and frame display.
Command Description
Option-click and drag, or middle-mouse-button drag  Pan within that region of the window.
Space bar  Expand interface region to occupy the full Shake window area, and collapse again.
Command-Option-click or Control-Alt-click and drag, or Control-middle-mouse-button drag  Zoom into or out of the Curve Editor, Node View, and Time Bar.
Esc Stop processing.
U Update the Viewer.
Shift-middle-click and drag a tab, or Shift-Option-click or Shift-Alt-click and drag a tab  Tear off a tab to use as a floating window.
T  Toggle between timecode and frame view in the Curve Editor, Time View, and Time Bar.
Saving and Restoring Favorite Views
The following keyboard shortcuts let you define and restore favorite views in the
Viewer, Node View, Curve Editor, Time View, or Parameters tab.
The Viewer
The following keyboard shortcuts help you use the Viewer.
Command Description
Shift-F1-5  Save favorite view of any interface area (Viewer, Node View, Curve Editor, Time View, or Parameters tab) to be accessible via the F1, F2, F3, F4, or F5 key.
F1-5  Restore favorite view framing. Results in that area being panned and zoomed to the saved position.
Option-F1-5  Restore favorite view framing and state. Results in the restoration of additional state information, depending on the region of the interface. This can include the node being viewed, the currently loaded curves in the Curve Editor, or the parameters being tweaked in the Parameters tab.
Command Description
N Create/Copy New Viewer.
F Fit frame to image.
Shift-F Fit frame to window.
Control-F Fit Viewer to image.
Option-drag or Alt-drag Pan image.
+ or – Zoom image in Viewer.
Home Reset view.
R, G, B, A, C Toggle red, green, blue, alpha, and color channel views.
1 (top number row) Toggle between the A/B image buffers.
2 (top number row) Toggle color channels. Shift-2 toggles in reverse.
3 (top number row) Toggle update mode. Shift-3 toggles in reverse.
4 (top number row) Toggle among Viewer lookup tables (VLUTs). Shift-4 toggles in
reverse.
5 (top number row) Toggle DOD mode.
6 (top number row) Toggle through Viewer scripts.
7 (top number row) Toggle compare mode.
8 (top number row) Toggle onscreen transform controls.
Flipbook Keyboard Shortcuts
The following keyboard shortcuts are available for any open Flipbook.
Tool Tab Keyboard Modifiers
The following keyboard modifiers help you create new nodes in the Node View in
different ways.
Command Description
. (period; consider it as >) Play forward.
, (comma; consider it as <) Play backward.
Shift-> Ping-pong playback.
Shift-drag Shuttle playback.
Option-drag or Alt-drag Reveal compare buffer, if set.
Control-> Play through once.
Space bar Stop playing and rendering.
? Continue rendering.
R, G, B, A, C View a specific channel.
L Toggle numeric representation of channel values.
Home Reset zoom to 1.
– or = (near the Delete or Backspace key)  Zoom in and out.
+ or – (on numeric keypad) Increase playback rate.
T Toggle real-time playback.
D Double/single buffer toggle, SGI only.
Left Arrow or Right Arrow keys Advance playhead forward or backward through frames.
H If in compare mode, set to horizontal split.
V If in compare mode, set to vertical split.
S If in compare mode, switch split.
Esc Close window.
Command Description
Click node in Tool tab Insert node into tree.
Shift-click node in Tool tab Create new branch in tree.
Control-click node in Tool tab Replace currently selected node in tree.
Shift-Control-click in Tool tab Create a new node tree.
Option-drag or Alt-drag Pan window.
Node View
The following keyboard shortcuts and modifiers help you work within the Node View.
Command Description
Shift When you move a node and the grid is enabled, holding down
Shift allows the node to move freely. When the grid is disabled,
holding down Shift locks it to the grid. Grid width and height
parameters are in the guiControls subtree of the Globals tab.
Del (near Home/End keys) or Delete  Delete the selected nodes. If the branching is not complicated, the noodles between the parent(s) and children automatically reattach to each other.
+  Zoom into the Node View (also use Command-Option-click or Control-Alt-click and drag, or Control-middle-click and drag).
–  Zoom out of the Node View (also use Command-Option-click or Control-Alt-click and drag, or Control-middle-mouse-button drag).
Home Center all nodes.
O Turn on the Overview window to help navigate in the Node View.
Command-E or Control-E Toggle enhanced Node View on and off.
F Frame all selected nodes into the Node View.
Shift-F Frame all selected nodes into the Node View, but maintain zoom
level.
Command-F or Control-F Activate nodes according to what you enter in the Search string
field in the Select Nodes by Name window.
• Select by name: Enter the search string, and matching nodes are
immediately activated. For example, if you enter just “f,” FileIn1
and Fade are selected. If you enter “fi,” just the FileIn1 is selected.
• Select by type: Selects by node type. For example, enter
“Transform,” and all Move2D and Move3D nodes are selected.
• Select by expression: Allows you to enter an expression. For
example, to find all nodes with an angle parameter greater than
180, enter “angle >180.”
• Match case: Sets case sensitivity.
L Perform an automated layout on the selected nodes.
Shift-L Stack nodes closely to each other vertically.
X Snap all selected nodes into the same column.
Y Snap all selected nodes into the same row.
I Turn off selected nodes when activated. Select the nodes again and
press I to reactivate them. You can also load the parameters into
the Parameter View and enable ignoreNode.
E Pull the active nodes from the tree, and reconnect the remaining
nodes to each other.
R  Activate/refresh the thumbnails for selected nodes.
T  Turn on/off selected node thumbnails. If you haven't yet created a thumbnail (R), this does nothing.
C  Display the RGB channels for thumbnails.
A  Display the alpha channel of thumbnails.

Selecting Nodes
The following keyboard shortcuts let you select different ranges of nodes in the Node View.

Command  Description
Command-A or Control-A  Select all nodes.
Shift-A  Select all nodes attached to the current group.
!  All selected nodes are deactivated, and all deactivated nodes are activated.
Shift-U  Add all nodes upstream from the currently active nodes to the active group.
Shift-D  Add all nodes downstream from the currently active nodes to the active group.
Shift-Up Arrow  Add one upstream node to the current selection.
Shift-Down Arrow  Add one downstream node to the current selection.

Grouping Nodes
The following keyboard shortcuts let you create and manage groups in the Node View.

Command  Description
G  Visually collapse selected nodes into one node. When saved out again, they are remembered as several nodes. To ungroup the nodes, press G again.
Shift-G  Temporarily activate the grid when moving nodes, or consolidate two or more groups into a larger group.
Command-G or Control-G  Ungroup nodes.

Macro Shortcuts in the Node View
The following shortcuts let you create, open, and close macros in the Node View.

Command  Description
Shift-M  Launch the MacroMaker with the selected nodes as the macro body.
B  Open a macro into a subwindow so you can review wiring and parameters. You cannot change the nodes inside the subwindow.
Option-B or Alt-B  Close the macro subwindow when the pointer is placed outside of the open macro.

QuickPaint
The following keyboard shortcuts are available in the QuickPaint node.
Note: In Mac OS X, Exposé is mapped to F9-F12 by default. To use these keys in Shake, disable the Exposé keyboard shortcuts in System Preferences.

Key  Function
F9  Use last brush.
F10 or P  Pick color.
F11  Toggle between hard/soft brush.
Z  Magnet drag in Edit mode.

The Curve Editor
The following keyboard shortcuts and modifiers help you make adjustments in the Curve Editor.
Command Description
Option-drag or Alt-drag Pan window.
Command-Option-drag or Control-Alt-drag  Zoom window.
+ or – (near the Delete or Backspace key)  Zoom in and out.
Option-drag or Alt-drag a numbered axis  Pan only in that direction.
Drag on numbered axis Scale that direction.
S Sync time to current keyframe.
T Toggle time code/frame display.
Shift-drag Select keyframes.
Command-drag or Control-drag Deselect keyframes.
Command-A or Control-A Select all curves.
Shift-A Select all control points on active curves.
B When drag-selecting keys, this keeps the Manipulator Box active to
enable transform controls.
Q  Allows you to pan the selected points.
W  Scales the selected points, with the initial click point as the center of scaling.
E  A nonlinear scaling of the points.
K  Insert a key on the selected spline at the frame the pointer is over.
V  Toggle visibility of selected curves.
X, Y  Allow movement only on the X or Y axis.
H  Flatten tangents horizontally.
Command-click or Control-click tangent  Break tangent.
Shift-click tangent  Rejoin broken tangents.
Del (near Home/End keys) or Delete  Delete active keyframes.
Delete or Backspace  Remove curves from the Editor (does not delete the curves).
F, Control-F  Frame selected curves.
Shift-F  Frame selected control points.
Home  Frame all curves.

Parameters Tab Shortcuts and Modifiers
The following keyboard shortcuts help you make adjustments in the Parameters tabs.

Command  Description
Control-drag Activate virtual sliders.
Tab Advance to next value field.
Shift-Tab Go to previous value field.
Shift-Left/Right Arrow  Adjust the value field by 10x the normal increment.
Control-Left/Right Arrow  Adjust the value field by the normal increment.
Option- or Alt-Left/Right Arrow  Adjust the value field by 1/10 the normal increment.
Right-click Access pop-up menus.
Right-click Color Picker Pop up color palette.
Drag parameter name Copy parameter to target parameter.
Shift-drag parameter name Link parameter from target parameter.
MultiPlane Node Keyboard Shortcuts
The following keyboard shortcuts let you choose angles from the multi-pane Viewer
interface presented by the MultiPlane node. To switch a pane’s angle, position the
pointer within that pane, and press one of these keys on the numeric keypad.
Keyboard Modifiers for Transform and MultiPlane Nodes
The following keyboard modifiers let you make adjustments to images in transform
nodes, and to layers and cameras in the MultiPlane node.
Numeric Keypad Description
0 (numeric keypad) Cycle through every angle in the Viewer pane with the pointer
positioned within it.
1 (numeric keypad) Display the currently selected renderCamera angle in the Viewer
pane with the pointer positioned within it.
2 (numeric keypad) Display the Front angle in the Viewer pane with the pointer
positioned within it.
3 (numeric keypad) Display the Top angle in the Viewer pane with the pointer
positioned within it.
4 (numeric keypad) Display the Side angle in the Viewer pane with the pointer
positioned within it.
5 (numeric keypad) Display the Perspective angle in the Viewer pane with the pointer
positioned within it.
Keyboard Description
Control-drag Drag the center point of selected layer to a new location.
Press Q or P and drag Pan selected layer around global axis.
Press W or O and drag Rotate selected layer.
Press E or I and drag Scale selected layer.
V-drag Rotate the camera about its Z axis.
S-drag Rotate the camera about the X and Y axes, about the camera's own
center point, changing the position of the camera target.
Z-drag Pan the camera in and out along the Z axis.
D-drag Pan the camera and camera target together along the X and Y axes.
X-drag Pivot the camera about the camera target’s orbit point. Also pivot
the Persp or Camera view in the Viewer.
T-drag Move the camera and camera target together in any view.
Keyboard Modifiers for Color Adjustments
The following chart lists all the keyboard shortcuts for color adjustments within a color
control. To use, press one of the listed channel keys, and drag within the color control
to make the listed adjustment.
Keyboard Channel Description
R  Red  Adjust the red channel independently.
G  Green  Adjust the green channel independently.
B  Blue  Adjust the blue channel independently.
O  Offset  Boost or lower all color channels relative to one another.
H  Hue  Adjust all channels by rotating around the ColorWheel.
S  Saturation  Adjust color saturation, according to the HSV model.
V  Value  Adjust color "brightness," according to the HSV model.
T  Temperature  Adjust overall color between reds and blues.
C  Cyan  Adjust cyan according to the CMYK colorspace model.
M  Magenta  Adjust magenta according to CMYK.
Y  Yellow  Adjust yellow according to CMYK.
L  Luminance  Adjust the black level, otherwise referred to as luminance.
Appendix B
The Shake Command-Line Manual
Shake started in its infancy as a command-line compositor—you can conceivably
execute a 500-node script that is typed out in a Terminal. “Conceivably,” but not
practically, since nodes such as Primatte, Stabilize, QuickPaint, and RotoShape have
unwieldy formats. And, of course, you would have to type out 500 nodes on the
command line, which is impractical. However, using the Terminal remains an ideal
method to execute many daily image-processing functions, such as:
• Image resizing
• Bit depth or channel reordering
• Standardized color correcting (log to lin conversion, gamma corrections, and so on)
• Converting file format
• Flipbook-rendering an image sequence
• Executing scripts
• Accessing image information
• Quick compositing of 3D-rendered elements over backgrounds
These functions can be performed more quickly and efficiently on the command line
than in the graphical interface (if you are comfortable with typing in the Terminal),
since they involve relatively straightforward commands. The other major use of the
command line is to execute scripts that you have created in the interface and then
saved to disk.
This section discusses some general principles about the command-line shell, and then
lists several examples. The last section is a list of frequently used functions. Since every
node can potentially be used, this is not a complete list.
Note: All example images are located in the doc/pix directory. All examples assume you
are in this directory or below, as noted.
Viewing, Converting, and Writing Images
Shake does three basic things on the command line: It executes image operations,
executes scripts, or views images. The following are some simple examples to start the
discussion.
To display images:
Type the names of the images, for example:
shake truck.iff bg.iff sign_mask.iff
Note: For instructions on how to use the Flipbook Viewer, see “Previewing Your Script
Using the Flipbook” on page 90.
To convert or save a file:
m Append the -fileout, or -fo:
shake truck.iff -fo test.rla
The above command saves an image called test.rla in the .rla format. For a list of
supported formats and their extensions, see “About Image Input” on page 107.
To compare two images:
Load the first image, and then the second after the -compare flag:
shake bg.iff -compare sign_mask.iff
On most operating systems, use Option-click or Alt-click to drag between the two
images. Press H, V, and F for horizontal, vertical, and fading wipes. For Linux systems,
you must Shift-Control-drag.
Time and Viewing Image Sequences
Shake assumes time is set to frame 1 unless you specify a time range. This is done with
the -t option, and a frame numbering symbol, such as:
shake alien/alien.#.iff -t 1-50
The frame numbering is substituted with a symbol (# in the above example) which
indicates that the file is padded to four places. The following table lists other symbols
that can be used in frame numbering.
Shake Format Reads/Writes
image.#.iff image.0001.iff, image.0002.iff, etc.
image.%04d.iff image.0001.iff, image.0002.iff, etc.
image.@.iff image.1.iff, image.2.iff, etc.
image.%d.iff image.1.iff, image.2.iff, etc.
image.@@@.iff, image.###.iff image.001.iff, image.002.iff, etc.
image.%03d.iff image.001.iff, image.002.iff, etc.
image.10-50#.iff image.0010.iff at frame 1, 11 at frame 2, etc.
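The padding symbols map directly onto printf-style format strings, so you can preview what a given pattern expands to with the standard printf utility; for example:

```shell
# %04d pads to four places (the same as #); %d is unpadded (the same as @).
printf 'image.%04d.iff\n' 1    # image.0001.iff
printf 'image.%d.iff\n' 1      # image.1.iff
printf 'image.%03d.iff\n' 12   # image.012.iff
```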
The -t option is extremely flexible. You can choose to render frame ranges, stepped
ranges, individual frames, or any combination.
To convert a sequence of images, use a line similar to the following:
shake alien/alien.#.iff -fo test.@.jpg -t 1-30 -v
The following table includes some clever ways to renumber image sequences:
Appending Functions
Append functions with a - (dash) sign prefacing each function call, followed by a
space and any arguments the function takes. Not all functions take arguments,
but most do.
In the following, the Blur command has an argument of 50 (the amount of blur you
want to apply):
shake truck.iff -blur 50
You can append as many functions as you want:
shake truck.iff -blur 50 -invert r -z 2
Time Range Number of Frames Frames Rendered
-t 1-100 100 1, 2, 3... 100
-t 1-100x2 50 1, 3, 5... 99
-t 1-100x20 5 1, 21, 41... 81
-t 1-20,30-40 31 1, 2, 3... 20, and 30, 31, 32... 40
-t 1-10x2,15,18,20-25 13 1, 3, 5... 9, 15, 18, 20, 21, 22... 25
-t 100-1 100 100, 99, 98... 1
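Outside of Shake, the standard seq utility expands the same kinds of stepped and reversed ranges, which is a handy way to check which frames a -t specification covers:

```shell
# -t 1-10x2 renders frames 1, 3, 5, 7, 9; seq <start> <step> <stop> matches:
seq 1 2 10
# -t 10-1 renders the frames in reverse; use a negative step:
seq 10 -1 1
```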
Renumbering Clips  Description
shake bus2.40-79#.jpg -t 1-40 -fo toto.#.iff -v  Shifts frame numbering to start at 1.
shake bus2.#.jpg -t 40-79 -fo toto.#-39.iff -v  Also shifts frame numbering to start at 1.
shake bus2.#.jpg -t 40-79 -fo toto.80-#.iff -v  Reverses timing starting at frame 1.
shake bus2.#.jpg -t 40-79 -fo toto.118-#.iff -v  Reverses timing within the same frame range.
shake bus2.#.jpg -t 40-79 -fo toto.@.iff -v  Unpads the clip.
shake bus2.40-79x2#.jpg -t 1-20 -fo toto.#.iff -v  Halves the timing of the clip.
shake bus2.40-79x.5#.jpg -t 1-80 -fo toto.#.iff -v  Doubles the timing of the clip.
The following is a good example of a common command-line test of 3D-rendered
imagery:
shake truck.iff -outside sign_mask.iff -over bg.iff
As long as there is no ambiguity, it is not necessary to type the entire function name.
For example:
shake bg.iff -bri 2
calls the Brightness function since there are no functions starting with bri. However,
calling:
shake bg.iff -con 2
informs you that it cannot choose between Conform, Constraint, ContrastLum,
ContrastRGB, or Convolve. To solve the problem, replace your command with enough
letters to end the ambiguity:
shake bg.iff -contrastl 2
This calls the ContrastLum function.
In regard to capitalization and function names, function names are always capitalized
in the interface. In the command line, function names are always lowercase so you do
not have to press the Shift key. The exception is when you call a function in quotes,
such as the Linear function used to animate parameters:
shake truck.iff -blur "Linear(0,0@1,20@20)" -t 1-20
Note: For more information on animation curves, see “More About Splines” on
page 316.
Most functions are image manipulators—they modify the image you load. Others are
controls that modify how Shake executes the command. For example, -brightness 2
brightens an image, but -fps 24 sets the Flipbook playback to 24 frames per second.
Although image functions always occur in a linear fashion—the order of commands
matters—controls can be placed anywhere. For example, the following lines are
identical:
shake alien/alien.#.iff -rotate 45 -t 1-50 -cpus 2
shake -cpus 2 -t 1-50 alien/alien.#.iff -rotate 45
However, the following two lines are different:
shake truck/truck.iff -pan 200 0 -rotate 135
shake truck/truck.iff -rotate 135 -pan 200 0
If you place a second control on a line, it replaces the previous setting. For example:
shake alien/alien.#.iff -t 1-50 -fps 24 -bri 2 -fps 30 -t 10-20
plays at 30 frames per second and renders only frames 10-20.
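The "last control wins" behavior is what you would expect from a parser that simply overwrites a setting each time it sees the flag. A toy sketch of that rule (not Shake's actual parser):

```shell
# Walk the argument list, letting later controls overwrite earlier ones.
fps=24; trange=1-50
set -- -t 1-50 -fps 24 -bri 2 -fps 30 -t 10-20
while [ $# -gt 0 ]; do
  case $1 in
    -fps) fps=$2; shift ;;
    -t)   trange=$2; shift ;;
  esac
  shift
done
echo "fps=$fps range=$trange"   # fps=30 range=10-20
```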
Getting Help
How do you know what Blur is expecting? Aside from using the product non-stop for
five years, you can also type:
shake -help blur
to see which arguments that function takes. Blur has six arguments:
-blur [xPixels] [yPixels] [spread] [xFilter] [yFilter] [channels]
Often, you don’t need to specify every argument. Shake gives you an error message if it
expects more arguments than you have supplied. For additional information, refer to
the relevant function pages. For example, the Blur function can be found on page 864 in
Chapter 28, “Filters.”
You also do not have to know the entire function name. For example, your request for
help on the Blur node probably produced information about seven different functions,
so try:
shake -help mul
and you see that Shake simply looks for the string "mul." To quickly launch these docs:
shake -doc
Argument Flow
Looking at the arguments in the Blur function, it expects an image as its first input,
which is labeled In:
image Blur(
image In,
float xPixels,
float yPixels,
int spread,
const char * xFilter,
const char * yFilter,
const char * channels
);
However, the command line differs from the interface in that it always assumes the first
image argument is coming from the previous argument in a linear flow, so omit the
first image argument. Therefore, in Blur, the first argument on the command line is the
xPixels parameter.
Shake assumes the previous image is fed into the function that follows. However, if you
have multiple input images, only the last image has the effect applied.
In this example, only sign_mask.iff is blurred:
shake truck.iff bg.iff sign_mask.iff -blur 50
Occasionally, you want to perform different operations on two different images within
the same command. This can be done with the branching commands -fi (FileIn) and
-label. Label a branch with the -label function, and start a new branch with the -fi
function. The following example blurs the background, and then assigns it a temporary
label of BG. It then reads the truck, brightens it (the truck, but not the background),
and composites the result over the blurred background:
shake bg.iff -blur 50 -label BG -fi truck.iff -bri 2 -over BG
Scripts
Execute pre-saved scripts with the -exec command. These scripts are usually generated
from the interface. The -exec function scans the script and executes all FileOut nodes. If
there are no FileOut nodes in the script, the function does nothing.
shake -exec my_script.shk -v
The -v stands for “verbose,” and provides feedback on the progress of the render. You
can also override many settings in the script. For example:
shake -exec my_script.shk -v -proxys .5 1 -t 20-30 -motion 1 1
renders at half-proxy scale, full-proxy ratio, only frames 20 to 30, and activates motion
blur without having to edit the script itself.
You can save a script from the command line with the -savescript function:
shake truck/truck.iff -outside truck/sign_mask.iff -over truck/bg.iff
-savescript toto.shk
If there is no FileOut in the script, you can test the script with the -script command. It is
similar to the -exec function in that it executes FileOut nodes. However, it also views
every branch in the script, and that can be awkward when rendering. It is really just for
testing scripts:
shake -script toto.shk
Command-Line Controls
In the following tables, < > indicate a mandatory argument, and [ ] indicate an optional
argument.
The following functions are unique to the command line.
Time Range Control Description
-t See “Time and Viewing Image Sequences” on page 1016.
I/O Functions Description
-exec Renders only FileOut nodes in a script. See “Time and Viewing
Image Sequences” on page 1016.
-fi FileIn. Allows branching operations. See “Argument Flow” on
page 1019.
-fo FileOut. Writes the image to disk in the format of the file extension.
If no extension is given, it is saved in .iff format.
-savescript Saves all of the previous commands into a script for later execution.
Also helpful to get scripting formats.
-script Reads a script and tests it. All branches are viewed; all FileOut
nodes are rendered to disk.
Viewing Controls Description
-compare
Example: shake bg.iff -blur 20 -compare bg.iff
Allows you to compare two images. Option-click or Alt-click (Shift-Control-click on
Linux) and drag between the images. Press H or V to toggle between horizontal and
vertical splits.
-fps Frames per second for the playback. You can also press + or – on
the number keypad to set this. The actual and target fps are
displayed at the top of the Flipbook.
-gui Launches your functions in the interface.
-monitor [keepFrames] [w] [h] With no options, this only displays the current frame, discarding
the frame when you go to the next frame. When keepFrames is set
to 1, it keeps all frames in memory. When set to 0, it discards them.
You can also specify the width and height of the monitor window.
-view [zoom] Can be used to view intermediate stages of a string of nodes. You
can also list an optional zoom level for the Viewer.
Information Controls Description
-doc Launches this fine and flawless documentation.
-help, -h [functionName] Without any arguments, lists all functions available in Shake. When
you supply a string, it matches any functions with that string in it.
See “Getting Help” on page 1019.
-info Lists size, type, bit depth, and channel information regarding your
image.
-version Prints the version of Shake.
-v, -vv Verbose. When -v is on, the rendering time is printed for each
frame. -vv displays a percentage of render completion as it renders.
Render Controls Description
-cpus The number of CPUs to use. Default is 1.
-createdirs Used with -renderproxies to create the necessary subdirectories for
the proxies. It does not affect FileOut renders.
-fldr Turns on field rendering. 1 = odd/PAL, 0 = even/NTSC.
-mem Sets the maximum amount of memory to use.
-motion Sets the motion blur quality. In the command line, you must enable
override (set it to 1). When executing a script, you either multiply
what is saved as the global motion blur value (override = 0), or
override it (override = 1).
-node Only executes that node. For use with -exec.
-nocache Disables the call to the cache, forcing complete recomputation of
each element.
-noexec Does nothing except compile the script to see if there are errors.
-pixelscale [ratio] Sets the pixelscale and ratio. Not frequently used in the command
line. See Chapter 4, “Using Proxies,” on page 137.
-proxyscale [ratio] Sets the proxyscale and ratio. Does a low-resolution test render of
your commands. You can specify two numbers—the scale and
ratio, or p1/p2/p3 which uses the preset proxies. See Chapter 4,
“Using Proxies,” on page 137.
-renderproxies [p1] [p2] [p3] Renders FileIn nodes already saved into a script as the preset
proxies. If used without arguments, renders what is saved in the
script; otherwise you can specify which subproxies you want: p1,
p2, and/or p3. Usually used with -createdirs.
-sequential Sequentially executes functions. For use with -exec. When you have
multiple FileOut nodes, it may be more efficient to render them one
at a time with this flag, rather than simultaneously.
-shutter [offset] Sets motion blur shutter controls.
Quality Controls Description
-fast [quality] When no argument is given (just -fast), anti-aliasing is disabled to
speed the render. -fast 1 sets quality to 0 (low), while -fast 0 sets
quality to 1 (high).
Masking Controls Description
-evenonly
Example: shake bg.iff -rotate 45 -evenonly bg.iff
Only executes the previous commands on the even fields of the image, and mixes it
back in with the image you supply.
-isolate
Example: shake bg.iff -blur 50 -isolate bg.iff a
Only executes the previous commands on the channels you specify, and mixes it back
in with the image you supply.
-mask [channel] [percent] [invert]
Example: shake bg.iff -blur 50 -mask truck.iff
Masks off the previous commands, with the mask coming from the image you specify.
You can specify the channel and percentage, or invert the mask.
-oddonly Only executes the previous commands on the odd fields of the
image, and mixes it back in with the image you supply.
-roi Only executes the previous commands on a rectangular portion of the image
that you specify.
Labeling Controls Description
-curve [type]
Creates a global variable available to the script that is loaded or to other functions
that are executed. See below for an example. Be aware that defining a variable from
the command line with the -curve flag produces a duplicate variable if that variable
already exists within the script; afterward, the script may fail to load.
-label Allows you to temporarily label an image for later use in the
command line.
Frequently Used Functions
Since you can use any of the functions in Shake, the following tables of frequently used
functions are only partial lists. However, the tables represent the functions that are
both practical to use (not too many parameters) and useful in the command line,
such as file resizing, basic channel manipulation, and quick generation of test elements.
Typically, the functions have more options than are listed here. For more information
on the functions, see the relevant function sections in the appropriate chapter. For
example, the Ramp image function is located in Chapter 21, “Paint.”
Image Functions Description
-addtext A quick way to generate text, as you only have to supply the text
itself, without having to enter the width, height, and bit depth.
-black [w] [h] [bitDepth] A quick way to generate black.
-color [a] [z] A color field.
-ramp [w] [h] [bitDepth] [orientation] A ramp. Use an orientation of 1 for vertical,
0 for horizontal.
-text Text.
Color Functions Description
-brightness Multiplies the RGB channels by value.
-contrastlum [center] Performs a contrast, with the pivot point at center.
-delogc A log to linear color conversion to quickly see Cineon plates.
-gamma Gamma.
-logc A linear to log color conversion to convert files into log color space.
-luminance A quick function to generate a black-and-white image based on
the luminance. Generates a 1-channel image.
-mult [a] [z] Multiplies color on a per-channel basis.
-saturation Saturation change.
Channel Functions Description
-colorspace
Example: shake bg.iff -colorsp rgb hls
Converts images to a different colorspace. Indicate the source space and the
destination space.
-copy
Copies a channel from your listed image to the incoming image. clipMode indicates
the resolution you want, with 1 as the second image, 0 as the input image.
-forcergb Forces a BW or BWA image into RGB or RGBA format. This is
unnecessary in the Shake interface and can be handled by FileOut
for output, but is awkward in the command line, hence this
function’s raison d’être.
-reorder
Examples:
shake truck.iff -reorder aaaa
shake truck.iff -reorder rgbl
shake bg.iff -reorder grb
Swaps channels. Using letter codes (r, g, b, a, z, l for luminance, and n for null),
indicate which channel should go in the r, g, b, a, and z channels.
-setalpha Sets the alpha to your value. To remove an alpha channel from an
image, enter 0.
-switchmatte Similar to Copy, except only the alpha is swapped. The returned
image is premultiplied.
Compositing Functions Description
-inside An inclusion matte. Multiplies the incoming image by the alpha
channel of the image you specify.
-isuba Extracts the absolute difference between the two images.
-mix [percent] Mixes two images together. 50 percent is half of each image.
-outside An exclusion matte. Multiplies the incoming image by the inverse
of the alpha channel of the image you specify.
-over Puts the incoming image, assumed to be the premultiplied
foreground, over the background.
-under Puts the incoming image beneath the image you list, which is
assumed to be premultiplied.
-screen Performs the Screen operation, which is good for reflections and
glows.
Resizing Functions Description
-addborders Pads the image out with black around the edges.
-crop Crops the image to the two corners you specify. The origin is in the
lower-left corner.
-fit Resizes the image to the resolution you specify, maintaining the
aspect ratio.
-resize Resizes the image to the resolution you specify.
-window Crops the image at the specified lower-left corner, and to the
resolution you specify.
-z A shortcut for Zoom; both X and Y are zoomed by the same
amount.
-zoom Zooms the image in X and Y independently.
Filter Functions Description
-blur [yPixels] Blur
Transform Functions Description
-flip Turns the image upside down.
-flop Turns the image backward.
-pan Pans the image.
-rotate Rotates the image.
-scale Scales the image without changing the resolution.
Video Field Functions Description
-deinterlace [mode] Deinterlaces the image. 0 equals the even field, 1 equals the odd
field. For mode:
0 = replication of the line below.
1 = interpolation of the missing line between two lines above and
below.
2 = blur. 50 percent of the line, 25 percent each of the two lines
above and below the missing line.
-evenonly
Example: shake bg.iff -rotate 45 -evenonly bg.iff
Executes the previous commands on the even fields of the image,
and mixes it back in with the image you list.
-field Extracts just one field, creating a half-height image. 0 = even field,
1 = odd field.
-fldr Activates field rendering. 1 = odd/PAL, 0 = even/NTSC.
-interlace
Interlaces the two images. When clipMode is set to 1, the BG resolution is taken;
when set to 0, the incoming image resolution is taken.
0 = even field, 1 = odd field.
-oddonly Only executes the previous commands on the odd fields of the
image, and mixes it back in with the image you supply.
-pulldown [offset]
Example: shake -pulldown bus/bus2.40-79#.jpg 0 -t 1-50
Acts as a FileIn and converts from 24 fps to 30 fps, interlacing the frames to add the
extra frames. 0 = even field, 1 = odd field. The offset amount is the frame at which
the interlacing starts after the beginning.
-pullup [offset] Acts as a FileIn. Removes the 3:2 pulldown interlacing, converting
from 30 fps to 24 fps. 0 = even field, 1 = odd field. The offset
amount is the frame at which the interlacing starts after the beginning.
-swapfields Switches the odd and even fields.
Other Functions Description
-average
Acts as a FileIn, averaging frames down from sStart and sEnd to dStart and dEnd.
Usually use 1, 1, 1 for the last three arguments.
-bytes 1 = 8 bits, 2 = 16 bits, 4 = float.
Examples
Looking at Images Description
shake bg.iff sign_mask.iff
shake * Uses the UNIX-style wildcard, *. This means “anything.”
shake *.iff Indicates “anything with an .iff extension.”
shake bg.iff -compare sign_mask.iff
Compares the two images, using Option-click or Alt-click to drag
between the two. On Linux, Shift-Control-click and drag.
Launching Flipbooks
shake alien/alien.#.iff -t 1-50
shake bus/bus2.#.jpg -t 40-79
shake bus/bus2.40-79#.jpg -t 1-40
shake fan/fan.@.iff -t 1-5
Cool Text Tricks Description
shake -addtext %t -t 1-20 Prints time code.
shake -addtext %T -t 1-20 Prints full time code.
shake -addtext %f -t 1-20 Prints the current frame.Appendix A The Shake Command-Line Manual 1027
shake -addtext %F -t 1-20 Prints the padded current frame.
shake -addtext "%D, %d %M" Prints the current date.
Converting Image File Formats Description
shake alien.#.iff -t 1-50 -fo temp.@.sgi
Writes 50 unpadded images in the .sgi format.
shake fan.@.iff -t 1-5 -fo fan.#.tif Writes 5 padded images in the .tif format.
Adding and Removing
Channels Description
shake alien.0001.iff -lum -fo toto.iff Creates a black-and-white image with an alpha channel.
shake alien.0001.iff -reorder rrra -fo tutu.iff
Creates a black-and-white image with an alpha channel.
shake tutu.iff -forcergb Makes a 1-channel image into a 3-channel image without
modifying the image.
shake alien.0001.iff -reorder rgbn Removes the alpha channel.
shake alien.0001.iff -setalpha 0 Removes the alpha channel.
shake alien.0001.iff -reorder rgbl Puts the luminance into the alpha channel.
Changing Bit Depth Description
shake bg.iff -bytes 1 -fo bit8.iff Converts the image to 8 bits (what this image was anyway).
shake bg.iff -bytes 2 -fo bit16.iff Converts the image to 16 bits.
shake bg.iff -bytes 4 -fo bit32.iff Converts the image to float.
Manipulating Fields Description
shake -pulldown bus/bus2.40-79#.jpg 0 -t 1-50 -fo fps30.#.iff -v
3:2 pulldown, converting from 24 fps to 30 fps. The range 1-50 is derived by
subtracting 40 from 79, adding 1, and then multiplying by 1.25.
shake -pullup fps30.#.iff 0 -t 1-40 -fo fps24.#.iff -v
3:2 pullup, converting from 30 fps to 24 fps. The range 1-40 is derived by dividing 50
(the number of images) by 1.25.
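The frame-count arithmetic in these two examples can be checked directly: 3:2 pulldown turns every 4 film frames into 5 video frames (a factor of 1.25), and pullup reverses it.

```shell
# Frame counts for the pulldown/pullup examples above.
film=$((79 - 40 + 1))      # 40 source frames in bus2.40-79#.jpg
video=$((film * 5 / 4))    # 50 -> render -t 1-50 with -pulldown
back=$((video * 4 / 5))    # 40 -> render -t 1-40 with -pullup
echo "$film $video $back"  # 40 50 40
```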
Averaging Frames Description
shake -average bus2.40-79#.jpg 1 40 1 20 1 1 1 -t 1-20
Averages the 40 frames down to 20 frames.
shake bus2.40-79x2#.jpg -mix bus2.41-79x2#.jpg -t 1-20
Uses the frame-numbering steps in the FileIn to get the averaging,
and mixes them together with the Mix function.
Resizing Images Description
shake bg.iff -z 3 Zooms the image up by 3.
shake bg.iff -z .333
shake bg.iff -z 1/3
Both zoom the image down by 3.
shake bg.iff -zoom 2 1 Zooms the image to twice as wide.
shake bg.iff -resize 720 486 Zooms the image to NTSC resolution.
shake bg.iff -fit 720 486 Zooms the image to NTSC resolution, maintaining the original
aspect ratio.
Renumbering Clips Description
shake bus2.40-79#.jpg -t 1-40 -fo toto.#.iff -v
Shifts frame numbering to start at 1.
shake bus2.#.jpg -t 40-79 -fo toto.#-39.iff -v
Also shifts frame numbering to start at 1.
shake bus2.#.jpg -t 40-79 -fo toto.118-#.iff -v
Reverses timing.
shake bus2.#.jpg -t 40-79 -fo toto.@.iff -v
Unpads the clip.
shake bus2.40-79x2#.jpg -t 1-20 -fo toto.#.iff -v
Halves the timing of the clip.
shake bus2.40-79x.5#.jpg -t 1-80 -fo toto.#.iff -v
Doubles the timing of the clip.
Compositing
shake truck.iff -outside sign_mask.iff -over bg.iff
shake bg.iff -under truck.iff
Animating Parameters Description
shake bg.iff -blur "Linear(0,0@1,100@20)" -t 1-20
A linear animation from a value of 0 at frame 1 to a value of 100 at frame 20.
shake bg.iff -rotate "Linear(0,0@1,360@11)" -t 1-10
Animates a rotation from 0 degrees at frame 1 to 360 degrees at frame 11. Notice
only up to frame 10 is rendered, hopefully giving a smooth loop.
shake bg.iff -rotate "Linear(0,0@1,360@11)" -t 1-10 -motion .5 1
Sets motion blur to half quality.
shake truck.iff -curve A "JSpline(1,0@1,10@5,300@15)" -pan A A -t 1-15
Creates a temporary curve named A. It has values of 0 at frame 1, 10 at frame 5, and
300 at frame 15, and it continues its slope after frame 15. This curve is then fed into
the x and y pan parameters of the Pan.
shake bg.iff -rotate "time*time" -t 1-75 -motion .5 1 -cpus 2
Uses the time variable, placed in quotes so it can be multiplied by itself. As the frame
count increases, the angle increases. Both CPUs of a 2-processor computer are also
enabled.
shake truck.iff -pan "cos(time*.5)*100" "sin(time)*50" -motion .5 1 -t 1-13
Uses the cos and sin functions to animate the truck in a figure eight.
shake bg.iff -rotate "cos(time*.5)*50+25" -t 1-13 -motion .5 1
Substituting Values with -curve
If you have the following in a script called myscript.shk:
Text1 = Text(720, 486, 1, {{ stringf("myVal = %d %s", myVal, slatestring); }}, "Utopia Regular", 100, xFontScale,
1, width/2, height/2, 0, 2, 2, 1, 1, 1, 1, 0, 0, 0, 45);
you can substitute values with the -curve option:
shake -curve int myVal 5 -curve string slatestring "SuperSlate" -script myscript.shk
and you get a nice image that says "myVal = 5 SuperSlate".
Or you can use:
shake -curve int myVal 5 -curve string slatestring "SuperSlate" -text 720 486 1 ':stringf(\\\"myVal =
%d\\n%s\\\", myVal, slatestring ) ;'
Note the extra backslashes: the string is quoted once for the shell and then again for
Shake. The shell sees \\ and makes it \, and \" becomes ", so a literal \" goes into the
Shake wrapper script and is then passed properly into the Shake executable. Whew.
Tips
The following section contains tips and tricks for command-line usage.
File Completion
The following is a shortcut to typing file names, using the file completion feature in the
shell. When you press Tab, all potential files that match what you have typed are listed.
For example, typing:
shake t
and pressing Tab lists
shake truck/
Press Tab again to list all three image files within that directory. If you type another “t,”
that is:
shake truck/t
and press Tab, the shell lists
shake truck/truck.iff
Therefore, you can type the entire line with very few keystrokes. If you type the
following:
shake Tab Tab Tab Tab Tab
the following line is listed:
shake truck/truck.iff truck/bg.iff truck/sign_mask.iff
So press Return.
Repeating Previous Commands
There are two ways to repeat previous commands:
• Press the Up Arrow on the command line. Each time you press it, the previous
command is listed, stepping back through your history. Change portions of the
command with the Left Arrow and Right Arrow keys. Press the Down Arrow key to
take you to the next command in the history list.
• Press the ! key. Press !! to repeat the last command (although pressing the Up Arrow
key is easier). Type !s to repeat the last command that started with “s.”
Wildcards
Use wildcards in the command line so you don’t have to type as much. The following
table lists some of the wildcards.
Math and Expressions
When performing math on the command line, enclose the math in “quotation marks”:
shake truck.iff -pan "cos(time*.5)*100" "sin(time)*50" -motion .5 1 -t 1-13
Multiple Words
To use more than one word in a parameter that expects a string (letters), again, use
“quotation marks”:
shake -addtext “kilroy was here”
Note: This comes up frequently for systems running Mac OS X.
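The quotation marks matter because the shell splits unquoted words into separate arguments before Shake ever sees them. A quick way to see the difference (the count helper below is just an argument counter, not a Shake command):

```shell
# Print how many arguments a command receives.
count() { echo $#; }

count kilroy was here     # 3 -> shake -addtext would receive three arguments
count "kilroy was here"   # 1 -> one string argument, as -addtext expects
```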
Wildcards
* Matches anything of any length.
shake *
Shows all files within that directory simultaneously.
shake *.iff
Shows all files within that directory with a .iff extension.
shake *g*
Displays bg.iff and sign_mask.iff.
? Matches any single character in that position.
shake alien.000?.iff
Displays the first nine images simultaneously.
shake alien.002?.iff
Displays all frames in the twenties.
[range-range] Describes a range, between letters or numbers.
shake alien.000[1-5].iff
Displays the first five images simultaneously.
shake [l-z]*
Displays all images that start with the lowercase letters l through z.
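These patterns are ordinary shell globbing, so you can try them without Shake at all, on empty placeholder files (the file names below mirror the examples in this appendix):

```shell
# Create a scratch directory with placeholder files and test the globs.
dir=$(mktemp -d) && cd "$dir"
touch bg.iff sign_mask.iff truck.iff alien.0001.iff alien.0029.iff

echo *g*                 # bg.iff sign_mask.iff
echo alien.000?.iff      # alien.0001.iff
echo alien.00[0-2]9.iff  # alien.0029.iff
```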
Index
.h files
locations of 355
.plist file 395
.tcshrc file 394, 397
10-bit image files 437–450
converting using LogLin 649
2K images
and caching 131
and QuickShapes 575
and real-time playback 325
file sizes of 180, 447
3:2 Pulldown 116
3D camera paths 485
3D polyhedron (in Primatte) 714
3D renders
and bit-depth 410
and premultiplication 421
and the Z channel 411
3D transform controls 518
4K/6K images 130
A
About Shake menu 32
Absolute path 954
Academy format 94
Accumulate
in Pixel Analyzer tool 630
Add 638
function description 638
modifying image channels with 416
usage described 636
AddBorders 186
function description 186
AddMix 453
example 454
function description 453
Add New Shapes button 830
Add Notes command 81
Add Script command 32
AddShadow 470
function description 470
Add Shapes mode 547
AddText 456
function description 456
AdjustHSV 659
function description 659
usage described 636
AEPreMult macro 989
AIFF audio files 277
Alias 399
All (nodes) command 258
alpha channel 681
and compositing 417
removing from an image with SetAlpha 658
viewing 259
als / alias files 171
alsz files 171
AltIcon macro 998
Anamorphic
examples 210
Anamorphic Film aspect ratio 216
Anamorphic film images 209
Animating cameras 521
Animating layers 506
Animating parameters 291
Animation
pausing 293, 560
Animation curves (see Curve Editor) 316–322
AOI 542
Aperture markings button 55
Apple Qmaster 339, 401
Apply Curve function 299
ApplyFilter 864
function description 864
Area of Interest 542
Argument flow 1019
Arithmetic operators 941
in Primatte 683
Aspect ratio
and film elements 211
and non-square pixels 209
and video elements 210
common ratios 216
for the broadcast monitor 330
functions to be cautious with 211
Aspect ratios
anamorphic film 216
Assign color
in Primatte 711
Associated Nodes command 258
Atomic-level correctors 451, 635, 637
Atop 457
function description 457
math and LayerX syntax 452
Audio 277–288
and QuickTime 277
extracting curves from 285
loading and refreshing files 278
mixdown 288
mixing and exporting 288
previewing 280
scrubbing 283
slipping sync 283
viewing 283
viewing and editing 282
Audio Panel 26
AutoAlign node
Nodes
AutoAlign 783
AutoFit macro 994
Autokey button 71, 768
for animating color values 625
Autoload curves 296
Autosave 36
directory 368
file count 368
prefix 368
setting frequency value 367
Average curves operation 313
avi files 171
B
Background color (with DOD) 85–86
Background image (for Primatte) 710
Backup interval times 36
Banding 410
Bit depth 408, 411
changing in command-line mode 1027
float 411
Black repeat mode 267
Blue screen
applying effects to 694–696
creating keys from 682
Blue spill 687
removal using ColorX expressions 648
Blur 864
and Infinite Workspace 865
applying in different color spaces 611
function description 864
bmp files 171
Bookmark button 41
Bounding boxes
for shapes 555
Bound mode (for keyframes) 307
Box controls
customizing 389
Box filter 862, 863
Brightness 638
function description 638
modified by Lookup function 652
modifying image channels with 416
usage described 636
Broadcast monitor
aspect ratio 330
navigating in 331
using 328
Broadcast Monitor button 57, 330
Browser 38–45
adding as a pop-up for parameters 373
adding personal favorites 372
auto launching when creating a node 374
automatic file filters for 374
navigating in 39
opening 38
selecting files 41
setting default directories 371
viewing controls 42
Brush controls 580
Buffer tabs button 54
Buttons
attaching functions to 376
creating on/off buttons 385
in the Viewer 52
bw files 172, 177
Bytes 413
avoid breaking concatenation with 615
function description 413
C
Cache 349
clearing 349
image 349
processing 350
settings for high-resolution images 131
Cached playback 323
Cache node 344
parameters 347
Caching
and updating frames 346
Calculations
order of, in ColorCorrect 669
scrubbing 630
Cameras
animating 521
attaching to layers 506
deleting and duplicating 496
frustum 520
linking from other MultiPlane nodes 496
manipulation 517
CameraShake 794
function description 794
Camera tracking data 494
Candy macro 997
Cardinal splines 317
Channel
functions in command-line mode 1024
variables 942
Channels
about 414
changing the number of 416
editing in command-line mode 1027
in YUV color space 698
shuffling with Reorder 657
viewing 415
Checker 597
function description 597
ChromaKey 703
function description 703
parameters list 703, 704
CinemaScope format 94
Cineon files 170, 171, 437–450
and tracking 731
using LogLin for 648
working with 611
Clamp 639
function description 639
usage described 636
Clearing the cache 349
Clip alpha
using ColorX expressions 648
Clipped images 407
clipping
modified by Lookup function 652
Clips
looping 281
renumbering in command-line mode 1028
repeating 266
reversing 269
Clock icon 75
Clone brush 581
Close window button 54
CMYToRGB 646
Codec
for QuickTime files 177
Color 598
function description 598
functions in command-line mode 1023
gradients 600, 603
Color banding 410
Color channels 414
hot keys 625
ColorCorrect 661
Color Replace 668
Curves tab 667
function description 661
Invert function 668
lookup curve 667
Misc tab 668
order of calculations 669
preMultiplied 668
reorderChannels 668
subtabs 662
usage described 636
Color correction
and preMult on 3D renders 615
concatenation and float 612
examining with PlotScanline 674
on premultiplied images 656
tools 635
with Infinite Workspace 617
Color correctors
atomic-level 635
consolidated 636, 659
utility 635, 646–659
ColorGrade macro 989
ColorMatch 669
function description 669
usage described 636
Color Palette
assigning colors 623
pop-up palette 626
Color Picker 25, 620–623
assigning to the Parameters Tab 379
associated with parameters 76
customizing the Shake interface for 627
virtual 625, 663
Color Replace
in ColorCorrect 668
ColorReplace 671
creating a key with 682
for spill suppression 689
function description 671
masking a color correction 690
usage described 636
Colors
assigning in Primatte 711
assigning to the Color Palette 623
Colorwheel 599
creating custom palettes 365, 624
for node clusters 247
in QuickPaint 581
interface settings 359
logarithmic and linear 439
sampling from the Viewer 621
Color sliders
grouping 663
ColorSpace 646
function description 646
usage described 636
Color space
DV footage 697
models 664
RGB 697
Shake’s color range 611
Color swatches (Pixel Analyzer tool) 628
Color timing 442
Color values
animating 625
displaying in the Terminal 52
ColorWheel 599
function description 599
ColorX 647
function description 647
usage described 636
using expressions with 648
Command line
rendering from 336
Command Line field 25
Command-Line Manual 1015–1030
Command-line mode
appending functions 1017
argument flow 1019
channel functions 1024
color functions 1023
compositing functions 1024
controls 1020
file formats 1027
filter functions 1025
help 1019
I/O functions 1020
image functions 1023
information controls 1021
labeling controls 1023
launching the Flipbook 324
masking controls 1022
quality controls 1022
rendering controls 1021
resizing functions 1025
scripts 1020
timing and image sequences 1016
tips 1029
transform functions 1025
video field functions 1025
viewing, converting, and writing images 1015
viewing controls 1021
Common 458
function description 458
Compare buffers
using 57
Compare Mode button 56
Composites
MultiPlane 486
Compositing
and the alpha channel 417
elements of any resolution 421, 452
in command-line mode 1024
math description 452
Compress 639
function description 639
modified by Lookup function 652
usage described 636
Compression
Piz 176
PXR 24 176
RLE 176
ZIP 176
Compression controls 170
Concatenating nodes
and masks 533
Concatenation
and masked nodes 615
functions that concatenate 614
how to avoid breaking 615
in color correction 612
indicator on nodes 612
of transformations 764
Conditional expression 941
Conditional statements 959
Connecting nodes 231, 232
Connect Shapes buttons 830
Console tab 26
Consolidated color correctors 636, 659
Const Point Display (Time View) 269
Constraint 205, 459, 542
and masking 542
function description 459, 542
Contextual help 17
Contextual menu (see Right-click menu) 369
ContrastLum
function description 640
usage described 636
ContrastRGB 640
function description 640
usage described 636
Control points
inserting in a curve 667
Converting formats 126
Convolve 865
example kernel 866
function description 865
parameter list 867
Copy 460
function description 460
modifying image channels with 416
CopyDOD macro 996
Copying nodes 239, 257
Copy parameter command 81
CornerPin 754, 773, 795
function description 795
setting up controls 388
Create Local Variable 81
Crop 182, 186, 773
scaling properties of 775
ct / ct16 files 171
Cursor 54
Curve Editor 25, 291–322
about 294
buttons 299
curve processing 309
editing HueCurves 672
loading and viewing curves 295
navigating in 298
placing into a Parameter Tab 386
right-click menu 298, 316
setting colors for 362
splitting panes 300
Curves
adding jitter 312
applying functions to 309
autoloading 296
averaging 313
cycle types 315, 320
deleting 303
in expressions 945–946
inserting a control point 667
loading and viewing 295
modifying 315
negating 313
resampling 314
reversing 312
scaling 311
shifting 265
smoothing 311
Curves Tab 667
Custom Entries
in Pixel Analyzer tool 631
Custom icons
adding to a button 913
locations for 355
Customization
alternate icons 375
creating on/off buttons 385
push-button toggles 384
radio buttons 384
Customizing Shake 355
Custom nodes
adding to Mask shape list 529
Cycle types (for curves) 320
D
D1 NTSC 94, 216
D1 PAL 95, 216
Dampening
modified by Lookup function 652
Data clouds 496
Default filter 863
Defaults
Color Picker default values 381
for formats 366
Node View zoom level 378
Parameters Tab 379
slider ranges 382
timecode modes 367
Default settings 356
Deflicker macro 991
Defocus 868
function description 868
parameter list 869, 881, 891
DeInterlace 205, 206
Delete key on Macintosh 18
Delete Keyframe button 71
DepthKey 704
function description 704
MayaDepthKey macro 704
parameter list 705
DepthSlice 705
function description 705
DeSpill macro 994
dib files 171
Digital negatives 438
DilateErode 685, 869
function description 869
in sample matte 685
Dirac filter 863
Disk-Based Flipbook 326, 339
Disk cache
preservation of 351
DisplaceX 807
function description 807
Display DOD and image border button 56
Do/While (scripting usage) 961
Domain of Definition 82–87
and rotoshapes 84
assigning 83
background color 85
color correcting outside of 619
DOD button 55
in the Viewer 67
SetDOD node 84
testing rendering times with 82
Viewer and display differences 67
ways to alter 83
dpx files 171
DropShadow 470
function description 470
DV footage
color space 697
keying 696
E
EdgeDetect 870
function description 870
Edge treatment 691
Edit Connections button 830
Edit Menu 34
Edit mode
painting 580
Edit Shapes button 830
Edit Shapes mode 547
Effects
applying to blue screen footage 694–696
Emboss 872
function description 872
modifying image channels with 416
Enhanced node view 221
Environment variables 393
Mac OS X 395
testing 399
Exit command 33
Expand 641
function description 641
usage described 636
Exporting
interlaced footage 203
Expressions 939
and Lookup function 652
arithmetic operators 941
channel variables 942
command-line usage 940
conditional 941
curve functions 945–946
examples of 940
for selecting nodes 231
global variables 941
image variables 941
in command-line mode 1030
logical operators 941
math functions 942
noise functions 943
precedence of operators 940
relational operators 941
string functions 945
trig functions 944
using with parameters 78
with ColorX 648
External monitor
customizing 330
F
Fade 642
function description 642
usage described 636
Favorites List (in Browser) 40
Favorite views 28, 220
restoring 29
FG/BG (Tracker) buttons 725
Field 205, 207
Field chart 63
Field dominance 198
Field rendering 195
settings 203
Fields
and filters 195
and JPEG files 204
and transforms 194
changing in command-line mode 1027
described 192
displaying in the Viewer 200
File Browser 38–45
Favorites list 40
opening 38
File formats 167–180
and temp files 169
command-line mode 1027
file sizes of 180
padding image filenames 167
supported 170
tracking 740
FileIn 110, 205
and time notation 125
and Time View 262
deinterlacing parameter 197
parameter list 111
proxy parameters 166
FileIn Trim 269
File management controls 44
File Menu 32
Filenames
conventions in this book 19
FileOut 334
FileOut node 333
File paths 109
conventions in this book 19
Files
naming 335
naming for output 44
selecting 41
sizes of 180
Film
anamorphic plates 209
and aspect ratio 211
and high-resolution images 130
using proxies 137
FilmGrain 872
Filters 861–891
and premultiplication 433
ApplyFilter 864
Blur 864
box 862, 863
characteristics 862
Convolve 865
default 863
defined 861
Defocus 868
DilateErode 869
dirac 863
EdgeDetect 870
Emboss 872
FilmGrain 872
gaussian 863
Grain 875
IBlur 879
IDefocus 880
IDilateErode 882
impulse 863
in command-line mode 1025
IRBlur 883
ISharpen 884
lanczos 863
masking 539, 861
masking versions 539
Median 885
Mitchell 862
Mitchell method 863
PercentBlur 885
Pixelize 886
quad 863
RBlur 886
Sharpen 888
sinc 863
sinc method 862
triangle 863
ZBlur 888
ZDefocus 890
Final Cut Pro
using with Shake 132
First frame 44
Fit 183
scaling properties of 776
Flip 796
Flipbook 90, 323–326
and QuickTime 326, 339
controls 324
customizing 326, 401
hot keys 1008
Launch Flipbook button 57
launching 90, 323
launching from the command line 324
launching in command-line mode 1026
memory requirements 325
rendering 339
temporary files 329
Viewer controls 325
Float bit depth 411
and Logarithmic color 444
and third-party plug-ins 449
explained 446
Float calculations 612
Flock macro 986
Flop 797
function description 797
Flush Cache command 33
Fonts
defaults for menus 369
setting paths 357
Footage
interlaced and non-interlaced 204
Foreground transparency
Keylight plug-in 708
Formats
converting 126
Four-point tracking 723
FrameFill macro 985
Frame range
setting in the Time Bar 88
Frame rate
increasing or decreasing 325
Frames
averaging, in command-line mode 1027
displaying in the Viewer 66
Frames/timecode button 56
Freeze repeat mode (playback) 267
Functions
Add 638
AddBorders 186
AddMix 453
AddShadow 470
AddText 456
AdjustHSV 659
and scripting 958
appending in command-line mode 1017
ApplyFilter 864
Atop 457
Blur 864
Brightness 638
Bytes 413
CameraShake 794
Checker 597
ChromaKey 703
Clamp 639
Color 598
ColorCorrect 661
ColorMatch 669
ColorReplace 671
ColorSpace 646
ColorWheel 599
ColorX 647
Common 458
Compress 639
Constraint 459, 542
ContrastRGB 640
Convolve 865
Copy 460
CornerPin 754, 795
Crop 186
declaring in expressions 941
Defocus 868
DeInterlace 206
DepthKey 704
DepthSlice 705
DilateErode 869
DisplaceX 807
DropShadow 470
EdgeDetect 870
Emboss 872
Expand 641
Fade 642
Field 207
FileOut 334
FilmGrain 872
Fit 183
Flip 796
Flop 797
formats of 959
Gamma 642
Grad 600
Grain 875
Histogram 677
HueCurves 672, 691
IAdd 461
IBlur 879
IDefocus 880
IDilateErode 882
IDisplace 808
IDiv 461
IMult 462
Inside 462
Interlace 205
Invert 643
IRBlur 883
ISharpen 884
ISub 463
ISubA 463
KeyMix 464
LayerX 465
LogLin 648
Lookup 650
LookupFile 653
LookupHLS 654
LookupHSV 655
LumaKey 709
Mask 540
MatchMove 740
Max 465
MDiv 656
Median 885
Min 465
Mix 466
MMult 656
Monochrome 643
Move2D 797–799
Move3D 799
Mult 644
MultiLayer 478
nested 958
Orient 801
Outside 466
Over 466
Pan 802
PercentBlur 885
PinCushion 814
Pixel Analyzer 631
Pixelize 886
PlotScanline 676
QuickPaint 579
QuickShape 572
Ramp 601
Rand 602
Randomize 814
RBlur 886
Reorder 657
Resize 184
RGrad 603
Rotate 802
RotoShape 546
Saturation 644
Scale 803
Screen 467
Scroll 804
Select 471
Set 657
SetAlpha 658
SetBGColor 658
SetDOD 804
Sharpen 888
Shear 805
Solarize 645
SpillSuppress 690, 715
Stabilize 745
SwapFields 207
SwitchMatte 467
Text 604
Threshold 645
Tile 609
TimeX 123
Tracker 750
Transition 270
Turbulate 815
Twirl 816
Under 468
VideoSafe 208
Viewport 187
Warper 807
WarpX 816
Window 189
Xor 468
ZBlur 888
ZCompose 469
ZDefocus 890
Zoom 185
Function tabs
hot keys 1008
G
Gamma 642
function description 642
usage described 636
Gamma/Offset/LogLin button 63
Gaussian filter 863
gif files 171
Global parameters 74, 91
Global variables 941
Grad 600
function description 600
Grain 875
function description 875
graphic example 877
parameter list 876
Graphs
of Lookup function 652
Green screen keys 682
Green spill 687
Grid Snap 244
Grip to desktop button 54
Grouping color sliders 663
Groups (nodes) 246
exposing parameters for 249
setting colors for 362
guiControls 98
H
Hard mattes 685
Hardware acceleration
in MultiPlane node 486
Hardware Rendering mode 486
Help 26, 863
contextual 17
in command-line mode 1019
Hermite splines 318
Hexadecimal
in Pixel Analyzer tool 631
Hide Others menu 32
Hide Shake 32
High-resolution images 130
Histogram 677
button 55, 64
examples of 677–678
function description 677
History Step
in QuickPaint 582
HLSToRGB 646
Holdout mattes 536, 683
Home directory 395
Hot keys 1005–1012
conventions for different platforms 19
for color channels 625
in the Viewer 68
QuickPaint 591, 1011
HSVToRGB 646
HSV values
illustrated 703
HTML help pages 370
HueCurves 672, 691
function description 672
usage described 637
I
I/O functions
command-line mode 1020
IAdd 461
combining with keyers 683
function description 461
math and LayerX syntax 452
IBlur 879
function description 879
parameter list 880, 882, 883, 884, 885
Iconify Viewer button 48, 54
Icons
custom (locations for) 355
customizing 375
search path 358
standard size 376
IDefocus 880
function description 880
IDilateErode 882
function description 882
IDisplace 808
function description 808
IDiv 461
function description 461
math and LayerX syntax 452
If/Else statements 960
iff files 171
Ignoring nodes 243
Image cache 349, 350
customizing 352
Image functions
command-line mode 1023
Images
absolute paths of 954
anamorphic 209
changing the number of channels 416
command-line functions 1015, 1016
high-resolution 130
input and output 107
interlaced 194
resizing in command-line mode 1027
saving 108, 333
unpremultiplying 426
viewing channels 415
with different color channels 414
Image sequences 108
Image variables 941
Importing
interlaced images 196
Photoshop files 32, 476
Impulse filter 863
IMult 462
combining with keyers 683
function description 462
math and LayerX syntax 452
Include files (for customization) 358, 907
Infinite duration
clips with infinite duration 268
Infinite Workspace 405–408
and color correction 617
and the Blur node 865
and transformations 797
disabling 408
Information controls
command-line mode 1021
InOut Point Display (Time View) 268
Inputs (nodes)
switching 235
Inside 462
combining with keyers 683
function description 462
math and LayerX syntax 452
Interface
assigning processors to 368
Curve Editor settings 362
customization directory location 357
customizing for Color Picker 627
custom palette 365, 624
devices and styles 400
Group settings 362
loading macros 917
node group colors 360
saving settings 33, 623
tab colors 359
Text color settings 363
Time Bar color settings 361
Time View color settings 364
Interface settings 358
Interlace 205
Interlaced images
common problems with 194
importing 196
Interlacing
exporting footage with 203
preserving 198
removing 199
Interleave (for keyframes) 307
Interpolating paint strokes 588
Invert 643
function description 643
in ColorCorrect 668
modified by Lookup function 652
usage described 637
invertMask 532
Invert Selection command 258
IRBlur 883
function description 883
IRIX
keyboard info 18
ISharpen 884
function description 884
Isometric display angles 489
ISub 463
function description 463
math and LayerX syntax 453
ISubA 463
function description 463
math and LayerX syntax 453
J
Jeffress splines 317
jfif files 172
Jitter (curve operation) 312
JPEG files 170, 172
and fields 204
K
Kernel
Convolve example 866
Keyboard commands
conventions in this guide 19
Keyboard shortcuts 1005–1012
for thumbnails 253
Keyers
ChromaKey 703
combining 683
LumaKey 709
Primatte 710
SpillSuppress 715
Keyframes 300–303
adding 300
adding blanks and duplicates 560
adding duplicates 293
animating parameters with 291
copying and pasting 314
delete button 71
deleting 292, 303
inserting for tracking 731
Manipulator Box 304
modifying 303
move modes 307
navigating in the Time Bar 294
selecting 302
text fields 305
toggling on and off 71
transform hot keys 305
Keyframing
rules for 293
shapes 558
Keying 681
defined 681
DV footage 696
edge treatment 691
from a green or blue screen 682
reflections 684
with ColorReplace node 682
Keylight
parameter list 709
Keylight plug-in 682, 706
KeyMix 464
example 464
function description 464
math and LayerX syntax 453
understanding its math 424
Keys
pulling 681
Key tab 682, 702
L
Labeling controls
command-line mode 1023
Lanczos filter 863
Launch Flipbook button 57
Layer nodes
for combining keys 686
Layers
attaching to camera and locator points 506
attaching to locator points 510
in Photoshop 476
masking 536
parameters 514
transforming in MultiPlane node 500
LayerX 465
function description 465
Layout controls 245
LensWarp node 811
LensWarp Viewer 812
Light Hardware mode 378
Linear color space
converting using LogLin 648
Linear drag mode 582
Linear Lookup
modified by Lookup function 653
Linear splines 319
Link cameras 498
Linking
nodes 251
parameters 79, 935
shapes 566
tracks 728
Linux
and audio playback 26
exiting Shake 33
keyboard info 18
overlay info hot key 325
Load/Save button 36
Loading
expressions 81
interface settings 33
tracks 728
Locator points 499
attaching layers to 510
editing 499
viewing and using 498
Lock Direction button 71
Locking
parameters 76
Lock Tangents button 569
Logarithmic color space 437–450
and float bit depth 444
converting using LogLin 648
correcting in 439
Logical operators 941
LogLin 439, 648
function description 648
parameter list 650
rolloff parameter 449
usage described 637
Lookup 650
function description 650
graphs and expressions 652
Lookup curve
example 653
in ColorCorrect 667
LookupFile 653
function description 653
usage described 637
LookupHLS 654
function description 654
usage described 637
LookupHSV 655
function description 655
usage described 637
Looping
QuickTime and still images 263
LumaKey 709
function description 709
Luminance
in YUV color space 697
M
Machine settings
directory location 357
Macintosh
and Delete key 18
keyboard info 18
setting environment variables 394
macroCheck 370
MacroMaker 907
image of 909
parameter list 909–910
Macros 907, 914
adding custom icons to 913
AEPreMult 989
AltIcon 998
attaching button toggles 924
attaching color pickers and subtrees 923
attaching parameter widgets 919
attaching pop-up menus 926
AutoFit 994
basic structure 914
Candy 997
ColorGrade 989
CopyDOD 996
creating on/off buttons for 922
creating the node structure 908
default width and height 366
Deflicker 991
DeSpill 994
directory location 357
examples 929
Flock 986
FrameFill 985
installing 357
loading into the interface 917
MakeNodeIcon 998
making 909
Manga 986
MayaDepthKey 704
MayaZ Depth 996
missing from a script 370
modifying 911
modifying the macro interface 913
opening 251
PreTrack 995
RadioButton 999
Rain 987
Ramp2D 987
RandomLetter 988
Relief 993
ScreenFloat 996
setting default values for 918
setting slider ranges 921
Slate 988
SpeedBump 996
Temp 992
text manipulation 930–933
typical errors 918
UnPin 985
VLUTButton 998
Wallpaper 1000
Wedge 1000
Magnet drag mode 582
Make Macro command 259
MakeNodeIcon macro 998
Manga macro 986
Manipulator Box (for keyframes) 304
Mask
command-line mode 540
command-line usage 541
function defined 540
parameter list 540
script 541
synopsis 541
Masking
a layer 536
controls in command-line mode 1022
defined 527, 681
filters 539, 861
for images with no alpha channel 536
with the Constraint node 542
Masks
and transform nodes 534
inverting 532
using different channels for 536
with concatenating nodes 533
Mask shape list
adding custom nodes to 529
Match case 231
MatchMove 740
function description 740
general description 717
workflow 721
Math
compositing 452
functions and definitions 942
Matrix
in ColorCorrect 662
Mattes
hard 685
holdout matte 683
touch-up tools 685
Max 465
combining with keyers 683
function description 465
math and LayerX syntax 453
Maya
file compatibility 173
importing Z channel info 704
Maya .ma files 494
Maya ASCII files 490
MayaDepthKey macro 704
MayaZ Depth macro 996
MDiv 656
function description 656
usage described 637
Media
specifying placement 44
Media formats
adding to Format menu 365
Median 885
function description 885
Memory
and the cache 349
Memory usage
with Warper and Morpher 821
Menus
(Mac OS X only) 32
adding functions to 370
default font sizes for 369
Edit 34
File 32
Render 35
Tools 34
Viewers 35
Min 465
function description 465
math and LayerX syntax 453
Min/Max Basis
in Pixel Analyzer tool 631
Mirror repeat mode 267
Misc tab 668
Mitchell filter 862, 863
Mix 466
function description 466
math and LayerX syntax 453
Mixdown 288
mixPercent 273
MMult 656
function description 656
usage described 637
Monitor controls 101
Monitors
aspect ratio 330
calibrating 331
extra Viewers for 50
setting up dual monitors 20, 400
Monochrome 643
function description 643
modifying image channels with 416
usage described 637
Morpher
defining boundary shapes 843
function description 854
Morpher node 854
controls and parameters 855
Morphing
tips 854
Motion blur 778–781
Mouse
functions described 18
Move2D 763, 769, 771, 775, 797–799
function description 797
Move3D 772, 799
function description 799
Moving nodes 240
mray files 171
Mult 644
function description 644
modifying image channels with 416
usage described 637
MultiLayer 478
button control 481
function description 479
with Photoshop files 476
MultiLayer node 473
Multi-Pane Viewer 488
MultiPlane node 485
about 485
angle controls 503
animating layers 506
default camera 491
hardware acceleration 486
layer controls 500
layer hierarchies 505
linking cameras 496
parameters 487
parenting layers 505
scale controls 504
transforming layers 500
viewer shelf controls 491
Muting audio tracks 282
N
Naming files 335
Natural splines 316
Negate (curve operation) 313
Nested functions 958
Nodes
Add 638
AddBorders 186
AddMix 453
AddShadow 470
AddText 456
AdjustHSV 659
aligning 246
ApplyFilter 864
Atop 457
Blur 864
Brightness 638
Bytes 413
CameraShake 794
Checker 597
ChromaKey 703
Clamp 639
cloning 252, 936
Color 598
ColorCorrect 661
ColorMatch 669
ColorReplace 671
ColorSpace 646
ColorWheel 599
ColorX 647
Common 458
Compress 639
connecting 231, 232
Constraint 459, 542
ContrastRGB 640
Convolve 865
Copy 460
copying 239, 257
copying and pasting 239
CornerPin 754, 795
creating 226
creating multiples with one button 377
Crop 186
cutting 257
Defocus 868
DeInterlace 206
deleting 238, 257
DepthKey 704
DepthSlice 705
DilateErode 869
disconnecting 239
DisplaceX 807
DropShadow 470
EdgeDetect 870
Emboss 872
enhanced view 221
Expand 641
extracting 238
Fade 642
Field 207
FileOut 334
FilmGrain 872
finding 34, 258
Fit 183
Flip 796
floating 237
Flop 797
Gamma 642
Grad 600
Grain 875
grouping 246
Histogram 677
HueCurves 672, 691
IAdd 461
IBlur 879
IDefocus 880
IDilateErode 882
IDisplace 808
IDiv 461
ignoring 243
IMult 462
inserting 236
Inside 462
Interlace 205
Invert 643
IRBlur 883
ISharpen 884
ISub 463
ISubA 463
KeyMix 464
LayerX 465
LensWarp 811
linking 251
loading into a Viewer 240
loading parameters 241
LogLin 648
Lookup 650
LookupFile 653
LookupHLS 654
LookupHSV 655
LumaKey 709
match case 231
MatchMove 740
Max 465
MDiv 656
Median 885
Min 465
Mix 466
MMult 656
Monochrome 643
Morpher 854
Move2D 797–799
Move3D 799
moving 240
Mult 644
MultiLayer 473, 478
MultiPlane 485
organizing 244
Orient 801
Outside 466
Over 466
Pan 802
pasting 239
PercentBlur 885
PinCushion 814
Pixel Analyzer 631
Pixelize 886
PlotScanline 676
QuickPaint 579
QuickShape 572
Ramp 601
Rand 602
Randomize 814
RBlur 886
renaming 243
Reorder 657
replacing 237
Resize 184
RGrad 603
Rotate 802
RotoShape 546
Saturation 644
Scale 803
Screen 467
Scroll 804
Select 471
select by expression 231
select by name 231
select by type 231
selecting and deselecting 228
selecting downstream 258
selecting upstream 258
Set 657
SetAlpha 658
SetBGColor 658
SetDOD 804
setting interface colors for 360
Sharpen 888
Shear 805
Solarize 645
SpillSuppress 690, 715
Stabilize 745
SwapFields 207
switching inputs 235
SwitchMatte 467
Text 604
Threshold 645
thumbnails 253
Tile 609
TimeX 123
Tracker 750
Transition 270
Turbulate 815
Twirl 816
Under 468
ungrouping 247
VideoSafe 208
Viewport 187
Warper 807, 845
WarpX 816
Window 189
Xor 468
ZBlur 888
ZCompose 469
ZDefocus 890
Zoom 185
Node View
and Tool Tabs 219
contextual menu 257
customizing 378
hot keys 1009
Overview 219
setting default zoom level 378
Noise functions 943
Noodles 217
color 102
color coding 223
disconnecting 239
tension 99
nreal.h file 68, 356, 866
nri files 171
nrui.h file 356
NTSC 94, 216
O
Offset
setting up controls 390
tracking 723–724
using AdjustHSV 659
Offset Track button 724
On/Off buttons
creating 385
creating in macros 922
OpenEXR 176
auxiliary data channels 175
OpenGL hardware acceleration 486
Open Script command 32
Orient 801
function description 801
Out Points (clips)
about 268
Outside 466, 536
combining with keyers 683
function description 466
math and LayerX syntax 453
Over 466
combining with keyers 683
function description 466
math and LayerX syntax 453
understanding its math 424
P
Padding (when naming image files) 167
Paint
tools 580
Paint brush 581
Painting (see QuickPaint) 579
Paint mode 580
Paint strokes
attaching to a tracker 586
converting from Frame to Persistent 589
interpolating 588
modifying 583
modifying parameters 587
PAL 95, 216
Palettes
custom 365, 624
pal files 173, 177
Pan 769, 802
function description 802
Panning controls
setting up 387
Parameters
animating in command-line mode 1028
editing 74
grouping in a subtree 382
linking 79, 935
linking at different frames 937
locking 76
viewing in grouped nodes 249
Parameters Tab
adding a Curve Editor to 386
right-click menu 81
setting defaults 379
Parameters tabs 25
Parameters View 72
Parameter widgets
attaching to macros 919
Parent/Child relationships for shapes 566
Parenting layers 505
Pasting nodes 239, 257
Paths 109
and FileIn/SFileIn 112
Pausing animation 293
pbm/ppm/pnm/pgm files 172
Peak Meter 281
PercentBlur 885
function description 885
Per-channel view 65
Persistence controls 297
Persist toggle 582
Perspective angle 490
Photoshop
importing images from 32
layering modes 476
Photoshop files 473
Photoshop transfer modes 476
pic files 172
PinCushion 814
function description 814
Pixel Analyzer 26, 631
saving data 632
Pixel Analyzer tool
Accumulate 630
Custom Entries 631
Hexadecimal 631
Min/Max Basis 631
Mode 630
Reset 630
Value Range 631
Pixelize 886
function description 886
pix files 171
Piz compression 176
Platforms 15
Playback rate
subparameters 285
PlotScanline 676
button 55, 63
examples of 675–676
function description 676
using to understand color correction 674
Plug-ins
Keylight 706
Primatte 710
png files 172
Point cloud 498
Point controls
setting up 391
Point modes for RotoShapes 564
Points
contextual menu 568
modifying on a paint stroke 584
Pop-Up Color Palette 626
Pop-up menus 77
adding 383
attaching to macros 926
default font sizes for 369
PostScript fonts 604
Precedence
in expressions 940
Preference files
creating your own 356
load order 358
locations for 357
troubleshooting 359
Preferences 355
color 359
environment variables 393
general 359
templates directory 392
Viewers 386
Premultiplication
and 3D renders 616
and filters 433
explained 421
managing 431
typical problems 422
with Over 433
PreTrack macro 995
Previewing audio 280
Primatte plug-in 682, 710
3D polyhedron 714
arithmetic operator 683
assigning colors 711
supplying the background image 710
using the arithmetic parameter 686
Processing cache 350, 351
Processors
assigning to interface 368
Proxies 137
and offline images 148
and YUV files 156
changing preset defaults 144
compatibility with other functions 163
customizing the presets 143
defined 137
interactiveScale 139
network rendering 157
parameters list 164
pregenerating 150
pregenerating with a script 160
remastering resolutions with 185
rendering on the command line 153
setting 141
Proxy button 37
proxyRatio 210–211
psd files (see Photoshop) 172
Pulldown 116
Pulldown / Pullup 116
Pullup 116
Purge Memory Cache command 33
Push (for keyframes) 308
Push-button toggles
creating 384
PXR 24 compression 176
Q
Qmaster
support 339
qnt files 173
qtl files 173, 177
Quad filter 863
Quality controls
command-line mode 1022
QuickPaint 579
active channels 582
attaching a tracker 586
deleting strokes 584
Edit mode 580
Frame mode 582
function parameters 591
general description 579
History Step button 582
hot keys 591, 1011
interpolating strokes 588
Interpolation mode 582
Linear drag mode 582
Magnet drag mode 582
modifying strokes 583
Paint mode 580
parameters list 594
Persist mode 582
picking a color 581
resolution 579
StrokeData synopsis 594
stroke shapes 584
QuickShape 572
function description 572
QuickShapes
animating 575
Build mode 572
creating 572
modifying 573
QuickTime
and audio 277
changing the default configuration for 392
playback controls 329
rendering 336
trimming and looping 263
QuickTime files 172, 177
and Disk-Based Flipbooks 326, 339
Quit 32
R
RadioButton macro 999
Radio buttons
creating 384
Radius controls
setting up 392
Rain macro 987
RAM
for Flipbook playback 325
limits of 350
Ramp 601
function description 601
Ramp2D macro 987
Rand 602
function description 602
Randomize 814
function description 814
RandomLetter macro 988
Random noise
using ColorX expressions 648
Range 666
raw files 172, 177
RBlur 886
function description 886
Real-time playback 325
Re-center image 325
Recover Script 33
Red channel
correction with ColorX expressions 648
Redo 34, 257
Reference pattern 721
Reflections
keying 684
Relational operators 941
Relative Path control 40
Relief macro 993
Reload Script 32
Remap parameters 122
Renaming nodes 243
Render
command-line mode 1021
Disk Flipbooks 327
FileOut Nodes 35
Flipbook 35, 257
renderCamera angle 490
Render Disk Flipbook 35, 257
Rendered images
color correcting with MDiv 656
Render File menu 339
Rendering 333
field rendering 195
field rendering settings 203
from the command line 336
parameters 337
Render Menu 35
Render mode 486
Render Parameters window 337
Render Proxies 35
RenderQueue options 401
Render Selected FileOuts 257
Renumbering clips
command-line mode 1017
Reorder 657
function description 657
modifying image channels with 416
usage described 637
Reordering
in ColorCorrect 668
using ColorX expressions 648
Reordering shapes 556
Repeat clips 266
Repeat mode 267
Replace (for keyframes) 308
Resample (curve operation) 314
Reset Track button 724
Reset View 257
Reset Viewer button 56
Resize 183, 184
scaling properties of 776
Resizing
images 181
in command-line mode 1025
Resolution 180–185, 579
and QuickPaint 579
changing 181
functions for modifying 183
of Viewers 50
remastering with proxies 185
setting for the Viewer 366
Restoring
favorite views 29
Retiming 117
parameters for 123
Retiming RotoShapes 561
Reveal brush 581
Reverse (curve operation) 312
Reversing a clip 269
rgb files 172, 177
RGBToCMY 646
RGBToHLS 646
RGBToHSV 646
RGBToYIQ 646
RGBToYUV 646
RGrad 603
function description 603
Right-click
in the Viewer 50
Right-click menu
Clear Expression 81
Clear Tab 81
control points 555
Curve Editor 298, 316
Node View 257
Parameters tab 81
RotoShapes 555
Tracking 728
transform controls 567
Viewer Lookup Table 62
Right-click menu (see Contextual menu)
adding functions to 369
rla files 172, 177
RLE compression 176
Rotate 771, 802
function description 802
Rotate controls
setting up 390
RotoShape 546
Add Shapes mode 547
parameter list 570
RotoShape keyframes
cutting and pasting 559
RotoShapes
Add Shapes mode 547
animating 557
creating and modifying 549
Edit Shapes mode 547
parameter list 570
Point modes 564
retiming 561
skeleton relationships 566
transform controls 567
Viewer buttons 568
rpf files 172, 177
S
Sample (color) From Viewer 621
Saturation 644
function description 644
usage described 637
Saving
expressions 81
interface settings 33
Pixel Analyzer data 632
scripts 32
track files 728
Saving tracks 739
Scale 769, 803
function description 803
scaling properties of 775
Scale (curve operation) 311
Scaling
and transformations 775
functions compared 775
in MultiPlane node 504
setting up controls for 387
Screen 467
function description 467
math and LayerX syntax 453
ScreenFloat macro 996
Scripting
commands 928
conditional statements 959
controls 952
if/else statements 960
Script manual 928–933, 951
Scripts
and functions 958
commenting 959
data types 956
described 951
do/while 961
in command-line mode 1020
loading and saving 36
loading into the interface 953
missing macros 370
nested functions 958
recovering 33
variables and data types 953
while 960
Scroll 804
function description 804
Scrubbing
with ChromaKey 703
Search path
for icons 358
Search region
scaling 722
Search region (Tracker) 721
Select 471
function description 471
Selecting files 41
Selecting nodes 228
Send to Shake 132
Sequences
images 108
Sequence timing controls 264
Services menu 32
Set 657
function description 657
modifying image channels with 417
usage described 637
SetAlpha 658
function description 658
usage described 637
SetBGColor 658
function description 658
usage described 637
SetDOD 804
function description 804
scaling properties of 776
Settings
color 359
Curve Editor colors 362
environment variables 393
general 359
Group colors 362
Text colors 363
Time Bar colors 361
Time View colors 364
SFileIn 110
and retiming 117
sgi files 170, 172, 177
sgiraw files 172, 177
Shake
customizing 355
Reference Guide conventions 19
supported platforms 15
user interface 24–31
Shape data
importing and exporting 567
Shapes
attaching trackers 562
bounding boxes 555
changing color of 555
copying and pasting between nodes 556
drawing and editing 832
drawing with the RotoShape node 548
editing 550
keyframing 558
linking 566
locking tangents 556
reordering 556
showing and hiding 556
timing 560
Shapes (see RotoShapes) 546
Sharpen 888
Shear 805
function description 805
Shift Curves 265
and timing changes 265
Sinc filter 862, 863
Skeleton relationships for shapes 566
Slate macro 988
Slider ranges
setting in macros 921
setting up 382
Smooth (curve operation) 311
SmoothCam node 759
Smoothing
curves 311
tracks 728
Smudge brush 581
Software Rendering mode 487
Solarize 645
function description 645
usage described 637
Soloing audio tracks 282
Sound files (see Audio)
extracting curves from 285
Spatial filter 861
Spawn Viewer Desktop 35, 50
SpeedBump macro 996
SpillSuppress 690, 715
function description 715
parameter list 715–716
Spill suppression 681, 687
Spline Lock toggles 832
Spline Lookup
modified by Lookup function 653
Splines 316–319
Cardinal 317
creating and modifying 830
Hermite 318
Jeffress 317
linear 319
natural 316
step 319
Squeezed (anamorphic) images 210
compositing with square pixel images 212
rendering 215
Stabilize 745
function description 745
general description 717
workflow 721
Startup directory 357, 906
Step splines 319
Stipple patterns in Node view 365
Stitching images 784
String functions 945
Stylus 19, 400
navigating windows 20
subPixelResolution
for tracking 719
Support
for Qmaster 339
SwapFields 205, 207
SwitchMatte 467
function description 467
modifying image channels with 417
T
Tablet 19
Tablet usage 75
Tabs
arranging 30
attaching functions to buttons 376
color settings 359
setting up node columns 375
Tangents
editing 306
locking 556
Targa files 170
tdi files 172
tdx files 173
Template preference files 392
Temp macro 992
Temporary files 168
Text 604
cool command-line tricks 1026
function description 604
manipulating in macros 930–933
setting colors for 363
tga files 173
Threshold 645
function description 645
usage described 637
Thumbnails 253
keyboard shortcuts 253
tiff files 173
Tile 609
function description 609
Tiling with a macro 933
Time Bar 88, 292
frame range settings 371
setting colors for 361
Time Bar area 25
Timecode
default mode 371
displaying in the Viewer 66
setting default modes 367
Time range
command-line mode 1017
Time shift 284
Time View 261
setting colors for 364
Viewing nodes in 262
Time view 261
TimeX 123
function description 123
Title Bar 31
TMV color space 664
Toggles
creating 384
Tools Menu 34
Tool Tabs
and Node view 219
described 25
Key 702
Tool tabs
customizing 375
Track Backward/Track Forward buttons 724
Track Display button 725
Tracker 750
attaching to a paint stroke 586
function description 750
general description 717
how it works 718
reference pattern 721
search region 721
Trackers
attaching to shape control points 563
attaching to shapes 562
Track gain 285
Tracking 717–754
3D 485
and Cineon files 731
file format 740
linking to track data 737
manually inserting keyframes 731
off-frame points 732
onscreen controls 721
paint strokes 586
parameters 725–728
reference pattern 729
removing Jitter 737
right-click menu 728
strategies 728
two-point 738
workflow 720
Tracking data 494
trackRange parameter 725
Tracks
averaging 728, 734
clearing 728
linking 728
loading 728
modifying 733
muting and soloing 282
saving 728, 739
smoothing 728
smoothing curves of 735
Transform
controls for RotoShapes 567
functions in command-line mode 1025
Transformations 763–777
concatenation of 764
inverting 766
multiple transforms in a tree 774
onscreen controls 766
Transform controls 769
Transition 270
creating your own 273
function description 270
Transitions
creating your own 273
Triangle filter 863
Trig functions 944
Trim controls 269
Trimming
QuickTime and still images 263
Truelight 331
TrueType fonts 604
Turbulate 815
function description 815
Turbulent noise 648
Tweaker windows 74
Twirl 816
function description 816
Two-point tracking 738
U
ui.h file 627
ui directory 358, 906
UNC filename convention 372
Under 468
function description 468
math and LayerX syntax 453
Undo 34, 257
changing levels of 37
setting levels 368
Undo/Redo button 36
Ungroup 247
UnPin macro 985
Unpremultiplying 426
Update button 37
User Directory 357
User interface 24–31
Utility correctors 635, 646–659
V
Value Range
in Pixel Analyzer tool 631
Variables 928, 929, 937
adding to the interface 936
environment 393
for channels 942
image variables in each node 941
recognized by Shake 398
Video 191
and aspect ratio 210
aspect ratio 215
common problems 194
field functions in command-line mode 1025
field rendering settings 203
fields described 192
functions 205
importing interlaced images 196
timecode display 66
Video functions
Constraint 205
DeInterlace 205
Field 205
FileIn 205
Interlace 205
SwapFields 205
VideoSafe 205
VideoSafe 205, 208, 659
usage described 637
Viewer
cached playback 323
contextual menus 69
Warper and Morpher 830
Viewer buttons 52, 54–57
Viewer Channel button 53
Viewer controls
for the Flipbook 325
Viewer DOD 61
Viewer DOD button 55
Viewer lookups 61
Viewers 45–70
creating 46
deleting 50
expanding 48
hot keys 68
loading images into 50
minimizing 48
Multi-Pane 488
on second monitors 50
preferences 386
resolution of 50
selecting 47
setting max resolution 386
Viewer script controls
activating 62
Viewer scripts 61
creating your own 68
Viewers Menu 35
Viewing
audio 283
Viewing controls
command-line mode 1021
Viewport 187
for cropping 182
function description 187
scaling properties of 775
Views
enhanced node view 221
favorites 220
saving favorites 28
Virtual Color Picker 625, 663
Visibility controls 297
VLUTButton macro 998
VLUTs 61
activating 62
creating your own 68
using to simulate logarithmic space 66
W
Wacom tablet 75
Wallpaper macro 1000
Warper 807
buttons 830
parameters 846
shape controls 843
Warper node 845
Warps
general description 807
WarpX 816
function description 816
Wav audio files 277
Websites
Shake resources 18
Wedge macro 1000
Wedging 442
While (scripting usage) 960
Wildcards 43, 1030
Window 189
for cropping 182
function description 189
scaling properties of 775
Windows
OS functions 31
panning 28
zooming 28
X
Xor 468
function description 468
math and LayerX syntax 453
xpm files 173
Y
YCrCb defined 697
YIQToRGB 646
yuv files 173
defined 697
YUVToRGB 646
Z
ZBlur 888
function description 888
parameter list 889–890, 891
Z channel
blurring with ZBlur 888
button 56, 64
defocusing 890
display properties 414
for DepthKey function 704
for DepthSlice function 705
inverting 643
multiplying 644
placing in RGB channels with Reorder 657
ZCompose 469
function description 469
math and LayerX syntax 453
ZDefocus 890
function description 890
ZIP compression 176
Zoom 183, 185
scaling properties of 776
Zoom in 257, 325
Zooming
on interlaced images in the Viewer 202
Zoom out 257, 325
Apple Wireless Keyboard
1 Setting Up Your Apple
Wireless Keyboard
Congratulations on selecting the Apple Wireless Keyboard as
your input device.
Using the Wireless Keyboard
The information in this booklet supplements the setup instructions in the user’s guide
that came with your Mac. Follow the steps on the next several pages to:
 Install batteries in your keyboard.
 Set up your Mac.
 Use Setup Assistant to set up your keyboard with your Mac.
 Use Software Update to install the latest software.
Don’t turn on your keyboard until you start up your Mac in Step 3.
Important: Keep the battery compartment cover and the batteries out of the reach of
small children.
Step 1: Install the Batteries
Follow the instructions below to install batteries in your Apple Wireless Keyboard.
To install batteries in the keyboard:
1 Use a coin to remove the battery compartment cover.
2 Slide the batteries into the battery compartment as shown below.
3 Replace the battery compartment cover and leave the keyboard turned off until you
start up your Mac in Step 3.
Note: When the Power On light is off, the keyboard is off.
Step 2: Set Up Your Mac
Follow the instructions in the user’s guide that came with your Mac to set it up.
Because you have a wireless keyboard, skip the instructions to connect a USB keyboard.
Wait to start up your Mac until instructed to do so in Step 3.
[Figure: Battery compartment cover; Insert batteries]
Step 3: Pair Your Keyboard
Before you can use your keyboard, you have to pair it with your Mac. Pairing allows
your keyboard to communicate wirelessly with your Mac. You only have to pair once.
The first time you start up your Mac, Setup Assistant guides you in setting up your
Apple Wireless Keyboard and pairing it with your Mac.
1 Push and release the On/off (®) switch to turn on the Apple Wireless Keyboard.
2 Turn on your Mac.
3 When your Mac starts up, follow the onscreen instructions in Setup Assistant.
Step 4: Install Software
To use your keyboard and take advantage of the full range of features, you need to
update your Mac to Mac OS X v10.4.10 or later and install the keyboard software
update.
To update to the latest version of Mac OS X and install the keyboard software update,
choose Apple () > Software Update from the menu bar and follow the onscreen
instructions.
[Figure: Power On light; On/off switch]
Using Your Keyboard
Use the keys at the top of your keyboard to adjust the brightness of your display, open
Exposé, view Dashboard widgets, control volume, and more.
Decrease ( ) or increase ( ) the brightness of your display.
Use Exposé All Windows to see all of the open windows on your desktop at once.
Open Dashboard to access your widgets and get information about the weather,
stocks, and more.
] Rewind or go to the previous song, movie, or slideshow.
’ Play or pause songs, movies, or slideshows.
‘ Fast-forward or go to the next song, movie, or slideshow.
— Mute the sound coming from the speakers or headphone port on your computer.
– - Decrease (–) or increase (-) the volume of sound coming from the speakers or
headphone port on your computer.
C Press and hold the Media Eject key to eject a disc.
Customizing Your Keyboard
You can customize your keyboard using the Keyboard pane of Keyboard & Mouse
preferences.
To customize your keyboard:
1 Choose Apple () > System Preferences.
2 Click Keyboard & Mouse.
3 Click Keyboard or Keyboard Shortcuts.
Click Keyboard Shortcuts to assign shortcuts to menu commands in a Mac OS X
application or in the Finder.
More information about your keyboard is available in Mac Help. Open Mac Help and
search for “keyboard.”
Renaming Your Keyboard
Your Mac automatically gives your wireless keyboard a unique name the first time you
pair it. You can rename your keyboard using Keyboard & Mouse preferences. Choose
Apple () > System Preferences and click Keyboard & Mouse. Click the Bluetooth® tab
and enter a name in the Name field.
Cleaning Your Keyboard
Follow these guidelines when cleaning the outside of your keyboard:
 Remove the batteries from the keyboard.
 Use a damp, soft, lint-free cloth to clean the exterior of the keyboard. Avoid getting
moisture in any openings.
 Don’t use aerosol sprays, solvents, or abrasives.
About Your Batteries
Your Apple Wireless Keyboard comes with three alkaline batteries. You can use alkaline,
lithium, or rechargeable AA batteries in your keyboard.
You can use Keyboard & Mouse preferences to check the battery level. Choose
Apple () > System Preferences. Click Keyboard & Mouse and click Bluetooth.
Note: To conserve battery power, turn your keyboard off when you aren’t using it. If
you don’t plan to use your keyboard for an extended period, remove the batteries.
Dispose of batteries according to your local environmental laws and guidelines.
Ergonomics
For information about ergonomics, health, and safety, visit the Apple ergonomics
website at www.apple.com/about/ergonomics.
Support
For support and troubleshooting information, user discussion boards, and the latest
Apple software downloads, go to www.apple.com/support.
WARNING: When you replace the batteries, replace them all at the same time. Don’t
mix old batteries with new batteries or mix battery types (for example, don’t mix
alkaline and lithium batteries). Don’t open or puncture the batteries, install them
backwards, or expose them to fire, high temperatures, or water. Keep batteries out of
the reach of children.
2 Setting Up Your Apple Wireless Keyboard
Congratulations on your purchase of the Apple Wireless Keyboard.
Using the Wireless Keyboard
This booklet supplements the setup instructions in the user’s guide that came with your Mac. Follow the instructions on the following pages to:
 Install the batteries in the keyboard.
 Set up your Mac.
 Use Setup Assistant to set up your keyboard with your Mac.
 Use Software Update to install the latest software.
Don’t turn on your keyboard until you start up your Mac in Step 3.
Important: Keep the battery compartment cover and the batteries out of the reach of children.
Step 1: Install the Batteries
Follow the instructions below to install the batteries in your Apple Wireless Keyboard.
To install the batteries in the keyboard:
1 Use a coin to remove the battery compartment cover.
2 Place the batteries in the compartment as shown below.
3 Replace the battery compartment cover and leave the keyboard turned off until you start up your Mac in Step 3.
Note: When the Power On light is off, the keyboard is off.
[Figure: Battery compartment cover; Insert the batteries]
Step 2: Set Up Your Mac
Set up your Mac by following the instructions in the user’s guide that came with it. Because you have a wireless keyboard, the instructions for connecting a USB keyboard do not apply to you.
Don’t start up your Mac until you are told to do so in Step 3.
Step 3: Pair Your Keyboard
Before you can use your keyboard, you must pair it with your Mac. Pairing allows your keyboard to communicate wirelessly with your Mac. This operation only needs to be done once.
The first time you start up your Mac, Setup Assistant guides you through setting up your Apple Wireless Keyboard and pairing it with your Mac.
1 Press the On/off switch (®) to turn on the Apple Wireless Keyboard.
2 Turn on your Mac.
3 Then follow the onscreen instructions in Setup Assistant.
Step 4: Install Software
To use your keyboard and take full advantage of all its features, you must update your Mac to Mac OS X 10.4.10 or later and install the keyboard software update.
[Figure: On/off switch; Power On light]
To update to the latest version of Mac OS X and install the keyboard software update, choose the Apple menu () > Software Update from the menu bar and follow the onscreen instructions.
Using Your Keyboard
Use the keys at the top of your keyboard to adjust the brightness of your display, open Exposé, view Dashboard widgets, control the volume, and much more.
Decrease ( ) or increase ( ) the brightness of your display.
Use Exposé All Windows to see all the windows open on your desktop at once.
Open Dashboard to access all the widgets and get information about the weather, the stock market, and much more.
] Rewind or go to the previous song, movie, or slideshow.
’ Play or pause songs, movies, or slideshows.
‘ Fast-forward or go to the next song, movie, or slideshow.
— Mute the sound coming from the speakers or the headphone port on your computer.
– - Decrease (–) or increase (-) the volume of the sound coming from the speakers or the headphone port on your computer.
C Press and hold the Media Eject key to eject a disc.
Customizing Your Keyboard
You can customize your keyboard using the Keyboard pane of Keyboard & Mouse preferences.
To customize your keyboard:
1 Choose Apple () > System Preferences.
2 Click Keyboard & Mouse.
3 Click Keyboard or Keyboard Shortcuts.
Click Keyboard Shortcuts to assign shortcuts to commands in a Mac OS X application or in the Finder.
Additional information about your keyboard is available in Mac Help. Open Mac Help and search for “keyboard.”
Renaming Your Keyboard
The first time the wireless keyboard is paired, your Mac automatically gives it a unique name. You can change this name in Keyboard & Mouse preferences. Choose the Apple menu () > System Preferences and click Keyboard & Mouse. Then click Bluetooth and enter a new name in the corresponding field.
Cleaning Your Keyboard
Follow these guidelines when cleaning the outside of your keyboard:
 Remove the batteries from the keyboard.
 Use a slightly damp, soft cloth to clean the exterior of your keyboard. Keep moisture from getting into any openings.
 Don’t use aerosol sprays, solvents, or abrasives.
About the Batteries
The Apple Wireless Keyboard comes with three alkaline AA batteries. You can also use alkaline, lithium, or rechargeable AA batteries.
To check the battery level, see Keyboard & Mouse preferences. Choose the Apple menu () > System Preferences. Click Keyboard & Mouse, then click Bluetooth.
Note: To conserve battery power, turn off your keyboard when you aren’t using it. If you don’t plan to use your keyboard for an extended period, remove the batteries.
Follow your local laws and guidelines when disposing of batteries.
WARNING: When you change the batteries, replace them all at the same time. Don’t mix new batteries with old ones, and don’t mix battery types (for example, alkaline and lithium batteries). Don’t open or puncture the batteries, don’t install them backwards, and don’t expose them to fire, high temperatures, or water. Keep them out of the reach of children.
Ergonomics
For information about ergonomics, health, and safety, visit the Apple ergonomics website: www.apple.com/fr/about/ergonomics
Support
For support and troubleshooting information, discussion forums, and the latest Apple software downloads, go to www.apple.com/fr/support.
3 Setting Up Your Apple Wireless Keyboard
Congratulations on choosing the Apple Wireless Keyboard as your input device.
Using the Wireless Keyboard
The information in this guide supplements the setup instructions included in the manual that came with your computer. Follow the steps detailed on the following pages to:
 Install the batteries in the keyboard.
 Set up your Mac.
 Use Setup Assistant to set up the keyboard with your Mac.
 Use Software Update to install the latest software.
Don’t turn on the keyboard until you start up the computer in Step 3.
Important: Keep the battery compartment cover and the batteries out of the reach of children.
Step 1: Install the Batteries
Follow the instructions below to insert the batteries in the Apple Wireless Keyboard.
To install the batteries in the keyboard:
1 Using a coin, remove the battery compartment cover.
2 Insert the batteries into the compartment as shown in the illustration.
3 Replace the battery compartment cover and leave the keyboard turned off until you start up the computer in Step 3.
Note: When the power light is off, the keyboard is off.
[Figure: Battery compartment cover; Insert the batteries]
Step 2: Set Up Your Mac
To set up your computer, follow the instructions in the user manual that came with your Mac. Because you have a wireless keyboard, you don’t need to read the instructions for connecting a USB keyboard.
Don’t start up the Mac until you are asked to do so in Step 3.
Step 3: Pair Your Keyboard
Before you can use the keyboard, you must pair it with your Mac. Pairing allows the keyboard to communicate wirelessly with the computer. This operation only needs to be carried out once.
The first time you start up the computer, Setup Assistant guides you through the steps needed to set up the Apple Wireless Keyboard and pair it with your Mac.
1 To turn on the Apple Wireless Keyboard, press the On/off switch (®).
2 Turn on the computer.
3 When the system has started up, follow the Setup Assistant instructions that appear on screen.
[Figure: Power On light; On/off switch]
Step 4: Install Software
To use your keyboard and get the most out of all its features, you must update your system to Mac OS X version 10.4.10 or later and install the keyboard software update.
To update your system to the latest available version of Mac OS X and install the keyboard software update, choose Apple () > Software Update from the menu bar and follow the onscreen instructions.
Using the Keyboard
Use the keys in the top row of the keyboard to adjust the screen brightness, open Exposé, view Dashboard widgets, control the computer’s volume, and much more.
Decrease ( ) or increase ( ) the screen brightness.
Use the Exposé All Windows key to see all the windows open on the desktop at once.
Open Dashboard to access widgets and get information about stock prices, check the weather forecast, and much more.
] Rewind the current playback or go to the previous song, movie, or slideshow.
’ Play or pause songs, movies, or slideshows.
‘ Fast-forward the current playback or go to the next song, movie, or slideshow.
— Mute the sound from the computer’s speakers or headphone port.
– - Decrease (–) or increase (-) the volume of the sound from the speakers or headphone port.
C Press and hold the Media Eject key to eject a disc.
Customizing the Keyboard
You can customize the keyboard using the Keyboard pane of Keyboard & Mouse preferences.
To customize the keyboard:
1 Choose Apple () > System Preferences.
2 Click Keyboard & Mouse.
3 Click Keyboard or Keyboard Shortcuts.
Click Keyboard Shortcuts to assign key combinations to menu commands in a Mac OS X application or in the Finder.
You will find more information about the keyboard in Mac Help. Open Mac Help and search for “keyboard.”
Renaming the Keyboard
The Mac automatically assigns a unique name to the wireless keyboard the first time it is paired. You can change this name in Keyboard & Mouse preferences if you wish. To do so, choose Apple () > System Preferences and click Keyboard & Mouse. Click the Bluetooth® tab and enter a new name in the Name field.
Cleaning the Keyboard
Follow these instructions to clean the outside of the keyboard:
 Remove the batteries from the keyboard.
 Use a soft, damp cloth to clean the exterior of the keyboard. Keep water and moisture from getting into any openings.
 Don’t use aerosol sprays, solvents, or abrasive cleaners.
About the Batteries
The Apple Wireless Keyboard comes with three alkaline batteries. You can use alkaline, lithium, or rechargeable AA batteries.
WARNING: When the batteries need replacing, always replace them all, and don’t mix new batteries with old ones or mix different battery types (for example, don’t mix alkaline and lithium batteries). Don’t try to open or puncture the batteries, don’t install them backwards, and keep them away from fire, high temperatures, and water. Keep batteries out of the reach of children.
You can check the battery level in Keyboard & Mouse preferences. To do so, choose Apple () > System Preferences. Click Keyboard & Mouse, and then click Bluetooth.
Note: To prolong battery life, turn off the keyboard when you aren’t using it. If you don’t plan to use it for an extended period, it’s advisable to remove the batteries.
Dispose of batteries according to the environmental regulations that apply in your area.
Ergonomics
For more information about ergonomics, health, and safety, visit the Apple ergonomics website: www.apple.com/es/about/ergonomics.
Support
For support and troubleshooting information, user discussion forums, and the latest Apple software downloads, visit www.apple.com/es/support.
Regulatory Compliance Information
Compliance Statement
This device complies with part 15 of the FCC rules.
Operation is subject to the following two conditions: (1)
This device may not cause harmful interference, and (2)
this device must accept any interference received,
including interference that may cause undesired
operation. See instructions if interference to radio or
television reception is suspected.
Use of this device is permitted only under the following two conditions: (1) it must not cause interference, and (2) the user of the device must be prepared to accept any radio interference received, even if that interference could impair the operation of the device.
Radio and Television Interference
The equipment described in this manual generates,
uses, and can radiate radio-frequency energy. If it is not
installed and used properly—that is, in strict accordance
with Apple’s instructions—it may cause interference
with radio and television reception.
This equipment has been tested and found to comply
with the limits for a Class B digital device in accordance
with the specifications in Part 15 of FCC rules. These
specifications are designed to provide reasonable
protection against such interference in a residential
installation. However, there is no guarantee that
interference will not occur in a particular installation.
You can determine whether your computer system is
causing interference by turning it off. If the interference
stops, it was probably caused by the computer or one of
the peripheral devices.
If your computer system does cause interference to
radio or television reception, try to correct the
interference by using one or more of the following
measures:
 Turn the television or radio antenna until the
interference stops.
 Move the computer to one side or the other of the
television or radio.
 Move the computer farther away from the television or
radio.
 Plug the computer into an outlet that is on a different
circuit from the television or radio. (That is, make
certain the computer and the television or radio are on
circuits controlled by different circuit breakers or
fuses.)
If necessary, consult an Apple Authorized Service
Provider or Apple. See the service and support
information that came with your Apple product. Or,
consult an experienced radio or television technician for
additional suggestions.
Important: Changes or modifications to this product
not authorized by Apple Inc. could void the FCC
compliance and negate your authority to operate the
product. This product was tested for FCC compliance
under conditions that included the use of Apple
peripheral devices and Apple shielded cables and
connectors between system components. It is important
that you use Apple peripheral devices and shielded
cables and connectors between system components to
reduce the possibility of causing interference to radios,
television sets, and other electronic devices. You can
obtain Apple peripheral devices and the proper shielded
cables and connectors through an Apple-authorized
dealer. For non-Apple peripheral devices, contact the
manufacturer or dealer for assistance.
Responsible party (contact for FCC matters only):
Apple Inc., Product Compliance
1 Infinite Loop M/S 26-A
Cupertino, CA 95014-2084
Industry Canada Statements
Complies with the Canadian ICES-003 Class B
specifications. This device complies with RSS 210 of
Industry Canada.
This Class B device meets all requirements of the
Canadian interference-causing equipment regulations.
European Compliance Statement
This product complies with the requirements of
European Directives 72/23/EEC, 89/336/EEC, and
1999/5/EC.
Bluetooth Europe–EU Declaration of
Conformity
This wireless device complies with the specifications EN
300 328, EN 301-489, EN 50371, and EN 60950 following
the provisions of the R&TTE Directive.
Caution: Modification of this device may result in
hazardous radiation exposure. For your safety, have this
equipment serviced only by an Apple Authorized
Service Provider.
VCCI Class B Statement
Korea Statements
Singapore Wireless Certification
Taiwan Wireless Statement
Taiwan Class B Statement
Apple and the Environment
Apple Inc. recognizes its responsibility to minimize the
environmental impacts of its operations and products.
More information is available on the web at:
www.apple.com/environment
Disposal and Recycling Information
When this product reaches its end of life, please dispose
of it according to your local environmental laws and
guidelines.
For information about Apple’s recycling programs, visit:
www.apple.com/environment/recycling
Battery Disposal Information
Dispose of batteries according to your local
environmental laws and guidelines.
Deutschland: This device contains batteries. They do not belong in household waste. You can return used batteries free of charge to retailers or to municipal collection points. To avoid short circuits, tape over the battery terminals as a precaution.
Nederlands: Used batteries can be handed in to the mobile chemical-waste collection service or deposited in a special battery container for small chemical waste (kca).
Taiwan:
European Union—Disposal Information
The symbol above means that according to local laws
and regulations your product should be disposed of
separately from household waste. When this product
reaches its end of life, take it to a collection point
designated by local authorities. Some collection points
accept products for free. The separate collection and
recycling of your product at the time of disposal will
help conserve natural resources and ensure that it is
recycled in a manner that protects human health and
the environment.
© 2007 Apple Inc. All rights reserved. Apple, the Apple
logo, Exposé, Mac, and Mac OS are trademarks of Apple
Inc., registered in the U.S. and other countries. Apple
Store is a service mark of Apple Inc., registered in the
U.S. and other countries.
The Bluetooth® word mark and logos are registered
trademarks owned by Bluetooth SIG, Inc. and any use of
such marks by Apple is under license.
www.apple.com
Printed in XXXX
iPod shuffle
Features Guide
Contents
Chapter 1 3 iPod shuffle Basics
4 iPod shuffle at a Glance
4 Using the iPod shuffle Controls
5 Connecting and Disconnecting iPod shuffle
6 Charging the Battery
7 Status Lights
Chapter 2 9 Loading and Playing Music
9 About iTunes
10 Importing Music into Your iTunes Library
12 Organizing Your Music
13 Loading Music onto iPod shuffle
16 Playing Music
Chapter 3 19 Storing Files on iPod shuffle
19 Using iPod shuffle as an External Disk
Chapter 4 21 iPod shuffle Accessories
21 Apple Earphones
22 iPod shuffle Dock
22 iPod USB Power Adapter
22 Available Accessories
Chapter 5 23 Tips and Troubleshooting
26 Updating and Restoring iPod shuffle Software
Chapter 6 27 Safety and Handling
27 Important Safety Information
29 Important Handling Information
Chapter 7 30 Learning More, Service, and Support
Index 33
1 iPod shuffle Basics
Congratulations on purchasing iPod shuffle. Read this
chapter to learn about the features of iPod shuffle, how
to use its controls, and more.
To use iPod shuffle, you put songs and other audio files on your computer and then
load them onto iPod shuffle.
Use iPod shuffle to:
 Load songs for listening on the go
 Listen to podcasts, downloadable radio-style shows delivered over the Internet
 Listen to audiobooks purchased from the iTunes Store or audible.com
 Store or back up files and other data, using iPod shuffle as an external disk
iPod shuffle at a Glance
[Figure: iPod shuffle at a Glance — Headphones port, Previous/Rewind, Play/Pause, Next/Fast-forward, Volume Up, Volume Down, Power switch, Shuffle switch, Top status light, Bottom status light]
Using the iPod shuffle Controls
The simple controls make it easy to play songs, audiobooks, and podcasts on iPod shuffle.
• Turn iPod shuffle on or off: slide the power switch (green indicates iPod shuffle is on).
• Play: press Play/Pause.
• Pause: press Play/Pause.
• Change the volume: press Volume Up or Volume Down.
• Set the play order: slide the shuffle switch (to the shuffle position to shuffle, to the repeat position to play in order).
• Skip to the next track: press Next/Fast-forward.
• Start a track over: press Previous/Rewind.
• Play the previous track: press Previous/Rewind twice.
• Go to the first track: press Play/Pause three times quickly.
• Fast-forward or rewind: press and hold Next/Fast-forward or Previous/Rewind.
• Disable the iPod shuffle buttons (so nothing happens if you press them accidentally): press Play/Pause for about three seconds (until the status light blinks orange three times). Repeat to reenable the buttons (the status light blinks green three times).
• Reset iPod shuffle (if iPod shuffle isn’t responding): remove iPod shuffle from the Dock. Turn iPod shuffle off, wait 5 seconds, and then turn it back on again.
• Find the iPod shuffle serial number: look on the notch underneath the clip on iPod shuffle. Or, in iTunes (with iPod shuffle connected to your computer), select iPod shuffle in the Source pane and click the Settings tab.
Connecting and Disconnecting iPod shuffle
Connect iPod shuffle to your computer to load songs and other audio files, and to
charge the battery. Disconnect iPod shuffle when you’re done.
Connecting iPod shuffle
To connect iPod shuffle to your computer:
• Plug the included iPod shuffle Dock into a USB port on your computer. Then put
iPod shuffle in the Dock.
Note: Connect the Dock to a high-power USB port to charge the battery. A USB 2.0
port is recommended. Do not use the USB port on your keyboard.
The first time you connect iPod shuffle to your computer, the iPod Setup Assistant
helps you configure iPod shuffle and sync it with your iTunes library.
Important: Once you’ve synced iPod shuffle with the iTunes library on a computer, a
message appears whenever you connect iPod shuffle to another computer, asking if
you want to sync with the iTunes library on the new computer. Click Cancel if you want
to keep the current music content on iPod shuffle. Or, click Transfer Purchases to keep
the contents on iPod shuffle and copy the purchased songs on it to the iTunes library
on the new computer. See iTunes Help for more information.
Disconnecting iPod shuffle
It’s important not to disconnect iPod shuffle from your computer while audio files are
being loaded or when iPod shuffle is being used as an external disk. You can see if it’s
OK to disconnect iPod shuffle by looking at the top of the iTunes window or by
checking the iPod shuffle status light.
Important: If you see the “Do not disconnect” message in iTunes or if the status light
on iPod shuffle is blinking orange, you must eject iPod shuffle before disconnecting it.
Otherwise, you could damage files on iPod shuffle.
If you enable iPod shuffle for disk use (see page 19), you must always eject iPod shuffle before disconnecting it.
To eject iPod shuffle:
• In iTunes, click the Eject button next to iPod shuffle in the Source pane.
If you’re using a Mac, you can also eject iPod shuffle by dragging the iPod shuffle icon
on the desktop to the Trash.
If you’re using a Windows PC, you can also eject iPod shuffle by clicking the Safely
Remove Hardware icon in the Windows system tray and selecting iPod shuffle.
To disconnect iPod shuffle:
• Remove iPod shuffle from the dock.
Charging the Battery
iPod shuffle has an internal, rechargeable battery.
For best results, charge the battery fully the first time you use iPod shuffle. A depleted
battery can be 80-percent charged in about two hours and fully charged in about four
hours.
If iPod shuffle isn’t used for a while, the battery might need to be recharged.
To charge the battery using your computer:
• Connect iPod shuffle to a high-power USB port on your computer using the included
iPod shuffle Dock. The computer must be turned on and not in sleep mode (some
models of Macintosh can charge iPod shuffle while in sleep mode).
When the battery is charging, the status light on iPod shuffle is orange. When the
battery is fully charged, the status light turns green.
Note: If iPod shuffle is being used as a disk (see page 19) or if iTunes is loading songs or
settings onto iPod shuffle, the status light blinks orange to let you know that you must
eject iPod shuffle before disconnecting it.
If you don’t see the status light, iPod shuffle might not be connected to a high-power
USB port. Try another USB port on your computer.
Note: You can load music while the battery is charging.
If you want to charge iPod shuffle when you’re away from your computer, you can connect iPod shuffle to an iPod USB Power Adapter, available at www.apple.com.
To charge the battery using an iPod USB Power Adapter:
1 Connect the AC plug adapter to the power adapter (they might already be connected).
2 Plug the USB connector of the iPod shuffle Dock into the power adapter.
3 Plug the power adapter into a working electrical outlet.
4 Put iPod shuffle in the dock.
You can disconnect and use iPod shuffle before it is fully charged.
Note: Rechargeable batteries have a limited number of charge cycles. Battery life
and number of charge cycles vary by use and settings. For information, go to
www.apple.com/batteries.
Checking the Battery Status
When you turn iPod shuffle on, or disconnect it from your computer or power adapter,
the status light tells you approximately how much charge is in the battery. See the
table in the following section. If iPod shuffle is already on, you can check the battery
status without interrupting playback by quickly switching iPod shuffle off and then
on again.
Status Lights
iPod shuffle has two status lights, one on the top and one on the bottom, that let you know when you’ve pressed a button, show the state of the battery, indicate that iPod shuffle is enabled as a disk, or signal that something is wrong.
WARNING: Read all safety instructions about using the iPod USB Power Adapter on page 28 before use.
[Illustration: iPod USB Power Adapter, AC plug adapter, and iPod shuffle Dock cable]
When turning on or disconnecting:
• Green: good charge (30%–100%)
• Orange: low charge (10%–30%)
• Red: very low charge (less than 10%)
• No light: no charge
• Alternating green and orange (10 seconds): error; iPod shuffle must be restored

When connected:
• Orange (continuous): charging
• Green (continuous): fully charged
• Blinking orange (continuous): do not disconnect (iTunes is syncing, or iPod shuffle is enabled for disk use)

When pressing buttons:
• Play: green
• Pause: green (1 minute)
• Disable buttons (press and hold): green, then three orange blinks
• Enable buttons (press and hold): orange, then three green blinks
• Volume up or down: green
• Volume up or down when the user-set volume limit is reached: three orange blinks
• Volume up or down (press and hold): green; no light when the maximum or zero volume is reached; three orange blinks when the user-set volume limit is reached
• Previous track: green
• Rewind (press and hold): green
• Next track: green
• Fast-forward (press and hold): green
• Any button while the buttons are disabled: orange (no action)
• Alternating green and orange (2 seconds): error; no music loaded

While iPod shuffle is playing:
• Blinking red (continuous): battery nearly discharged
2 Loading and Playing Music
With iPod shuffle, you can take your music collection
with you wherever you go. Read this chapter to learn
about loading music and listening to iPod shuffle.
You use iPod shuffle by importing songs, audiobooks, and podcasts (radio-style audio
shows) to your computer and then loading them onto iPod shuffle. Read on to learn
more about the steps in this process, including:
• Getting music from your CD collection, hard disk, or the iTunes Store (part of iTunes and available in some countries only) into the iTunes application on your computer
• Organizing your music and other audio into playlists
• Loading songs, audiobooks, and podcasts onto iPod shuffle
• Listening to music or other audio on the go
About iTunes
iTunes is the software you use to sync music, audiobooks, and podcasts with
iPod shuffle. When you connect iPod shuffle to your computer, iTunes opens
automatically.
This guide explains how to use iTunes to import songs and other audio to your
computer, create personal compilations of your favorite songs (called playlists), load
iPod shuffle, and adjust iPod shuffle settings.
iTunes also has many other features. For information, open iTunes and choose Help > iTunes Help.
Importing Music into Your iTunes Library
To listen to music on iPod shuffle, you first need to get that music into your iTunes
library on your computer.
There are three ways to get music into your iTunes library:
• Buy music and audiobooks or download podcasts online from the iTunes Store.
• Import music from audio CDs.
• Add music and other audio that’s already on your computer.
Buying Songs and Downloading Podcasts Using the iTunes Store
If you have an Internet connection, you can easily purchase and download songs,
albums, and audiobooks online using the iTunes Store. You can also subscribe to and
download podcasts, radio-style audio shows.
To purchase music online using the iTunes Store, you set up an Apple account in
iTunes, find the songs you want, and then buy them. If you already have an Apple
account, or if you have an America Online (AOL) account (available in some countries
only), you can use that account to sign in to the iTunes Store and buy songs.
To sign in to the iTunes Store:
Open iTunes and then:
• If you already have an iTunes account, choose Store > Sign In.
• If you don’t already have an iTunes account, choose Store > Create Account and follow the onscreen instructions to set up an Apple account, or enter your existing Apple account or AOL account information.
To find songs, audiobooks, and podcasts:
You can browse or search the iTunes Store to find the album, song, or artist you’re
looking for. Open iTunes and click iTunes Store in the Source pane.
• To browse the iTunes Store, choose a music genre from the Choose Genre pop-up menu, click one of the displayed releases or songs, or click Browse in the main iTunes Store window.
• To browse for podcasts, click the Podcasts link in the main iTunes Store window.
• To search the iTunes Store, type the name of an album, song, artist, or composer in the Search iTunes Store field.
• To narrow your search, type something in the Search iTunes Store field, press Return or Enter on your keyboard, and then click items in the Search Bar. For example, to narrow your search to song titles and albums, click MUSIC.
• To search for a combination of items, click Power Search in the iTunes Store window.
• To return to the main page of the iTunes Store, click the Home button in the top-left corner of the main iTunes Store window.
To buy a song, album, or audiobook:
1 Click iTunes Store in the Source pane, and then find the item you want to buy.
You can double-click a song or other item to listen to a portion of it and make sure it’s
what you want.
2 Click Buy Song, Buy Album, or Buy Book.
The item is downloaded to your computer and charged to the credit card listed in your
Apple or AOL account.
To download or subscribe to a podcast:
1 Click iTunes Store in the Source pane.
2 Click the Podcasts link on the left side of the main page in the iTunes Store.
3 Browse for the podcast you want to download.
• To download a single podcast episode, click the Get Episode button next to the episode.
• To subscribe to a podcast, click the Subscribe button next to the podcast graphic.
iTunes downloads the most recent episode. As new episodes become available, they are automatically downloaded to iTunes when you connect to the Internet.
Importing Music from Your Audio CDs into iTunes
Follow these instructions to get music from your CDs into iTunes.
To import music from an audio CD into iTunes:
1 Insert a CD into your computer and open iTunes.
If you have an Internet connection, iTunes gets the names of the songs on the CD from
the Internet (if available) and lists them in the window.
If you don’t have an Internet connection, you can import your CDs and, later, when you’re
connected to the Internet, choose Advanced > Get CD Track Names. iTunes will get the
track names for the imported CDs.
If the CD track names aren’t available online, you can enter the names of the songs
manually. See “Entering Names of Songs and Other Details” on page 12.
With song information entered, you can browse for songs in iTunes by title, artist, album, and more.
2 Click to remove the checkmark next to any song you don’t want to import from the CD.
3 Click the Import CD button. The display area at the top of the iTunes window shows
how long it will take to import each song.
By default, iTunes plays songs as they are imported. If you’re importing a lot of songs,
you might want to stop the songs from playing to improve performance.
4 To eject the CD, click the Eject button.
5 Repeat these steps for any other CDs with songs you want to import.
Entering Names of Songs and Other Details
You can manually enter song titles and other information, including comments, for
songs and other items in your iTunes library.
To enter CD song titles and other information manually:
1 Select the first track on the CD and choose File > Get Info.
2 Click Info.
3 Enter the song information.
4 Click Next to enter information for the next track.
Adding Songs Already on Your Computer to Your iTunes Library
If you have digital music files such as MP3s already on your computer, you can easily
add them to your iTunes library.
To add songs on your computer to your iTunes library:
• Drag the folder or disk containing the audio files to the LIBRARY heading in the iTunes
Source pane (or choose File > Add to Library and select the folder or disk). If iTunes
supports the song file format, the songs are automatically added to your iTunes library.
You can also drag individual song files to iTunes.
Note: Using iTunes for Windows, you can convert unprotected digital music files
created with Windows Media Player to an iTunes-compatible file format, such as AAC or
MP3. This can be useful if you have music encoded in WMA format. For more
information, open iTunes and choose Help > iTunes Help.
Organizing Your Music
Using iTunes, you can organize songs and other items into lists, called playlists, in any
way you want. For example, you can make playlists with songs to listen to while
exercising or playlists with songs for a particular mood.
You can also make Smart Playlists that update automatically based on rules you choose. When you add songs to iTunes that match the rules, they automatically get added to the Smart Playlist.
You can make as many playlists as you like using any of the songs in your iTunes library.
Adding a song to a playlist or later removing it doesn’t remove it from your iTunes
library.
To make a playlist in iTunes:
1 Click the Add (+) button or choose File > New Playlist.
2 Type a name for the playlist.
3 Click Music in the LIBRARY list, and then drag a song or other item to the playlist.
To select multiple songs, hold down the Shift key or the Command (⌘) key on a Mac, or the Shift key or the Control key on a Windows PC, as you click each song.
To make a Smart Playlist:
• Choose File > New Smart Playlist and define the rules for your playlist.
Loading Music onto iPod shuffle
After your music is imported and organized in iTunes, you can easily load it onto
iPod shuffle.
You set how music is loaded from iTunes onto iPod shuffle by connecting iPod shuffle
to your computer, selecting iPod shuffle in the Source pane, and configuring options at
the bottom of the Contents pane. Additional options for loading music and using
iPod shuffle appear in the Settings pane.
Autofilling iPod shuffle
iTunes can automatically load a selection of your songs onto iPod shuffle with the click of a button. You can choose your entire library or a specific playlist to get songs from, and set other options for Autofill.
To autofill music onto iPod shuffle:
1 Connect iPod shuffle to your computer.
2 Select iPod shuffle from the list of devices in the Source pane.
3 Click the Contents tab.
4 Choose the playlist you want to autofill from using the pop-up menu.
To autofill music from your entire library, choose Music.
5 Select which of the following options you want:
Choose items randomly: iTunes shuffles the order of songs as it loads them onto iPod shuffle. If this option is not selected, iTunes loads songs in the order they appear in your library or selected playlist.
Choose higher rated items more often: iTunes autofills the songs you listen to most.
Replace all items when Autofilling: iTunes replaces the songs on iPod shuffle with the
new songs you’ve chosen. If this option is not selected, songs you’ve already loaded
onto iPod shuffle remain and iTunes selects more songs to fill the available space.
6 Click Autofill.
While music is being loaded from iTunes onto iPod shuffle, the iTunes status window
shows the progress. When the autofill is done, a message in iTunes says “iPod update is
complete.”
Limiting Autofill to Items Checked in Your iTunes Library
You can set iTunes to autofill only items that are checked in your iTunes library. Items
that you’ve deselected will be ignored.
To limit autofill to checked items:
1 Connect iPod shuffle to your computer.
2 When iPod shuffle appears in the iTunes window, select it.
3 Click the Settings tab.
4 Select “Only update checked songs.”
5 Click Apply.
Loading Songs, Audiobooks, and Podcasts Manually
You can load songs and playlists onto iPod shuffle manually. If you want to load
audiobooks and podcasts onto iPod shuffle, you must load them manually.
To load a song or other item onto iPod shuffle:
1 Connect iPod shuffle to your computer.
2 In iTunes, select your library or a playlist in the Source pane.
3 Drag the song or other item to the iPod shuffle in the Source pane.
You can also drag entire playlists to load them onto iPod shuffle.
Arranging the Order of Songs on iPod shuffle
Once songs are loaded onto iPod shuffle, you can arrange the order of the songs in the
same way you can with any playlist in iTunes.
To arrange the order of songs on iPod shuffle:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the Source pane.
3 Click the Contents tab.
4 Click the blank header above the first column to arrange the songs by number.
5 Drag the songs to the order you want.
Fitting More Songs onto iPod shuffle
If you’ve imported songs into iTunes at higher bit-rate formats, such as AIFF, you can set
iTunes to automatically convert songs to 128 kbps AAC files as they are loaded onto
iPod shuffle. This does not affect the quality or size of the songs in iTunes.
Note: Songs in formats not supported by iPod shuffle, such as Apple Lossless, must be
converted if you want to load them onto iPod shuffle. For more information about
formats supported by iPod shuffle, see “If you can’t load a song or other item onto
iPod shuffle” on page 24.
To convert higher bit rate songs to AAC files:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the Source pane.
3 Click the Settings tab.
4 Select “Convert higher bit rate songs to 128 kbps AAC.”
5 Click Apply.
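As a back-of-the-envelope check on the space savings, 128 kbps works out to 16 kilobytes of audio per second. The shell arithmetic below is only an illustration; the 1 GB figure is an assumed capacity for the example, not a statement about any particular model:

```shell
# Approximate size of a 4-minute song encoded at 128 kbps,
# and how many such songs fit in an assumed 1 GB of space.
kbps=128
seconds=$(( 4 * 60 ))                 # a 4-minute song
song_kb=$(( kbps * seconds / 8 ))     # kilobits -> kilobytes: 3840 KB (~3.8 MB)
capacity_kb=$(( 1000 * 1000 ))        # assumed 1 GB, expressed in KB
echo "Per song: ${song_kb} KB"
echo "Songs per GB: $(( capacity_kb / song_kb ))"
```

By the same arithmetic, an uncompressed AIFF rip (about 10 MB per minute) holds roughly a tenth as many songs, which is why the conversion option matters on a small-capacity player.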
Removing Songs and Other Items from iPod shuffle
You can have iTunes automatically replace items on iPod shuffle when you load items
using Autofill. You can also remove items from iPod shuffle manually.
To automatically replace items on iPod shuffle when autofilling:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the Source pane.
3 Select the Contents tab.
4 Make sure “Replace all items when Autofilling” is selected.
To remove a song or other item from iPod shuffle:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the Source pane.
3 Select a song or other item on iPod shuffle and press the Delete or Backspace key on
your keyboard.
Manually removing a song or other item from iPod shuffle does not delete the song
from your iTunes library.
Playing Music
After you load iPod shuffle with music and other audio, you can listen to it.
To listen to the songs and other items on iPod shuffle:
1 Plug the earphones into iPod shuffle and place the earbuds in your ears.
2 Press Play/Pause.
Press Volume Up or Volume Down to adjust the volume. Take care not to turn the volume up too high. See “Setting a Volume Limit” on page 17.
For a summary of the iPod shuffle controls, see “Using the iPod shuffle Controls” on
page 4.
Note: If you’re listening to an audiobook, set the shuffle switch to repeat so that the chapters play in order.
Setting iPod shuffle to Shuffle Songs or Play Songs in Order
You can set iPod shuffle to shuffle songs or play them in order.
To set iPod shuffle to shuffle:
• Slide the shuffle switch to the shuffle position.
To reshuffle songs, press Play/Pause three times quickly.
To set iPod shuffle to play songs in order:
• Slide the shuffle switch to the repeat position.
To return to the first song, press Play/Pause three times quickly.
Setting Songs to Play at the Same Volume Level
The loudness of songs and other audio may vary depending on how the audio was
recorded or encoded. iTunes can automatically adjust the volume of songs, so they play
at the same relative volume level. You can set iPod shuffle to use the iTunes volume
settings.
WARNING: Read all safety instructions about avoiding hearing damage on page 28 before use.
To set iTunes to play songs at the same sound level:
1 In iTunes, choose iTunes > Preferences if you are using a Mac, or choose
Edit > Preferences if you are using a Windows PC.
2 Click Playback and select Sound Check.
To set iPod shuffle to use the iTunes volume settings:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the Source pane.
3 Click the Settings tab.
4 Select Enable Sound Check.
5 Click Apply.
Important: If you have not turned on Sound Check in iTunes, setting it on iPod shuffle
has no effect.
Setting a Volume Limit
You can set a limit for the volume on iPod shuffle. You can also set a password in iTunes
to prevent this setting from being changed by someone else.
If you’ve set a volume limit on iPod shuffle, the status light blinks orange three times if
you try to increase the volume beyond the limit.
To set a volume limit for iPod shuffle:
1 Set iPod shuffle to the desired maximum volume.
2 Connect iPod shuffle to your computer.
3 In iTunes, select iPod shuffle in the Source pane.
4 Click the Settings tab.
5 Select “Limit maximum volume.”
6 Drag the slider to the desired maximum volume.
The initial slider setting shows the volume the iPod shuffle was set to when you
selected the “Limit maximum volume” checkbox.
7 To require a password to change this setting, click the lock and enter a password.
If you set a password, you must enter it before you can change or remove the volume
limit.
Note: The volume level may vary if you use different earphones or headphones.
To remove the volume limit:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the Source pane.
3 Click the Settings tab.
4 Deselect “Limit maximum volume.”
Enter the password, if required.
Note: If you forget the password, you can restore iPod shuffle. See “Updating and
Restoring iPod shuffle Software” on page 26 for more information.
Disabling and Enabling the iPod shuffle Buttons
You can disable the buttons on iPod shuffle so that nothing happens if they are
pressed accidentally.
To disable the iPod shuffle buttons:
• Press and hold Play/Pause for three seconds.
The status light glows green, and then blinks orange three times when the buttons
become disabled. If you press a button when the buttons are disabled, the status light
blinks orange once.
To reenable the buttons:
• Press and hold Play/Pause again for three seconds.
The status light glows orange, and then blinks green three times when the buttons become enabled.
3 Storing Files on iPod shuffle
Use iPod shuffle to carry your data as well as your music.
Read this chapter to find out how to use iPod shuffle as an external disk.
Using iPod shuffle as an External Disk
You can use iPod shuffle as an external disk to store data files.
Note: To load iPod shuffle with music and other audio that you want to listen to, you
must use iTunes. You cannot play audio files that you copy to iPod shuffle using the
Macintosh Finder or Windows Explorer.
To enable iPod shuffle as an external disk:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the Source pane.
3 Click the Settings tab.
4 In the Options section, select “Enable disk use.”
Note: You may need to scroll down to see the disk settings.
5 Adjust the slider to set how much space to reserve for songs versus data.
6 Click Apply.
When you use iPod shuffle as an external disk, the iPod shuffle disk icon appears on the
desktop on the Mac, or as the next available drive letter in Windows Explorer on a
Windows PC.
Transferring Files Between Computers
When you enable disk use on iPod shuffle, you can transfer files from one computer to
another. iPod shuffle is formatted as a FAT-32 volume, which is supported by both Macs
and PCs. This allows you to use iPod shuffle to transfer files between computers with
different operating systems.
To transfer files between computers:
1 After enabling disk use on iPod shuffle, connect it to the computer you want to get files
from.
Important: When you connect iPod shuffle to a different computer (or different user
account on your computer), a message asks if you want to erase iPod shuffle and sync
with the new iTunes library there. Click Cancel if you don’t want to delete the current
music content on iPod shuffle.
2 Using the computer’s file system (the Finder on a Mac, Windows Explorer on a PC), drag
the files you want to your iPod shuffle.
3 Disconnect iPod shuffle, and then connect it to the other computer.
Again, click Cancel if you don’t want to delete the current music contents on
iPod shuffle.
4 Drag the files from iPod shuffle to a disk on the new computer.
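Steps 2 and 4 can also be done from the command line, since an iPod shuffle with disk use enabled mounts like any other removable volume. A minimal sketch, assuming a mount point such as /Volumes/IPOD SHUFFLE on a Mac (check the desktop, or Windows Explorer for the drive letter, for the actual name on your system):

```shell
# Copy a folder onto the mounted iPod shuffle volume.
# The destination path is passed in, because the volume name varies.
copy_to_shuffle() {
    src="$1"                  # folder to copy
    dest="$2"                 # target folder on the mounted volume
    mkdir -p "$dest"          # create the target folder if it doesn't exist
    cp -R "$src" "$dest"      # recursive copy preserves the folder structure
}

# Example with an assumed mount point (substitute your real one):
# copy_to_shuffle ~/Documents/report "/Volumes/IPOD SHUFFLE/backup"
```

Because the volume is FAT-32, the same copied files are readable on both the Mac and the Windows PC in this procedure.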
Preventing iTunes from Opening Automatically
You can keep iTunes from opening automatically when you connect iPod shuffle to
your computer.
To prevent iTunes from opening automatically:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the Source pane.
3 Click the Settings tab.
4 In the Options section, deselect “Open iTunes when this iPod is connected.”
5 Click Apply.
4 iPod shuffle Accessories
iPod shuffle comes with earphones and an iPod shuffle
Dock.
Learn about your Apple earphones, the iPod shuffle Dock, and other accessories
available for iPod shuffle.
Apple Earphones
iPod shuffle comes with a pair of high-quality earphones.
To use the earphones:
• Plug the earphones into the Headphones port. Then place the earbuds in your ears as
shown.
WARNING: Read all safety instructions about avoiding hearing damage on page 28
before use.
The earphones cord is adjustable.
iPod shuffle Dock
iPod shuffle comes with an iPod shuffle (2nd Generation) Dock, which you use to
connect iPod shuffle to your computer and other USB devices. See “Connecting and
Disconnecting iPod shuffle” on page 5.
iPod USB Power Adapter
The iPod USB Power Adapter is an optional accessory that allows you to charge
iPod shuffle when you’re away from your computer.
Available Accessories
To purchase iPod shuffle accessories, go to www.apple.com/ipodstore.
Available accessories include:
• Apple iPod In-Ear Headphones
• iPod shuffle (2nd Generation) Dock
• iPod USB Power Adapter
5 Tips and Troubleshooting
Most problems with iPod shuffle can be solved quickly by
following the advice in this chapter.
If iPod shuffle won’t turn on or respond
• If the status light blinks orange when you press a button, the iPod shuffle buttons are disabled. Press and hold Play/Pause for about three seconds, until the status light blinks green.
• Connect iPod shuffle to a high-power USB port on your computer. Your iPod shuffle battery may need to be recharged.
• Turn iPod shuffle off, wait five seconds, and then turn it on again.
• You may need to restore iPod shuffle software. See “Updating and Restoring iPod shuffle Software” on page 26.
The 5 Rs: Reset, Retry, Restart, Reinstall, Restore
Remember these five basic suggestions if you have a problem with iPod shuffle. Try
these steps one at a time until the problem is resolved. If one of the following doesn’t
help, read on for solutions to specific problems.
• Reset iPod shuffle by turning it off, waiting five seconds, and then turning it back on again.
• Retry with a different USB port if you cannot see iPod shuffle in iTunes.
• Restart your computer, and make sure you have the latest software updates installed.
• Reinstall iTunes software from the latest version on the web.
• Restore iPod shuffle. See “Updating and Restoring iPod shuffle Software” on page 26.
If iPod shuffle isn’t playing music
• Make sure the earphone or headphone connector is pushed in all the way.
• Make sure the volume is adjusted properly. A volume limit might be set. See “Setting a Volume Limit” on page 17.
• iPod shuffle might be paused. Try pressing Play/Pause.
If you connect iPod shuffle to your computer and nothing happens
• Connect iPod shuffle to a high-power USB port on your computer. iPod shuffle may need to be recharged.
• Make sure you have installed the latest iTunes software from www.apple.com/ipod/start.
• Try connecting to a different USB port on your computer. Make sure iPod shuffle is firmly seated in the dock. Make sure the USB connector is oriented correctly. It can be inserted only one way.
• iPod shuffle might need to be reset. Turn iPod shuffle off, wait five seconds, and then turn it back on again.
• If there is no status light (or the status light is dimmed) and iPod shuffle doesn’t appear in iTunes or the Finder, the battery may be completely discharged. Let iPod shuffle charge for several minutes to see if it comes back to life.
• Make sure you have the required computer and software. See “If you want to double-check the system requirements” on page 25.
• Try restarting your computer.
• If none of the previous suggestions solves the problem, you might need to restore iPod software. See “Updating and Restoring iPod shuffle Software” on page 26.
• If restoring iPod shuffle doesn’t solve the problem, iPod shuffle may need to be repaired. You can arrange for service on the iPod shuffle Service & Support website at www.apple.com/support/ipodshuffle/service.
If songs load slowly
Connect iPod shuffle to a USB 2.0 port on your computer for fast loading speeds. USB
2.0 loads songs and data faster than USB 1.1.
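The gap is easy to quantify from the bus signaling rates: USB 1.1 full speed is 12 megabits per second, while USB 2.0 high speed is 480 megabits per second. The calculation below is a best-case sketch using those nominal rates; real-world throughput is lower on both buses, and the 500 MB figure is just an example load:

```shell
# Theoretical best-case transfer times for 500 MB of music.
# These are nominal signaling rates; actual throughput is lower.
mb=500
usb1_mbps=12      # USB 1.1 full speed: 12 Mbit/s
usb2_mbps=480     # USB 2.0 high speed: 480 Mbit/s
usb1_secs=$(( mb * 8 / usb1_mbps ))   # megabytes -> megabits, then seconds
usb2_secs=$(( mb * 8 / usb2_mbps ))
echo "USB 1.1: at least ${usb1_secs} s (over five minutes)"
echo "USB 2.0: at least ${usb2_secs} s"
```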
If you can’t load a song or other item onto iPod shuffle
The song might have been encoded in a format that iPod shuffle doesn’t support. The
following audio file formats are supported by iPod shuffle. These include formats for
audiobooks and podcasts:
• AAC (M4A, M4B, M4P) (up to 320 kbps)
• MP3 (up to 320 kbps)
• MP3 Variable Bit Rate (VBR)
• WAV
• AA (audible.com spoken word, formats 2, 3, and 4)
• AIFF
A song encoded using Apple Lossless format has full CD-quality sound, but takes up
only about half as much space as a song encoded using AIFF or WAV format. The same
song encoded in AAC or MP3 format takes up even less space. When you import music
from a CD using iTunes, it is converted to AAC format by default.
You can have iPod shuffle automatically convert files encoded at higher bit rates (such
as Apple Lossless) to 128 kbps AAC files as they are loaded onto iPod shuffle. See
“Fitting More Songs onto iPod shuffle” on page 15.
Using iTunes for Windows, you can convert nonprotected WMA files to AAC or MP3
format. This can be useful if you have a collection of music encoded in WMA format.
iPod shuffle does not support Apple Lossless, WMA, MPEG Layer 1, MPEG Layer 2 audio
files, or audible.com format 1.
If you have a song in iTunes that isn’t supported by iPod shuffle, you can convert it to a
format iPod shuffle supports. For more information, see iTunes Help.
If the chapters in an audiobook play out of order
• Make sure the shuffle switch on iPod shuffle is set to repeat, so that the chapters of the audiobook play in order.
• If the chapters were added to iPod shuffle out of order, connect iPod shuffle to your computer and rearrange the tracks using iTunes. See “Arranging the Order of Songs on iPod shuffle” on page 15.
If you want to double-check the system requirements
To use iPod shuffle, you must have:
• One of the following computer configurations:
  • A Macintosh with a USB port (USB 2.0 recommended)
  • A Windows PC with a USB port or a USB card installed (USB 2.0 recommended)
• One of the following operating systems: Mac OS X v10.3.9 or later, Windows 2000 with Service Pack 4 or later, or Windows XP Home or Professional with Service Pack 2 or later
• Internet access (a broadband connection is recommended)
• iTunes 7.0.2 or later (iTunes can be downloaded from www.apple.com/ipod/start)
If your Windows PC doesn’t have a high-power USB port, you can purchase and install a
USB 2.0 card. For information, go to www.apple.com/ipodstore.
If you want to use iPod shuffle with a Mac and a Windows PC
Whenever you sync iPod shuffle with a different iTunes library, you must erase the
music already on iPod shuffle, regardless of the operating system. When you connect
iPod shuffle to a different computer or user account, a message asks if you want to
erase iPod shuffle and sync to the new iTunes library.
However, you can use iPod shuffle as an external disk with both Macintosh computers
and PCs, allowing you to transfer files from one operating system to the other. See
Chapter 3, “Storing Files on iPod shuffle,” on page 19.
Updating and Restoring iPod shuffle Software
You can use iTunes to update or restore the iPod shuffle software. It is recommended
that you update iPod shuffle to use the latest software. You can also restore the
software, which returns iPod shuffle to its original state.
• If you choose to update, the software is updated but your settings and songs are not affected.
• If you choose to restore, all data is erased from iPod shuffle, including songs and any other data. All iPod shuffle settings are restored to their original state.
To update or restore iPod shuffle:
1 Make sure you have an Internet connection and have installed the latest version of
iTunes from www.apple.com/ipod/start.
2 Connect iPod shuffle to your computer.
3 In iTunes, select iPod shuffle in the Source pane and click the Settings tab.
The Version section tells you whether iPod shuffle is up to date or needs a newer
version of the software.
4 Do one of the following:
• To install the latest version of the software, click Update.
• To restore iPod shuffle to its original settings, click Restore. This erases all data from iPod shuffle. Follow the onscreen instructions to complete the restore process.
6 Safety and Handling
This chapter contains important safety and handling
information for iPod shuffle.
Keep this features guide for your iPod shuffle handy for future reference.
Important Safety Information
Handling iPod shuffle Do not bend, drop, crush, puncture, incinerate, or open
iPod shuffle.
Avoiding water and wet locations Do not use iPod shuffle in rain or near washbasins,
or other wet locations. Take care not to spill any food or liquid into iPod shuffle. In case
iPod shuffle gets wet, unplug all cables and turn iPod shuffle off before cleaning, and
allow it to dry thoroughly before turning it on again.
Repairing iPod shuffle Never attempt to repair iPod shuffle yourself. iPod shuffle
does not contain any user-serviceable parts. For service information, choose iPod Help
from the Help menu in iTunes or go to www.apple.com/support/ipod/service. The
battery in iPod shuffle is not user-replaceable. For more information about batteries,
go to www.apple.com/batteries.
Read all safety information below and operating instructions before using iPod shuffle to avoid injury.
WARNING: Failure to follow these safety instructions could result in fire, electric shock, or other injury or damage.
Using the iPod USB Power Adapter (available separately) If you use the iPod USB
Power Adapter (sold separately at www.apple.com/ipodstore) to charge iPod shuffle,
make sure that the power adapter is fully assembled before you plug it into a power
outlet. Then insert the iPod USB Power Adapter firmly into the power outlet. Do not
connect or disconnect the iPod USB Power Adapter with wet hands. Do not use any
power adapter other than the Apple iPod USB Power Adapter to charge your
iPod shuffle.
The iPod USB Power Adapter may become warm during normal use. Always allow
adequate ventilation around the iPod USB Power Adapter and use care when handling.
Unplug the iPod USB Power Adapter if any of the following conditions exist:
• The power cord or plug has become frayed or damaged.
• The adapter is exposed to rain or excessive moisture.
• The adapter case has become damaged.
• You suspect the adapter needs service or repair.
• You want to clean the adapter.
Avoiding hearing damage Permanent hearing loss may occur if earbuds or
headphones are used at high volume. Set the volume to a safe level. You can adapt
over time to a higher volume of sound that may sound normal but can be damaging to
your hearing. If you experience ringing in your ears or muffled speech, stop listening
and have your hearing checked. The louder the volume, the less time is required before
your hearing could be affected. Hearing experts suggest that to protect your hearing:
• Limit the amount of time you use earbuds or headphones at high volume.
• Avoid turning up the volume to block out noisy surroundings.
• Turn the volume down if you can’t hear people speaking near you.
For information about how to set a volume limit on iPod shuffle, see “Setting a Volume
Limit” on page 17.
Using headphones safely Use of headphones while operating a vehicle is not
recommended and is illegal in some areas. Be careful and attentive while driving. Stop
using iPod shuffle if you find it disruptive or distracting while operating any type of
vehicle or performing any other activity that requires your full attention.
Important Handling Information
Carrying iPod shuffle iPod shuffle contains sensitive components. Do not bend, drop,
or crush iPod shuffle.
Using connectors and ports Never force a connector into a port. Check for
obstructions on the port. If the connector and port don’t join with reasonable ease,
they probably don’t match. Make sure that the connector matches the port and that
you have positioned the connector correctly in relation to the port.
Keeping iPod shuffle within acceptable temperatures Operate iPod shuffle in a place
where the temperature is always between 0° and 35° C (32° to 95° F). iPod play time
might temporarily shorten in low-temperature conditions.
Store iPod shuffle in a place where the temperature is always between -20° and 45° C
(-4° to 113° F). Don’t leave iPod shuffle in your car, because temperatures in parked cars
can exceed this range.
When you’re using iPod shuffle or charging the battery, it is normal for iPod shuffle to
get warm. The exterior of iPod shuffle functions as a cooling surface that transfers heat
from inside the unit to the cooler air outside.
Keeping the outside of iPod shuffle clean To clean iPod shuffle, remove it from the
dock and turn iPod shuffle off. Then use a soft, slightly damp, lint-free cloth. Avoid
getting moisture in openings. Don’t use window cleaners, household cleaners, aerosol
sprays, solvents, alcohol, ammonia, or abrasives to clean iPod shuffle.
Disposing of iPod shuffle properly For information about the proper disposal of
iPod shuffle, including other important regulatory compliance information, see
“Regulatory Compliance Information” on page 31.
NOTICE: Failure to follow these handling instructions could result in damage to
iPod shuffle or other property.
7 Learning More, Service,
and Support
You can find more information about using iPod shuffle
in onscreen help and on the web.
The following list describes where to get iPod-related software and service
information.

Service and support, discussions, tutorials, and Apple software downloads: Go to www.apple.com/support/ipodshuffle

Using iTunes: Open iTunes and choose Help > iTunes Help. For an online iTunes tutorial (available in some areas only), go to www.apple.com/ilife/tutorials/itunes

The latest information about iPod shuffle: Go to www.apple.com/ipodshuffle

Registering iPod shuffle: To register iPod shuffle, install iTunes on your computer and connect iPod shuffle.

Finding the iPod shuffle serial number: Look on the notch underneath the clip on iPod shuffle. Or, in iTunes (with iPod shuffle connected to your computer), select iPod shuffle in the Source pane and click the Settings tab.

Obtaining warranty service: First follow the advice in this booklet, the onscreen help, and online resources, and then go to www.apple.com/support/ipodshuffle/service
Regulatory Compliance Information
FCC Compliance Statement
This device complies with part 15 of the FCC rules.
Operation is subject to the following two conditions:
(1) This device may not cause harmful interference,
and (2) this device must accept any interference
received, including interference that may cause
undesired operation. See instructions if interference
to radio or television reception is suspected.
Radio and Television Interference
This computer equipment generates, uses, and can
radiate radio-frequency energy. If it is not installed
and used properly—that is, in strict accordance with
Apple’s instructions—it may cause interference with
radio and television reception.
This equipment has been tested and found to
comply with the limits for a Class B digital device in
accordance with the specifications in Part 15 of FCC
rules. These specifications are designed to provide
reasonable protection against such interference in a
residential installation. However, there is no
guarantee that interference will not occur in a
particular installation.
You can determine whether your computer system is
causing interference by turning it off. If the
interference stops, it was probably caused by the
computer or one of the peripheral devices.
If your computer system does cause interference to
radio or television reception, try to correct the
interference by using one or more of the following
measures:
 Turn the television or radio antenna until the
interference stops.
 Move the computer to one side or the other of the
television or radio.
 Move the computer farther away from the
television or radio.
 Plug the computer into an outlet that is on a
different circuit from the television or radio. (That
is, make certain the computer and the television or
radio are on circuits controlled by different circuit
breakers or fuses.)
If necessary, consult an Apple-authorized service
provider or Apple. See the service and support
information that came with your Apple product. Or,
consult an experienced radio/television technician
for additional suggestions.
Important: Changes or modifications to this product
not authorized by Apple Inc. could void the EMC
compliance and negate your authority to operate
the product.
This product was tested for EMC compliance under
conditions that included the use of Apple peripheral
devices and Apple shielded cables and connectors
between system components.
It is important that you use Apple peripheral devices
and shielded cables and connectors between system
components to reduce the possibility of causing
interference to radios, television sets, and other
electronic devices. You can obtain Apple peripheral
devices and the proper shielded cables and
connectors through an Apple Authorized Reseller.
For non-Apple peripheral devices, contact the
manufacturer or dealer for assistance.
Responsible party (contact for FCC matters only):
Apple Inc. Product Compliance, 1 Infinite Loop
M/S 26-A, Cupertino, CA 95014-2084, 408-974-2000.
Industry Canada Statement
This Class B device meets all requirements of the
Canadian interference-causing equipment
regulations.
Cet appareil numérique de la classe B respecte
toutes les exigences du Règlement sur le matériel
brouilleur du Canada.
VCCI Class B Statement
Korea Class B Statement
Russia
European Community
Complies with European Directives 2006/95/EEC and
89/336/EEC.
Disposal and Recycling Information
Your iPod contains a battery. Dispose of your iPod
according to your local environmental laws and
guidelines.
For information about Apple’s recycling program,
go to: www.apple.com/environment
Deutschland: Dieses Gerät enthält Batterien. Bitte
nicht in den Hausmüll werfen. Entsorgen Sie dieses
Gerätes am Ende seines Lebenszyklus entsprechend
der maßgeblichen gesetzlichen Regelungen.
China:
Nederlands: Gebruikte batterijen kunnen worden
ingeleverd bij de chemokar of in een speciale
batterijcontainer voor klein chemisch afval (kca)
worden gedeponeerd.
Taiwan:
European Union—Disposal Information:
This symbol means that according to local laws and
regulations your product should be disposed of
separately from household waste. When this product
reaches its end of life, take it to a collection point
designated by local authorities. Some collection
points accept products for free. The separate
collection and recycling of your product at the time
of disposal will help conserve natural resources and
ensure that it is recycled in a manner that protects
human health and the environment.
Apple and the Environment
At Apple, we recognize our responsibility to
minimize the environmental impacts of our
operations and products.
For more information, go to:
www.apple.com/environment
© 2007 Apple Inc. All rights reserved. Apple, the Apple logo, FireWire,
iPod, iTunes, Mac, Macintosh, and Mac OS are trademarks of Apple Inc.,
registered in the U.S. and other countries. Finder and Shuffle are
trademarks of Apple Inc. Apple Store is a service mark of Apple Inc.,
registered in the U.S. and other countries. Other company and product
names mentioned herein may be trademarks of their respective
companies.
Mention of third-party products is for informational purposes only and
constitutes neither an endorsement nor a recommendation. Apple
assumes no responsibility with regard to the performance or use of
these products. All understandings, agreements, or warranties, if any,
take place directly between the vendors and the prospective users.
Every effort has been made to ensure that the information in this
manual is accurate. Apple is not responsible for printing or clerical
errors.
019-0996/6-2007
Index
A
AAC, converting songs to 15
albums, purchasing 11
Apple earphones 21
arranging the order of tracks 15
audiobooks
listening to 16
purchasing 11
audiobooks, loading 14
audio file formats 24
autofilling 14
B
battery
charge status 6
charging 6, 23
rechargeable 7
replacement information 27
status 7
bit rate 15
browsing iTunes Store 10
buttons 4
disabling and enabling 4, 18
C
CDs, importing into iTunes 11
charging the battery
about 6, 23
using the iPod USB Power Adapter 7
using your computer 6
compressing songs 15
computer
charging the battery 6
connecting iPod shuffle 5
problems connecting iPod shuffle 24
requirements 25
connecting iPod shuffle
about 5
charging the battery 6
controls
See also buttons
using 4
converting songs to AAC files 15
converting unprotected WMA files 25
D
data files, storing on iPod shuffle 19
deleting songs 15
disabling iPod shuffle buttons 4, 18
disconnecting iPod shuffle
about 5
during music update 5
eject first 5
instructions 6
disk, using iPod shuffle as 19
downloading podcasts 11
E
earphones
See also headphones
using 21
Eject button in iTunes 6
ejecting iPod shuffle before disconnecting 5
enabling iPod shuffle buttons 4, 18
entering song information manually 12
external disk, using iPod shuffle as 19
F
fast-forwarding 4
features of iPod shuffle 3
fitting more songs onto iPod shuffle 15
formats, audio file 24
G
getting help 30
getting started with iPod shuffle 25
going to the first track 4
H
headphones, using 21
Headphones port 4
hearing damage, avoiding 28
help, getting 30
higher bit rate songs 15
high-power USB port 5, 6, 23, 24, 25
I
importing CDs into iTunes 11
iPod USB Power Adapter 6, 22, 28
iTunes
ejecting iPod shuffle 6
getting help 30
importing CDs 11
iTunes Store 10
setting not to open automatically 20
version required 25
iTunes Library, adding songs 12
iTunes Store
browsing 10
downloading podcasts 11
purchasing audiobooks 11
purchasing songs and albums 11
searching 10
signing in 10
L
library, adding songs 12
listening to an audiobook 16
loading audiobooks 14
loading music 13
disconnecting iPod shuffle 5
tutorial 30
loading podcasts 14
loading songs manually 14
M
Mac OS X version 25
manually managing music 14
maximum volume limit, setting 17
music
See also songs; loading music
iPod shuffle not playing 24
purchasing 11
tutorial 30
O
operating system requirements 25
overview of iPod shuffle features 3
P
pausing a song 4
playing
previous song 4
songs 4
songs in order 4
podcasts 10, 11
podcasts, loading 14
ports
Headphones 4, 21
high-power USB 5, 6, 23, 24, 25
troubleshooting iPod shuffle connection 24
USB 5, 23, 24
USB 2.0 24, 25
USB on keyboard 5
power adapter 22
Power Search in iTunes Store 11
power switch 4
preventing iTunes from opening automatically 20
problems. See troubleshooting
purchasing songs, albums, audiobooks 11
R
random play 4
rearranging. See arranging
rechargeable batteries 7
reenabling iPod shuffle buttons 4, 18
registering iPod shuffle 30
relative volume, playing songs at 16
removing songs 15
replaying songs 4
requirements
computer 25
iTunes version 25
operating system 25
resetting iPod shuffle 4, 23
reshuffling songs 16
restoring iPod software 26
returning to first song 16
rewinding 4
S
Safely Remove Hardware icon 6
safety considerations 27
searching iTunes Store 10
serial number, locating 4, 30
service and support 30
setting play order of songs 4
settings
playing songs at relative volume 16
shuffle songs 16
volume limit 17
shuffle switch 4
shuffling songs on iPod shuffle 4, 16
skipping to next track 4
sleep mode and charging the battery 6
software, updating and restoring 26
songs
arranging the order 15
entering information manually 12
fast-forwarding 4
going to the first 4
loading manually 14
pausing 4
playing 4
playing at relative volume 16
playing in order 4
playing next or previous 4
purchasing 11
removing 15
replaying 4
reshuffling 16
returning to first 16
rewinding 4
shuffling 4, 16
skipping to the next 4
Sound Check, enabling 17
status lights 4, 7
battery 6, 7
storing, data files on iPod shuffle 19
subscribing to podcasts 11
supported audio file formats 24
supported operating systems 25
switches
power 4
shuffle 4
T
tracks. See songs
troubleshooting
connecting iPod shuffle to computer 24
connecting to USB port 24
cross-platform use 26
iPod shuffle not playing music 24
iPod shuffle not responding 23
resetting iPod shuffle 4, 23
safety considerations 27
updating and restoring software 26
turning iPod shuffle on or off 4
tutorial 30
U
unresponsive iPod shuffle 23
unsupported audio file formats 25
updating and restoring software 26
USB 2.0 port recommendation 5, 24, 25
USB port 23, 24
USB port on keyboard 5
USB Power Adapter 22
V
volume
changing 4
enabling Sound Check 17
setting limit 17
W
warranty service 30
Windows
supported versions 25
troubleshooting 26
WMA files, converting 25
Color User Manual
Copyright © 2009 Apple Inc. All rights reserved.
Your rights to the software are governed by the
accompanying software license agreement. The owner or
authorized user of a valid copy of Final Cut Studio software
may reproduce this publication for the purpose of learning
to use such software. No part of this publication may be
reproduced or transmitted for commercial purposes, such
as selling copies of this publication or for providing paid
for support services.
The Apple logo is a trademark of Apple Inc., registered in
the U.S. and other countries. Use of the “keyboard” Apple
logo (Shift-Option-K) for commercial purposes without
the prior written consent of Apple may constitute
trademark infringement and unfair competition in violation
of federal and state laws.
Every effort has been made to ensure that the information
in this manual is accurate. Apple is not responsible for
printing or clerical errors.
Note: Because Apple frequently releases new versions
and updates to its system software, applications, and
Internet sites, images shown in this manual may be slightly
different from what you see on your screen.
Apple
1 Infinite Loop
Cupertino, CA 95014
408-996-1010
www.apple.com
Apple, the Apple logo, ColorSync, DVD Studio Pro, Final
Cut, Final Cut Pro, Final Cut Studio, FireWire, Mac, Mac OS,
QuickTime, and Shake are trademarks of Apple Inc.,
registered in the U.S. and other countries.
Cinema Tools, Finder, and Multi-Touch are trademarks of
Apple Inc.
Production stills from the film “Les Poupets” provided
courtesy of Jean-Paul Bonjour. “Les Poupets” © 2006
Jean-Paul Bonjour. All rights reserved.
http://jeanpaulbonjour.com
Other company and product names mentioned herein
are trademarks of their respective companies. Mention of
third-party products is for informational purposes only
and constitutes neither an endorsement nor a
recommendation. Apple assumes no responsibility with
regard to the performance or use of these products.
Preface 9 Welcome to Color
9 About Color
10 About the Color Documentation
10 Additional Resources
Chapter 1 13 Color Correction Basics
13 The Fundamental Color Correction Tasks
16 When Does Color Correction Happen?
23 Image Encoding Standards
28 Basic Color and Imaging Concepts
Chapter 2 35 Color Correction Workflows
35 An Overview of the Color Workflow
37 Limitations in Color
39 Video Finishing Workflows Using Final Cut Pro
47 Importing Projects from Other Video Editing Applications
50 Digital Cinema Workflows Using Apple ProRes 4444
56 Finishing Projects Using RED Media
65 Digital Intermediate Workflows Using DPX/Cineon Media
73 Using EDLs, Timecode, and Frame Numbers to Conform Projects
Chapter 3 77 Using the Color Interface
78 Setting Up a Control Surface
78 Using Onscreen Controls
82 Using Organizational Browsers and Bins
88 Using Color with One or Two Monitors
Chapter 4 91 Importing and Managing Projects and Media
92 Creating and Opening Projects
92 Saving Projects
95 Saving and Opening Archives
95 Moving Projects from Final Cut Pro to Color
101 Importing EDLs
102 EDL Import Settings
104 Relinking Media
105 Importing Media Directly into the Timeline
106 Compatible Media Formats
112 Moving Projects from Color to Final Cut Pro
114 Exporting EDLs
115 Reconforming Projects
115 Converting Cineon and DPX Image Sequences to QuickTime
117 Importing Color Corrections
118 Exporting JPEG Images
Chapter 5 119 Configuring the Setup Room
119 The File Browser
122 Using the Shots Browser
128 The Grades Bin
129 The Project Settings Tab
135 The Messages Tab
135 The User Preferences Tab
Chapter 6 149 Monitoring Your Project
149 The Scopes Window and Preview Display
151 Monitoring Broadcast Video Output
153 Using Display LUTs
159 Monitoring the Still Store
Chapter 7 161 Timeline Playback, Navigation, and Editing
162 Basic Timeline Elements
163 Customizing the Timeline Interface
165 Working with Tracks
166 Selecting the Current Shot
166 Timeline Playback
169 Zooming In and Out of the Timeline
170 Timeline Navigation
171 Selecting Shots in the Timeline
172 Working with Grades in the Timeline
174 The Settings 1 Tab
175 The Settings 2 Tab
176 Editing Controls and Procedures
Chapter 8 183 Analyzing Signals Using the Video Scopes
183 What Scopes Are Available?
185 Video Scope Options
187 Analyzing Images Using the Video Scopes
Chapter 9 207 The Primary In Room
207 What Is the Primary In Room Used For?
208 Where to Start in the Primary In Room?
210 Contrast Adjustment Explained
212 Using the Primary Contrast Controls
222 Color Casts Explained
224 Using Color Balance Controls
234 The Curves Controls
245 The Basic Tab
249 The Advanced Tab
251 Using the Auto Balance Button
252 The RED Tab
Chapter 10 257 The Secondaries Room
258 What Is the Secondaries Room Used For?
259 Where to Start in the Secondaries Room?
260 The Enabled Button in the Secondaries Room
261 Choosing a Region to Correct Using the HSL Qualifiers
268 Controls in the Previews Tab
270 Isolating a Region Using the Vignette Controls
277 Adjusting the Inside and Outside of a Secondary Operation
278 The Secondary Curves Explained
283 Reset Controls in the Secondaries Room
Chapter 11 285 The Color FX Room
286 The Color FX Interface Explained
286 How to Create Color FX
294 Creating Effects in the Color FX Room
300 Using Color FX with Interlaced Shots
301 Saving Favorite Effects in the Color FX Bin
302 Node Reference Guide
Chapter 12 313 The Primary Out Room
313 What Is the Primary Out Room Used For?
314 Making Extra Corrections Using the Primary Out Room
314 Understanding the Image Processing Pipeline
315 Ceiling Controls
Chapter 13 317 Managing Corrections and Grades
317 The Difference Between Corrections and Grades
318 Saving and Using Corrections and Grades
325 Managing Grades in the Timeline
332 Using the Copy To Buttons in the Primary Rooms
334 Using the Copy Grade and Paste Grade Memory Banks
334 Setting a Beauty Grade in the Timeline
335 Disabling All Grades
336 Managing Grades in the Shots Browser
343 Managing a Shot’s Corrections Using Multiple Rooms
Chapter 14 347 Keyframing
347 Why Keyframe an Effect?
347 Keyframing Limitations
349 How Keyframing Works in Different Rooms
351 Working with Keyframes in the Timeline
353 Keyframe Interpolation
Chapter 15 355 The Geometry Room
355 Navigating Within the Image Preview
356 The Pan & Scan Tab
361 The Shapes Tab
370 The Tracking Tab
Chapter 16 381 The Still Store
381 Saving Images to the Still Store
383 Saving Still Store Images in Subdirectories
383 Removing Images from the Still Store
384 Recalling Images from the Still Store
384 Customizing the Still Store View
Chapter 17 389 The Render Queue
389 About Rendering in Color
395 The Render Queue Interface
396 How to Render Shots in Your Project
400 Rendering Multiple Grades for Each Shot
401 Managing Rendered Shots in the Timeline
401 Examining the Color Render Log
402 Choosing Printing Density When Rendering DPX Media
403 Gather Rendered Media
Appendix A 405 Calibrating Your Monitor
405 About Color Bars
405 Calibrating Video Monitors with Color Bars
Appendix B 409 Keyboard Shortcuts in Color
409 Project Shortcuts
410 Switching Rooms and Windows
411 Scopes Window Shortcuts
411 Playback and Navigation
412 Grade Shortcuts
413 Timeline-Specific Shortcuts
413 Editing Shortcuts
414 Keyframing Shortcuts
414 Shortcuts in the Shots Browser
414 Shortcuts in the Geometry Room
414 Still Store Shortcuts
415 Render Queue Shortcuts
Appendix C 417 Using Multi-Touch Controls in Color
417 Multi-Touch Control of the Timeline
417 Multi-Touch Control in the Shots Browser
418 Multi-Touch Control of the Scopes
418 Multi-Touch Control in the Geometry Room
419 Multi-Touch Control in the Image Preview of the Scopes Window
Appendix D 421 Setting Up a Control Surface
421 JLCooper Control Surfaces
426 Tangent Devices CP100 Control Surface
429 Tangent Devices CP200 Series Control Surface
434 Customizing Control Surface Sensitivity
Welcome to the world of professional video and film grading and manipulation using
Color.
This preface covers the following:
• About Color (p. 9)
• About the Color Documentation (p. 10)
• Additional Resources (p. 10)
About Color
Color has been designed from the ground up as a feature-rich color correction environment
that complements a wide variety of post-production workflows, whether your project is
standard definition, high definition, or a 2K digital intermediate. If you've edited a program
using Final Cut Pro, it's easy to send your program to Color for grading and then send it
back to Final Cut Pro for final output. However, it's also easy to reconform projects that
originate as EDLs from other editing environments.
Color has the tools that professional colorists demand, including:
• Primary color correction using three-way color balance and contrast controls with
individual shadow, midtone, and highlight controls
• Curve controls for detailed color and luma channel adjustments
• Up to eight secondary color correction operations per shot with HSL qualifiers, vignettes,
user shapes, and separate adjustments for the inside and outside of each secondary
• Color FX node-based effects for creating custom color effects
• Pan & Scan effects
• Motion tracking that can be used to animate vignettes, user shapes, and other effects
• Broadcast legal settings to guarantee adherence to quality control standards
• Support for color correction–specific control surfaces
• And much, much more
All of these tools are divided among eight individual “rooms” of the Color interface,
logically arranged in an order that matches the workflow of most colorists. You use Color
to correct, balance, and create stylized “looks” for each shot in your program as the last
step in the post-production workflow, giving your programs a final polish previously
available only to high-end facilities.
About the Color Documentation
The Color User Manual provides comprehensive information about the application and
is written for users of all levels of experience.
• Editors and post-production professionals from other disciplines who are new to the
color correction process will find information on how to get started, with detailed
explanations of how all controls work, and why they function the way they do.
• Colorists coming to Color from other grading environments can skip ahead to find
detailed information about the application’s inner workings and exhaustive
parameter-by-parameter explanations for every room of the Color interface.
Additional Resources
The following websites provide general information, updates, and support information
about Color, as well as the latest news, resources, and training materials.
Color Website
For more information about Color, go to:
• http://www.apple.com/finalcutstudio/color
Apple Service and Support Websites
The Apple Service and Support website provides software updates and answers to the
most frequently asked questions for all Apple products, including Color. You’ll also have
access to product specifications, reference documentation, and Apple product technical
articles:
• http://www.apple.com/support
For support information that's specific to Color, go to:
• http://www.apple.com/support/color
To provide comments and feedback about Color, go to:
• http://www.apple.com/feedback/color.html
A discussion forum is also available to share information about Color. To participate, go
to:
• http://discussions.apple.com
For more information on the Apple Pro Training Program, go to:
• http://www.apple.com/software/pro/training
To better learn how Color works, it’s important to understand the overall color correction
process and how images work their way through post-production in standard definition
(SD), high definition (HD), and film workflows.
If you’re new to color correction, the first part of this chapter provides a background in
color correction workflows to help you better understand why Color works the way it
does. The second part goes on to explain color and imaging concepts that are important
to the operation of the Color interface.
This chapter covers the following:
• The Fundamental Color Correction Tasks
• When Does Color Correction Happen?
• Image Encoding Standards
• Basic Color and Imaging Concepts
The Fundamental Color Correction Tasks
In any post-production workflow, color correction is generally one of the last steps taken
to finish an edited program. Color has been created to give you precise control over the
look of every shot in your project by providing flexible tools and an efficient workspace
in which to manipulate the contrast, color, and geometry of each shot in your program.
When color correcting a given program, you’ll be called upon to perform many, if not all,
of the tasks described in this section. Color gives you an extensive feature set with which
to accomplish all this and more. While the deciding factor in determining how far you
go in any color correction session is usually the amount of time you have in which to
work, the dedicated color correction interface in Color allows you to work quickly and
efficiently.
Every program requires you to take some combination of the following steps.
Color Correction Basics
Stage 1: Correcting Errors in Color Balance and Exposure
Frequently, images that are acquired digitally (whether shot on analog or digital video,
or transferred from film) don’t have optimal exposure or color balance to begin with. For
example, many camcorders and digital cinema cameras deliberately record blacks that
aren’t quite at 0 percent in order to avoid inadvertently crushing shadow detail.
Furthermore, accidents can happen in any shoot. For example, the crew may not have
had the correctly balanced film stock for the conditions in which they were shooting, or
someone may have forgotten to white balance the video camera before shooting an
interview in an office lit with fluorescent lights, resulting in footage with a greenish tinge.
Color makes it easy to fix these kinds of mistakes.
Stage 2: Making Sure That Key Elements in Your Program Look the Way They Should
Every scene of your program has key elements that are the main focus of the viewer. In
a narrative or documentary video, the focus is probably on the individuals within each
shot. In a commercial, the key element is undoubtedly the product (for example, the label
of a bottle or the color of a car). Regardless of what these key elements are, chances are
you or your audience will have certain expectations of what they should look like, and
it’s your job to make the colors in the program match what was originally shot.
When working with shots of people, one of the guiding principles of color correction is
to make sure that their skin tones in the program look the same as (or better than) in real
life. Regardless of ethnicity or complexion, the hues of human skin tones, when measured
objectively on a Vectorscope, fall along a fairly narrow range (although the saturation
and brightness vary). Color gives you the tools to make whatever adjustments are
necessary to ensure that the skin tones of people in your final edited piece look the way
they should.
Stage 3: Balancing All the Shots in a Scene to Match
Most edited programs incorporate footage from a variety of sources, shot in multiple
locations over the course of many days, weeks, or months of production. Even with the
most skilled lighting and camera crews, differences in color and exposure are bound to
occur, sometimes within shots meant to be combined into a single scene.
When edited together, these changes in color and lighting can cause individual shots to
stand out, making the editing appear uneven. With careful color correction, all the different
shots that make up a scene can be balanced to match one another so that they all look
as if they’re happening at the same time and in the same place, with the same lighting.
This is commonly referred to as scene-to-scene color correction.
Stage 4: Creating Contrast
Color correction can also be used to create contrast between two scenes for a more jarring
effect. Imagine cutting from a lush, green jungle scene to a harsh desert landscape with
many more reds and yellows. Using color correction, you can subtly accentuate these
differences.
Stage 5: Achieving a “Look”
The process of color correction is not simply one of making all the video in your piece
match some objective model of exposure. Color, like sound, is a property that, when
subtly mixed, can result in an additional level of dramatic control over your program.
With color correction, you can control whether your video has rich, saturated colors or a
more muted look. You can make your shots look warmer by pushing their tones into the
reds, or make them look cooler by bringing them into the blues. You can pull details out
of the shadows, or crush them, increasing the picture’s contrast for a starker look. Such
subtle modifications alter the audience’s perception of the scene being played, changing
a program’s mood. Once you pick a look for your piece, or even for an individual scene,
you can use color correction to make sure that all the shots in the appropriate scenes
match the same look, so that they cut together smoothly.
Stage 6: Adhering to Guidelines for Broadcast Legality
If a program is destined for television broadcast, you are usually provided with a set of
quality control (QC) guidelines that specify the “legal” limits for minimum black levels,
maximum white levels, and minimum and maximum chroma saturation and composite
RGB limits. Adherence to these guidelines is important to ensure that the program is
accepted for broadcast, as “illegal” values may cause problems when the program is
encoded for transmission. QC standards vary, so it’s important to check what these
guidelines are in advance. Color has built-in broadcast safe settings (sometimes referred
to as a legalizer) that automatically prevent video levels from exceeding the specified
limits. For more information, see The Project Settings Tab.
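The clamping a legalizer performs can be sketched in a few lines. This is a hypothetical illustration, not Color’s actual implementation; the limits shown (normalized luma from 0.0 to 1.0) are placeholders for whatever your QC guidelines specify.

```python
def legalize_luma(y, y_min=0.0, y_max=1.0):
    """Clamp a luma value to the broadcast-legal range.

    y_min and y_max are placeholders; substitute the limits from
    your broadcast QC guidelines.
    """
    return min(max(y, y_min), y_max)

# Super-black and super-white values are pulled back into range.
print(legalize_luma(-0.05))  # 0.0
print(legalize_luma(1.09))   # 1.0
```

A real legalizer also limits chroma saturation and composite levels, but the principle is the same: out-of-range values are brought back within the specified limits before output.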
Stage 7: Adjusting Specific Elements Separately
It’s sometimes necessary to selectively target a narrow range of colors to alter or replace
only those color values. A common example of this might be to turn a red car blue or to
mute the excessive colors of an article of clothing. These sorts of tasks are accomplished
with what’s referred to as secondary color correction, and Color provides you with numerous
tools with which to achieve such effects. For more information, see The Secondaries
Room.
Stage 8: Making Digital Lighting Adjustments
Sometimes lighting setups that looked right during the shoot don’t work as well in
post-production. Changes in the director’s vision, alterations to the tone of the scene as
edited, or suggestions on the part of the director of photography (DoP) during post may
necessitate alterations to the lighting within a scene beyond simple adjustments to the
image’s overall contrast. Color provides powerful controls for user-definable masking
which, in combination with secondary color correction controls, allow you to isolate
multiple regions within an image and fine-tune the lighting. This is sometimes referred
to as digital relighting. For more information, see The Secondaries Room and Controls in
the Shapes Tab.
Stage 9: Creating Special Effects
Sometimes a scene requires more extreme effects, such as manipulating colors and
exposure intensively to achieve a day-for-night look, creating an altered state for a
flashback or hallucination sequence, or just creating something bizarre for a music video.
In the Color FX room, Color provides you with an extensible node-based tool set for
creating such in-depth composites efficiently, in conjunction with the other primary and
secondary tools at your disposal. For more information, see The Color FX Room.
When Does Color Correction Happen?
A program’s color fidelity shouldn’t be neglected until the color correction stage of the
post-production process. Ideally, every project is begun with a philosophy of color
management that’s applied during the shoot, is maintained throughout the various
transfer and editing passes that occur during post-production, and concludes with the
final color correction pass conducted in Color. This section elaborates on how film and
video images have traditionally made their way through the post-production process.
For detailed information, see:
• Color Management Starts During the Shoot
• Initial Color Correction When Transferring Film
• Traditional Means of Final Color Correction
• Advantages of Grading with Color
Color Management Starts During the Shoot
Whether a program is shot using film, video, or high-resolution digital imaging of another
means, it’s important to remember that the process of determining a program’s overall
look begins when each scene is lit and shot during production. To obtain the maximum
amount of control and flexibility over shots in post-production, you ideally should start
out with footage that has been exposed with the end goals in mind right from the
beginning. Color correction in post-production is no substitute for good lighting.
Optimistically, the process of color correction can be seen as extending and enhancing
the vision of the producer, director, and director of photography (DoP) as it was originally
conceived. Often, the DoP gets personally involved during the color correction process
to ensure that the look he or she was trying to achieve is perfected.
At other times, the director or producer may change his or her mind regarding how the
finished piece should look. In these cases, color correction might be used to alter the
overall look of the piece (for example, making footage that was shot to look cool look
warmer, instead). While Color provides an exceptional degree of control over your footage,
it’s still important to start out with clean, properly exposed footage.
Furthermore, choices made during preproduction and the shoot, including the film or
video format and camera settings used, can have a profound effect on the amount of
flexibility that’s available during the eventual color correction process.
Initial Color Correction When Transferring Film
When a project has been shot on film, the camera negatives must first be transferred to
the videotape or digital video format of choice prior to editing and digital post using a
telecine or datacine machine. A telecine is a machine for transferring film to videotape,
while a datacine is set up for transferring film directly to a digital format, usually a DPX
(Digital Picture eXchange) or Cineon image sequence.
(Workflow diagram: camera negative → telecine → videotapes)
Usually, the colorist running the film transfer session performs some level of color
correction to ensure that the editor has the most appropriate picture to work with. The
goals of color correction at this stage usually depend on both the length of the project
and the post-production workflow that’s been decided upon.
• Short projects, commercials, spots, and very short videos may get a detailed color
correction pass right away. The colorist will first calibrate the telecine’s own color
corrector to balance the whites, blacks, and color perfectly. Then the colorist, in
consultation with the DoP, director, or producer, will work shot by shot to determine
the look of each shot according to the needs of the project. As a result, the editor will
be working with footage that has already been corrected.
• Long-form projects such as feature-length films and longer television programs probably
won’t get a detailed color correction pass right away. Instead, the footage that is run
through the telecine will be balanced to have reasonably ideal exposure and color for
purposes of having a good image for editing, and left at that. Detailed color correction
is then done at another stage.
• Projects of any length that are going through post-production as a digital intermediate
are transferred with a color correction pass designed to retain the maximum amount
of image data. Since a second (and final) digital color correction pass is intended to be
performed at the end of the post-production process, it’s critical that the image data
is high quality, preserving as much highlight and shadow detail as possible. Interestingly,
since the goal is to preserve the image data and not to create the final look of the
program, the highest-quality image for grading may not be the most visually appealing
image.
However the color correction is handled during the initial telecine or datacine transfer,
once complete, the footage goes through the typical post-production processes of offline
and online editorial.
Color Correcting Video Versus Film
Color has been designed to fit into both video and film digital intermediate workflows.
Since all footage must first be transferred to a QuickTime or image sequence format to
be imported into Color, film and video images are corrected using the same tools and
methods.
Three main attributes affect the quality of media used in a program, all of which are
determined when the footage is originally captured or transferred prior to Color import:
• The type and level of compression applied to the media
• The bit depth at which it’s encoded
• The chroma subsampling ratio used
For color correction, spatial and temporal compression should be minimized, since
compression artifacts can compromise the quality of your adjustments. Also, media at
higher bit depths is generally preferable (see Bit Depth Explained).
Most importantly of all, high chroma subsampling ratios, such as 4:4:4 or 4:2:2, are
preferred to maximize the quality and flexibility of your corrections. There’s nothing
stopping you from working with 4:1:1 or 4:2:0 subsampled footage, but you may find
that extreme contrast adjustments and smooth secondary selections are a bit more
difficult to accomplish with highly compressed color spaces.
For more information, see Chroma Subsampling Explained.
Traditional Means of Final Color Correction
Once editing is complete and the picture is locked, it’s time for color correction (referred
to as color grading in the film world) to begin. Traditionally, this process has been
accomplished either via a color timing session for film or via a tape-to-tape color correction
session for video.
Color Timing for Film
Programs being finished and color corrected on film traditionally undergo a negative
conform process prior to color timing. When editorial is complete, the original camera
negative is conformed to match the workprint or video cut of the edited program using
a cut list or pull list. (If the program was edited using Final Cut Pro, this can be derived
using Cinema Tools.) These lists identify each shot used in the edited program and show
how the shots fit together. This is a time-consuming and detail-oriented process, since
mistakes made while cutting the negative are extremely expensive to correct.
Once the camera negative has been conformed and the different shots physically glued
together onto alternating A and B rolls, the negative can be color-timed by being run
through an optical printer designed for this process. These machines shine filtered light
through the original negatives to expose an intermediate positive print, in the process
creating a single reel of film that is the color-corrected print.
The process of controlling the color of individual shots and doing scene-to-scene color
correction is accomplished with three controls to individually adjust the amount of red,
green, and blue light that exposes the film, using a series of optical filters and shutters.
Each of the red, green, and blue dials is adjusted in discrete increments called printer
points (with each point being a fraction of an f-stop, the scale used to measure film
exposure). Typically there’s a total range of 50 points, where point 25 is the original neutral
state for that color channel. Increasing or decreasing all three color channels together
darkens or brightens the image, while making disproportionate adjustments to the three
channels changes the color balance of the image relative to the adjustment.
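As a sketch of the arithmetic, assuming the common convention of 12 printer points per f-stop (conventions vary by lab) and the neutral point of 25 described above, a printer-point setting for one channel can be expressed as a linear exposure multiplier:

```python
def printer_point_gain(points, neutral=25, points_per_stop=12):
    """Convert a printer-point setting for one color channel into a
    linear exposure multiplier.

    Assumes 12 points per f-stop and a neutral point of 25; the
    points-per-stop value is a lab convention, not a fixed standard.
    """
    return 2 ** ((points - neutral) / points_per_stop)

print(printer_point_gain(25))  # 1.0 (neutral: no change)
print(printer_point_gain(37))  # 2.0 (12 points up = one stop brighter)
```

Raising all three channels by the same number of points brightens the image uniformly; raising only one channel shifts the color balance toward that primary, which is exactly the behavior the optical printer’s filters produce.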
The machine settings used for each shot can be stored (at one time using paper tape
technology) and recalled at any time, to ease subsequent retiming and adjustments, with
the printing process being automated once the manual timing is complete. Once the
intermediate print has been exposed, it can be developed and the final results projected.
(Workflow diagram: camera negative → conform negative → optical color timing → final film print)
While this system of color correction may seem cumbersome compared to today’s digital
tools for image manipulation, it’s an extremely effective means of primary color correction
for those who’ve mastered it.
Note: Color includes printer points controls for colorists who are familiar with this method
of color correction. For more information, see The Advanced Tab.
Tape-to-Tape Color Correction
For projects shot on videotape (and for those shot on film that will not receive a second
telecine pass), the color correction process fits into the traditional video offline/online
workflow. Once the edit has been locked, the final master tape is assembled, either by
being reconformed on the system originally used to do the offline or by taking the EDL
(Edit Decision List) and original source tapes to an online suite compatible with the source
tape formats. For more information about EDLs, see Importing Projects from Other Video
Editing Applications.
If the online assembly is happening in a high-end online suite, then color correction can
be performed either during the assembly of the master tape or after assembly by running
the master tape through a color correction session.
(Workflow diagram: videotapes → offline edit → tape suite → final master tape)
Note: If the final master tape is color corrected, the colorist must carefully dissolve and
wipe color correction operations to match video dissolves and wipes happening in the
program.
Either way, the video signal is run through dedicated video color correction hardware
and software, and the colorist uses the tape’s master timecode to set up and preserve
color correction settings for every shot of every scene.
The evolution of the online video color correction suite introduced many more tools to
the process, including separate corrections for discrete tonal zones, secondary color
correction of specific subjects via keying and shapes controls, and many other creative
options.
Color Correcting via a Second Telecine Pass
Programs shot on film that are destined for video mastering, such as for an episodic
broadcast series, may end up back in the telecine suite for their final color correction
pass. Once editing is complete and the picture is locked, a cut list or pull list (similar to
that used for a negative conform) is created that matches the EDL of the edited program.
Using the cut list, the post-production supervisor pulls only the film negative that was
actually used in the edit. Since this is usually a minority of the footage that was originally
shot, the colorist now has more time (depending on the show’s budget, of course) to
perform a more detailed color correction pass on the selected footage that will be
assembled into the final video program during this final telecine pass.
Although this process might seem redundant, performing color correction directly from
the film negative has several distinct advantages. Since film has greater latitude from
black to white than video has, a colorist working straight off the telecine potentially has
a wider range of color and exposure from which to draw than when working only with
video.
In addition, the color correction equipment available to the telecine colorist has evolved
to match (and is sometimes identical to) the tools available to online video colorists, with
the added advantage that the colorist can work directly on the uncompressed images
provided by the telecine.
After the conclusion of the second color correction pass, the color-corrected selects are
reassembled to match the original edit, and the project is mastered to tape.
(Workflow diagram: camera negative, inexpensive one-light telecine pass, offline media, offline edit, best-light telecine pass, reconform, final master)
Incidentally, even if you don’t intend to color correct your program in the telecine suite,
you might consider retransferring specific shots for changes that are easier, or of
higher quality, to make directly from the original camera negative. For example, after
identifying shots you want to retransfer in your Final Cut Pro sequence, you can use
Cinema Tools to create a selects list just for shots you want to optically enlarge, speeding
the transfer process.
Other Advantages to Telecine Transfers
In addition to color correction, a colorist working with a telecine has many other options
available, depending on what kinds of issues may have come up during the edit.
• Using a telecine to pull the image straight off the film negative, the colorist can
reposition the image to include parts of the film image that fall outside of the action
safe area of video.
• With the telecine, the image can also be enlarged optically, potentially up to 50
percent without visible distortion.
• The ability to reframe shots in the telecine allows the director or producer to make
significant changes to a scene, turning a medium shot into a close-up for dramatic
effect, or moving the entire frame up to crop out a microphone that’s inadvertently
dropped into the shot.
Advantages of Grading with Color
When Does Color Correction Happen? discusses how color correction is accomplished in
other post-production environments. This section describes how Color fits into a typical
film or video post-production process.
Color puts on your desktop many of the same color correction tools that were
previously available only in high-end tape-to-tape and telecine color correction
suites. Color also provides tools in the Color FX room that are more
commonly found in dedicated compositing applications, which give you even more
detailed control over the images in your program. (For more information, see The Color
FX Room.)
Color has been designed as a color correction environment for both film and video. It’s
resolution-independent, supporting everything from standard definition video up to 2K
and 4K film scans. It also supports multiple media formats and is compatible with image
data using a variety of image sequence formats and QuickTime codecs.
Color also has been designed to be incorporated into a digital intermediate workflow.
Digital intermediate refers to a high-quality digital version of your program that can be
edited, color corrected, and otherwise digitally manipulated using computer hardware
and software, instead of tape machines or optical printers.
Editors, effects artists, and colorists who finish video programs in a tapeless fashion have
effectively been working with digital intermediates for years, but the term usually describes
the process of scanning film frames digitally, for the purposes of doing all edit conforming,
effects, and color correction digitally. It is then the digital image data which is printed
directly to film or compiled as a file for digital projection.
Finishing film or video programs digitally frees colorists from the limitations of film and
tape transport mechanisms, speeding their work by letting them navigate through a
project as quickly as they can in a nonlinear editing application. Furthermore, working
with the digital image data provides a margin of safety, by eliminating the risk of scratching
the negative or damaging the source tapes.
When Does Color Correction in Color Happen?
Color correction using Color usually happens at or near the conclusion of the online edit
or project conform, often at the same time the final audio mix is being performed. Waiting
until the picture is locked is always a good idea, but it’s not essential, as Color provides
tools for synchronizing projects that are still being edited via XML files or EDLs.
Color has been designed to work hand in hand with editing applications like Final Cut Pro;
Final Cut Pro takes care of input, editing, and output, and Color allows you to focus on
color correction and related effects.
About Importing Projects and Media into Color
To work on a program in Color, you must be provided with two sets of files:
• Final Cut Pro sequence data can be sent to Color directly using the Send To Color
command. Otherwise, the edited project file (or files, if the program is in multiple reels)
should be provided in a format that can be imported into Color. Compatible formats
include Final Cut Pro XML files, and compatible EDL files from nearly any editing
environment.
• High-quality digital versions of the original source media, in a compatible QuickTime
or image sequence format.
Project and media format flexibility means that Color can be incorporated into a wide
variety of post-production workflows. For an overview of different color correction
workflows using Color, see Color Correction Workflows.
About Exporting Projects from Color
Color doesn’t handle video capture or output to tape on its own. Once you finish color
correcting your project in Color, you render every shot in the project to disk as an alternate
set of color-corrected media files, and you then send your Color project back to
Final Cut Pro, or hand it off to another facility for tape layoff or film out. For more
information, see The Render Queue.
What Footage Does Color Work With?
Color can work with film using scanned DPX or Cineon image sequences, or with video
clips using QuickTime files, at a variety of resolutions and compression ratios. This means
you have the option of importing and outputting nearly any professional format, from
highly compressed standard definition QuickTime DV-25 shots up through uncompressed
2K or 4K DPX image sequences—whatever your clients provide.
Image Encoding Standards
The sections listed below provide important information about the image encoding
standards supported by Color. The image data you’ll be color correcting is typically
encoded using either an RGB or a Y′CBCR (sometimes referred to as YUV) format. Color is
extremely flexible and capable of working with image data of either type. For detailed
information, see:
• The RGB Additive Color Model Explained
• The Y′CBCR Color Model Explained
• Chroma Subsampling Explained
• Bit Depth Explained
The RGB Additive Color Model Explained
In the RGB color model, three color channels are used to store red, green, and blue values
in varying amounts to represent each available color that can be reproduced. Adjusting
the relative balance of values in these color channels adjusts the color being represented.
When all three values are equal, the result is a neutral tone, from black through gray to
white.
Typically, you’ll see these values expressed as digital percentages in the Parade
scope or Histogram. For example, if all three color channels are 0%, the pixel is black. If
all three color channels are 50%, the pixel is a neutral gray. If all three color channels are
100% (the maximum value), the pixel is white.
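The percentage scale maps directly onto the code values stored in a file. The following sketch (a generic illustration, not anything specific to Color) converts a channel percentage to an 8-bit code value and tests whether a pixel is a neutral tone:

```python
def percent_to_8bit(p):
    """Map a channel percentage (0-100) to an 8-bit code value (0-255)."""
    return int(p / 100 * 255 + 0.5)

def is_neutral(r, g, b):
    """A pixel is a neutral tone when all three channels are equal."""
    return r == g == b

print(percent_to_8bit(0))      # 0   (black)
print(percent_to_8bit(50))     # 128 (neutral gray)
print(percent_to_8bit(100))    # 255 (white)
print(is_neutral(50, 50, 50))  # True
```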
Animation (an older, 8-bit codec) and Apple ProRes 4444 (a newer 12-bit codec) are the
two most commonly used RGB QuickTime codecs. In digital intermediate workflows,
RGB-encoded images are typically stored as uncompressed DPX or Cineon image
sequences.
The Y′CBCR Color Model Explained
Video is typically recorded using the Y′CBCR color model. Y′CBCR color coding also employs
three channels, or components. A shot’s image is divided into one luma component (luma
is image luminance modified by gamma for broadcast) and two color difference
components which encode the chroma (chrominance). Together, these three components
make up the picture that you see when you play back your video.
• The Y′ component represents the black-and-white portion of an image’s tonal range.
Because the eye has different sensitivities to the red, green, and blue portions of the
spectrum, the image “lightness” that the Y′ component reproduces is derived from a
weighted ratio of the (gamma-corrected) R, G, and B color channels. (Incidentally, the
Y′ component is mostly green.) Viewed on its own, the Y′ component is the
monochrome image.
• The two color difference components, CB and CR, are used to encode the color
information in such a way as to fit three color channels of image data into two. A bit
of math is used to take advantage of the fact that the Y′ component also stores green
information for the image. The actual math used to derive each color component is
CB = B′ − Y′, while CR = R′ − Y′.
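The derivation above can be sketched numerically. The luma weights below are the standard Rec. 601 coefficients, which this section doesn’t name explicitly, so treat them as an assumption; the color difference components are shown in the simplified, unscaled form given above (real video encodings scale and offset them):

```python
REC601 = (0.299, 0.587, 0.114)  # assumed luma weights (Rec. 601)

def rgb_to_ycbcr(r, g, b, weights=REC601):
    """Split gamma-corrected R'G'B' values (0.0-1.0) into a luma
    component plus two unscaled color difference components."""
    wr, wg, wb = weights
    y = wr * r + wg * g + wb * b   # weighted ratio of R', G', B'
    return y, b - y, r - y         # Y', CB = B' - Y', CR = R' - Y'

# A neutral gray carries no color difference information.
y, cb, cr = rgb_to_ycbcr(0.5, 0.5, 0.5)
print(y, cb, cr)  # approximately 0.5, 0.0, 0.0
```

Note how the green channel dominates the luma weighting, which is why the text says the Y′ component is “mostly green.”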
Note: This scheme was originally created so that older black-and-white televisions would
be compatible with the newer color television transmissions.
Chroma Subsampling Explained
In Y′CBCR encoded video, the color channels are typically sampled at a lower ratio than
the luma channel. Because the human eye is more sensitive to differences in brightness
than in color, this has been used as a way of reducing the video bandwidth (or data rate)
requirements without perceptible loss to the image.
The sampling ratio between the Y′, CB, and CR channels is notated as a three-value ratio.
There are four common chroma subsampling ratios:
• 4:4:4: 4:4:4 chroma subsampled media encodes completely uncompressed color, the
highest quality possible, as the color difference channels are sampled at the same rate
as the luma channel. 4:4:4 subsampled image data is typically obtained via telecine or
datacine to an image sequence or video format capable of containing it, and is generally
employed for digital intermediate and film workflows. RGB encoded images such as
DPX and Cineon image sequences and TIFF files are always 4:4:4.
The Apple ProRes 4444 codec lets you capture, transcode to, and master media at this
high quality. (The fourth 4 refers to the ability of Apple ProRes 4444 to preserve an
uncompressed alpha channel in addition to the three color channels; however, Color
doesn’t support alpha channels.)
Be aware that simply rendering at 4:4:4 doesn’t guarantee a high-quality result. If media
is not acquired at 4:4:4, then rendering at 4:4:4 will preserve the high quality of
corrections you make to the video, but it won’t add color information that wasn’t there
to begin with.
As of this writing, few digital acquisition formats are capable of recording 4:4:4 video,
but those that do include HDCAM SR, as well as certain digital cinema cameras, including
the RED, Thompson Viper FilmStream, and Genesis digital camera systems.
• 4:2:2: 4:2:2 is a chroma subsampling ratio typical for many high-quality standard and
high definition video acquisition and mastering formats, including Beta SP (an analog
format), Digital Betacam, Beta SX, IMX, DVCPRO 50, DVCPRO HD, HDCAM, and D-5 HD.
Although storing half the color information of 4:4:4, 4:2:2 is standard for video mastering
and broadcast. As their names imply, Apple Uncompressed 8-bit 4:2:2, Apple
Uncompressed 10-bit 4:2:2, Apple ProRes 422, and Apple ProRes 422 (HQ) all use 4:2:2
chroma subsampling.
• 4:1:1 and 4:2:0: 4:1:1 is typical for consumer and prosumer video formats including
DVCPRO 25 (NTSC and PAL), DV, and DVCam (NTSC).
4:2:0 is another consumer-oriented subsampling rate, used by DV (PAL), DVCAM (PAL),
and MPEG-2, as well as the high definition HDV and XDCAM HD formats.
Due to their low cost, producers of all types have flocked to these formats for acquisition,
despite the resulting limitations during post-production (discussed below). Regardless,
whatever the acquisition format, it is inadvisable to master using either 4:1:1 or 4:2:0
video formats.
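What these ratios mean for stored data can be sketched in a few lines of Python. This is a toy illustration (the `subsample_chroma` function and the tiny sample plane are invented, and no real codec lays out data this way): the luma plane is kept at full resolution, while the Cb and Cr planes are averaged down by the factors each ratio implies.

```python
# Toy illustration (invented function, tiny sample plane) of what the
# ratios mean for storage: luma stays at full resolution, while the Cb
# and Cr planes are averaged down by per-ratio factors.

def subsample_chroma(plane, horiz, vert):
    """Average a chroma plane down by horizontal/vertical factors."""
    height, width = len(plane), len(plane[0])
    out = []
    for y in range(0, height, vert):
        row = []
        for x in range(0, width, horiz):
            block = [plane[y + dy][x + dx]
                     for dy in range(vert) for dx in range(horiz)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

cb = [[10, 20, 30, 40],
      [50, 60, 70, 80]]                  # a tiny 4x2 chroma plane

cb_444 = subsample_chroma(cb, 1, 1)      # 4:4:4 - every chroma sample kept
cb_422 = subsample_chroma(cb, 2, 1)      # 4:2:2 - half the samples per row
cb_420 = subsample_chroma(cb, 2, 2)      # 4:2:0 - one sample per 2x2 block
```

Counting the surviving samples shows the storage cost directly: eight Cb samples at 4:4:4, four at 4:2:2, and two at 4:2:0 for this small plane.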
It’s important to be aware of the advantages of higher chroma subsampling ratios in the
color correction process. Whenever you’re in a position to specify the transfer format with
which a project will be finished, make sure you ask for the highest-quality format your
system can handle. (For more information about high-quality finishing codecs, see A
Tape-Based Workflow.)
As you can probably guess, more color information is better when doing color correction.
For example, when you make large contrast adjustments to 4:1:1 or 4:2:0 subsampled
video, video noise in the image can become exaggerated; this happens most often with
underexposed footage. You’ll find that you can make the same or greater adjustments
to 4:2:2 subsampled video, and the resulting image will have much less grain and noise.
Greater contrast with less noise provides for a richer image overall. 4:4:4 allows the most
latitude, or flexibility, for making contrast adjustments with a minimum of artifacts and
noise.
Furthermore, it’s common to use chroma keying operations to isolate specific areas of
the picture for correction. This is done using the HSL qualifiers in the Secondaries room.
(For more information, see Choosing a Region to Correct Using the HSL Qualifiers.) These
keying operations will have smoother and less noisy edges when you’re working with
4:2:2 or 4:4:4 subsampled video. The chroma compression used by 4:1:1 and 4:2:0
subsampled video results in macroblocks around the edges of the resulting matte when
you isolate the chroma, which can cause a “choppy” or “blocky” result in the correction
you’re trying to create.
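A rough sketch of what a qualifier does, assuming nothing about Color's actual implementation: each pixel is converted to hue, saturation, and lightness, and only pixels falling inside all three chosen ranges end up in the matte. The `qualify` helper and its ranges are invented for illustration.

```python
# Rough sketch, not Color's implementation: an HSL qualifier keeps only
# pixels whose hue, saturation, and lightness all fall inside chosen
# ranges, producing a matte. qualify and its ranges are invented.
import colorsys

def qualify(pixels, hue_range, sat_range, light_range):
    """Return a 0/1 matte selecting pixels inside all three ranges."""
    matte = []
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        inside = (hue_range[0] <= h <= hue_range[1]
                  and sat_range[0] <= s <= sat_range[1]
                  and light_range[0] <= l <= light_range[1])
        matte.append(1 if inside else 0)
    return matte

pixels = [(0.0, 0.8, 0.2),    # saturated green - inside the key
          (0.9, 0.1, 0.1),    # red - wrong hue
          (0.5, 0.5, 0.5)]    # gray - no saturation to key on
matte = qualify(pixels, hue_range=(0.2, 0.5),
                sat_range=(0.3, 1.0), light_range=(0.1, 0.9))
```

On real footage the matte would then be softened; with 4:1:1 or 4:2:0 sources, the chroma blocks described above appear as jagged edges in exactly this kind of matte.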
Despite these limitations, it is very possible to color correct highly compressed video. By
paying attention to image noise as you stretch the contrast of poorly exposed footage,
you can focus your corrections on the areas of the picture where noise is minimized.
When doing secondary color correction to make targeted corrections to specific parts of
the image, you may find it a bit more time consuming to pull smooth secondary keys.
However, with care and patience, you can still achieve beautiful results.
Film Versus Video and Chroma Subsampling
With a bit of care you can color correct nearly any compressed video or image sequence
format with excellent results, and Color gives you the flexibility to use highly compressed
source formats including DV, HDV, and DVCPRO HD.
Standard and high definition video, by contrast, is usually recorded with lower
chroma subsampling ratios (4:2:2 is typical even with higher-quality video formats, and
4:1:1 and 4:2:0 are common with prosumer formats) and higher compression ratios,
depending entirely upon the recording and video capture formats used. Since the
selected video format determines compression quality at the time of the shoot, there’s
nothing you can do about the lost image data, other than to make the best of what you
have.
In general, film footage is usually transferred with the maximum amount of image data
possible, especially when transferred as a completely uncompressed image sequence
(4:4:4) as part of a carefully managed digital intermediate workflow. This is one reason
for the higher quality of the average film workflow.
Bit Depth Explained
Another factor that affects the quality of video images, and can have an effect on the
quality of your image adjustments, is the bit depth of the source media you’re working
with. With both RGB and Y′CBCR encoded media, the higher the bit depth, the more image
data is available, and the smoother both the image and your corrections will be. The
differences between images at different bit depths is most readily apparent in gradients
such as skies, where lower bit depths show banding, and higher bit depths do not.
The bit depth of your source media depends largely on how that media was originally
acquired. Most of the media you’ll receive falls into one of the following bit depths, all of
which Color supports:
• 8-bit: Most standard and high definition consumer and professional digital video formats
capture 8-bit image data, including DV, DVCPRO 25, DVCPRO 50, HDV, DVCPRO HD,
HDCAM, and so on.
• 10-bit: Many video capture interfaces allow the uncompressed capture of analog and
digital video at 10-bit resolution.
• 10-bit log: By storing data logarithmically, rather than linearly, a wider contrast ratio
(such as that of film) can be represented by a 10-bit data space. 10-bit log files are often
recorded from datacine scans using the Cineon and DPX image sequence formats.
• 12-bit: Some cameras, such as the RED ONE, capture digital images at 12-bit, providing
for even smoother transitions in gradients.
• 16-bit: It has been said that it takes 16 bits of linear data to match the contrast ratio
that can be stored in a 10-bit log file. Since linear data is easier for computers to process,
this is another data space that’s available in some image formats.
• Floating Point: The highest level of image-processing quality available. Refers to the
use of floating-point math to store and calculate fractional data. This means that values
higher than 1 can be used to store data that would otherwise be clipped by the
integer-based 8-bit, 10-bit, 12-bit, and 16-bit depths. Floating Point is a
processor-intensive bit depth to work with.
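The headroom advantage of floating point can be shown with a small, hypothetical quantizer (the `quantize` helper is invented): integer bit depths clip anything above white, while a float value simply keeps it.

```python
# Hypothetical quantizer illustrating the headroom difference: integer
# bit depths clip values above 1.0 (white), floating point keeps them.

def quantize(value, bits):
    """Round-trip a normalized value through an integer bit depth."""
    max_code = (1 << bits) - 1
    code = min(max(round(value * max_code), 0), max_code)  # clip at white
    return code / max_code

highlight = 1.2                       # a super-bright highlight: 120% of white

as_8bit = quantize(highlight, 8)      # clipped to 1.0
as_10bit = quantize(highlight, 10)    # more steps, but the same 1.0 ceiling
as_float = highlight                  # floating point keeps 1.2 intact
```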
Higher bit depths accommodate more image data by using a greater range of numbers
to represent the tonal range that’s available. This is apparent when looking at the numeric
ranges used by the two bit depths most commonly associated with video.
• 8-bit images use a full range of 0–255 to store each color channel. (Y′CBCR video uses
a narrower range of 16–235 to accommodate super-black and super-white.) 256 values
isn’t a lot, and the result can be subtly visible “stairstepping” in areas of the picture
with narrow gradients (such as skies).
• 10-bit images, on the other hand, use a full range of 0 to 1023 to store each color
channel. (Again, Y′CBCR video uses a narrower range of 64–940 to accommodate
super-black and super-white.) The additional numeric range allows for smoother
gradients and virtually eliminates bit depth–related artifacts.
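These ranges can be sketched as a small conversion table (the helper functions are invented for illustration): mapping a code value onto a normalized 0–1 scale and back also shows why 8-bit video black and white (16 and 235) correspond to 64 and 940 at 10 bits.

```python
# Sketch of the code-value ranges quoted above (helper functions invented):
# full range vs. the narrower video-legal range, at 8 and 10 bits.

RANGES = {
    (8, "full"): (0, 255),
    (8, "video"): (16, 235),    # legal Y' range; super-white sits above 235
    (10, "full"): (0, 1023),
    (10, "video"): (64, 940),
}

def to_normalized(code, bits, rng="video"):
    lo, hi = RANGES[(bits, rng)]
    return (code - lo) / (hi - lo)       # 0.0 = black, 1.0 = 100% white

def to_code(norm, bits, rng="video"):
    lo, hi = RANGES[(bits, rng)]
    return round(lo + norm * (hi - lo))

# 8-bit video black (16) and white (235) land on 10-bit 64 and 940:
black_10 = to_code(to_normalized(16, 8), 10)
white_10 = to_code(to_normalized(235, 8), 10)
```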
Fortunately, while you can’t always control the bit depth of your source media, you can
control the bit depth at which you work in Color independently. This means that even if
the source media is at a lower bit depth, you can work at a higher bit depth to make sure
that the quality of your corrections is as high as possible. In particular, many effects and
secondary corrections look significantly better when Color is set to render at higher bit
depths. For more information, see Playback, Processing, and Output Settings.
Basic Color and Imaging Concepts
Color correction involves controlling both an image’s contrast and its color (exercising
separate control over its hue and saturation). This section explains these important imaging
concepts so that you can better understand how the Color tools let you alter the image.
For detailed information, see:
• Contrast Explained
• Luma Explained
• Gamma Explained
• Chroma Explained
• Primary and Secondary Color Relationships Explained
• The HSL Color Space Model Explained
Contrast Explained
Contrast adjustments are among the most fundamental, and generally the first,
adjustments made. Contrast is a way of describing an image’s tonality. If you eliminate
all color from an image, reducing it to a series of grayscale tones, the contrast of the
picture is seen in the distribution of dark, medium, and light tones in the image.
Controlling contrast involves adjustments to three aspects of an image’s tonality:
• The black point is the darkest pixel in the image.
• The white point is the brightest pixel in the image.
• The midtones are the distribution of all tonal values in between the black and white
points.
An image’s contrast ratio is the difference between the darkest and brightest tonal values
within that image. Typically, a higher contrast ratio, where the difference between the
two is greater, is preferable to a lower one. Unless you’re specifically going for a
low-contrast look, higher contrast ratios generally provide a clearer, crisper image. The
following two images, with their accompanying Histograms (graphs of the distribution
of shadows, midtones, and highlights from left to right), illustrate this.
In addition, maximizing the contrast ratio of an image aids further color correction
operations by more evenly distributing that image’s color throughout the three tonal
zones that are adjusted with the three color balance controls in the Primary In, Secondaries,
and Primary Out rooms. This makes it easier to perform individual corrections to the
shadows, midtones, and highlights.
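As a sketch (invented helpers, operating on a normalized 0.0–1.0 grayscale), measuring a contrast ratio and stretching it to the maximum looks like this:

```python
# Illustrative sketch (invented helpers): the black point, white point, and
# the spread between them, for a grayscale image with values from 0.0-1.0.

def contrast_stats(pixels):
    black = min(pixels)              # black point: darkest pixel
    white = max(pixels)              # white point: brightest pixel
    return black, white, white - black

def stretch(pixels):
    """Expand tonality so the darkest pixel hits 0.0 and the brightest 1.0."""
    black, white, spread = contrast_stats(pixels)
    return [(p - black) / spread for p in pixels]

flat = [0.3, 0.4, 0.5, 0.6]      # a low-contrast image
stretched = stretch(flat)        # same image, maximized contrast ratio
```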
For more information about adjusting image contrast, see Contrast Adjustment Explained.
Luma Explained
Luma (which technically speaking is gamma-corrected luminance) describes the exposure
(lightness) of a video shot, from absolute black, through the distribution of gray tones,
all the way up to the brightest white. Luma can be separated from the color of an image.
In fact, if you desaturate an image completely, the grayscale image that remains is the
luma.
Luma is measured by Color as a digital percentage from 0 to 100, where 0 represents
absolute black and 100 represents absolute white. Color also supports super-white levels
(levels from 101 to 109 percent) if they exist in your shot. While super-white video levels
are not considered to be safe for broadcast, many cameras record video at these levels
anyway.
Note: Unadjusted super-white levels will be clamped by the Broadcast Safe settings (if
they’re turned on with their default settings), so that pixels in the image with luma above
100 percent will be set to 100 percent.
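The clamping described in the note amounts to a simple ceiling on luma, sketched here with an invented helper:

```python
# Sketch of that clamp (invented helper): luma above the 100% ceiling is
# simply limited to 100%, as Broadcast Safe does at its default settings.

def clamp_luma(luma_percent, ceiling=100.0):
    """Limit super-white luma to the broadcast-safe ceiling."""
    return min(luma_percent, ceiling)

levels = [0.0, 75.0, 100.0, 106.0, 109.0]   # percentages, as Color measures luma
safe = [clamp_luma(v) for v in levels]      # super-white levels become 100.0
```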
What Is Setup?
People often confuse the black level of digital video with setup. Setup refers to the
minimum black level assigned to specific analog video signals and is only an issue with
analog video output to the Beta SP tape format. If you are outputting to an analog tape
format using a third-party analog video interface, you should check the documentation
that came with that video interface to determine how to configure the video interface
for the North American standard for setup (7.5 IRE) or the Japanese standard (0 IRE).
Most vendors of analog video interfaces include a software control panel that allows
you to select which black level to use. Most vendors label this as “7.5 Setup” versus “0
Setup,” or in some cases “NTSC” versus “NTSC-J.”
Video sent digitally via SDI has no setup. The Y′CBCR minimum black level for all digital
video signals is 0 percent, 0 IRE, or 0 millivolts, depending on how you’re monitoring
the signal.
Gamma Explained
Gamma refers to two different concepts. In a video signal, gamma refers to the nonlinear
representation of luminance in a picture displayed on a broadcast or computer monitor.
Since the eye has a nonlinear response to light (mentioned in The Y′CBCR Color Model
Explained), applying a gamma adjustment while recording an image maximizes the
perceptible recorded detail in video signals with limited bandwidth. Upon playback, a
television or monitor applies an inverted gamma function to return the image to its
“original” state.
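The record/display round trip can be sketched with a plain power function. Real video standards such as BT.709 use a more complicated curve with a linear segment near black, so treat this as the principle only; the value of `GAMMA` and the helper names are illustrative.

```python
# The principle only: real standards such as BT.709 use a more complex
# curve with a linear segment near black. GAMMA and the helpers are
# illustrative, not any particular standard.

GAMMA = 2.2   # a typical display gamma

def encode(linear):
    """Camera side: compress linear light nonlinearly before storage."""
    return linear ** (1.0 / GAMMA)

def decode(signal):
    """Display side: apply the inverse function to restore the image."""
    return signal ** GAMMA

mid_gray = 0.18               # 18% scene reflectance
stored = encode(mid_gray)     # recorded as a much higher signal value (~0.46)
restored = decode(stored)     # the round trip returns the original value
```

Note how much of the signal range the encode step devotes to the darker tones the eye is most sensitive to: 18% gray is stored at roughly 46% of full scale.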
You want to avoid unplanned gamma adjustments when sending media from Final Cut Pro
to Color. It’s important to keep track of any possible gamma adjustments that occur when
exporting or importing clips in Final Cut Pro during the editing process, so that these
adjustments are accounted for and avoided during the Final Cut Pro–to–Color roundtrip.
For more information on gamma handling in Final Cut Pro, see the Final Cut Pro 7
User Manual.
Gamma is also used to describe a nonlinear adjustment made to the distribution of
midtones in an image. For example, a gamma adjustment leaves the black point and the
white point of an image alone, but either brightens or darkens the midtones according
to the type of adjustment being made. For more information on gamma and midtones
adjustments, see The Primary In Room.
Chroma Explained
Chroma (also referred to as chrominance) describes the color channels in your shots,
ranging from the absence of color to the maximum levels of color that can be represented.
Specific chroma values can be described using two properties, hue and saturation.
Hue
Hue describes the actual color itself, whether it’s red or green or yellow. Hue is measured
as an angle on a color wheel.
Saturation
Saturation describes the intensity of that color, whether it’s a bright red or a pale red. An
image that is completely desaturated has no color at all and is a grayscale image. Saturation
is also measured on a color wheel, but as the distance from the center of the wheel to
the edge.
As you look at the color wheel, notice that it is a mix of the red, green, and blue primary
colors that make up video. In between these are the yellow, cyan, and magenta secondary
colors, which are equal mixes of the primary colors.
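Python's standard `colorsys` module can illustrate hue as an angle and saturation as a distance from the wheel's center (the `wheel_position` helper is invented for this sketch; note that `colorsys` returns values in H, L, S order):

```python
# Illustrative sketch: reading a color's hue (angle on the wheel) and
# saturation (distance from the wheel's center) with Python's standard
# colorsys module. wheel_position is an invented helper.
import colorsys

def wheel_position(r, g, b):
    h, l, s = colorsys.rgb_to_hls(r, g, b)   # note the H, L, S ordering
    return h * 360.0, s                      # hue in degrees, saturation 0-1

red_hue, red_sat = wheel_position(1.0, 0.0, 0.0)       # 0 degrees, full saturation
yellow_hue, _ = wheel_position(1.0, 1.0, 0.0)          # 60 degrees: red + green
pale_hue, pale_sat = wheel_position(0.75, 0.25, 0.25)  # same hue as red, half saturation
```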
Primary and Secondary Color Relationships Explained
Understanding color wheel interactions will help you to see how the Color controls actually
affect colors in an image.
Primary Colors
In any additive color model, the primary colors are red, green, and blue. These are the
three purest colors that can be represented; each is produced by setting a single color
channel to 100 percent and the other two color channels to 0 percent.
Secondary Colors
Adding any two primary colors produces a secondary color. In other words, you create a
secondary color by setting any two color channels to 100 percent while setting the third
to 0 percent.
• Red + green = yellow
• Green + blue = cyan
• Blue + red = magenta
One other aspect of the additive color model:
• Red + green + blue = white
All these combinations can be seen in the illustration of three colored circles below. Where
any two primaries overlap, the secondary appears, and where all three overlap, white
appears.
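These additive relationships are easy to verify with channel arithmetic (an illustrative sketch; colors are treated as (R, G, B) channel percentages):

```python
# Illustrative sketch: colors as (R, G, B) channel percentages, mixed
# additively by summing channels (clipped at 100 percent).

def add_colors(a, b):
    return tuple(min(x + y, 100) for x, y in zip(a, b))

RED, GREEN, BLUE = (100, 0, 0), (0, 100, 0), (0, 0, 100)

yellow = add_colors(RED, GREEN)                     # (100, 100, 0)
cyan = add_colors(GREEN, BLUE)                      # (0, 100, 100)
magenta = add_colors(BLUE, RED)                     # (100, 0, 100)
white = add_colors(add_colors(RED, GREEN), BLUE)    # (100, 100, 100)
```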
Complementary Colors
Two colors that appear 180 degrees opposite each other on the wheel are referred to as
complementary colors.
Adding two complementary colors of equal saturation to each other neutralizes the
saturation, resulting in a grayscale tone. This can be seen in the two overlapping color
wheels in the illustration below. Where red and cyan precisely overlap, both colors become
neutralized.
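This neutralization can be checked numerically with Python's `colorsys` module: rotate a color's hue 180 degrees to get its complement, then average the two in RGB (the helper names are invented for this sketch):

```python
# Illustrative sketch: a complement sits 180 degrees away on the color
# wheel, and averaging two complements of equal saturation and lightness
# lands on a neutral gray. Helper names are invented.
import colorsys

def complement(h, l, s):
    """Rotate hue halfway around the wheel; keep lightness and saturation."""
    return ((h + 0.5) % 1.0, l, s)

def average_rgb(c1, c2):
    rgb1 = colorsys.hls_to_rgb(*c1)
    rgb2 = colorsys.hls_to_rgb(*c2)
    return tuple((a + b) / 2 for a, b in zip(rgb1, rgb2))

red = (0.0, 0.5, 1.0)             # hue 0 (red), mid lightness, full saturation
cyan = complement(*red)           # hue 0.5 (cyan)
neutral = average_rgb(red, cyan)  # R, G, and B come out (essentially) equal
```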
Understanding the relationship of colors to their complementaries is essential in learning
how to eliminate or introduce color casts in an image using the Color Primary or Secondary
color correction controls. For example, to eliminate a bluish cast in the highlights of
unbalanced daylight, you add a bit of orange to bring all the colors to a more neutral
state. This is covered in more detail in The Primary In Room.
The HSL Color Space Model Explained
The HSL color space model is another method for representing color and is typically used
for user interface controls that let you choose or adjust colors. HSL stands for hue,
saturation, and lightness (roughly equivalent to luminance) and provides a way of
visualizing the relationships among luminance, hue, and saturation.
The HSL color space model can be graphically illustrated as a three-dimensional cone.
Hue is represented by an angle around the base of the cone, as seen below, while
saturation is represented by a color’s distance from the center of the cone to the edge,
with the center being completely desaturated and the edge being saturated to maximum
intensity. A color’s brightness, then, can be represented by its distance from the base to
the peak of the cone.
Color actually provides a three-dimensional video scope that’s capable of displaying the
colors of an image within an extruded HSL space, for purposes of image analysis. For
more information, see The 3D Scope.
Color Correction Workflows

Taking maximum advantage of Color requires careful workflow management. This chapter
outlines where Color fits into your post-production workflow.
Color has been designed to work hand in hand with editing applications like Final Cut Pro
via XML and QuickTime media support, or with other editorial environments via EDL and
image sequence support. While video and film input and editing are taken care of
elsewhere, Color gives you a dedicated environment in which to focus on color correction
and related effects.
This chapter gives you a quick overview of how to guide your project through a workflow
that includes using Color for color correction. Information is provided about both standard
and high definition broadcast video workflows, as well as 2K digital intermediate workflows.
This chapter covers the following:
• An Overview of the Color Workflow (p. 35)
• Limitations in Color (p. 37)
• Video Finishing Workflows Using Final Cut Pro (p. 39)
• Importing Projects from Other Video Editing Applications (p. 47)
• Digital Cinema Workflows Using Apple ProRes 4444 (p. 50)
• Finishing Projects Using RED Media (p. 56)
• Digital Intermediate Workflows Using DPX/Cineon Media (p. 65)
• Using EDLs, Timecode, and Frame Numbers to Conform Projects (p. 73)
An Overview of the Color Workflow
All controls in Color are divided into eight tabbed rooms, each of which corresponds to
a different stage in a typical color correction workflow. When you move from room to
room, the buttons, dials, and trackballs of your control surface (if you have one) remap
to correspond to the controls in that room.
Each room gathers all the controls pertaining to that particular step of the color correction
process onto a single screen. These rooms are organized from left to right in the order
colorists will typically use them, so that after adjusting your project’s preferences in the
Setup room, you can work your way across from the Primary controls, to the Secondary
controls, Color FX, Primary Out, and finally Geometry as you adjust each shot in your
project.
• Setup: All projects begin in the Setup room. This is where you import and manage the
shots in your program. The grade bin, project settings, and application preferences are
also found within the Setup room. For video colorists, the project settings area of the
Setup room is where you find the Broadcast Safe controls, which allow you to apply
gamut restrictions to the entire program.
• Primary In: Primary color corrections affect the entire image, so this room is where you
make overall adjustments to the color and contrast of each shot. Color balance and
curve controls let you adjust colors in the shadows, midtones, and highlights of the
image. The lift, gamma, and gain controls let you make detailed contrast adjustments,
which affect the brightness of different areas of the picture. There are also controls for
overall, highlight, and shadow saturation, and printer point (or printer light) controls
for colorists used to color timing for film.
• Secondaries: Secondary color corrections are targeted adjustments made to specific
areas of the image. This room provides numerous methods for isolating, or qualifying,
the parts of the image you want to correct. Controls are provided with which to isolate
a region using shape masks. Additional controls let you isolate areas of the picture
using a chroma-keyed matte with individual qualifications for hue, saturation, and
luminance. Each shot can have up to eight secondary operations. Furthermore,
special-purpose secondary curves let you make adjustments to hue, saturation, and
luma within specific portions of the spectrum.
• Color FX: The Color FX room lets you create your own custom effects via a node-based
interface more commonly found in high-end compositing applications, similar to Shake.
These individual effects nodes can be linked together in thousands of combinations,
providing a fast way to create many different types of color effects. Your custom effects
can be saved in the Color FX bin for future use, letting you apply your look to future
projects.
• Primary Out: The Primary Out room is identical to the Primary In room except that its
color corrections are applied to shots after they have been processed by all the other
color grading rooms. This provides a way to post-process your images after all other
operations have been performed.
• Geometry: The Geometry room lets you pan and scan, rotate, flip, and flop shots as
necessary. The Geometry room also provides tools for creating custom masks and for
applying and managing motion-tracking analyses. How Geometry room transformations
are handled depends on your workflow:
• For projects being roundtripped from Final Cut Pro, Geometry room transformations
are not rendered by Color when outputting the corrected project media. Instead, all
the geometric transformations you create in Color are translated into Final Cut Pro
Motion tab settings when the project is sent back to Final Cut Pro. You then have
the option to further customize those effects in Final Cut Pro prior to rendering and
output.
• For 2K and 4K digital intermediates, as well as projects using 4K native RED QuickTime
media, Geometry room transformations are processed by Color when rendering the
output media.
Note: When you send a project from Final Cut Pro to Color, compatible Motion tab
settings are translated into Geometry room settings. You can preview and adjust these
transformations as you color correct. For more information, see The Geometry Room.
• Still Store: You can save frames from anywhere in the Timeline using the Still Store,
creating a reference library of stills from your program from which you can recall images
to compare to other shots you're trying to match. You can load one image from the
Still Store at a time into memory, switching between it and the current frame at the
position of the playhead using the controls in the Still Store menu. The Still Store also
provides controls for creating and customizing split screens you can use to balance
one shot to another. All Still Store comparisons are sent to the preview and broadcast
monitor outputs.
• Render Queue: When you finish grading your program in Color, you use the Render
Queue to manage the rendering of the shots in your project.
Limitations in Color
Color has been designed to work hand in hand with Final Cut Pro; Final Cut Pro lets you
take care of input, editing, and output, while Color allows you to focus on color correction
and related effects. Given this relationship, there are specific things Color does not do:
• Recording: Color is incapable of either scanning or capturing film or video footage. This
means that you need to import projects and media into Color from another application.
• Editing: Color is not intended to be an editing application. The editing tools that are
provided are primarily for colorists working in 2K workflows where the Color project is
the final version that will become the digital master. By default, the tracks of imported
XML project files are locked to prevent new edits from introducing errors when the
project moves back to Final Cut Pro.
To accommodate editorial changes, reconforming tools are provided to synchronize
an EDL or Final Cut Pro sequence with the version of that project being graded in Color.
For more information, see Reconforming Projects.
• Filters: Final Cut Pro FXScript or FxPlug filters are neither previewed nor rendered by
Color. However, their presence in your project is maintained, and they show up again
once the project is sent back to Final Cut Pro.
Note: It's not generally a good idea to allow various filters that perform color correction
to remain in your Final Cut Pro project when you send it to Color. Even though they
have no effect as you work in Color, their sudden reappearance when the project is
sent back to Final Cut Pro may produce unexpected results.
• Final Cut Pro Color Corrector 3-way filters: Color Corrector 3-way filters applied to clips
in your sequence are automatically converted into adjustments to the color balance
controls, primary contrast controls, and saturation controls in the Primary In room of
each shot to which they’re applied. Once converted, these filters are removed from the
XML data for that sequence, so that they do not appear in the sequence when it’s sent
back to Final Cut Pro.
If more than one filter has been applied to a clip, then only the last Color Corrector
3-way filter appearing in the Filters tab is converted; all others are ignored. Furthermore,
any Color Corrector 3-way filter with limit effects turned on is also ignored.
• Transitions: Color preserves transition data that might be present in an imported EDL
or XML file, but does not play the transitions during previews. How they're rendered
depends on how the project is being handled:
• For projects being roundtripped from Final Cut Pro, transitions are not rendered in
Color. Instead, Color renders handles for the outgoing and incoming clips, and
Final Cut Pro is relied upon to render each transition after the project's return.
• When rendering 2K or 4K DPX or Cineon image sequences, all video transitions are
rendered as linear dissolves when you use the Gather Rendered Media command to
consolidate the final rendered frames of your project in preparation for film output.
This feature is only available for projects that use DPX and Cineon image sequence
media or RED QuickTime media, and is intended only to support film out workflows.
Only dissolves are rendered; any other type of transition (such as a wipe or iris) will be
rendered as a dissolve instead.
• Superimpositions: Superimposed shots are displayed in the Timeline, but compositing
operations involving opacity and composite modes are neither displayed nor rendered.
• Speed effects: Color doesn't provide an interface for adding speed effects, relying instead
upon the editing application that originated the project to do so. Linear and variable
speed effects that are already present in your project, such as those added in
Final Cut Pro, are previewed during playback, but they are not rendered in Color during
output. Instead, Final Cut Pro is relied upon to render those effects in roundtrip
workflows.
• Final Cut Pro generators and Motion projects: Final Cut Pro generators and Motion projects
are completely ignored by Color. How you handle these types of effects also depends
on your workflow:
• If you're roundtripping a project between Final Cut Pro and Color, and you want to
grade these effects in Color, you should render these effects as self-contained
QuickTime .mov files. Then, edit the new .mov files into your sequence to replace
the original effects shots prior to sending your project to Color.
• If you're roundtripping a project between Final Cut Pro and Color, and there's no
need to grade these effects, you don't need to do anything. Even though these effects
aren't displayed in Color, their position in the Timeline is preserved, and these effects
will reappear in Final Cut Pro when you send the project back. Titles are a good
example of effects that don't usually need to be graded.
• If you're working on a 2K or 4K digital intermediate or RED QuickTime project, you
need to use a compositing application like Shake or Motion to composite any effects
using the image sequence data.
Important: When you send frames of media to a compositing application, it's vital that
you maintain the frame number in the filenames of new image sequence media that
you generate. Each image file's frame number identifies its position in that program's
Timeline, so any effects being created as part of a 2K digital intermediate workflow
require careful file management.
• Video or film output: While Color provides broadcast output of your project's playback
for preview purposes, this is not intended to be used to output your program to tape.
This means that when you finish color correcting your project in Color, the rendered
output needs to be moved to Final Cut Pro for output to tape or to another environment
for film output.
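The frame-numbering discipline from the Important note above can be sketched as follows. The `shot_0086400.dpx` naming pattern and both helpers are hypothetical; the point is only that the frame number embedded in each filename must survive any round trip to a compositing application.

```python
# Sketch of the frame-number discipline (the "shot_0086400.dpx" pattern is
# hypothetical): the frame number embedded in each filename identifies the
# file's Timeline position and must survive any compositing round trip.
import re

def frame_filename(prefix, frame, padding=7, ext="dpx"):
    """Build a zero-padded, frame-numbered image sequence filename."""
    return f"{prefix}_{frame:0{padding}d}.{ext}"

def frame_number(filename):
    """Recover the frame number that fixes the file's Timeline position."""
    match = re.search(r"_(\d+)\.\w+$", filename)
    return int(match.group(1)) if match else None

name = frame_filename("shot", 86400)    # "shot_0086400.dpx"
```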
Video Finishing Workflows Using Final Cut Pro
If a program has been edited using Final Cut Pro, the process of moving it into Color is
fairly straightforward. After editing the program in Final Cut Pro, you must reconform the
program, if necessary, to use the original source media at its highest available quality.
Once that task has been accomplished, you can send the project data and files into Color
for color correction. Upon completion of the color correction pass, you need to render
the result and send the project back to Final Cut Pro for final output, either to tape or as
a QuickTime file.
(Workflow diagram: source media is edited in Final Cut Pro, sent to Color via XML for
color correction, rendered as new color-corrected media, then sent back to Final Cut Pro
for final effects and output of the final master.)
Exactly how you conform your source media in Final Cut Pro depends on the type of
media that's used. For more information, see:
• A Tape-Based Workflow.
• Reconforming Online Media in a Tapeless Digital Video Workflow.
• Reconforming Online Media in a Film-to-Tape Workflow.
A Tape-Based Workflow
For a traditional offline/online tape-based workflow, the video finishing process is simple.
The tapes are captured into Final Cut Pro, possibly at a lower-quality offline resolution;
offline media takes up less hard disk space and is easier to work with on a wider range
of computers, which eases the initial editing process.
After the offline edit is complete, the media used by the edited program must be
recaptured from the source tapes at maximum quality. The resulting online media is what
will be used for the Final Cut Pro–to–Color roundtrip.
(Workflow diagram: source media is captured at offline quality for the offline edit in
Final Cut Pro; the edit is then reconformed to online media, sent to Color via XML for
color correction, rendered as new color-corrected media, and sent back to Final Cut Pro
for final effects and output of the final master.)
The following steps break this process down more explicitly.
Stage 1: Capturing the Source Media at Offline or Online Resolution
How you decide to capture your media prior to editing depends on its format. Compressed
formats, including DV, DVCPRO-50, DVCPRO HD, and HDV, can be captured at their highest
quality without requiring enormous storage resources. If this is the case, then capturing
and editing your media using its native resolution and codec lets you eliminate the
time-consuming step of recapturing (sometimes called conforming or reconforming) your
media later.
Uncompressed video formats, or projects where there are many, many reels of source
media, may benefit from being captured at a lower resolution or with a more highly
compressed codec. This will save disk space and also enable you to edit using less
expensive equipment. Later, you'll have to recapture the media prior to color correction.
Stage 2: Editing the Program in Final Cut Pro
Edit your program in Final Cut Pro, as you would any other project. If you're planning on
an extensive use of effects in your program during editorial, familiarize yourself with the
topics covered in Limitations in Color.
Stage 3: Recapturing the Source Media at Online Resolution
If you originally captured your source media using an offline format, you need to recapture
the media used in your project at the highest available quality prior to sending it to Color.
• If your media was originally recorded using a compressed format (such as DV,
DVCPRO-50, DVCPRO HD, or HDV), then recapturing it using the original source codec
and resolution is fine; Color can work with compressed media and automatically
promotes the image data to higher uncompressed bit depths for higher quality imaging
when monitoring and rendering.
• If you're capturing a higher-bandwidth video format (such as Betacam SP, Digital
Betacam, HDCAM, and HDCAM SR) and require high quality but need to use a
compressed format to save hard disk space and increase performance on your particular
computer, then you can recapture using the Apple ProRes 422 codec, or the higher
quality Apple ProRes 422 (HQ) codec.
• If you're capturing high-bandwidth video and require the highest-quality uncompressed
video data available, regardless of the storage requirements, you should recapture your
media using Apple Uncompressed 8-bit 4:2:2 or Apple Uncompressed 10-bit 4:2:2.
You may also want to take the opportunity to use the Final Cut Pro Media Manager to
delete unused media prior to recapturing in order to save valuable disk space, especially
when recapturing uncompressed media. For more information, see the Final Cut Pro 7
User Manual.
Note: Some codecs, such as HDV, can be more processor-intensive to work with than
others. In this case, capturing or recompressing the media with a less processor-intensive
codec, such as Apple ProRes 422 or Apple ProRes 422 (HQ), will improve your performance
while you work in Color, while maintaining high quality and low storage requirements.
Stage 4: Preparing Your Final Cut Pro Sequence
To prepare your edited sequence for an efficient workflow in Color, follow the steps
outlined in Before You Export Your Final Cut Pro Project.
Stage 5: Sending the Sequence to Color or Exporting an XML File
When you finish prepping your edited sequence, there are two ways you can send it to
Color.
• If Color is installed on the same computer as Final Cut Pro, you can use the Send To
Color command to move an entire edited sequence to Color, automatically creating a
new project file.
• If you're handing the project off to another facility, you may want to export the edited
sequence as an XML file for eventual import into Color. In this case, you'll also want to
use the Final Cut Pro Media Manager to copy the project's media to a single,
transportable hard drive volume for easy handoff.
Stage 6: Grading Your Program in Color
Use Color to grade your program. When working on a roundtrip from Final Cut Pro, it's
crucial to avoid unlocking tracks or reediting shots in the Timeline. Doing so can
compromise your ability to send the project back to Final Cut Pro.
If the client needs a reedit after you've started grading, you should instead perform the
edit back in Final Cut Pro, and export an XML version of the updated sequence which
you can use to quickly update the Color project in progress using the Reconform
command. For more information, see Reconforming Projects.
Stage 7: Rendering New Source Media and Sending the Updated Project to
Final Cut Pro
When you finish grading, you use the Color Render Queue to render all the shots in the
project as a new, separate set of graded media files.
Afterward, you need to send the updated project to Final Cut Pro using one of the two
following methods:
• If Color is installed on the same computer as Final Cut Pro, you can use the Send To
Final Cut Pro command.
• If you're handing the color-corrected project back to the originating facility, you need
to export the Color project as an XML file for later import into Final Cut Pro.
Important: Some parameters in the Project Settings tab of the Setup room affect how
the media is rendered by Color. These settings include the Deinterlace Renders, QuickTime
Export Codec, Broadcast Safe, and Handles settings. Be sure to verify these and other
settings prior to rendering your final output.
Stage 8: Adjusting Transitions, Superimpositions, and Titles in Final Cut Pro
To output your project, you need to import the XML project data back into Final Cut Pro.
This happens automatically if you use the Send To Final Cut Pro command. At this point,
you can add or adjust other effects that you had applied previously in Final Cut Pro, before
creating the program's final master. Things you may want to consider while prepping
the program at this stage include:
• Do you need to produce a "textless" master of the program, or one with the titles
rendered along with the image?
• Are there any remaining effects clips that you need to import and color correct within
Final Cut Pro?
Stage 9: Outputting the Final Video Master to Tape or Rendering a Master QuickTime File
Once you complete any last adjustments in Final Cut Pro, you can use the Print to Video,
Edit to Tape, or Export QuickTime Movie command to create the final version of your
program.
Reconforming Online Media in a Tapeless Digital Video Workflow
If a program uses a tapeless video format, the steps are similar to those described in A
Tape-Based Workflow; however, they likely involve multiple sets of QuickTime files: the
original media at online resolution and perhaps a second set of media files that have
been downconverted to an offline resolution for ease of editing. After the offline edit,
the online conform involves relinking to the original source media, prior to going through
the Final Cut Pro–to–Color roundtrip.
[Workflow diagram: offline duplicates of the tapeless source media are used for the offline edit in Final Cut Pro; the online reconform relinks to the original online media; the sequence is sent to Color via XML for color correction; Color renders new color-corrected media, and the project is sent back to Final Cut Pro via XML for final effects and output of the final master.]
Here's a more detailed explanation of the offline-to-online portion of this workflow.
Stage 1: Shooting and Backing Up All Source Media
Shoot the project using whichever tapeless format you've chosen. As you shoot, make
sure that you're keeping backups of all your media, in case anything happens to your
primary media storage device.
Stage 2: Creating Offline Resolution Duplicates and Archiving Original-Resolution
Media
If necessary, create offline resolution duplicates of the source media in whatever format
is most suitable for your system. Then, archive the original source media as safely as
possible.
Important: When you create offline duplicates of tapeless media, it's vital that you
duplicate and maintain the original filenames and timecode with which the source files
were created. This is critical to guaranteeing that you'll be able to easily relink to the
original high-resolution source files once the offline edit is complete.
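Because relinking matches clips by name, a quick sanity check before the conform can catch renamed duplicates. The sketch below is a hypothetical helper (the directory layout and function name are assumptions, not part of any Apple tool) that compares clip names between the online originals and the offline duplicates:

```python
from pathlib import Path

def unmatched_offline_clips(online_dir: str, offline_dir: str) -> set:
    """Return offline clip names (file stems) that have no same-named
    clip in the online directory -- these would fail to relink later."""
    online = {p.stem for p in Path(online_dir).iterdir() if p.is_file()}
    offline = {p.stem for p in Path(offline_dir).iterdir() if p.is_file()}
    return offline - online
```

Any names this reports were altered during the downconvert and should be corrected before attempting the online reconform.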
Stage 3: Editing the Program in Final Cut Pro
Edit your program in Final Cut Pro, as you would any other project. If you're planning on
an extensive use of effects in your program during editorial, familiarize yourself with the
topics covered in Limitations in Color.
Stage 4: Relinking Your Edited Sequence to the Original Source Media
Once your offline edit is complete, you need to restore the original online-quality source
media and relink to or retransfer the high-resolution files.
Stage 5: Prerendering Effects, Sending the Sequence to Color, and Grading
At this point, the workflow is identical to Stage 6: Grading Your Program in Color in A
Tape-Based Workflow.
Reconforming Online Media in a Film-to-Tape Workflow
If you're working on a project that was shot on film but will be mastered on video, it must
be transferred from film to tape using a telecine prior to being captured and edited in
Final Cut Pro. At that point, the rest of the offline and online edit is identical to any other
tape-based format.
[Workflow diagram: the camera negative is transferred to video via telecine; the transferred video media goes through the offline and online edits in Final Cut Pro; the sequence is sent to Color via XML for color correction; Color renders new color-corrected media, and the project is sent back to Final Cut Pro via XML for final effects and output of the final master.]
Here's a more detailed explanation of the offline-to-online portion of this workflow.
Stage 1: Shooting Your Film
Shoot the project as you would any other film project.
Stage 2: Telecining the Dailies
After the film has been shot, process and telecine the dailies to a video format appropriate
for your workflow.
• Some productions prefer to save money up front by doing an inexpensive "one-light"
transfer of all the footage to an inexpensive offline video format for the initial offline
edit. (A one-light transfer refers to the process of using a single color correction setting
to transfer whole scenes of footage.) This can save time and money up front, but may
necessitate a second telecine session to retransfer only the footage used in the edit at
a higher level of visual quality.
• Other productions choose to transfer all the dailies (or at least the director's selected
takes) via a "best-light" transfer, where the color correction settings are individually
adjusted for every shot that's telecined, optimizing the color and exposure for each
clip. The footage is transferred to a high-quality video format capable of preserving as
much image data as possible. This can be significantly more expensive up front, but
saves money later since a second telecine session is not necessary.
Stage 3: Capturing the Source Media at Offline or Online Resolution
How you capture your media prior to editing depends on your workflow. If you telecined
to an offline-quality video format, capture using an offline-quality codec.
If you instead telecined online-quality media, then you have the choice of either pursuing
an "offline/online" workflow or capturing via an online codec and working at online quality
throughout the entire program.
Stage 4: Editing the Program in Final Cut Pro
Edit your program in Final Cut Pro, as you would any other project. If you're planning on
the extensive use of effects in your program during editorial, familiarize yourself with the
topics covered in Limitations in Color.
Stage 5: Recapturing or Retransferring the Media at Online Resolution
The way you conform your offline project to online-quality media depends on how you
handled the initial video transfer.
• If you originally did a high-quality telecine pass to an online video format, but you
captured your source media using an offline format for editing, you need to recapture
the media from the original telecine source tapes using the highest-quality
uncompressed QuickTime format that you can accommodate on your computer (such
as Apple ProRes 4444, Apple ProRes 422 (HQ), or Apple Uncompressed) and relink the
new media to your project.
• If you did an inexpensive one-light telecine pass to an offline video format, you'll want
to do another telecine pass where you transfer only the media you used in the program
at high quality. Using Cinema Tools, you can generate a pull list, which you then use
to carefully retransfer the necessary footage to an online-quality video format. Then,
you need to recapture the new online transfer of this media using the highest-quality
uncompressed QuickTime format that you can accommodate on your computer.
Important: Do not use the Media Manager to either rename or delete unused media in
your project when working with offline media that refers to the camera negative. If you
do, you'll lose the ability to create accurate pull lists in Cinema Tools.
Stage 6: Prerendering Effects, Sending the Sequence to Color, and Grading
At this point, the workflow is identical to Stage 6: Grading Your Program in Color in A
Tape-Based Workflow.
Importing Projects from Other Video Editing Applications
Color is also capable of importing projects from other editing environments, by importing
edit decision lists (EDLs). An EDL is an event-based list of all the edits and transitions that
make up a program.
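To make the event structure concrete, here is a minimal sketch of pulling simple video cuts out of a CMX 3600-style EDL. The regular expression and sample line are illustrative assumptions; real EDLs also carry comment lines, transitions, and audio events:

```python
import re

# A minimal sketch of reading cut events from a CMX 3600-style EDL.
# Each event line carries: event number, source reel, track, edit type,
# then source in/out and record in/out timecodes. Only simple video
# cuts ("V ... C") are matched here.
EVENT = re.compile(
    r"^(\d+)\s+(\S+)\s+V\s+C\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"  # source in/out
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})"     # record in/out
)

def parse_cuts(edl_text: str):
    """Yield (event, reel, src_in, src_out, rec_in, rec_out) tuples."""
    for line in edl_text.splitlines():
        m = EVENT.match(line.strip())
        if m:
            yield m.groups()

sample = "001  TAPE01  V  C  01:00:10:00 01:00:14:00 00:59:58:00 01:00:02:00"
print(list(parse_cuts(sample)))
```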
Once you've imported your project file into Color and copied the program media onto a
storage device with the appropriate performance, you can then link the shots on the
Color Timeline with their corresponding media.
• For more information about importing EDLs into Final Cut Pro before sending to Color,
see Importing EDLs in a Final Cut Pro–to–Color Roundtrip.
• For more information about importing EDLs directly into Color, see Importing and
Notching Preedited Program Masters.
Importing EDLs in a Final Cut Pro–to–Color Roundtrip
If you've been provided with an edit decision list of the edited program and a box of
source media, you can import the EDL into Final Cut Pro to capture the project's media
and prepare the project for sending to Color. In addition to being able to recapture the
footage, Final Cut Pro is compatible with more EDL formats than is Color. Also, Final Cut Pro
is capable of reading superimpositions, all SMPTE standard transitions, and audio edits,
in addition to the video edits.
Note: Although capable of importing EDLs directly, Color reads only the video portion
of edits in track V1. Video transitions, audio, and superimpositions are ignored.
[Workflow diagram: the EDL file is imported into Final Cut Pro to create the project, and the source media is recaptured as online media; the sequence is sent to Color via XML for color correction; Color renders new color-corrected media, and the project is sent back to Final Cut Pro via XML for final effects and output of the final master.]
Here's a more detailed explanation of this workflow.
Stage 1: Importing the Project into Final Cut Pro
Import the EDL of the edited project into Final Cut Pro.
Stage 2: Capturing Media at Online Resolution
You need to recapture the sequence created when importing the EDL using the
highest-quality QuickTime format that you can accommodate on your computer (such
as Apple ProRes 422 or Apple Uncompressed).
Stage 3: Prerendering Effects, Sending the Sequence to Color, and Grading
At this point, the workflow is identical to that in Stage 6: Grading Your Program in Color
in A Tape-Based Workflow.
Importing and Notching Preedited Program Masters
Another common way of obtaining a program for color correction is to be provided with
an edited master, either on tape or as a QuickTime movie or image sequence, and an
accompanying EDL. You can use the EDL to automatically add edits to the master media
file in Color (called "notching" the media), to make it easier to grade each shot in the
program individually.
Important: The EDL import capabilities of Color are not as thorough as those in
Final Cut Pro, and are limited to shots on track V1. All transitions in EDLs are imported
as dissolves. Superimpositions and audio are not supported, and will be ignored.
[Workflow diagram: the entire program is captured from the tape master as online media; a Color project is created from the EDL to "notch" the online media; after color correction, Color renders new color-corrected media, and the project is sent to Final Cut Pro via XML for final effects and output of the final master.]
Here's a more detailed explanation of this workflow.
Stage 1: Capturing the Program Master
If you were given the program master on tape, you need to capture the entire program
using the highest-quality QuickTime format that you can accommodate on your computer
(such as Apple ProRes 4444, Apple ProRes 422 (HQ), or Apple Uncompressed). If you're
being given the program master as a QuickTime file, you should request the same from
whoever is providing you with the media.
For this process to work correctly, it's ideal if the timecode of the first frame of media
matches the first frame of timecode in the EDL.
Stage 2: Importing the EDL into Color and Relinking to the Master Media File
Either select the EDL from the Projects dialog that appears when you first open Color, or
use the File > Import > EDL command. When the EDL Import Settings dialog appears,
choose the EDL format, project, EDL, and source media frame rates.
To properly "notch" the master media file, you need to turn on "Use as Cut List," and then
choose the master media file that you captured or were given. For more information, see
Importing EDLs.
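The arithmetic behind notching is straightforward: each record-in timecode in the EDL becomes a frame offset into the master media file, and a cut is added at each offset. The sketch below assumes non-drop frame counting and that the master's first frame corresponds to the timecode given as master_start; the function names are illustrative, not part of Color:

```python
FPS = 25  # assumed frame rate; substitute your project's actual rate

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert a non-drop HH:MM:SS:FF timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def notch_points(record_ins: list, master_start: str) -> list:
    """Frame offsets into the master media file where cuts land."""
    start = tc_to_frames(master_start)
    return [tc_to_frames(tc) - start for tc in record_ins]

# Example: a master starting at 01:00:00:00 with three EDL record-in points.
print(notch_points(["01:00:00:00", "01:00:04:00", "01:00:10:12"],
                   master_start="01:00:00:00"))
# -> [0, 100, 262]
```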
Stage 3: Grading Your Program in Color
Use Color to grade your program, as you would any other.
Stage 4: Rendering New Source Media and Sending the Updated Project to
Final Cut Pro
When you finish grading, you use the Color Render Queue to render all the shots in the
project as a new, separate set of graded media files.
Afterward, you need to send the updated project to Final Cut Pro using one of the two
following methods:
• If Color is installed on the same computer as Final Cut Pro, use the Send To Final Cut Pro
command.
• If you're handing the color-corrected project back to the originating facility, you need
to export the Color project as an XML file for later import into Final Cut Pro.
Stage 5: Adjusting Transitions, Superimpositions, and Titles in Final Cut Pro
To output your project, you can use the Send To Final Cut Pro command, or you can
export an XML project file that can be manually imported into Final Cut Pro. At this point,
you can add other effects in Final Cut Pro, before creating the program's final master.
Stage 6: Outputting the Final Video Master to Tape or Rendering a Master QuickTime File
Once you complete any last adjustments in Final Cut Pro, you can use the Print to Video,
Edit to Tape, or Export QuickTime Movie commands to create the final version of your
program.
Digital Cinema Workflows Using Apple ProRes 4444
If you’re working with images that were originated on film, HDCAM SR, or some other
high-resolution, RGB-based media, and your intention is to finish and output a project
to film, the Apple ProRes 4444 codec enables you to follow a simple, consolidated
workflow. Consider the following:
• If you’re working with film, you can scan all footage necessary for the project, and then
convert the DPX or Cineon files to Apple ProRes 4444 media in Color.
• If you’re working with DPX or Cineon image sequences from other sources, these can
be converted into Apple ProRes 4444 media using Color, as well.
• If you’re working with HDCAM SR media, you can ingest it directly as Apple ProRes
4444 clips using Final Cut Pro with a capture device that supports this. Both HDCAM
SR and Apple ProRes 4444 are RGB-based 4:4:4 formats, so one is a natural container
for the other.
Once all your source media has been transcoded or captured as Apple ProRes 4444, it
can be imported into your Final Cut Pro project. If necessary, you can then create a
duplicate set of lower-resolution offline media with which you can edit your project more
efficiently.
Upon completion of the offline edit, you then relink the program to the original
Apple ProRes 4444 media before sending the sequence to Color, where you’ll be grading
your program. Ultimately, you’ll send the finished media that Color renders directly to
the film recording facility.
Mastering from a single set of Apple ProRes 4444 media keeps your workflow simple:
media management is straightforward, and there's no need to retransfer or relink to
the source DPX media later. The only disadvantage to this method is that it can
require a substantial amount of storage, depending on the length and shooting ratio of
the project.
[Workflow diagram: the camera negative is datacined to 2K/4K DPX image sequences, which Color converts to Apple ProRes 4444 QuickTime media (HDCAM SR media can be ingested into Final Cut Pro directly); offline duplicates are created for the offline edit in Final Cut Pro; the conformed edit is sent to Color for color correction; Color renders the final output as a DPX sequence for the film recorder, which produces the film print.]
The following steps break this process down more explicitly. Because of the extra steps
needed, this workflow assumes that you’re shooting film.
Stage 1: Running Tests Before You Begin Shooting
Ideally, you should do some tests before principal photography to see how the film
scanner–to–Color–to–film recorder pipeline works with your choice of film formats and
stocks. It's always best to consult with the film lab you'll be working with in advance to
get as much information as possible.
Stage 2: Scanning All Film as DPX Image Sequences
Depending on how the shoot was conducted, you can opt to do a best-light datacine of
just the selects, or of all the camera negative (if you can afford it). The scanned 2K or 4K
digital source media should be saved as DPX or Cineon image sequences.
To track the correspondence between the original still frames and the offline QuickTime
files that you'll create for editing, you should ask for the following:
• A non-drop frame timecode conversion of each frame's number (used in that frame's
filename), saved within the header of each scanned image.
• All of the scanned frames organized into separate directories, one per roll of negative
(named by roll). This will help you keep track of each shot's roll number later.
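The frame number-to-timecode conversion mentioned above is simple integer arithmetic. A minimal sketch, assuming 24 fps non-drop frame counting (the function name is illustrative):

```python
def frame_to_ndf_timecode(frame: int, fps: int = 24) -> str:
    """Convert an absolute frame number (as used in a scanned DPX
    filename) to a non-drop HH:MM:SS:FF timecode string."""
    ff = frame % fps
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# Frame 86400 at 24 fps (3600 s x 24 frames) is exactly one hour:
print(frame_to_ndf_timecode(86400))  # -> 01:00:00:00
```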
Stage 3: Converting DPX Image Sequences to Apple ProRes 4444 QuickTime Files in
Color
Since Final Cut Pro doesn’t work directly with image sequences, you need to create
high-quality, online-resolution QuickTime duplicates using Color before you can begin
editing. Once you’ve done this, it’s a good idea to archive both the original source media
and the converted Apple ProRes 4444 media as safely as possible.
You can use Color to create online-resolution QuickTime versions of each DPX image
sequence you need to use in your edit. To do this, create a new project with the Render
File Type set to QuickTime and the Export Codec set to Apple ProRes 4444. Then, edit all
the shots you want to convert into the Timeline, grade them if necessary, add them to
the Render Queue, and click Start Render.
When you convert the DPX files to offline QuickTime files using Color, the timecode
metadata stored in the header of each DPX frame is copied into the timecode track of
each .mov file that’s created. (If there’s no timecode in the DPX headers, the frame number
in the DPX filename will be converted into timecode, instead. For more information, see
How Does Color Relink DPX/Cineon Frames to an EDL?).
This helps you to maintain the correspondence between the source DPX media and the
Apple ProRes 4444 QuickTime files you’ve created, in case you ever need to go back to
the original media. To make this easier, enter the roll number of each image sequence
into the reel number of the converted QuickTime clip. You can do this in the Final Cut Pro
Browser.
For more information, see Converting Cineon and DPX Image Sequences to QuickTime.
Stage 4: Creating Offline Resolution Clips for Editing in Final Cut Pro (Optional)
This step is especially useful if you’re working on a project at 4K resolution. High-resolution
media can be processor-intensive, reducing application responsiveness and real-time
processing unless you have an exceptionally robust system. If this is the case, you can
use the Media Manager in Final Cut Pro to create an offline set of media (using whichever
resolution and codec your particular workflow requires) to edit with.
If you downconvert to a compressed high definition format, such as Apple ProRes 422 or
Apple ProRes 422 (HQ), you can offline your project on an inexpensively equipped
computer and still be able to output and project it at a resolution suitable for high-quality
client and audience screenings during the editorial process.
Once you finish your offline edit, you can easily reconform your sequence to the
high-resolution Apple ProRes 4444 source media you generated.
Stage 5: Doing the Offline Edit in Final Cut Pro
Edit your project in Final Cut Pro, being careful not to alter the timecode or duration of
the offline master media in any way.
Stage 6: Preparing Your Final Cut Pro Sequence
To prepare your edited sequence for an efficient workflow in Color, follow the steps
outlined in Before You Export Your Final Cut Pro Project. If you’re planning on printing
to film, it’s prudent to be even more cautious and eliminate any and all effects that are
unsupported by Color, since the media rendered by Color will be the final media that’s
delivered to the film recording facility.
• Clips using speed effects should be rendered as self-contained QuickTime movies, with
the resulting media files reedited into the Timeline to replace the original effects. This
is also true for any clip with effects you want to preserve in the final program, including
filters, animated effects, composites, opacity settings, and embedded Motion projects.
• The only type of transition that Color is capable of processing is the dissolve. Any other
type of transition in the sequence will be rendered as a dissolve of identical duration.
• The only other types of effect that Color supports are Position, Rotation, Scale, and
Aspect Ratio Motion tab settings, which are converted into Pan & Scan room settings.
While keyframes for these settings in Final Cut Pro cannot be sent to Color, the Pan &
Scan settings can be keyframed in Color later.
Stage 7: Sending the Sequence to Color or Exporting an XML File
When you finish prepping your edited sequence, there are two ways you can send it to
Color.
• If Color is installed on the same computer as Final Cut Pro, you can use the Send To
Color command to move an entire edited sequence to Color, automatically creating a
new project file.
• If you're handing the project off to another facility, you may want to export the edited
sequence as an XML file for eventual import into Color. In this case, you'll also want to
use the Final Cut Pro Media Manager to copy the project's media to a single,
transportable hard drive volume for easy handoff.
Stage 8: Grading Your Program in Color
Grade your program in Color as you would any other.
Important: When grading scanned film frames for eventual film output, it's essential to
systematically use carefully profiled LUTs (lookup tables) for monitor calibration and to
emulate the ultimate look of the project when printed out to film. For more information,
see Using LUTs.
Stage 9: Rendering Graded Media Out of Color
Once you finish grading the project in Color, use the Render Queue to render out the
final media. If the film recording facility you’re working with requires an image sequence,
now is the time to:
• Change the Render File Type to DPX or Cineon, depending on what the facility has
requested.
• Choose the Printing Density to match your facility’s recommendations.
• If you’ve been using a LUT to monitor your program while you work, turn it off by
choosing File > Clear Display LUT. Otherwise, you’ll bake the LUT into the rendered
media.
• Double-check the Broadcast Safe and Internal Pixel Format settings to make sure they’re
appropriate for your project.
Rendering high-resolution media will take time. Keep in mind that the Render Queue has
been set up to let you easily render your project incrementally; for example, you can
render out all the shots of a program that have been graded that day during the following
night to avoid having to render the entire project at once.
However, when you're working on a project using 2K image sequence scans, rendering
the media is only the first step. The rendered output is organized in the specified render
directory in such a way as to easily facilitate managing and rerendering the media for
your Color project, but it's not ready for delivery to the film recording facility until the
next step.
Stage 10: Assembling the Final Image Sequence for Delivery
Once every single shot in your program has been rendered, you need to use the Gather
Rendered Media command to consolidate all the frames that have been rendered,
eliminating handles, rendering dissolves, copying every frame used by the program to a
single directory, and renumbering each frame as a contiguously numbered image
sequence. Once this has been done, the rendered media is ready for delivery to the film
recording facility.
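Color's Gather Rendered Media command handles this consolidation for you. Purely as an illustration of what the step does, the hypothetical sketch below copies frames from per-shot render directories (given in timeline order) into one directory as a contiguously renumbered sequence:

```python
import shutil
from pathlib import Path

def gather_frames(shot_dirs: list, dest: str, prefix: str = "final") -> int:
    """Copy every frame from the per-shot render directories (given in
    timeline order) into one directory, renumbering them as a single
    contiguous image sequence. Returns the number of frames written."""
    out = Path(dest)
    out.mkdir(parents=True, exist_ok=True)
    n = 0
    for shot in shot_dirs:
        for frame in sorted(Path(shot).glob("*.dpx")):
            shutil.copy2(frame, out / f"{prefix}.{n:07d}.dpx")
            n += 1
    return n
```

Note that this sketch only renumbers and copies; unlike the real command, it does not trim handles or render dissolves.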
Stage 11: Creating Additional Transitions, Effects, and Titles
In a 2K or 4K workflow, you can also use a compositing application such as Shake to create
additional transitions or layered effects, including superimpositions, titles, and other
composites, after the color correction has been completed.
Each image file's frame number identifies its position in the program's Timeline. Because
of this, when you send frames to a compositing application, it's vital that the frame
numbers in the filenames of newly rendered media are identical to those of the original
source media. This requires careful file management.
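One way to enforce that discipline is to compare the frame numbers embedded in the two sets of filenames before delivery. This is a hypothetical helper, not part of Color or Shake, and it assumes each frame is named with a trailing frame number before the .dpx extension:

```python
import re
from pathlib import Path

# Matches the trailing frame number in names like "final.0000100.dpx".
FRAME_NUM = re.compile(r"(\d+)\.dpx$")

def frame_numbers(directory: str) -> set:
    """Collect the frame numbers embedded in a directory's .dpx filenames."""
    nums = set()
    for p in Path(directory).glob("*.dpx"):
        m = FRAME_NUM.search(p.name)
        if m:
            nums.add(int(m.group(1)))
    return nums

def mismatched_frames(source_dir: str, rendered_dir: str) -> set:
    """Frame numbers present in one directory but not the other."""
    return frame_numbers(source_dir) ^ frame_numbers(rendered_dir)
```

An empty result means every frame number in the rendered media has a counterpart in the source sequence, and vice versa.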
Finishing Projects Using RED Media
RED media has become an important acquisition format for both broadcast and digital
cinema. When you install the necessary software to use RED media with Final Cut Studio,
you get access to a variety of workflows for ingesting, grading, and mastering programs
using native RED QuickTime movies in Final Cut Pro and Color.
This section describes the various RED workflows that Final Cut Studio supports. For
information about grading controls that are specific to native RED QuickTime clips, see
The RED Tab.
When you’re working on a project that uses RED media, there are essentially four workflows
you can follow:
Transcode All Native RED QuickTime Media to Apple ProRes 422 (HQ)
If you’re mastering specifically to video, one very simple workflow is to transcode the
RED footage to Apple ProRes 422 (HQ) clips, and then master using Apple ProRes 422 (HQ).
After initially ingesting and transcoding using the Log and Transfer window, this workflow
is similar to the master flowchart shown in Video Finishing Workflows Using Final Cut Pro.
Keep in mind that whenever you transcode native RED R3D media to Apple ProRes using
the Log and Transfer window, you preprocess the original RAW image data. For more
information, see RED Metadata Versus Color Processing in Transcoded Media.
• Advantages: Simple workflow for video mastering. Apple ProRes 422 (HQ) can be easily
edited on most current computers. Apple ProRes 422 (HQ) is suitable for high definition
video mastering, and media can be sent directly to Color for finishing without the need
to reconform.
• Disadvantages: Transcoding may take a long time. You lose the quality advantage of
being able to grade and finish using the RAW RGB 4:4:4 data that native RED QuickTime
files provide.
Ingest Native RED QuickTime Media for Editing and Finishing
It’s also possible to edit and finish using native RED QuickTime media. This is an efficient
workflow that skips the need for reconforming, and gives you access to the high-quality
native image data when you grade in Color. Since working with native RED QuickTime
media is processor-intensive, this workflow may be most appropriate for short-form
projects and spots. This workflow is illustrated in Editing and Finishing with RED QuickTime
Media.
• Advantages: Ingesting RED QuickTime media is fast when compared to transcoding.
Skips the need for an offline reconform. Provides maximum data fidelity through direct
access to each shot’s native R3D image data.
• Disadvantages: RED QuickTime media is processor-intensive when editing.
Ingest Transcoded Apple ProRes Media for Editing; Conform to Native RED QuickTime
for Finishing
The most practical workflow for long-form work when you want to be able to grade using
native RED QuickTime media involves transcoding the original RED media to Apple ProRes
media for efficient offline editing, and then reconforming your edited sequence back to
native RED QuickTime media for final mastering and color correction in Color. This workflow
is illustrated in Offline Using Apple ProRes; Finishing with RED Media.
• Advantages: Apple ProRes 422 (HQ) can be easily edited on most current computers.
After you reconform, this workflow provides maximum data fidelity through direct
access to each shot’s native R3D image data.
• Disadvantages: Reconforming is an extra step that requires good organization.
Improving Performance When Using Native RED QuickTime Media in
Color
To get the best performance when working with native RED QuickTime media (especially
when working with 4K media, which can be extremely processor-intensive), be sure to
turn on Enable Proxy Support in the User Prefs tab of the Setup room. These are the
suggested settings for optimal performance:
• Set Grading Proxy to Half Resolution
• Set Playback Proxy to Quarter Resolution
Proxies for native RED QuickTime media are generated on the fly, without the need to
prerender proxy files as you do with DPX or Cineon media. For more information on
the Color proxy settings, see Using Proxies.
Offline Using Apple ProRes; Finishing with RED Media
An advantage to editing with Apple ProRes media is that it’s less processor-intensive than
editing using RED QuickTime files, which makes editing in Final Cut Pro more efficient.
After you reconform, you can still work in Color at the higher quality with access to all of
the raw image data in the R3D file, since Color can bypass QuickTime and use the RED
framework directly to read the native 2K or 4K RGB 4:4:4 data inside of each file.
The only real disadvantages to this workflow are that the initial transcoding stage can be
time-consuming, and that later, reconforming is an extra step that requires careful
organization.
[Flowchart: Archive Original RED Media → RED Media Directories → Ingest Offline ProRes Media → Edit Using ProRes Media in Final Cut Pro → Reingest and Reconform to Native RED QuickTime Media → Send to Color → Color Correction → Render. For film output: Gather Rendered Media → DPX → Film Recorder → Film Print → Final Output Sequence. For video output: Rendered QuickTime Media → Send to Final Cut Pro → Export QuickTime or Edit to Tape → Videotape/QuickTime Master.]
The following steps break this process down more explicitly.
Stage 1: Archiving the Original RED Media
It’s always recommended that you archive all of the original RED media for your project
onto one or more backed-up volumes. Whether you’re shooting with CF cards or a RED
drive, you should always copy the entire contents of each CF card or drive that you’ve
finished recording with to an individually named folder on your archive volume.
• If you’re using CF cards: The contents of each card should be copied into separate
directories. For example, if you’ve shot a project using 12 CF cards, at the end of the
process you should have 12 different directories (perhaps named “MyGreatProject_01”
through “MyGreatProject_12”), each of which contains the entire contents of the CF
card to which it corresponds.
• If you’re using RED drives: You should copy the entire contents of the drive to a new
folder every time you fill it up or are finished with a particular part of your shoot. For
example, if you’re archiving the contents of the drive after every day’s shoot, then after
four days you should have four directories (perhaps named “MyGreatProject _Day01”
through “MyGreatProject_Day04”).
Each folder or disk image you copy RED media into must have a unique name, preferably
one that clearly identifies the contents. After you copy the RED media into these folders,
they will contain one or more sub-folders with an .RDM extension that contain the actual
RED media. The name of the enclosing RDM folder will be used as the reel name for each
clip that’s ingested by Final Cut Pro during the log and transfer process.
After you initially copy the RED media, you may elect to change the name of the RDM
folders to something more readable (the .RDM extension itself is optional). If you make
such changes, make sure that the name of each folder is unique, and do not under any
circumstances change the names of any folders or files that appear within.
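Since each RDM folder name becomes a reel name, a quick scan for duplicate names before ingest can catch problems early. A minimal sketch, assuming the archive layout described above (the function name is illustrative, not part of any Apple tool):

```python
from collections import Counter
from pathlib import Path

def find_duplicate_rdm_names(archive_root: Path) -> list[str]:
    """Return RDM folder names (the future reel names) that occur
    more than once anywhere under the archive root."""
    names = [p.name for p in archive_root.rglob("*")
             if p.is_dir() and p.suffix.upper() == ".RDM"]
    return sorted(name for name, count in Counter(names).items() if count > 1)
```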
After you've ingested the media using the Log and Transfer window, do not change the
name of the RDM folder again. Doing so will jeopardize your ability to later reconform
offline sequences to the original RED source media.
Important: It's not recommended to enter new reel names for RED media that you ingest
using the Reel field of the Log and Transfer window.
Stage 2: Ingesting Media Using Apple ProRes to Perform the Offline-Quality Edit
If it’s necessary to edit your program at offline quality for efficiency, transcode the archived
RED media to one of the Apple ProRes codecs using the Log and Transfer window in
Final Cut Pro.
See the Final Cut Pro 7 User Manual for more information about transcoding on ingest,
and which codec to choose for offline work.
Stage 3: Editing Using Apple ProRes Media
Edit your project in Final Cut Pro, being careful not to alter the timecode of the offline
master media in any way. If you want to minimize the amount of preparation you’ll be
doing later in Stage 5: Preparing Your Final Cut Pro Sequence, keep the following
limitations in mind while you edit:
• Restrict transitions in your project to cross dissolves only. When you render DPX image
sequences out of Color and use the Gather Rendered Media command to prepare a
single image sequence for film printing, Color automatically processes all cross dissolves
in your program. Other transitions are not supported, and will instead be processed as
cross dissolves if they’re present in your project.
• Keyframes are not sent from Final Cut Pro to Color, so don’t use the Motion tab to create
animated Pan & Scan effects. Instead, use the Pan & Scan tab in the Geometry room of
Color, which lets you scale, recenter, change the aspect ratio of, and rotate your clips,
and which can be keyframed. Pan & Scan effects are rendered along with your grades
when you render DPX or Cineon image sequences out of Color.
• Don’t use superimpositions, transfer modes, speed effects, or filters, unless you’re
planning on prerendering these clips (exporting each as a self-contained QuickTime
clip and reediting them into the Timeline to replace the original effects) as Apple ProRes
4444 media before you send them to Color. Color does not render these effects.
Stage 4: Reconforming Your Project to Native RED QuickTime Media
Once your edit is locked, prepare your edited sequence to be media-managed by moving
all video clips that aren’t being superimposed as part of a compositing operation down
to track V1. This makes navigation and grade management much easier once you start
working in Color, and also eliminates unused clips directly from the Timeline, reducing
the amount of media needing to be reconformed.
Next, you’ll media manage your project to create an offline version of your edited sequence
with the appropriate sequence settings, and then batch transfer the resulting sequence
using the Log and Transfer window to reingest native RED QuickTime media from the
originally archived RED media directories.
See the Final Cut Pro 7 User Manual for more information.
Stage 5: Preparing Your Final Cut Pro Sequence
To prepare your edited sequence for an efficient workflow in Color, follow the steps
outlined in Before You Export Your Final Cut Pro Project. If you’re planning on printing
to film, it’s prudent to be even more cautious and eliminate any and all effects that are
unsupported by Color, since the media rendered by Color will be the final media that’s
delivered to the film recording facility.
• Clips using speed effects should be rendered as self-contained QuickTime movies, with
the resulting media files reedited into the Timeline to replace the original effects. This
is also true for any clip with effects you want to preserve in the final program, including
filters, animated effects, composites, opacity settings, and embedded Motion projects.
• The only type of transition that Color is capable of processing is the dissolve. Any other
type of transition in the sequence will be rendered as a dissolve of identical duration.
• The only other types of effect that Color supports are Position, Rotation, Scale, and
Aspect Ratio Motion tab settings, which are converted into Pan & Scan room settings.
While keyframes for these settings in Final Cut Pro cannot be sent to Color, the Pan &
Scan settings can be keyframed in Color later.
Stage 6: Sending the Finished Sequence to Color
When you finish prepping your edited sequence, there are two ways you can send it to
Color.
• If Color is installed on the same computer as Final Cut Pro, you can use the Send To
Color command to move an entire edited sequence to Color, automatically creating a
new project file.
• If you're handing off the project to another facility, you may want to export the edited
sequence as an XML file for eventual import into Color. In this case, you'll also want to
use the Final Cut Pro Media Manager to copy the project's media to a single,
transportable hard drive volume for easy handoff.
Stage 7: Grading Using Additional RED Tab Settings in the Primary In Room
Once in Color, you have access to each clip’s camera setting metadata via the RED tab in
the Primary In room. You can use the RED image data as is, or make adjustments as
necessary. For more information, see The RED Tab.
You may also find it to your advantage to use a proxy setting in Color to speed up effects
processing as you work, especially if you’re working with 4K source media. For example,
setting Grading Proxy to Half Resolution and Playback Proxy to Quarter Resolution will
significantly improve real-time performance as you work in Color, while still allowing you
to monitor your data with complete color accuracy at approximately 1K. For more
information, see Using Proxies.
Important: Clips that have been transcoded to Apple ProRes 422 (HQ) cannot access
these native camera settings, as they no longer contain the native RED raw image data.
Stage 8: Choosing How to Render the Final Graded Media
When working with native RED QuickTime media, the frame size of your final graded
media is determined by the Resolution Presets menu in the Project Settings tab of the
Setup room. For more information, see Resolution and Codec Settings.
The format you use to render your final graded media depends on whether you’re planning
on printing to film, or sending the program back to Final Cut Pro for output to video.
• If you’re rendering for film output: Change the Render File Type pop-up menu to DPX
or Cineon (depending on what the facility doing the film printing asks for), and choose
the appropriate 2K or 4K resolution from the Resolution Preset pop-up menu. If you
choose DPX, you also need to choose the appropriate Printing Density. For more
information, see Choosing Printing Density When Rendering DPX Media.
• If you’re rendering to send back to Final Cut Pro for video output: Keep the Render File
Type pop-up menu set to QuickTime and choose an appropriate mastering codec from
the QuickTime Export Codec pop-up menu. For more information, see Compatible
QuickTime Codecs for Output. Keep in mind that the RED QuickTime format is a
read-only format; you cannot master a program using this format.
Note: Rendering native RED QuickTime media is processor-intensive, and rendering times
can be long, especially at 4K resolutions.
Stage 9: Assembling the Final Image Sequence for Delivery, or Sending Back to
Final Cut Pro
The final stage of finishing your project depends, again, on whether you’re printing to
film, or outputting to video.
• If you’re rendering for film output: Once every single shot in your program has been
rendered, use the Gather Rendered Media command to consolidate all the frames that
have been rendered, eliminating handles, rendering dissolves, copying every frame
used by the program to a single directory, and renumbering each frame as a
contiguously numbered image sequence. Once this has been done, the rendered media
is ready for delivery to the film recording facility. For more information, see Gather
Rendered Media.
• If you’re rendering to send back to Final Cut Pro for video output: Simply send your project
back to Final Cut Pro after you finish rendering it. For more information, see Sending
Your Project Back to Final Cut Pro.
Editing and Finishing with RED QuickTime Media
The advantage of this workflow is that it skips the need for reconforming, giving you
access to high-quality image data when you grade in Color. Ingesting RED QuickTime
media is fast when compared to transcoding. This is a good workflow for short-form
projects and spots.
The main disadvantage is that RED QuickTime media is processor-intensive when editing.
Because of performance limitations, editing with less powerful computers or editing a
feature length show using 4K RED QuickTime media may not be practical.
[Flowchart: Archive Original RED Media → RED Media Directories → Ingest Media as Native RED QuickTime → Edit in Final Cut Pro → Send to Color → Color Correction → Render. For film output: Gather Rendered Media → DPX → Film Recorder → Film Print → Final Output Sequence. For video output: Rendered QuickTime Media → Send to Final Cut Pro → Export QuickTime or Edit to Tape → Videotape/QuickTime Master.]
The following steps break this process down more explicitly.
Stage 1: Importing Media as Native RED QuickTime Clips
Import all of your RED media using the Native option in the Log and Transfer window.
For more information, see the Final Cut Pro 7 User Manual.
Stage 2: Editing Using Native RED QuickTime Media
Edit your project in Final Cut Pro. For the smoothest editing experience, choose Unlimited
RT from the Timeline RT pop-up menu, set Playback Video Quality to Low or Medium,
and set Playback Frame Rate to Full.
For more information on editing programs that will be printed to film, see Stage 3: Editing
Using Apple ProRes Media.
Stage 3: Preparing Your Final Cut Pro Sequence, Sending to Color, Grading, Rendering,
and Finishing
Because you’re already working with native RED QuickTime media, no reconforming is
necessary. At this point, the workflow is identical to Stage 5: Preparing Your Final Cut Pro
Sequence.
Use Unlimited RT When Editing Native RED QuickTime Media in
Final Cut Pro
As mentioned previously, RED QuickTime media is processor-intensive to work with in
Final Cut Pro. For the smoothest editing experience, choose Unlimited RT from the
Timeline RT pop-up menu, set Playback Video Quality to Low or Medium, and set
Playback Frame Rate to Full.
RED Metadata Versus Color Processing in Transcoded Media
The Color, Color Temp, and View RED camera settings in use while shooting are stored
as metadata within each recorded R3D file. If you ingest or reconform using native RED
QuickTime media, this metadata remains intact, and is accessible via the RED tab of the
Primary In room. This is the most flexible way to work, as this image metadata has no
effect on the actual RAW R3D data that the camera has recorded, and, in fact, if you’re
unhappy with how the current metadata settings are processing the image, you can
change them to retrieve additional image data from the RAW source.
When you transcode R3D media to one of the Apple ProRes codecs using the Log and
Transfer window, this metadata is used to preprocess the color and contrast of the
transcoded media as long as the RED FCP Log and Transfer plugin submenu of the Action
pop-up menu is set to Native, which is the default setting. The result is that each
transcoded clip visually matches the image that was monitored during the shoot. This
preprocessing is “baked” into each ingested clip. If you want to later reapply a different
type of image preprocessing to a clip, you need to reingest it from the original source
media.
If necessary, you can choose other color processing options from the RED FCP Log and
Transfer plugin submenu of the Action pop-up menu. For more information, see the
Final Cut Pro 7 User Manual.
Digital Intermediate Workflows Using DPX/Cineon Media
Color supports grading for 2K and 4K digital intermediate workflows. Simply put, the term
digital intermediate (DI) describes the process of performing all effects and color correction
using high-resolution digital versions of the original camera negative. Color can work
with 2K and 4K 10-bit log image sequences produced by datacine scanners, processing
the image data with extremely high quality and rendering the result as an image sequence
suitable for film output.
The following sections describe different 2K and 4K workflows that you can follow and
show you how to keep track of your image data from stage to stage.
• For more information on tapeless online/offline DI workflows, see A Tapeless DI
Workflow.
• For more information about DI workflows involving telecined offline media, see A Digital
Intermediate Workflow Using Telecined Media.
• For more information about how Color reconforms media in DI workflows, see Using
EDLs, Timecode, and Frame Numbers to Conform Projects.
A Tapeless DI Workflow
The easiest digital intermediate (DI) workflow is one where you scan all footage necessary
for the offline edit and then create a duplicate set of offline media to edit your project
with. Upon completion of the offline edit, you then relink the program to the original 2K
or 4K source frames in Color.
Deriving the offline media from the original digital media keeps your workflow simple
and eliminates the need to retransfer the source film later. The only disadvantage to this
method is that it can require an enormous amount of storage space, depending on the
length and shooting ratio of the project.
[Flowchart: Camera Negative → Datacine Transfers → 2K/4K DPX Image Sequence → Offline QuickTime Conversion → Offline Media (With Cloned Timecode) → Offline Edit in Final Cut Pro → EDL → Conform in Color → Color Correction → Render → Gather Rendered Media → DPX → Film Recorder → Film Print → Final Output Sequence.]
The following steps break this process down more explicitly.
Stage 1: Running Tests Before Shooting
Ideally, you should do some tests before principal photography to see how the film
scanner–to–Color–to–film recorder pipeline works with your choice of film formats and
stocks. It's always best to consult with the film lab you'll be working with in advance to
get as much information as possible.
Stage 2: Scanning All Film as 2K or 4K DPX Image Sequences
Depending on how the shoot was conducted, you could opt to do a best-light datacine
of just the selects or of all the camera negative, if you can afford it. The scanned 2K digital
source media should be saved as DPX or Cineon image sequences.
To track the correspondence between the original still frames and the offline QuickTime
files that you'll create for editing, you should ask for the following:
• A non-drop frame timecode conversion of each frame's number (used in that frame's
filename) saved within the header of each scanned image.
• All of the scanned frames organized into separate directories, with the frames from
each roll of negative saved to a directory named by roll.
• The resulting DPX files named using the following format: fileName_0123456.dpx. (For
more information on naming DPX and Cineon files, see Required Image Sequence
Filenaming.)
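The frame-number-to-timecode correspondence in the first bullet is plain arithmetic at a whole-number frame rate. A sketch, assuming 24 fps non-drop timecode and a hypothetical base name:

```python
FPS = 24  # assumed scanning rate; use your project's actual frame rate

def frame_to_timecode(frame: int, fps: int = FPS) -> str:
    """Convert an absolute frame number to non-drop frame timecode."""
    ff = frame % fps
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def timecode_to_frame(tc: str, fps: int = FPS) -> int:
    """Convert non-drop frame timecode back to an absolute frame number."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def dpx_filename(base: str, frame: int) -> str:
    """Build a filename in the fileName_0123456.dpx form described above."""
    return f"{base}_{frame:07d}.dpx"
```

For example, frame 86400 at 24 fps corresponds to timecode 01:00:00:00, and the two conversions round-trip exactly because non-drop timecode counts every frame.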
Stage 3: Converting the DPX Image Sequences to Offline-Resolution QuickTime Files
Create offline-resolution duplicates of the source media in whatever format is most
suitable for your editing system. Then, archive the original source media as safely as
possible.
When you convert the DPX files to offline QuickTime files:
• The roll number of each image sequence (taken from the name of the directory that
encloses the frames being converted) is used as the reel number for each .mov file.
• The timecode values stored in the header of each frame file are used as the timecode
for each .mov file. If there's no timecode in the header, the frame number in the
filename is converted to timecode and used instead.
You can use Color to perform this downconversion by creating a new project with the
Render File Type set to QuickTime and the Export Codec set to the codec you want to
use. Then, simply edit all the shots you want to convert into the Timeline, add them to
the Render Queue, and click Start Render. For more information, see Converting Cineon
and DPX Image Sequences to QuickTime.
You can also use Compressor to perform this downconversion. For more information, see
the Compressor documentation.
Tip: If you downconvert to a compressed high definition format, such as Apple ProRes
422 or Apple ProRes 422 (HQ), you can offline your project on an inexpensively equipped
computer system and still be able to output and project it at a resolution suitable for
high-quality client and audience screenings during the editorial process.
Stage 4: Doing the Offline Edit in Final Cut Pro
Edit your project in Final Cut Pro, being careful not to alter the timecode or duration of
the offline media in any way.
Stage 5: Preparing Your Final Cut Pro Sequence
To prepare your edited sequence for an efficient workflow in Color, follow the steps
outlined in Before You Export Your Final Cut Pro Project. Because you’ll be exporting an
EDL to Color in order to relink to the original DPX image sequences, it’s prudent to be
extremely conservative and eliminate any and all effects that are unsupported by the
CMX EDL formats, or by Color itself.
Cross dissolves are the one exception. These are the only type of transition that Color
supports. Any other type of transition will be rendered as a cross dissolve of identical
length.
Stage 6: Exporting an EDL
When you finish with the edit, you need to generate an EDL in either the CMX 340, CMX
3600, or GVG 4 Plus formats.
Important: You cannot use the Send To Color command to move projects to Color that
are being reconformed to DPX or Cineon media.
Stage 7: Importing the EDL into Color and Relinking to the Original DPX Media
Use the File > Import > EDL command to import the EDL. In the Import EDL dialog, specify
the directory where the original high-resolution source media is located, so that the EDL
is imported and the source media is relinked in one step. For more information, see
Importing EDLs.
Stage 8: Grading Your Program in Color
Grade your program in Color as you would any other. For better performance, it’s advisable
to use the Proxy controls in the User Prefs tab of the Setup room to work at a lower
resolution than the native 2K or 4K frame size of the media. For more information, see
Using Proxies.
Important: When grading scanned film frames, it's essential to systematically use carefully
profiled LUTs for monitor calibration and to emulate the ultimate look of the project
when printed out to film. For more information, see Using LUTs.
Stage 9: Conforming Transitions, Effects, and Titles
In a 2K workflow, you also need to use a compositing application such as Shake to create
any transitions or layered effects, including superimpositions, titles, and other composites,
using the 2K image sequence data.
Important: Each image file's frame number identifies its position in that program's
Timeline. Because of this, when you send frames to a compositing application, it's vital
that the frame numbers in filenames of newly rendered media are identical to those of
the original source media. This requires careful file management.
Stage 10: Rendering Your Media Out of Color
Once you finish grading the project in Color, use the Render Queue to render out the
final media. The Render Queue has been set up to let you easily render your project
incrementally; for example, you can render out all the shots of a program that have been
graded that day during the following night to avoid rendering the entire project at once.
However, when you're working on a project using 2K image sequence scans, rendering
the media is only the first step. The rendered output is organized in the specified render
directory in such a way as to easily facilitate managing and rerendering the media for
your Color project, but it's not ready for delivery to the film recording facility until the
next step.
Stage 11: Assembling the Final Image Sequence for Delivery
Once every single shot in your program has been rendered, you need to use the Gather
Rendered Media command to consolidate all the frames that have been rendered,
eliminating handles, copying every frame used by the program to a single directory, and
renumbering each frame as a contiguously numbered image sequence. Once this has
been done, the rendered media is ready for delivery to the film recording facility.
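Conceptually, the consolidation step resembles the following sketch: frames from each shot are copied, in program order, into a single directory and renumbered contiguously. This illustrates the idea only; the actual Gather Rendered Media command also renders dissolves and trims handles, and all names here are hypothetical.

```python
import shutil
from pathlib import Path

def assemble_sequence(shot_frame_lists, dest_dir: Path, base: str = "final"):
    """Copy the frames of each shot, in program order, into dest_dir,
    renumbering them as one contiguous image sequence."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    out_names = []
    frame = 0
    for frames in shot_frame_lists:   # one list of frame files per shot, in program order
        for src in frames:            # frames assumed already trimmed of handles
            name = f"{base}_{frame:07d}.dpx"
            shutil.copy2(src, dest_dir / name)
            out_names.append(name)
            frame += 1
    return out_names
```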
A Digital Intermediate Workflow Using Telecined Media
A more traditional way to edit and color correct a project is to do an offline edit using a
less expensive telecine transfer of the dailies, and then do a datacine film scan of only
the shots used in the edit to create the online media.
[Flowchart: Camera Negative → Telecine → Capture → Offline Edit in Final Cut Pro; FLEx log → Cinema Tools (Create Database, Export Pull List) → Datacine → DPX Image Sequence; EDL → Conform in Color → Color Correction → Render → Gather Rendered Media → DPX → Film Recorder → Film Print → Final Output Sequence.]
The following steps break this process down more explicitly.
Stage 1: Shooting the Film
Ideally, you should do some tests before principal photography to see how the film
scanner–to–Color–to–film recorder pipeline works with your choice of film formats and
stocks. It's always best to consult with the film facility you'll be working with in advance
to get as much information as possible.
Stage 2: Telecining the Dailies
Once the film has been shot, telecine the dailies to a video format that's appropriate for
the offline edit. Whether or not you telecine to a high definition video format for the
offline depends on the configuration of the editing system you'll be working with and
the amount of hard disk space available to you.
Of more importance is the frame rate at which you choose to telecine the dailies.
• To eliminate an entire media management step, it's recommended that you telecine
the film directly to a 23.98 fps video format.
• Otherwise, you can telecine to a 29.97 fps video format and use Cinema Tools in a
second step to perform 3:2 pull-down removal.
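The reason 29.97 fps material needs a pull-down removal pass is the 3:2 cadence: each group of 4 film frames (A, B, C, D) is spread across 10 video fields (AA BBB CC DDD), making 5 video frames. A purely illustrative sketch of that expansion:

```python
def pulldown_fields(film_frames: int) -> list:
    """Expand film frames A, B, C, D... into the 3:2 field cadence,
    so every 4 film frames yield 10 fields (5 video frames)."""
    cadence = [2, 3, 2, 3]  # fields contributed by each film frame in a cycle
    fields = []
    for i in range(film_frames):
        label = chr(ord("A") + (i % 4))  # labels repeat A-D each 4-frame cycle
        fields.extend([label] * cadence[i % 4])
    return fields
```

At this rate, 24 film frames become 60 fields, or 30 video frames, which is why removing the pull-down recovers the original 24 fps (23.98 fps in practice) material.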
To more easily maintain the correspondence between the telecined video and the 2K or
4K film frames that will be scanned later, you should request that:
• A marker frame is assigned to each roll of film at a point before the first shot begins,
with a hole punch permanently identifying that frame. This marker frame is assigned
the timecode value of XX:00:00:00 (where XX is an incremented hour for each
subsequent camera roll being transferred), and determines the absolute timecode for
each shot on that roll.
• The timecode recorded to tape during the offline telecine must be non-drop frame.
• Each roll of negative should be telecined to a separate reel of tape. This way, the reels
specified by the EDL will match the rolls of camera negative from which the shots are
scanned.
• If the transfer is being done strictly for offline editing, you can ask for a window burn
that displays both timecode and edgecode to provide an additional means of reference.
If you’re transferring film to a 4:3 aspect ratio video format, you may elect to have this
window burn made in the black letterboxed area so it doesn’t obscure the image. It
may also be possible to write the edgecode number of the source film to the user bit
of VITC timecode for electronic tracking. Ask the facility doing the transfer what would
be best for your situation.
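Because the hole-punched marker frame anchors each roll at XX:00:00:00, the timecode of any later frame on that roll is just an offset from the punch. A minimal sketch (24 fps assumed; the function is illustrative, not part of Cinema Tools):

```python
def absolute_timecode(roll: int, frames_from_punch: int, fps: int = 24) -> str:
    """Timecode of a frame that lies a given number of frames after the
    hole-punched marker frame of a roll (marker = roll hour, 00:00:00)."""
    ff = frames_from_punch % fps
    total_seconds = frames_from_punch // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = roll + total_seconds // 3600  # the marker frame sits at roll:00:00:00
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"
```

So the punch on roll 3 reads 03:00:00:00, and a frame 24 frames later on roll 1 reads 01:00:01:00.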
Stage 3: Using Cinema Tools and Final Cut Pro to Perform the Offline Edit
As with any other film edit, generate a Cinema Tools database from the ATN, FLEx, FTL,
or ALE telecine log files provided by the telecine operator, then export an XML-based
batch capture list you can import into Final Cut Pro to use to capture the corresponding
media and edit the program.
Important: When working with offline media that tracks the original camera negative,
do not use the Media Manager to either rename or delete unused media in your project.
If you do, you'll lose the ability to create accurate pull lists in Cinema Tools.
Stage 4: Preparing Your Final Cut Pro Sequence
To prepare your edited sequence for an efficient workflow in Color, follow the steps
outlined in Before You Export Your Final Cut Pro Project. Because you’ll be exporting an
EDL to Color in order to relink to the original DPX image sequences, it’s prudent to be
extremely conservative and eliminate any and all effects that are unsupported by the
CMX EDL formats, or by Color itself.
Cross dissolves are the one exception. These are the only type of transition that Color
supports. Any other type of transition will be rendered as a cross dissolve of identical
length.
Stage 5: Exporting an EDL for Color and a Pull List for the Datacine Transfer
Once the offline edit is complete, you need to export a pull list out of Final Cut Pro to give
to the facility doing the final datacine transfer at 2K or 4K resolution. You also need to
export the entire project as an EDL for importing and conforming in Color.
• The pull list specifies which shots were used in the final version of the edit. (This is
usually a subset of the total amount of footage that was originally shot.) Ideally, you
should export a pull list that also contains the timecode In and Out points corresponding
to each clip in the edited project. This way, the timecode data can be written to each
frame that's scanned during the datacine transfer to facilitate conforming in Color.
• The EDL moves the project's edit data to Color and contains the timecode data necessary
to conform the scanned image sequence frames into the correct order.
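For reference, the per-event data the EDL carries is visible in a single CMX 3600 event line: event number, reel, track, transition type, then source In/Out and record In/Out timecode. A simplified parsing sketch (real EDLs also include headers, comment lines, and transition durations, which this ignores):

```python
import re

# Matches one CMX 3600 event line; purely illustrative, not a full EDL parser.
EVENT = re.compile(
    r"^(?P<num>\d+)\s+(?P<reel>\S+)\s+(?P<track>\S+)\s+(?P<trans>\S+)\s+"
    r"(?P<src_in>\S+)\s+(?P<src_out>\S+)\s+(?P<rec_in>\S+)\s+(?P<rec_out>\S+)"
)

def parse_event(line: str) -> dict:
    """Split an event line into its named fields, or raise if it isn't one."""
    m = EVENT.match(line.strip())
    if not m:
        raise ValueError(f"not an event line: {line!r}")
    return m.groupdict()
```

The reel field is what Color matches against the roll-named directories of scanned frames, and the source In/Out timecode is what it matches against the frame numbers and header timecode of the DPX files.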
Stage 6: Doing a Datacine Transfer of the Selected Shots from Negative to DPX
Using the pull list generated by Cinema Tools, have a datacine transfer made of every
shot used in the project.
During the datacine transfer, specify that the timecode of each frame of negative be
converted to frames and used to generate the filenames for each scanned DPX file, and
that the timecode also be written into the DPX header of each shot. The names of the
resulting image sequence should take the following form: fileName_0123456.dpx. For
more information about filenaming conventions, see Required Image Sequence Filenaming.
Each image sequence from the film scanner must be saved into a directory that is named
with the number of the roll of camera negative from which it was scanned. There should
be separate directories for each roll of camera negative that's scanned.
Stage 7: Importing the EDL into Color and Relinking to the Original DPX Media
Use the File > Import > EDL command to import the EDL. In the Import EDL dialog, you
also specify the directory where the original high-resolution source media is located, so
that the EDL is imported and the source media is relinked in one step.
Stage 8: Grading Your Program in Color
Grade your program in Color as you would any other. For better performance, it’s advisable
to use the Proxy controls in the User Prefs tab of the Setup room to work at a lower
resolution than the native 2K or 4K frame size of the media. For more information, see
Using Proxies.
Important: When grading scanned film frames, it's essential to use carefully profiled
LUTs, both for monitor calibration and to emulate the final look of the project when
it's printed to film. For more information, see Using LUTs.
Stage 9: Conforming Transitions, Effects, and Titles, Rendering Media, and Gathering
Rendered Media
At this point, the process is the same as in Stage 9: Conforming Transitions, Effects, and
Titles in A Tapeless DI Workflow.
Using EDLs, Timecode, and Frame Numbers to Conform Projects
Using careful data management and timecode, you can track the relationship between
the original camera negative and the video or digital transfers that were made for
offline editing. The following sections explain how Color tracks these
correspondences.
• For more information on how Color relinks DPX images to EDLs, see How Does Color
Relink DPX/Cineon Frames to an EDL?
• For more information on how Color parses EDLs for DI conforms, see Parsing EDLs for
Digital Intermediate Conforms.
• For more information on how your image sequences should be named for DI workflows,
see Required Image Sequence Filenaming.
How Does Color Relink DPX/Cineon Frames to an EDL?
The key to a successful conform in Color is to make sure that the timecode data in the
EDL is mirrored in the scanned DPX or Cineon frames you're relinking to. The
correspondence between film frames and timecode is created during the first telecine
or datacine transfer session.
How Is Film Tracked Using Timecode?
A marker frame is assigned to the very beginning of each roll of film, at a point before
the first shot begins (typically before the first flash frame). A hole is punched into the
negative, which permanently identifies that frame. This marker frame is assigned the
timecode value of XX:00:00:00 (where XX is an incremented hour for each subsequent
camera roll being transferred), creating an absolute timecode reference for each frame
of film on that roll. Each camera roll of film is usually telecined to a new reel of videotape
(each reel of tape usually starts at a new hour), or datacined to a separate directory of
DPX files.
This makes it easy to create and maintain a film frame-to-timecode correspondence
between the original camera negative and the transferred video or DPX media. This
correspondence carries through to the captured or converted QuickTime media that you
edit in Final Cut Pro. As an added benefit of this process, you can always go back to the
original rolls of camera negative and retransfer the exact frames of film you need, as long
as you accurately maintain the reel number and timecode of each clip in your edited
sequence.
If you’re having a datacine transfer done, you also need to request that the frame numbers
incorporated into the filenames of the transferred image files be based on the absolute
timecode that starts at each camera roll’s marker frame. Your final DPX or Cineon image
sequences should then have frame numbers in the filename that, using a bit of
mathematical conversion, match the timecode value in the header information, providing
valuable data redundancy.
How Color Relinks DPX/Cineon Media to EDLs Using Timecode
Later, when Color attempts to relink the EDL that you’ve exported from Final Cut Pro to
the transferred DPX or Cineon image sequence media, it relies on several different
methods, depending on what information is available in the image sequence files:
• First, Color looks for a timecode value in the header metadata of each DPX or Cineon
frame file. If this is found, it's the most reliable method of relinking.
• If there's no matching timecode number in the header metadata, then Color looks for
a correspondence between the timecode value requested in the EDL and the frame
numbers in the filename of each DPX or Cineon frame. This also requires that the files
be strictly named. For more information, see Required Image Sequence Filenaming.
• Color also looks for each shot’s corresponding reel number (as listed in the EDL) in the
name of the directory in which the media is stored. Each frame of DPX or Cineon media
from a particular roll of camera negative should be stored in a separate directory that’s
named after the roll number it was scanned from. If there are no roll numbers in the
enclosing directory names, then Color attempts to relink all the shots using the timecode
number only.
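The fallback chain above can be sketched as a small function. This is an illustrative sketch only: the field names (`header_tc`, `frame_number`, `directory`) and the 30 fps non-drop frame-count conversion are assumptions made for the example, not Color's actual internals.

```python
def relink_shot(edl_event, frame_files):
    """Illustrative sketch of the relinking priority for one EDL event.

    edl_event: dict with 'reel' and 'src_in' (timecode string) keys.
    frame_files: list of dicts with hypothetical 'path', 'header_tc',
    'frame_number', and 'directory' keys.
    """
    # 1. Prefer an exact timecode match in the DPX/Cineon header metadata.
    for f in frame_files:
        if f.get("header_tc") == edl_event["src_in"]:
            return f["path"], "header timecode"

    # 2. Fall back to the frame number embedded in the filename, compared
    #    against the frame-count conversion of the EDL timecode (30 fps,
    #    non-drop, per the examples in this chapter).
    hh, mm, ss, ff = (int(x) for x in edl_event["src_in"].split(":"))
    target_frame = (hh * 3600 + mm * 60 + ss) * 30 + ff
    for f in frame_files:
        if f.get("frame_number") == target_frame:
            # Prefer candidates whose enclosing directory matches the reel.
            if f.get("directory") == edl_event["reel"]:
                return f["path"], "filename frame number + reel directory"
    for f in frame_files:
        if f.get("frame_number") == target_frame:
            return f["path"], "filename frame number only"

    return None, "no match"
```

For example, an event with reel 004 and source In point 04:34:53:04 would match a file whose embedded frame number is 494794 stored in a directory named 004.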
After you import an EDL with linked DPX or Cineon image sequence media, a Match
column appears in the Shots browser. This column displays the percentage of confidence
that each shot in the Timeline has been correctly linked to its corresponding DPX, Cineon,
or QuickTime source media, based on the methods used to do the linking. For more
information, see Explanation of Percentages in the Match Column.
Relinking DPX/Cineon Frames to an EDL Using a Cinema Tools Database
If issues arise when conforming an EDL to DPX or Cineon media in Color, you can create
a Cinema Tools database with which to troubleshoot the problem.
If you don’t already have a Cinema Tools database tracking your film media, you can easily
create one. To create a Cinema Tools database from one or more directories of DPX or
Cineon image sequences, simply drag all of the enclosing directories onto the Cinema Tools
application icon, and a database is generated automatically. If necessary, you can use the
Cinema Tools interface to check the reel numbers and timecode values of each shot,
correcting any problems you find.
Afterward, when you’re conforming an EDL to DPX or Cineon media in Color, you can
choose the Cinema Tools database as your source directory in the EDL Import Settings
window. (See Importing EDLs for more information.) This way, your updated reel numbers
and timecode values will be used to link your Color project to the correct source media.
For more information on creating Cinema Tools databases from DPX or Cineon media,
see the Cinema Tools documentation.
Note: Changing information in a Cinema Tools database does nothing to alter the source
media files on disk.
Parsing EDLs for Digital Intermediate Conforms
This section explains how Color makes the correspondence between the timecode values
in an EDL and the frame numbers used in the timecode header or filename of individual
image sequence frames.
Here's a sample line from an EDL:
001 004 V C 04:34:53:04 04:35:03:04 00:59:30:00 00:59:40:00
In every EDL, the information is divided up into eight columns:
• The first column contains the edit number. This is the first edit in the EDL, so it is labeled
001.
• The second column contains the reel number, 004. This is what the directory that
contains all of the scanned DPX or Cineon image files from camera roll 004 should be
named.
• The next two columns contain video/audio track and edit information that, while used
by Color to assemble the program, isn't germane to conforming the media.
The last four columns contain timecode—they're pairs of In and Out points.
• The first pair of timecode values contains the In and Out points of the original source media
(usually the telecined tape in ordinary online editing). In a digital intermediate workflow,
this is used for naming and identifying the scanned frames that are output from the
datacine.
• The second pair of In and Out points identifies that shot's position in the edited program.
These are used to place the media in its proper location on the Timeline.
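The eight columns above can be split apart mechanically. A minimal sketch; the dictionary key names are my own labels for the columns, not terminology from Color:

```python
def parse_edl_event(line):
    """Split a CMX-style EDL event line into its eight columns."""
    fields = line.split()
    if len(fields) != 8:
        raise ValueError("expected 8 columns in EDL event: " + line)
    return {
        "edit_number": fields[0],  # e.g., 001
        "reel": fields[1],         # e.g., 004; also the DPX directory name
        "track": fields[2],        # V for video
        "edit_type": fields[3],    # C for cut
        "source_in": fields[4],    # In point of the original source media
        "source_out": fields[5],   # Out point of the original source media
        "record_in": fields[6],    # In point in the edited program
        "record_out": fields[7],   # Out point in the edited program
    }

event = parse_edl_event(
    "001  004  V  C  04:34:53:04 04:35:03:04 00:59:30:00 00:59:40:00"
)
```

Applied to the sample line, this yields reel 004 with a source In point of 04:34:53:04 and a record Out point of 00:59:40:00.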
Required Image Sequence Filenaming
Here's a sample filename of the first image sequence file that corresponds to the EDL
event shown in Parsing EDLs for Digital Intermediate Conforms:
fileName_0494794.dpx
The first portion of the filename for each scanned frame (the alpha characters and
underscore) is an ignored but necessary part of the filename. The file's frame number
should equal the frame-count conversion of the (non-drop frame) timecode value
appearing in the EDL.
For example, a frame with timecode 05:51:18:28 would have a frame number of 632368.
Numeric extensions must always be padded to seven digits; in this case, you would add
one preceding 0, like this:
fileName_0632368.dpx
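The conversion above can be written out explicitly. A minimal sketch, assuming a 30 fps non-drop frame count (which matches both worked examples in this section); the function names are illustrative:

```python
def timecode_to_frame_number(tc, fps=30):
    """Convert non-drop frame hh:mm:ss:ff timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * fps + ff

def dpx_frame_name(prefix, tc, fps=30):
    """Build a scanned-frame filename with the count padded to seven digits."""
    return "%s_%07d.dpx" % (prefix, timecode_to_frame_number(tc, fps))
```

For example, `dpx_frame_name("fileName", "05:51:18:28")` returns `"fileName_0632368.dpx"`, and the EDL In point 04:34:53:04 converts to frame 494794.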
The following filename formats are also acceptable:
fileName 0632368.dpx
fileName0632368.dpx
fileName-0632368.dpx
fileName.0632368.dpx
Important: For Color to be able to link to a media file, filenames need at minimum an
alpha-only character name (consisting of at least one upper- or lowercase character),
frame number, and a .dpx or .cin file extension.
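The naming rules above can be captured in a single pattern. The regular expression below is my own distillation of those rules (at least one letter, an optional separator, a seven-digit frame number, and a .dpx or .cin extension), not something published by Color:

```python
import re

# One or more letters, an optional space/underscore/hyphen/period separator,
# a frame number padded to seven digits, and a .dpx or .cin extension.
FRAME_NAME = re.compile(r"^[A-Za-z]+[ _\-.]?\d{7}\.(dpx|cin)$")

def is_valid_frame_name(name):
    """Check a scanned-frame filename against the naming rules above."""
    return FRAME_NAME.match(name) is not None
```

All five acceptable forms listed above (`fileName_0632368.dpx`, `fileName 0632368.dpx`, and so on) pass this check, while a name with no alpha characters or an unpadded frame number does not.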
You can work in Color either by using a mouse with the onscreen interface, or, more
directly, by using a dedicated control surface that’s been designed for professional color
correction work.
This chapter covers the general interface conventions used by Color. It describes the use
of controls that are shared by multiple areas of the interface, as well as some of the
specialized controls that are unique to color correction applications.
This chapter covers the following:
• Setting Up a Control Surface (p. 78)
• Using Onscreen Controls (p. 78)
• Using Organizational Browsers and Bins (p. 82)
• Using Color with One or Two Monitors (p. 88)
Using the Color Interface
Setting Up a Control Surface
Color was designed from the ground up to support control surfaces specifically designed
for color correction from manufacturers such as Tangent and JL Cooper Designs. These
control surfaces typically include three trackballs that correspond to the three overlapping
tonal zones of the Primary and Secondary color balance controls (shadows, midtones,
and highlights), three rotary controls for the three contrast controls (black level, gamma,
and white point), and a number of other rotary controls and buttons that support different
functions depending on which room you’ve selected.
[Diagram of a typical color correction control surface: three trackballs, rotary controls, function buttons, a numeric keypad, transport controls, and a jog/shuttle wheel]
You can either choose a control surface to use when Color starts up, or click Show Control
Surface Dialog in the User Prefs tab of the Setup room to choose an available control
surface at any time. For more information on setting up a control surface, see Setting Up
a Control Surface. For more information on configuring a control surface from within
Color, see Control Surface Settings.
Using Onscreen Controls
If you don’t have a control surface, you can still operate every feature in Color using the
onscreen controls. In addition to the standard buttons, checkboxes, and pop-up menus
common to most applications, Color uses some custom controls that are described in
this section. See the referenced sections for more information on:
• Using the Mouse
• Tabs
• Using Text Fields and Virtual Sliders
• Using Timecode Fields
• Using Color Balance Controls
Using the Mouse
Color supports the use of a three-button mouse, which provides quick access to shortcut
menus and various navigational shortcuts. Color also supports the middle scroll wheel
or scroll ball of a three-button mouse, either for scrolling or as a button.
Mouse button Documentation reference
Left mouse button Click
Middle mouse button Middle mouse button or middle-click
Right mouse button Right-click (identical to Control-click with a single-button mouse)
Note: Many controls can be accelerated up to ten times their normal speed by pressing
the Option key while you drag.
Tabs
Tabs are used to navigate among the eight different Color “rooms.” Each room is a distinct
portion of the interface that contains all the controls necessary to perform a specific task.
Changing rooms changes the available interface, the keyboard shortcuts, and the mapping
of the control surface controls.
Some rooms have additional features that are revealed via tabs within that room.
Using Text Fields and Virtual Sliders
There are four types of data that can populate edit fields in Color:
• Timecode
• Text, including filenames, directory paths, and so forth
• Whole numbers; fields that display whole numbers cannot accept decimal or
fractional values
• Percentages and fractional values, such as 0.25 or 1.873
There are four ways you can modify text fields.
To enter text into a field using the keyboard
1 Move the pointer into the text field you want to edit, and do one of the following:
• Click once within any field to place the insertion point at the position you clicked.
• Double-click within any field to select the word at the position of the pointer.
• Triple-click within any field to select the entire contents of that field.
The text in that field becomes highlighted.
2 Type something new.
3 Press Return to confirm the change.
To modify the value of a numeric or percentage-based text field with a virtual slider
1 Move the pointer to the field you want to adjust.
2 Middle-click and drag to the left to decrease its value, or to the right to increase its value.
3 Release the mouse button when you’re finished.
To modify the value of a numeric or percentage-based text field with a scroll wheel
1 Move the pointer to the field you want to adjust.
2 Without clicking in the field, roll the scroll wheel or ball up to increase that field’s value,
or down to decrease that field’s value.
To adjust a field using a shortcut menu
• Control-click or right-click any field, and choose one of the following options from the
shortcut menu:
• Reset: Resets the field to its default setting.
• Min: Chooses the minimum value available to that field.
• Max: Chooses the maximum value available to that field.
• Set as Default: Changes the default value of a parameter to whatever value is currently
specified. After changing the default value, you can change the value of that parameter
back to the value you specified by clicking Reset.
Using Timecode Fields
Timecode fields display timing information, such as media In and Out points, and the
position of the playhead. Time is represented in Color in one of two ways:
• Within fields, most time values are represented with standard SMPTE timecode. SMPTE
timecode is represented by four colon-delimited pairs of digits: hh:mm:ss:ff, where hh
is hours, mm is minutes, ss is seconds, and ff is frames.
• Time values in the Timeline ruler may be displayed as non-drop frame timecode, drop
frame timecode, or frames.
Note: Drop frame timecode appears with a semicolon between the seconds and frames
positions.
Here are some pointers for entering values into the hours, minutes, seconds, and frames
positions of timecode fields:
• Time values are entered from left to right (similar to entering a duration into a
microwave); however, the last value you type is assumed to be the last digit of the
frames position.
• Press Return whenever you’ve finished typing a timecode value to confirm the new
value you entered.
• If you enter a partial number, the rightmost pair of numbers is interpreted as frames
and each successive pair of numbers to the left populates the remaining seconds,
minutes, and hours positions. Omitted numbers default to 00.
For example, if you enter 1419, Color interprets it as 00:00:14:19.
• When you enter timecode in a field, you don’t need to enter all of the separator
characters (such as colons); they’re automatically added between each pair of digits.
• You can type a period to represent a pair of zeros when entering longer durations.
For example, type “3.” (3 and a period) to enter timecode 00:00:03:00. The period is
automatically interpreted by Color as 00.
• To enter 00:03:00:00, type “3..” (3 and two periods).
These periods insert pairs of zeros into both the seconds and frames position.
• Type “3...” to enter 03:00:00:00.
• Use the Plus Sign key (+) to enter a series of single-digit values for each time position.
For example, type “1+5+8” to enter timecode 00:01:05:08.
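The entry shortcuts above amount to a small parsing rule: digits fill positions from the right, each period stands in for a pair of zeros, and "+" supplies one value per position. The sketch below illustrates that behavior; it is an approximation for clarity, not Color's actual input parser:

```python
def expand_timecode_entry(entry):
    """Expand shorthand timecode entry into a full hh:mm:ss:ff value."""
    if "+" in entry:
        # "1+5+8" -> the values fill the rightmost positions: 00:01:05:08.
        parts = [int(p) for p in entry.split("+")]
        values = [0] * (4 - len(parts)) + parts
    else:
        # Periods become "00" pairs ("3.." -> "30000"), then the digit
        # string is right-aligned into hh mm ss ff pairs.
        digits = entry.replace(".", "00").zfill(8)[-8:]
        values = [int(digits[i:i + 2]) for i in range(0, 8, 2)]
    return "%02d:%02d:%02d:%02d" % tuple(values)
```

For example, `expand_timecode_entry("1419")` returns `"00:00:14:19"` and `expand_timecode_entry("3..")` returns `"00:03:00:00"`.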
Using Color Balance Controls
Color controls are used in several rooms in Color to let you choose and modify colors
using the HSL model.
• Dragging within the main color wheel lets you simultaneously adjust the hue and
saturation of the selected color.
A crosshair within the color wheel shows the current color value that’s being selected.
The remaining controls depend on the type of color control being displayed.
• Dragging up and down within the multicolored Hue slider lets you adjust the hue.
• Dragging up within the single-colored Saturation slider increases the saturation of the
current hue; dragging down decreases its saturation.
• Dragging up within the single-colored Brightness slider increases the brightness of the
current color; dragging down decreases its brightness.
The angle at which colors appear on the color wheel of color controls can be customized
to match the interface of other color correction systems you may be used to. In addition,
the speed with which control surface joyballs (trackballs) adjust the corresponding Color
color controls can be modified. For more information, see Control Surface Settings.
Using Organizational Browsers and Bins
Color offers several browsers and bins for organizing shots, media, and grades that share
some common controls. All these browsers and bins are used to manage files on your
hard disk, rather than data that’s stored within the Color project file itself. As a result, their
controls are used to navigate and organize the directory structure of your hard disk, much
as you would in the Finder. See the following sections for more information on:
• The File Browser
• The Shots Browser
• The Grades Bin
• Corrections Bins
• Browser, Still Store, Grades, and Corrections Bins Controls
• How Are Grades and Corrections Saved and Organized?
The File Browser
The browser that dominates the left half of the Setup room lets you navigate the directory
structure of your computer’s disk drives (and by extension any RAID, DAS, and SAN
volumes that are currently mounted) in order to find and import compatible QuickTime
and still image media files.
It’s important to remember that the file browser is not the same as a project bin. The files
displayed within the file browser are not associated with your Color project in any way
unless you drag them into the Timeline manually, or relink the shots of an imported
project to their associated media files on disk using the Relink Media or Reconnect Media
command.
Note: The file browser displays only directories and media files that are compatible with
Color.
When you select a media file in the file browser, a panel appears to the right displaying
the first frame of that file along with information underneath.
The information given includes:
• Shot Name: The filename
• Duration: Its total duration
• Codec: The codec used to encode that file
• Resolution: The frame size of the file, width by height
• Frame Rate: The frame rate of the file
• Timecode: The timecode value of the first frame in that file
• Import: This button lets you edit the currently selected shot into the Timeline at the
current position of the playhead.
Collapsing the File Browser
If you like, the file browser can be collapsed so that the tabbed area on the right can
occupy the entire Color window.
To collapse the file browser
• Move the pointer to the file browser divider at the right side of the file browser, and when
it’s highlighted in blue, click once to collapse it.
To expand the file browser
• Move the pointer to the file browser divider at the left side of the window, and when it’s
highlighted in blue, click once to expand it.
For more information on the Setup room, see Configuring the Setup Room.
The Shots Browser
The other browser in the Setup room is the Shots browser. This browser lets you see all
the shots that are in the current project in either icon or list view.
In icon view, you can create groups of shots to which you can apply a single correction
or grade at once. For more information, see Managing Grades in the Shots Browser.
In list view, you can sort all of the shots using different info fields. For more information
on using the Shots browser, see Using the Shots Browser.
The Grades Bin
The Grades bin, in the Setup room, lets you save and organize grades combining primary,
secondary, and Color FX corrections into a single unit.
You can use this bin to apply saved grades to other shots in the Timeline. The contents
of the Grades bin are available to all Color projects opened while logged into that user
account. For more information on saving and applying grades, see Saving Grades into
the Grades Bin.
Corrections Bins
The Primary In and Out, Secondaries, and Color FX rooms all allow you to save the
corrections made inside those rooms as individual presets that you can apply to later
shots. The contents of corrections bins are available to all Color projects opened while
logged into that user account.
• Primary In and Out: Let you save and organize primary corrections. The Primary In and
Primary Out rooms both share the same group of saved corrections.
• Secondaries: Lets you save and organize secondary corrections.
• Color FX: Lets you save and organize Color FX corrections.
Corrections Versus Grades
There is a distinct difference between corrections and grades in Color. Corrections refer
to adjustments made within a single room. You have the option to save individual
corrections inside the Primary In and Out, Secondaries, and Color FX rooms and apply
them to shots individually.
A grade can include multiple corrections across several rooms; you can save one or more
primary, secondary, and Color FX corrections together. By saving a group of corrections
as a grade, you can apply them all together as a single preset.
Browser, Still Store, Grades, and Corrections Bins Controls
All browsers and bins share the following controls:
Display Controls
All browsers and bins have display controls that let you choose how you want to view
and organize their contents.
• List View button: Displays the contents of the current directory as a list of filenames.
• Icon View button: Displays the contents of the current directory as icons.
• Icon Size slider: Appears only in icon view. Scales the size of icons.
Directory Navigation Controls
The file browser and Grades and corrections bins also have directory navigation controls
that you can use to organize and browse the grades and corrections that are saved on
your hard disk.
• Up Directory: Moves to and displays the contents of the parent directory.
• Home Directory: Navigates to the appropriate home directory for that browser or bin.
This is not your Mac OS X user home directory. The home directory is different for each
bin:
• File browser: The Home button takes you to the currently specified Color media
directory.
• Primary In, Secondaries, Color FX, and Primary Out: Home takes you to the appropriate
subdirectory within the /Users/username/Library/Application Support/Color directory.
Each room has its own corresponding subdirectory, within which are stored all the
corrections you’ve saved for future use.
• Still Store: Home takes you to the StillStore directory inside the current project
directory structure.
File Controls
The file browser and Grades and corrections bins also have directory creation and
navigation controls at the bottom.
• File field: Displays the file path of the currently viewed directory.
• Directory pop-up menu: This pop-up menu gives you a fast way to traverse up and down
the current directory hierarchy or to go to the default Color directory for that room.
• New Folder button: Lets you create a new directory within the currently specified path.
You can create as many directories as you like to organize the grades and corrections
for that room.
• Save button: This button saves the grade or correction settings of the shot at the current
position of the playhead in the directory specified in the above text fields.
• Load button: Applies the selected grade or correction to the shot that’s at the current
position of the playhead (if no other shots are selected) or to multiple selected shots
(ignoring the shot at the playhead if it’s not selected). As with any Color bin, items
displayed can be dragged and dropped from the bin into the Timeline.
How Are Grades and Corrections Saved and Organized?
Grades and corrections that you save using the Grades and Corrections bins in Color are
saved within the Color preferences directory in your /Users/username/Library/Application
Support/Color directory.
Saved correction category Location on disk
Grades /Users/username/Library/Application Support/Color/Grades/
Primary corrections /Users/username/Library/Application Support/Color/Primary/
Secondary corrections /Users/username/Library/Application Support/Color/Secondary/
Color FX corrections /Users/username/Library/Application Support/Color/Effects/
Saved grades and corrections in these bins are available to every project you open.
Individual corrections in each of the above directories are saved as a pair of files: an .lsi
file that contains a thumbnail for visually identifying that grade, and the specific file for
that type of correction which actually defines its settings. Unless you customized the
name, both these files have the same name, followed by a dot, followed by the date (day
month year hour.minute.secondTimeZone), followed by the file extension that identifies
the type of saved correction it is.
• Grade_Name.date.lsi: The thumbnail image used to represent that grade in icon view
• Grade_Name.date.pcc: Primary correction file
• Grade_Name.date.scc: Secondary correction file
• Grade_Name.date.cfx: Color FX correction file
Saved grades are, in fact, file bundles that contain all the correction files that make up
that grade. For example, a grade that combines primary, secondary, and Color FX
corrections would be a directory using the name given to the grade,
“Grade_Name.date.grd,” containing the following files:
• Grade_Name.date.lsi
• Grade_Name.date.pcc
• Grade_Name.date.scc
• Grade_Name.date.cfx
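For a grade that includes all three correction types, the expected bundle contents follow mechanically from the naming scheme above. A sketch; the date is simplified here to a single opaque stamp rather than the full day/month/year/time format:

```python
def grade_bundle_contents(grade_name, date_stamp):
    """List the files expected inside a saved .grd bundle directory.

    A .grd bundle is a directory named grade_name.date_stamp.grd holding
    the .lsi thumbnail plus one file per correction type saved with the
    grade (all three are listed here, assuming a full grade).
    """
    base = "%s.%s" % (grade_name, date_stamp)
    bundle = base + ".grd"
    return ["%s/%s%s" % (bundle, base, ext)
            for ext in (".lsi", ".pcc", ".scc", ".cfx")]
```

For example, `grade_bundle_contents("Grade_Name", "date")` lists `Grade_Name.date.grd/Grade_Name.date.lsi` through `Grade_Name.date.grd/Grade_Name.date.cfx`.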
Reorganizing Saved Corrections and Grades in the Finder
Each of the corrections bins in Color simply mirrors the contents of the corresponding
subdirectory in the /Users/username/Library/Application Support/Color directory. You
can use the Finder to reorganize your saved corrections and grades by creating new
subdirectories and moving previously saved grades and corrections into them.
When you move saved corrections from one directory to another, it’s important to copy
both the .lsi thumbnail image for that grade and the .pcc, .scc, or .cfx file that contains
the actual grade information.
If you reorganize saved grades and corrections in the Finder while Color is open, you
need to manually refresh the contents of the Grades and corrections bins you changed
so that they correctly display the current contents.
To update the contents of the currently displayed corrections bin
• Click the Home button.
Moving Saved Corrections and Grades to Other Computers
If you have saved corrections and grades that you want to move to Color installations on
other computers, you can simply copy the folders described in How Are Grades and
Corrections Saved and Organized? to a portable storage device and then copy their
contents into the corresponding folders on the new system. The next time you open
Color, the saved corrections and grades will appear as they did before.
Using Color with One or Two Monitors
Color is compatible with both one- and two-monitor computer configurations, and
requires a minimum resolution of 1680 x 1050 in either mode. Most users will benefit
from using Color in dual display mode with two monitors, as this provides the most screen
real estate and also allows for the most flexible use of the preview and video scopes
displayed in the Scopes window of the second monitor.
However, Color can also be used in single display mode, which lets you operate Color in
situations where a second display is not available. Single display mode is only
recommended on 30-inch Cinema Displays.
Warning: It is not recommended to run Color on a system with more than one graphics
card. For two-monitor support, both monitors should be connected to the same graphics
card.
To switch between single and dual display modes
Do one of the following:
• Choose Window > Single Display Mode or Dual Display Mode.
• Press Shift-Command-0 to switch between modes.
You must quit Color and reopen it for this change to take effect.
Color provides powerful tools for managing projects and media as you work.
This chapter describes the commands and methods used to create and save projects,
move projects from Final Cut Pro to Color and back again, and link and otherwise manage
your projects and media once they’re within Color. It also covers compatible media
formats, EDL import and export, and the conversion of DPX and Cineon image sequences
to QuickTime media.
This chapter covers the following:
• Creating and Opening Projects (p. 92)
• Saving Projects (p. 92)
• Saving and Opening Archives (p. 95)
• Moving Projects from Final Cut Pro to Color (p. 95)
• Importing EDLs (p. 101)
• EDL Import Settings (p. 102)
• Relinking Media (p. 104)
• Importing Media Directly into the Timeline (p. 105)
• Compatible Media Formats (p. 106)
• Moving Projects from Color to Final Cut Pro (p. 112)
• Exporting EDLs (p. 114)
• Reconforming Projects (p. 115)
• Converting Cineon and DPX Image Sequences to QuickTime (p. 115)
• Importing Color Corrections (p. 117)
• Exporting JPEG Images (p. 118)
Importing and Managing Projects and Media
Creating and Opening Projects
When you open Color, you’re presented with a dialog from which you can open an existing
project or create a new one. Most users will send projects to Color straight from
Final Cut Pro, but there are specific workflows that require you to create a new project
in Color.
To open an existing project
Do one of the following:
• If Color is already open, choose File > Open (or press Command-O), choose a project from
the Projects dialog, then click Open.
• Double-click a Color project file in the Finder.
• Open Color, choose a Color project file using the Projects dialog, then click Open.
Color can have only one project open at a time, so opening a second project closes the
one that was originally open.
To create a new project when Color is first opened
1 Open Color.
The Projects dialog opens to the Default Project Directory you chose when you first
opened Color.
2 Click New Project.
The New Project dialog appears.
3 Type a name for the project in the Name of New Project field, then click Save.
A new project is created and opened.
To create a new project while Color is open
1 If necessary, save the current project.
Color can have only one project open at a time, so creating a new project will close the
currently open project.
2 Choose File > New (or press Command-N).
3 Click New Project.
The New Project dialog appears.
4 Type a name for the project in the Name of New Project field, then click Save.
A new project is created and opened.
Saving Projects
Saving a project works the same way in Color as it does in any other application you’ve
used. As with any application, you should save early and often as you work.
To save a project
µ Choose File > Save (or press Command-S).
To revert the project to the last saved state
• Choose File > Revert (or press Command-R).
Color also has an automatic saving mechanism which, when turned on, saves the current
project at an interval set by the Auto Save Time (Minutes) parameter in the User Prefs
tab of the Setup room. By default, automatic saving is turned on, with the interval set to
5 minutes. For more information, see Auto Save Settings.
Note: Whenever you manually save a project, an archive is also automatically saved with
the date and time as its name. When a project is automatically saved, an archive is not
created. This prevents your archive list from being inundated with entries. For more
information, see Saving and Opening Archives.
What Is a Color Project?
The only shots that are in your project are those in the Timeline (which are also mirrored
in the Shots browser). Color projects only contain a single sequence of shots. Furthermore,
Color projects have no organizational notion of shots that aren’t actually in the Timeline,
and so they contain no unused media.
The Contents of Color Projects
Color projects are actually bundles. Inside each Color project bundle is a hierarchical
series of directories, each of which contains specific components belonging to that
project, which are either image or XML files. It’s possible to open a Color bundle using
the Show Package Contents command in the Finder. The directory structure and contents
of these bundles are described here.
• Archives directory: Contains all the saved archives of that project. Each archive is
packaged with tar and compressed with gzip (a “tarball”) and is identified by the
.tgz extension.
• .lsi file: This is an image file that contains the frame at the position of the playhead
when you last saved.
• .pdl file: This is the XML-based project file itself, which contains all the information
that organizes the shots, timing, and grades used in that project.
• Shots directory: Each shot in your project’s Timeline has a corresponding subdirectory
here. Each subdirectory contains some or all of the following:
• Grade1 (through 4) subdirectories: These directories contain all the correction files
associated with that grade.
• ShotName.lsi file: This is that shot’s thumbnail as displayed in the Timeline.
• ShotName.si file: This file contains that shot’s name, media path, and timing
information.
• Grade_Name.date.pcc: Primary correction description
• Grade_Name.date.scc: Secondary correction description
• Grade_Name.date.cfx: Color FX correction description
• PanAndScan subdirectory: This directory contains a .kfd file that stores keyframe
data and a .pns file that stores pan and scan data.
• shot_notes.txt file: If a note is present for that shot, it’s saved here.
• StillStore directory: This directory contains all the Still Store images that you’ve saved
for reference within that project. Each reference still has two corresponding files, an
.lsi file which is that image’s thumbnail icon and a .sri file which is the full-resolution
image (saved using the DPX image format).
Important: It is not recommended that you modify the contents of Color project files
unless you know exactly what you’re doing. Making changes manually could cause
unexpected problems.
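Because a bundle is just a directory tree, its contents can be inspected from a script without risk, so long as nothing is modified. The following is a read-only sketch; the layout follows the description above, but the function name is an assumption and the exact set of files will vary by project.

```python
import os

def list_shot_contents(bundle_path):
    """Return {shot_name: [files]} for every shot subdirectory.

    Read-only: walks the Shots directory described above and collects
    the file names (.si, .lsi, .pcc, .scc, .cfx, notes, and so on)
    found inside each shot's subdirectory.
    """
    shots_dir = os.path.join(bundle_path, "Shots")
    contents = {}
    if not os.path.isdir(shots_dir):
        return contents
    for shot in sorted(os.listdir(shots_dir)):
        shot_path = os.path.join(shots_dir, shot)
        if not os.path.isdir(shot_path):
            continue
        files = []
        for _, _, names in os.walk(shot_path):
            files.extend(names)
        contents[shot] = sorted(files)
    return contents
```

Inspecting a bundle this way can help when troubleshooting a project, but any actual changes should be left to Color itself.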
Saving and Opening Archives
An archive is a compressed duplicate of the project that’s stored within the project bundle
itself. For efficiency, the archive file lacks the thumbnail and Still Store image files that
the full version of the project contains. Archives only save the state of the internal project
file, Timeline, shot settings, grades, corrections, keyframes, and Pan & Scan settings, which
are easily compressed and occupy little space.
Whenever you manually save your project, an archive is automatically created that is
named using the date and time at which it was saved. If you want to save an archive of
your project at a particular state with a more easily identifiable name, you can use the
Save Archive As command.
To save an archive of the project with a specific name
1 Choose File > Save Archive As (or press Command-Option-S).
2 Type a name into the Archive Name field, then click Archive.
There is no limit to the number of archives you can save, so the archives list can grow
quite long. Archives are packaged with tar and compressed with gzip (a “tarball”), so they take
up little room. All archive files for a particular project are saved in the Archives subdirectory
inside that project bundle.
Later, if anything should happen to your project file’s settings, or if you want to return
the project to a previously archived state, you can load one of the archive files.
To open an archive
1 Choose File > Load Archive (or press Command-Option-O).
2 Select an archive to open from the Load Archive window, then click Load Archive.
Opening an archive overwrites the current state of the project with that of the archive.
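Because an archive is an ordinary gzip-compressed tar file, it can also be unpacked outside Color for inspection. A minimal sketch, with a hypothetical archive path; in normal use you would restore archives with File > Load Archive rather than extracting them yourself:

```python
import tarfile

def extract_archive(archive_path, destination):
    """Extract a .tgz project archive and return the member names.

    Archives live in the Archives subdirectory of the project bundle;
    this sketch only unpacks one for inspection.
    """
    with tarfile.open(archive_path, "r:gz") as tar:
        names = tar.getnames()
        tar.extractall(destination)
    return names
```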
Moving Projects from Final Cut Pro to Color
One of the easiest ways of importing a project is to send a Final Cut Pro sequence to Color
using one of two XML-based workflows. This section discusses how to prepare your
projects in Final Cut Pro and how to send them using XML. For more information, see:
• Before You Export Your Final Cut Pro Project
• Using the Send To Color Command in Final Cut Pro
• Importing an XML File into Color
• Video Finishing Workflows Using Final Cut Pro
Before You Export Your Final Cut Pro Project
Whether you’re working on your own project, or preparing a client’s project in advance
of a Color grading session, you should take some time to prepare the Final Cut Pro
sequence you’ll be sending in order to ensure the best results and smoothest workflow.
Here are some recommended steps.
Move Clips That Aren’t Being Composited to Track V1 in the Timeline
Editors often use multiple tracks of video to assemble scenes, taking advantage of the
track ordering rules in Final Cut Pro to determine which clips are currently visible. It’s
generally much faster and easier to navigate and work on a project that has all its clips
on a single video track. It’s recommended that you move all video clips that aren’t being
superimposed as part of a compositing operation down to track V1.
Remove Unnecessary Video Filters
You aren’t required to remove video filters from a sequence you’re sending to Color. In
fact, if there are one or more effects filters that you want to keep, then it’s perfectly fine
to leave them in. However, it's not usually a good idea to allow filters that perform color
correction operations (such as Brightness and Contrast, RGB Balance, or Desaturate) to
remain in your sequence. Even though they have no effect as you work in Color, they’ll
be redundant after you’ve made additional corrections, and their sudden reappearance
when the project is sent back to Final Cut Pro may produce unexpected results.
Organize All Color Corrector 3-Way Filters
Color Corrector 3-way filters applied to clips are handled differently; they’re automatically
converted into Primary In room adjustments. However, if more than one filter has been
applied to a clip, then only the last Color Corrector 3-way filter appearing in the Filters
tab is converted; all others are ignored. Furthermore, any Color Corrector 3-way filter with
Limit Effects turned on is also ignored.
Converted Color Corrector 3-way filters are removed from the XML data for that sequence,
so that they do not appear in the sequence when it’s sent back to Final Cut Pro.
Note: Because Final Cut Pro is a Y′CbCr processing application, and Color is an RGB
processing application, Color Corrector 3-way conversions are only approximations and
will not precisely match the original corrections made in Final Cut Pro.
Divide Long Projects into Reels
To better organize rendering and output, and to maximize performance when you work
with high-bandwidth formats (such as uncompressed high definition, RED, or DPX media),
you should consider breaking long-form projects down into separate 15- to 23-minute
sequences (referred to as reels) prior to sending them to Color. While reel length is arbitrary,
film reels and broadcast shows often have standard lengths that fall within this range.
(Twenty-two minutes is standard for a film reel.) If your project has an unusually large
number of edits, you might consider dividing your program into even shorter reels.
Each reel should begin and end at a good cut point, such as the In point of the first shot
of a scene, the Out point of the last shot of a scene, or the end of the last frame of a fade
to black. As you’re creating your reels, make sure you don’t accidentally omit any frames
in between each reel. This makes it easier to reassemble all of the color-corrected reels
back into a single sequence when you’re finished working in Color.
Tip: Breaking a single program into reels is also the best way for multi-room facilities to
manage simultaneous rendering of projects. If you have multiple systems with identical
graphics cards and identical versions of Color in each room, you can open a reel in each
room and render as many reels simultaneously as you have rooms. The graphics cards
must be identical because the type of GPU and the amount of VRAM can affect render
quality. For more information, see The Graphics Card You’re Using Affects the Rendered
Output.
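As a rough planning aid (not a substitute for choosing good cut points), the reel arithmetic above can be sketched as:

```python
import math

def plan_reels(program_minutes, target_minutes=22):
    """Return (reel_count, average_reel_minutes).

    22 minutes is the standard film reel length cited above; pass a
    smaller target for programs with an unusually large number of edits.
    """
    count = max(1, math.ceil(program_minutes / target_minutes))
    return count, program_minutes / count
```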
Export Self-Contained QuickTime Files for Effects Clips You Need to Color Correct
Color is incapable of either displaying or working with the following types of clips:
• Generators
• Motion projects
If you want to grade such clips in Color, you need to export them as self-contained
QuickTime files and reedit them into the Timeline of your Final Cut Pro sequence to
replace the original effects before you send the sequence to Color.
If you don’t need to grade these effects in Color, then you can simply send the project
with these clips as they are, and ignore any gaps that appear in Color. Even though these
effects won’t appear in Color, they’re preserved within the XML of the Color project and
they will reappear when you send that project back to Final Cut Pro.
Tip: Prior to exporting a project from Final Cut Pro, you can also export a single,
self-contained QuickTime movie of the entire program and then reimport it into your
project and superimpose it over all the other clips in your edited sequence. Then, when
you export the project to Color, you can turn this “reference” version of the program on
and off using track visibility whenever you want to have a look at the offline effects or
color corrections that were created during the offline edit.
Use Uncompressed or Lightly Compressed Still Image Formats
If your Final Cut Pro project uses still image files, then Color supports every still format
that Final Cut Pro supports. (Color supports far fewer image file formats for direct import;
see Compatible Image Sequence Formats for more information.) For the best results, you
should consider restricting stills in your project to uncompressed image formats such as
.tiff, or if using .jpg stills, make sure they’re saved at high quality to avoid compression
artifacts. If you’ve been using low-quality placeholders for still images in your program,
now is the time to edit in the full-resolution versions.
It’s also important to make sure that the stills you use in your Final Cut Pro project aren’t
any larger than 4096 x 2304, which is the maximum image size that Color supports. If
you’re using higher-resolution stills in your project, you may want to export them as
self-contained QuickTime files with which to replace the original stills.
To optimize rendering time, Color only renders a single frame for each still image file.
When your project is sent back to Final Cut Pro, that clip reappears as a still image clip in
the Final Cut Pro Timeline.
Important: If any stills in your project are animated using Scale, Rotate, Center, or Aspect
Ratio parameter keyframes from Final Cut Pro, these keyframes do not appear and are
not editable in Color, but they are preserved and reappear when you send your project
back to Final Cut Pro. For more information, see Exchanging Geometry Settings with
Final Cut Pro.
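The 4096 x 2304 limit above is simple to check in a script before you edit full-resolution stills into your sequence. The constant names and function below are illustrative assumptions:

```python
# Color's maximum supported still image size, as stated above.
MAX_WIDTH, MAX_HEIGHT = 4096, 2304

def still_fits(width, height):
    """Return True if a still is within Color's maximum image size."""
    return width <= MAX_WIDTH and height <= MAX_HEIGHT
```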
Make Sure All Freeze Frame Effects Are on Track V1
All freeze frame effects need to be on track V1 for Color to correctly process them. After
rendering, freeze frames continue to appear in the sequence that is sent back to
Final Cut Pro as freeze frame clips.
Important: Freeze frame clips on any other video track will not be rendered, and will
reappear after the sequence is sent to Final Cut Pro as the original, ungraded clip.
Make Sure All Clips Have the Same Frame Rate
It’s not recommended to send a sequence to Color that mixes clips with different frame
rates, particularly when mixing 23.98 fps and 29.97 fps media. The resulting graded media
rendered by Color may have incorrect timecode and in or out points that are off by a
frame. If you have one or more clips in your sequence with a frame rate that doesn’t
match the timebase of the sequence, you can use Compressor to do a standards conversion
of the mismatched clips. For more information, see Rendering Mixed Format Sequences.
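The off-by-one-frame problems described above come down to timecode arithmetic. A minimal non-drop-frame sketch, where fps is the integer timebase (24, 25, or 30); drop-frame counting is deliberately omitted:

```python
def timecode_to_frames(tc, fps):
    """Convert "HH:MM:SS:FF" to an absolute frame count at a timebase."""
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def frames_to_timecode(total, fps):
    """Convert an absolute frame count back to "HH:MM:SS:FF"."""
    frames = total % fps
    seconds = total // fps
    return "%02d:%02d:%02d:%02d" % (
        seconds // 3600, (seconds // 60) % 60, seconds % 60, frames)
```

The same timecode value maps to different frame counts at different timebases, which is why mixing 23.98 fps and 29.97 fps media can shift In and Out points by a frame.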
Media Manage Your Project, If Necessary
If you’re delivering a Final Cut Pro project to a Color suite at another facility, you may
want to eliminate unused media to save disk space (especially if you’ll be recapturing
uncompressed media), and consolidate all the source media used by your project into a
single directory for easy transport and relinking. This is also a good step to take prior to
recapturing your media, to avoid recapturing unnecessary media.
Recapture Offline Media at Online Quality, If Necessary
If the project was edited at offline quality, you need to recapture all the source media at
the highest available quality before you send it to Color. Be sure you choose a high-quality
codec, either using the native codec that the source footage was recorded with or using
one of the supported uncompressed codecs. For more information on which codecs are
supported by Color, see Compatible Media Formats.
Important: If you’re recapturing or transcoding video clips that were originally recorded
with a Y′CbCr format, be sure that the codec you use to recapture, export, or transcode
your media doesn’t clamp super-white and overly high chroma components from the
original, uncorrected media. It’s usually better to correct out-of-gamut values within Color
than it is to clamp these levels in advance, potentially losing valuable image data.
Check All Transitions and Effects If You Plan to Render 2K or 4K Image Sequences
for Film Out
When rendering out 2K or 4K DPX or Cineon image sequences, all video transitions are
rendered as linear dissolves when you use the Gather Rendered Media command to
consolidate the final rendered frames of your project in preparation for film output. This
feature is only intended to support film out workflows. Any other type of transition (such
as a wipe or iris) will be rendered as a dissolve instead, so it’s a good idea to go through
your project and change the type and timing of your transitions as necessary before
sending your project to Color.
Furthermore, effects that would ordinarily reappear in a sequence that is sent back to
Final Cut Pro, such as speed effects, superimpositions, composites, video filters, motion
settings that don’t translate into Pan & Scan parameters, generators, and Motion projects,
will not be rendered if you render 2K or 4K DPX or Cineon image sequences for film output.
In this case, it’s best to export all such clips as self-contained QuickTime files with which
to replace the original effects, before you send the sequence to Color.
Using the Send To Color Command in Final Cut Pro
Once you’ve prepared your sequence, you can use the Send To Color command in
Final Cut Pro to automatically move your sequence into Color (as long as Final Cut Pro
and Color are installed on the same computer).
You can only send whole sequences to Color. It’s not possible to send individual clips or
groups of clips from a sequence unless you first nest them inside a sequence.
To send a sequence from Final Cut Pro to Color
1 Open the project in Final Cut Pro.
2 Select a sequence in the Browser.
3 Do one of the following:
• Choose File > Send To > Color.
• Control-click the selection, then choose Send To > Color from the shortcut menu.
4 Choose a name for the project to be created in Color, then click OK.
A new Color project is automatically created in the default project directory specified in
User Preferences. The shots that appear in the Timeline should match the original
Final Cut Pro sequence that was sent.
Don’t Reedit Projects in Color
By default, all the video tracks of projects sent from Final Cut Pro are locked. When you’re
grading a project, it’s important to avoid unlocking them or making any editorial changes
to the shots in the Color Timeline if you’re planning to send the project back to
Final Cut Pro.
If you need to make an editorial change, reedit the original sequence in Final Cut Pro,
export a new XML file, and use the Reconform command to update the Color Timeline
to match the changes. For more information, see Reconforming Projects. For more
information about Final Cut Pro XML files, see the Final Cut Pro 7 User Manual.
Importing an XML File into Color
If you need to deliver a Final Cut Pro sequence and its media to another facility to be
graded using Color, you can also use the Export XML command in Final Cut Pro to export
the sequence. For more information about exporting XML from Final Cut Pro, see the
Final Cut Pro 7 User Manual.
In Color, you then use the Import XML command to turn the XML file into a Color project.
To speed up this process, you can copy the XML file you want to import into the default
project directory specified by Color.
To import an XML file into Color
1 Do one of the following:
• Open Color.
• If Color is already open, choose File > Import > XML.
2 Choose an XML file from the Projects dialog.
3 Click Load.
A new Color project is automatically created in the default project directory specified in
User Preferences. The shots that appear in the Timeline should match the original
Final Cut Pro sequence that was exported.
Don’t Reedit Imported XML Projects in Color
By default, all the video tracks of imported XML projects are locked. When you’re grading
a project, it’s important to avoid unlocking them or making any editorial changes to
the shots in the Color Timeline if you’re planning to send the project back to Final Cut Pro.
If you need to make an editorial change, reedit the original sequence in Final Cut Pro,
export a new XML file (see the Final Cut Pro 7 User Manual for more information), and
use the Reconform command to update the Color Timeline to match the changes. For
more information, see Reconforming Projects.
Importing EDLs
You can import an EDL directly into Color. There are two reasons to use EDLs instead of
XML files:
• To color correct a video master file: You can approximate a tape-to-tape color correction
workflow by importing an EDL and using the Use As Cut List option to link it to a
corresponding master media file (either a QuickTime .mov file or a DPX image sequence).
Note: If you’re going to work this way, it’s best to work with uncompressed media and
to work in reels of 20 minutes or less to avoid potential performance bottlenecks caused
by sequences with an excessive number of edit points.
• To import a 2K digital intermediate project: EDLs are also the only way to import projects
as part of a 2K digital intermediate workflow when you’re relinking the project to DPX
image sequences from film scans. For more information, see Digital Intermediate
Workflows Using DPX/Cineon Media.
Color imports the following EDL formats:
• Generic
• CMX 340
• CMX 3600
• GVG 4 Plus
To speed up the process of importing an EDL, you can copy all EDL files to the default
project directory specified by Color.
To import an EDL
1 Do one of the following:
• Open Color.
• If Color is already open, choose File > Import > EDL.
2 Choose an EDL file from the Projects dialog.
The EDL Import Settings dialog appears, defaulting to the Default Project Directory specified
in the User Prefs tab of the Setup room.
3 Choose the appropriate project properties from the available lists and pop-up menus.
For more information, see EDL Import Settings.
4 When you finish choosing all the necessary settings, click Import.
A new project is created, and the EDL is converted into a sequence of shots in the Timeline.
The position of each shot should match the Timeline of the original project.
Note: If the Source Directory you specified has any potential media conflicts (for example,
two clips with overlapping timecode or a missing reel number), you see a warning dialog
that gives you the option of writing a text file log of all potential conflicts to help you
sort them out.
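For reference, the event lines of a CMX 3600 EDL follow a fixed field order: event number, reel, track, edit type, then source In/Out and record In/Out timecodes. A minimal parser sketch, assuming well-formed event lines and ignoring comments, transitions, and effects:

```python
import re

# One CMX 3600 event line: number, reel, track, edit type, then four
# timecodes (source in/out, record in/out). ";" allows drop-frame marks.
EVENT = re.compile(
    r"^(\d{3,6})\s+(\S+)\s+(\w+)\s+(\w+)\s+"
    r"(\d{2}:\d{2}:\d{2}[:;]\d{2})\s+(\d{2}:\d{2}:\d{2}[:;]\d{2})\s+"
    r"(\d{2}:\d{2}:\d{2}[:;]\d{2})\s+(\d{2}:\d{2}:\d{2}[:;]\d{2})"
)

def parse_edl(text):
    """Return a list of event dicts for every matching line."""
    events = []
    for line in text.splitlines():
        m = EVENT.match(line.strip())
        if m:
            num, reel, track, kind, s_in, s_out, r_in, r_out = m.groups()
            events.append({"event": num, "reel": reel, "track": track,
                           "type": kind, "source_in": s_in,
                           "source_out": s_out, "record_in": r_in,
                           "record_out": r_out})
    return events
```

This is only a sketch of the file format Color reads; Color's own parser also handles reel-number and timecode conflicts, as the warning dialog above describes.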
EDL Import Settings
The settings in this dialog determine the options used when importing an EDL into Color.
• EDL Format: The format of the EDL file you’re importing.
• Project Frame Rate: The frame rate of the Color project you’re about to create. In most
cases, this should match the frame rate of the EDL you’re importing.
• EDL Frame Rate: Choose the frame rate of the EDL you’re importing. If the EDL Frame
Rate is 29.97 fps but you set the Project Frame Rate to 24 fps, Color will automatically
do the necessary conversions to remove 3:2 pull-down from the shots in the project.
Note: This option lets you deal with workflows where the imported EDL was generated
from an offline edit of a project using telecined 29.97 fps video, but the subsequent
scanned 2K image sequences were reacquired at film’s native 24 fps.
• Source Frame Rate: The frame rate of the source media on disk that you’re linking to.
• Use As Cut List: This checkbox lets you specify that this EDL should be used as a cut list
to “notch” a matching video master file.
• Project Resolution: The resolution of the Color project you’re creating. In general, this
should match the resolution of the source media that you’re linking to.
• Height: The height of the selected frame size.
• Width: The width of the selected frame size.
• Source Directory: The directory specified here directs the EDL parser to the exact path
where the DPX or Cineon scans or QuickTime files associated with that project are
located. You can specify the location of the source media by typing the directory path
into this field, or by clicking Browse to use the file browser. There are two methods you
can use to link an EDL to its corresponding source media.
• If you simply choose a directory that contains media, that media will be linked using
each clip’s timecode track and reel number. If you’re linking to DPX or Cineon scans,
the methods used are described in How Does Color Relink DPX/Cineon Frames to
an EDL?
• Choose a Cinema Tools database, if one is available. When you choose a Cinema Tools
database associated with the Final Cut Pro project that created an EDL, Cinema Tools
is directed to relink the EDL with all associated DPX, Cineon, or even QuickTime media
based on information within the database. The advantage of this method is that, in
the event of problems, you can troubleshoot the Cinema Tools database
independently to resolve the discrepancy before trying to import the EDL into Color.
For more information, see Relinking DPX/Cineon Frames to an EDL Using a
Cinema Tools Database.
After you initiate EDL import, if the Source Directory you specified has any potential
media conflicts (for example, two clips with overlapping timecode or a missing reel
number), you see a warning dialog that gives you the option of writing a text file log
of all potential conflicts to help you sort them out.
After import, a Match column appears in the Shots browser of the Setup room. This
column displays the percentage of confidence that each shot in the Timeline has been
correctly linked to its corresponding DPX, Cineon, or QuickTime source media, based
on the methods used to do the linking. For more information on how EDLs are linked
with DPX or Cineon image sequence frames, see How Does Color Relink DPX/Cineon
Frames to an EDL? For more information on the Match column in the Shots browser,
see Explanation of Percentages in the Match Column.
Note: The source directory you choose can be either a local volume or a volume on a
SAN or LAN with sufficient performance to accommodate the data rate of the project’s
media.
• Browse Button: This button opens the file browser, allowing you to set the source
directory for the EDL you want to import. Choosing a directory populates the Source
Directory field.
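The pull-down removal described for the EDL Frame Rate setting above is, at the frame level, a 5-to-4 mapping: telecine spreads 4 film frames (23.98 fps) across 5 video frames (29.97 fps). A sketch of the arithmetic, ignoring cadence offsets within each 5-frame group:

```python
def video_to_film_frame(video_frame):
    """Map a 29.97 fps video frame index back to a 24 fps film frame."""
    return video_frame * 4 // 5

def film_to_video_frame(film_frame):
    """Map a 24 fps film frame index to its 29.97 fps video frame."""
    return film_frame * 5 // 4
```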
Relinking Media
If necessary, you can manually relink media to a Color project. When you use the Relink
command, Color matches each shot in the Timeline with its corresponding media file
using the following criteria:
• Starting timecode
• Filename
If neither of these criteria matches, a warning dialog appears.
If you click Yes and proceed with relinking to a different file, then the original Source In
and Source Out values for that shot will be overwritten with those of the new clip.
To relink every shot in your project
1 Choose File > Reconnect Media.
2 Choose the directory where the project’s media is saved from the Choose Media Path
dialog, then click Choose.
If that directory contains all the media used by the project, then every shot in the Timeline
is automatically relinked. If there are still missing media files, you are warned, and these
shots will remain offline; you need to use the Reconnect Media command again to relink
them.
To relink a single shot in the Timeline
1 Control-click or right-click a shot in the Timeline, then choose Relink Media from the
shortcut menu.
2 Choose a clip to relink to from the Select Media To Relink dialog, then click Load.
If the name and starting timecode of the media file matches that of the shot in the
Timeline, the media link is restored.
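The matching rules above can be sketched as follows; the dictionary keys and function name are assumptions for illustration, not Color's internal logic:

```python
def relink_shots(shots, media_files):
    """Return {shot_name: media_path} for every shot that matches.

    A shot relinks to a candidate media file when either the filename
    or the starting timecode matches, per the criteria above.
    """
    linked = {}
    for shot in shots:
        for media in media_files:
            if (media["name"] == shot["name"]
                    or media["start_tc"] == shot["start_tc"]):
                linked[shot["name"]] = media["path"]
                break
    return linked
```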
Importing Media Directly into the Timeline
You also have the option of importing media files to the Timeline directly, which lets you
use Color to process digital dailies and convert DPX or Cineon image sequences to suitable
QuickTime formats. You can import individual shots, or entire folders of shots.
For more information on doing batch DPX to QuickTime conversions, see Converting
Cineon and DPX Image Sequences to QuickTime.
To import a single shot into the Timeline
1 Do one of the following:
• Choose File > Import > Clip.
• Click the Setup tab.
2 Use the navigation controls at the top left of the file browser to find the directory
containing the media you want to import.
Tip: If the media you need is on another hard drive, click the Up Directory button
repeatedly until you’re at the top level of your computer’s directory structure, then
double-click the Volumes directory to open it. This will provide you with a list of all the
hard drives and partitions that are currently mounted on your system. From here, it should
be easy to find the media you need.
3 Double-click the directory to open it, then click to select an individual media file to import
into the Timeline.
4 Do one of the following:
• Double-click the shot in the file browser to edit the shot into the Timeline at the position
of the playhead.
• Drag the shot directly into the Timeline.
• Click the Import button below that shot’s preview to edit the shot into the Timeline at
the position of the playhead.
5 If you import a shot into an empty Timeline in Color, you’ll be asked if you want to change
the project settings to match those of the shot you’re importing. Click Yes if you want to
do so. (This is recommended.)
6 Once shots have been placed into the Timeline, save your project.
To import a folder of shots into the Timeline
1 Do one of the following:
• Choose File > Import > Clip.
• Click the Setup tab.
2 Use the navigation controls at the top left of the file browser to find the directory
containing the media you want to import.
Tip: If the directory you need is on another hard drive, click the Up Directory button
repeatedly until you’re at the top level of your computer’s directory structure, then
double-click the Volumes directory to open it. This will provide you with a list of all the
hard drives and partitions that are currently mounted on your system. From here, it should
be easy to find the directory you need.
3 Click once to select the directory.
An Import Folder button appears within the file browser.
4 Click the Import Folder button to edit every shot within that folder into the Timeline, one
after the other, starting at the position of the playhead.
Important: When you import a folder of shots, all shots contained in subfolders
within the selected folder are also imported. This makes it convenient to import an entire
nested hierarchy of image sequence media that has been organized into multiple
individual folders.
Compatible Media Formats
Color is compatible with a wide variety of QuickTime files and image sequences. The
following sections provide information about all of these formats:
• Compatible QuickTime Codecs for Import
• Compatible Third-Party QuickTime Codecs
• Compatible Image Sequence Formats
Compatible QuickTime Codecs for Import
The list of codecs that are supported by Color is limited to high-quality codecs suitable
for media exchange and mastering. Codec support falls into four categories, listed in the
chart that follows:
• QuickTime codecs that are supported by Color when importing projects and media.
(These appear in column 1 of the table below.)
• A subset of codecs that can be used for rendering your final output when Original
Format is chosen in the Export Codec pop-up menu of the Project Settings tab of the
Setup room. (These appear in column 2.) Original Format is only available when you’ve
used the Send To Color command in Final Cut Pro or when you’ve imported a
Final Cut Pro file that’s been exported as an XML file.
• By default, only seven codecs are available in the Export Codec pop-up menu for
upconverting your source media to a higher-quality format. (These appear in column
3.) These include the Apple ProRes 422, Apple ProRes 422 (HQ), and Apple ProRes 4444
codecs, and the Apple Uncompressed 8-bit 4:2:2 and Apple Uncompressed 10-bit 4:2:2
codecs. Apple ProRes 422 (LT) and Apple ProRes 422 (Proxy) are included for offline
media conversions in digital intermediate and other workflows.
• If you’ve installed a video interface from AJA, you should see an additional option—AJA
Kona 10-bit RGB.
Important: Many of the codecs in column 1 that Color supports for media import, such
as the XDCAM, MPEG IMX, and HDV families of codecs, cannot be rendered using the
Original Format option. If the media in your project uses a codec that’s not supported
for output, every shot in your project will be rendered using one of the supported codecs
listed in column 3. For more information, see Some Media Formats Require Rendering to
a Different Format.
Supported for import              Supported as original format  Supported as export codec
Animation                         No                            No
Apple Intermediate Codec          No                            No
Apple Pixlet                      Yes                           No
Apple ProRes 422 (Proxy)          Yes                           Yes
Apple ProRes 422 (LT)             Yes                           Yes
Apple ProRes 422                  Yes                           Yes
Apple ProRes 422 (HQ)             Yes                           Yes
Apple ProRes 4444                 Yes                           Yes
AVCHD                             No                            No
AVC-Intra                         No                            No
DVCPRO 50 - NTSC                  Yes                           No
DVCPRO 50 - PAL                   Yes                           No
DV - PAL                          Yes                           No
DV/DVCPRO - NTSC                  Yes                           No
DVCPRO - PAL                      Yes                           No
DVCPRO HD 1080i50                 Yes                           No
DVCPRO HD 1080i60                 Yes                           No
DVCPRO HD 1080p25                 Yes                           No
DVCPRO HD 1080p30                 Yes                           No
DVCPRO HD 720p50                  Yes                           No
DVCPRO HD 720p60                  Yes                           No
DVCPRO HD 720p                    Yes                           No
H.264                             No                            No
HDV 720p24                        No                            No
HDV 720p25                        No                            No
HDV 720p30                        No                            No
HDV 1080p24                       No                            No
HDV 1080p25                       No                            No
HDV 1080p30                       No                            No
HDV 1080i60                       No                            No
HDV 1080i50                       No                            No
Photo - JPEG                      Yes                           No
MPEG IMX 525/60 (30 Mb/s)         No                            No
MPEG IMX 525/60 (40 Mb/s)         No                            No
MPEG IMX 525/60 (50 Mb/s)         No                            No
MPEG IMX 625/50 (30 Mb/s)         No                            No
MPEG IMX 625/50 (40 Mb/s)         No                            No
MPEG IMX 625/50 (50 Mb/s)         No                            No
Uncompressed 8-bit 4:2:2          Yes                           Yes
Uncompressed 10-bit 4:2:2         Yes                           Yes
XDCAM EX                          No                            No
XDCAM HD 1080i50 (35 Mb/s VBR)    No                            No
XDCAM HD 1080i60 (35 Mb/s VBR)    No                            No
XDCAM HD 1080p24 (35 Mb/s VBR)    No                            No
XDCAM HD 1080p25 (35 Mb/s VBR)    No                            No
XDCAM HD 1080p30 (35 Mb/s VBR)    No                            No
XDCAM HD 422                      No                            No
Compatible Third-Party QuickTime Codecs
Color supports the following third-party codecs from AJA for import:
• AJA Kona 10-bit Log RGB
• AJA Kona 10-bit RGB
Note: The AJA Kona codecs are not installed with QuickTime by default and are available
only from AJA.
Color also supports native RED QuickTime files when you install the necessary RED software
for Final Cut Studio. For more information, visit http://www.red.com.
Compatible QuickTime Codecs for Output
The purpose of Color is to create high-quality, color-corrected media that can be
reimported into Final Cut Pro for output to tape, QuickTime conversion, or compression
for use by DVD Studio Pro. For this reason, the list of codecs that are supported for
rendering out of Color is limited to high-quality codecs suitable for media exchange and
mastering.
• Apple ProRes 422: A medium-bandwidth, high-quality compressed codec, suitable for
mastering standard definition video. Encodes video at 10 bits per channel with 4:2:2
chroma subsampling. Supports a variable bit rate (VBR) of 35 to 50 Mbps. Supports any
frame size.
• Apple ProRes 422 (HQ): A higher-bandwidth version of Apple ProRes 422, suitable for
capturing and mastering high definition video. Supports a variable bit rate (VBR) of 145
to 220 Mbps. Supports any frame size.
• Apple ProRes 4444: The highest-bandwidth version of Apple ProRes, suitable for high
definition or digital cinema mastering. Lightly compressed, with a variable bit rate (VBR)
depending on frame size and frame rate. (An example is 330 Mbps at 1920x1080 60i
or 1280x720 60p.) Encodes video at up to 10 bits per channel with full 4:4:4 color
sampling (no chroma subsampling). Supports a lossless compressed alpha channel, although Color does not
render alpha channel data.
• Uncompressed 8-bit 4:2:2: A completely uncompressed, 8-bit-per-channel codec with
4:2:2 chroma subsampling. Supports any frame size. Suitable for mastering any format
of video.
• Uncompressed 10-bit 4:2:2: A completely uncompressed, 10-bit-per-channel codec with
4:2:2 chroma subsampling. Supports any frame size. Suitable for mastering any format
of video.
Color also supports the following two offline-quality codecs for workflows in which you
convert DPX or Cineon image sequences to offline-quality QuickTime clips for editing.
Because they’re so highly compressed, these codecs are not suitable for high-quality
mastering. DPX/Cineon conversions to QuickTime clone both the timecode and reel
number of each shot. For more information, see Converting Cineon and DPX Image
Sequences to QuickTime.
• Apple ProRes 422 (LT): A more highly compressed codec than Apple ProRes 422,
averaging 100 Mbps at 1920 x 1080 60i and 1280 x 720 60p. Designed to allow
low-bandwidth editing at full-raster frame sizes, eliminating awkward frame-size
conversions when conforming offline-to-online media for finishing and mastering.
• Apple ProRes 422 (Proxy): An even more highly compressed codec than Apple ProRes
422 (LT), averaging 36 Mbps at 1920 x 1080 24p, or 18 Mbps at 1280 x 720 24p. Designed
to allow extremely low-bandwidth editing at full-raster frame sizes, eliminating awkward
frame-size conversions when conforming offline-to-online media for finishing and
mastering.
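The average bit rates quoted above translate directly into storage requirements, which is often what decides between these codecs. The following sketch shows the arithmetic; the figures are illustrative averages from the descriptions above, and actual ProRes file sizes vary with content.

```python
# Rough storage estimates for the average bit rates quoted above.
# Actual ProRes file sizes vary with picture content.

def gb_per_hour(mbps: float) -> float:
    """Convert an average bit rate in megabits/second to gigabytes/hour."""
    return mbps / 8 * 3600 / 1000  # Mb/s -> MB/s -> MB/hour -> GB/hour

for name, mbps in [
    ("Apple ProRes 422 (Proxy), 1920 x 1080 24p", 36),
    ("Apple ProRes 422 (LT), 1920 x 1080 60i", 100),
    ("Apple ProRes 422 (HQ), upper bound", 220),
]:
    print(f"{name}: ~{gb_per_hour(mbps):.0f} GB per hour")
```

At 100 Mbps, for example, an hour of ProRes 422 (LT) media comes to roughly 45 GB.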
Color supports the following third-party codec for rendering.
• AJA Kona 10-bit RGB
Note: The AJA Kona codecs are not installed with QuickTime by default and are available
only from AJA.
You can render your project out of Color using one of several high-quality mastering
codecs, regardless of the codec or level of compression that is used by the source media.
You can take advantage of this feature to facilitate a workflow where you import
compressed media into Color and then export the corrected output as uncompressed
media before sending your project to Final Cut Pro. This way, you reap the benefits of
saving hard disk space and avoiding rerendering times up front, while preserving all the
quality of your high–bit depth adjustments when you render your output media prior to
sending your project back to Final Cut Pro.
Which Codec Should You Use for Export?
When choosing the codec you want to use for rendering the final output, there are four
considerations:
• If you’ll be outputting to a high-bandwidth RGB format (such as HDCAM SR), or are
mastering 2K or 4K RGB media using QuickTime, you should export your media using
the Apple ProRes 4444 codec for the highest-quality result. This format is appropriate
for mastering at a quality suitable for film out, but the results will require a fast
computer and accelerated storage for playback.
• If you’ll be outputting to a high-bandwidth Y′CbCr video format (such as Betacam SP,
Digital Betacam, HDCAM, and DVCPRO HD) and require the highest-quality video data
available, regardless of storage or system requirements, you should export your media
using the Apple Uncompressed 10-bit 4:2:2 codec.
• If you’ll be outputting to one of the above video formats and require high quality,
but need to use a compressed format to save hard disk space and increase
performance on your particular computer, then you can export using the Apple ProRes
422 codec (good for standard definition) or the higher-quality Apple ProRes 422 (HQ)
codec (good for high definition), both of which are 10-bit, 4:2:2 codecs.
• If your system is not set up to output such high-bandwidth video, and your program
uses a source format that’s supported by the Original Format option in the QuickTime
Export Codecs pop-up menu in the Project Settings tab of the Setup room, you’ll be
able to render back to the original codec used by your Final Cut Pro sequence. If your
codec is unsupported, the QuickTime Export Codecs pop-up menu will default to
Apple ProRes 422. For more information on which codecs can be rendered using the
Original Format option, see Compatible Media Formats.
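The four considerations above amount to a simple decision tree. The sketch below restates them as code purely for illustration; the function name and target labels are hypothetical, since Color itself exposes this choice only through the Export Codec pop-up menu.

```python
# A hypothetical sketch of the export-codec guidelines above. The target
# labels are invented for this example; they are not Color settings.

def suggest_export_codec(target: str, original_format_supported: bool = False) -> str:
    """Map an output target to a mastering codec, per the guidelines above."""
    if target == "rgb_2k_4k":            # HDCAM SR, 2K/4K RGB, film out
        return "Apple ProRes 4444"
    if target == "ycbcr_max_quality":    # highest-quality Y'CbCr, storage no object
        return "Apple Uncompressed 10-bit 4:2:2"
    if target == "ycbcr_compressed_hd":  # compressed HD to save disk space
        return "Apple ProRes 422 (HQ)"
    if target == "ycbcr_compressed_sd":  # compressed SD to save disk space
        return "Apple ProRes 422"
    # Otherwise render back to the sequence's own codec when supported;
    # unsupported source codecs fall back to Apple ProRes 422.
    return "Original Format" if original_format_supported else "Apple ProRes 422"

print(suggest_export_codec("rgb_2k_4k"))
```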
Compatible Image Sequence Formats
Although Color supports a wide variety of image formats for clips that are edited into
Final Cut Pro projects that are sent to Color, the list of supported image formats that you
can import directly into Color is much shorter. The following RGB-encoded image formats
are compatible with Color, and are primarily intended for importing image sequences
directly into the Color Timeline.
• Cineon (import and export): A high-quality image format developed by Kodak for digitally
scanning, manipulating, and printing images originated on film. Developed as a 10-bit
log format to better contain the greater latitude of film for exposure.
• DPX (import and export): The Digital Picture eXchange format was derived from the
Cineon format and is also used for high-quality uncompressed digital intermediate
workflows. Color supports 8-bit and 10-bit log DPX and Cineon image files.
• TIFF (import only): The Tagged Image File Format is a commonly used image format
for RGB graphics on a variety of platforms. Color is compatible with 16-bit TIFF
sequences.
• JPEG (import only): A highly compressed image format created by the Joint Photographic
Experts Group. The amount of compression that may be applied is variable, but higher
compression ratios create visual artifacts, visible as discernible blocks of similar color.
JPEG is usually used for offline versions of image sequences, but in some instances
(with minimal compression) this format may be used in an online workflow. JPEG is
limited to 8-bit encoding.
• JPEG 2000 (import only): Developed as a high-quality compressed format for production
and archival purposes, JPEG 2000 uses wavelet compression to allow compression of
the image while avoiding visible artifacts. Advantages include higher compression
ratios with better visible quality, options for either lossless or lossy compression methods,
the ability to handle both 8- and 16-bit linear color encoding, error checking, and
metadata header standardization for color space and other data.
Important: Only Cineon and DPX are supported for rendering image sequences out of
Color.
Moving Projects from Color to Final Cut Pro
Once you finish grading your project in Color, there are two ways of moving it back to
Final Cut Pro if you’re planning on mastering on video. For more information, see:
• Sending Your Project Back to Final Cut Pro.
• Exporting XML for Final Cut Pro Import.
• Revising Projects After They’re Sent to Final Cut Pro.
Sending Your Project Back to Final Cut Pro
After you grade your project in Color, you need to render it (described in The Render
Queue) and then send it back to Final Cut Pro. This is accomplished using XML, as your
Color project is automatically converted to XML data and then reconverted to a
Final Cut Pro sequence. There are two ways you can initiate this process.
Important: Projects using Cineon or DPX image sequences can’t be sent back to
Final Cut Pro.
To send a graded, rendered project to Final Cut Pro using the Send To command
1 Go through the Timeline and choose which grade you want to use for each of the clips
in your project.
Since each shot in your program may have up to four separately rendered versions of
media in the render directory, the rendered media that each shot is linked to in the
exported XML project file is determined by its currently selected grade.
2 Choose File > Send To > Final Cut Pro.
There are two possible warnings that may come up at this point:
• If you haven’t rendered every shot in Color at this point, you are warned. It’s a good
idea to click No to cancel the operation and render all of your shots prior to sending
the project back to Final Cut Pro.
• If the codec or frame size has been changed, either by you or as a result of rendering
your media to a mastering quality format, you are presented with the option to change
the sequence settings of the sequence being sent. For more information, see Some
Media Formats Require Rendering to a Different Format.
A new sequence is automatically created within the original Final Cut Pro project from
which the program came. However, if the Final Cut Pro project the program was originally
sent from is unavailable, has been renamed, or has been moved to another location, then
a new Final Cut Pro project will be created to contain the new sequence. Either way, every
clip in the new sequence is automatically linked to the color-corrected media you rendered
out of Color.
Exporting XML for Final Cut Pro Import
Another way of moving a Color project back to Final Cut Pro is to export an XML version
of your Color project.
To export an XML file back to Final Cut Pro for final output
1 Go through the Timeline and choose which grade you want to use for each of the clips
in your project.
Since each shot in your program may have up to four separately rendered versions of
media in the render directory, the rendered media that each shot is linked to in the
exported XML project file is determined by its currently selected grade.
2 Choose File > Export > XML.
3 When the Export XML Options dialog appears, click Browse.
4 Enter a name for the XML file you’re exporting in the File field of the Export XML File
dialog.
5 Choose a location for the file, then click Save.
6 Click OK.
A new XML project file is created, and the clips within are automatically linked to the
media directory specified in the Project Settings tab in the Setup room.
Note: If you haven’t exported rendered media from your Color project yet, the XML file
is linked to the original project media.
Revising Projects After They’re Sent to Final Cut Pro
If you need to make revisions to the color corrections of a sequence that you’ve already
sent from Color to Final Cut Pro, don’t send the sequence named “from Color” back to Color.
The correct method is to quit Final Cut Pro, reopen the originating Color project, make
your changes, and then do one of the following:
• If you didn’t change the grade number used by any of the shots in Color, simply rerender
the clips you changed, save the Color project, and then reopen the Final Cut Pro project
that has the sequence that was originally sent “from Color.” The rerendered media
overwrites the previous media, and is immediately reconnected when you reopen the
Final Cut Pro project.
• If you do change the grade number of any of the shots in Color, you need to send the
project back to Final Cut Pro, and use the new “from Color” sequence to finish your
program.
This makes it easier to manage your media and keep track of your revisions, and it
prevents any of your clips from being unnecessarily rendered twice.
Exporting EDLs
You can export EDLs out of Color, which can be a good way of moving projects back to
other editorial applications. After you export an EDL, it’s up to the application importing
it to relink to the media that’s rendered out of Color.
Note: To help facilitate media relinking, the media path is written to the comment column
in the exported EDL, although not all editing applications support this convention.
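Because the media path travels in comment lines, an application (or a relinking script) can recover it by scanning the EDL text. The sketch below shows the idea; the exact comment wording (“FROM FILE:” here) is an assumption for illustration, not Color’s documented syntax.

```python
# Sketch: collect media paths from EDL comment lines (lines starting "*").
# The "* FROM FILE:" wording below is an assumed example, not a documented
# Color convention; adjust the pattern to whatever your EDL actually contains.
import re

def media_paths_from_edl(edl_text: str) -> list:
    """Return file paths found on EDL comment lines."""
    paths = []
    for line in edl_text.splitlines():
        if line.startswith("*"):
            match = re.search(r"(/\S+)", line)  # first absolute path on the line
            if match:
                paths.append(match.group(1))
    return paths

sample = """TITLE: GRADED_REEL_1
001  A001  V  C  01:00:10:00 01:00:20:00 00:00:00:00 00:00:10:00
* FROM FILE: /Volumes/Media/render/shot_001.mov
"""
print(media_paths_from_edl(sample))
```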
To export an EDL
1 Choose File > Export > EDL.
2 When the Export EDL dialog appears, click Browse.
3 Enter a name for the EDL you’re exporting in the File field of the Export EDL File dialog,
choose a location for the file, then click Save.
4 If you didn’t change any of the shot names when you exported the final rendered media
for this project, turn on “Use original media name.”
5 Click OK.
A new EDL file is created, and the clips within are linked to the media directory you
specified.
Reconforming Projects
Whether your project was sent from Final Cut Pro, or imported via an EDL from any other
editing environment, you have the option of automatically reconforming your Color
project to match any editorial changes made to the original Final Cut Pro sequence, which
can save you hours of tedious labor.
Color matches each project to the sequence that was originally sent to Color using an
internal ID number. Because of this, you can only reconform by reediting the actual
sequence that you originally sent to Color. Any attempt to reconform a duplicate of the
original sequence will not work.
To reconform an XML-based Color project
1 Export an updated XML file of the reedited Final Cut Pro sequence from Final Cut Pro.
2 Open the Color project you need to update, then choose File > Reconform.
3 Select the XML file that was exported in step 1 using the Reconform XML dialog, then
click Load.
The shots in the Timeline should update to reflect the imported changes, and the
Reconform column in the Shots browser is updated with the status of every shot that
was affected by the Reconform operation.
You can also reconform projects that were originally imported using EDLs.
To reconform an EDL-based Color project
1 Export an updated EDL of the reedited sequence from the originating application.
2 Open the Color project you need to update, then choose File > Reconform.
3 Select the EDL file that was exported in step 1 using the Reconform dialog, then click
Load.
As is the case when you reconform an XML-based project, the Reconform column in the
Shots browser in the Setup room is updated with the status of each shot that’s been
modified by the Reconform operation. This lets you identify shots that might need
readjustment as a result of such changes, sorting them by type for fast navigation. For
more information, see Column Headers in the Shots Browser.
Converting Cineon and DPX Image Sequences to QuickTime
You can use Color to convert Cineon and DPX image sequences to QuickTime files to
facilitate a variety of workflows.
• If you’re starting out with 2K or 4K DPX or Cineon film scans or digital camera output,
you can generate matching QuickTime media files at an offline resolution by choosing
a smaller resolution preset and choosing ProRes 422 as the QuickTime export codec.
You can then use this media to do an offline edit.
• Alternatively, you can convert 2K and 4K DPX and Cineon image sequences into
finishing-quality QuickTime media files by simply choosing ProRes 4444 as the QuickTime
export codec.
• If your project media is in the QuickTime format, but you want to output a series of
Cineon or DPX image sequences, you can do this conversion as well.
The timecode of converted DPX or Cineon film scans is copied to the new media that’s
created. This allows you to track the correspondence between the QuickTime clips you
generate, and the original image sequences from which they came. This conversion uses
the following rules:
• Timecode header metadata in DPX or Cineon files, if present, is converted into a
timecode track in each converted QuickTime file.
• If there is no timecode header data in the DPX or Cineon files, then the frame numbers
used in the filename of the image sequence are converted into timecode and written
to the timecode track of the converted QuickTime files. (For more information, see
Required Image Sequence Filenaming.)
• If a directory containing DPX or Cineon image sequences has the reel number of those
sequences as its name (highly recommended), that number will be used as the reel
number of the converted QuickTime files.
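The second and third rules above can be sketched in code: a frame number becomes a timecode value at the project frame rate, and the enclosing directory name becomes the reel number. This is an illustration of the rules, not Color’s implementation; the filename pattern is hypothetical, and a non-drop-frame rate is assumed.

```python
# Sketch of the fallback rules above: derive timecode from an image
# sequence's frame number, and a reel number from its parent directory.
# Assumes a non-drop-frame rate; the filename pattern is illustrative.
import os
import re

def frames_to_timecode(frame: int, fps: int = 24) -> str:
    """Convert an absolute frame number to HH:MM:SS:FF timecode."""
    hh, rem = divmod(frame, fps * 3600)
    mm, rem = divmod(rem, fps * 60)
    ss, ff = divmod(rem, fps)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def reel_and_timecode(path: str, fps: int = 24):
    """e.g. /scans/A001/shot_0086400.dpx -> ('A001', '01:00:00:00')."""
    reel = os.path.basename(os.path.dirname(path))   # directory name = reel number
    frame = int(re.search(r"(\d+)\.\w+$", path).group(1))  # trailing frame number
    return reel, frames_to_timecode(frame, fps)

print(reel_and_timecode("/scans/A001/shot_0086400.dpx"))
```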
When converting from Cineon and DPX to high definition or standard definition QuickTime
video (and vice versa), Color automatically makes all necessary color space conversions.
Log media is converted to linear, and the Rec. 709 and 601 color spaces are taken into account.
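To give a sense of what a log-to-linear conversion involves, the sketch below implements the conventional Kodak Cineon transform with the standard default black point of 95 and white point of 685 (0.002 density per code value, 0.6 display gamma). Color’s exact internal math is not documented here, so treat this only as an illustration of the idea.

```python
# Standard 10-bit Cineon log-to-linear conversion (conventional defaults:
# black point 95, white point 685). Color's internal conversion may differ;
# this is an illustration of the transform, not its implementation.

def cineon_to_linear(code: int, black: int = 95, white: int = 685) -> float:
    """Map a 10-bit log code (0-1023) to linear light, 0.0 at black, 1.0 at white."""
    def density(c):
        return 10 ** ((c - white) * 0.002 / 0.6)
    lin_black = density(black)
    return (density(code) - lin_black) / (1.0 - lin_black)

print(round(cineon_to_linear(95), 3), round(cineon_to_linear(685), 3))
```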
To convert Cineon or DPX image sequences to QuickTime media
1 Create a new, empty project. (For more information, see Creating and Opening Projects.)
2 Using the file browser, select the folder that contains all of the shots you want to convert,
and click the Import Folder button to edit every shot within that folder into the Timeline.
When you import a folder of shots, all shots that are contained by subfolders within the
selected folder are also imported. This makes it convenient to import an entire nested
hierarchy of image sequence media that has been organized into multiple individual
folders. For more information about importing media into the Timeline, see Importing
Media Directly into the Timeline.
3 Open the Project Settings tab of the Setup room, and do the following:
a Click Project Render Directory, choose a render directory for the converted media, then
click Choose.
b Choose QuickTime from the Render File Type pop-up menu.
c Choose a resolution from the Resolution Presets pop-up menu.
d Choose the codec you want to convert the image sequences to from the Export Codec
pop-up menu. (For more information about choosing a suitable output codec, see
Compatible QuickTime Codecs for Output.)
4 If necessary, grade the shots to make any corrections to the offline media that you’ll be
generating.
Sometimes, the source media from a particular camera or transfer process needs a specific
color correction or contrast adjustment in order to look good during the offline edit. If
this is the case, you can use a single correction to adjust every shot you’re converting
(the equivalent of a one-light transfer). At other times, you’ll want to individually correct
each shot prior to conversion to provide the best-looking media you can for the editing
process (the equivalent of a best-light transfer).
Tip: To quickly apply a single correction to every shot in the Timeline, grade a
representative shot in the Primary In room, then click Copy to All.
5 Open the Render Queue, then click Add All.
6 Click Start Render.
All of the shots are converted, and the rendered output is written to the currently specified
render directory.
Important: After you’ve rendered the converted output, it’s a good idea to save the Color
project file you created to do the conversion, in case you need to reconvert the media
again. You might do this to improve the “one-light” color correction you applied to the
converted media, or to change the codec used to do the conversion. Keeping the original
conversion project makes it easy to reconvert your media in the same order, with the
same automatically generated file names, so you can easily reconnect a Final Cut Pro
sequence containing previously converted media to a new set of reconverted media.
For more information about options in the Render File Type, Resolution Presets, and
Export Codec pop-up menus, see Resolution and Codec Settings.
Importing Color Corrections
The File > Import > Color Corrections command lets you apply the grades and color
corrections from the shots of one project file to those within the currently open project.
It’s meant to be used with Color projects that are based on the same source, so that a
newly imported version of a project you’ve already been working on can be updated
with all the grades that were applied to the previous version.
For this command to work properly, the project you’re importing the color corrections
from must have the same number of shots in the Timeline as the project you’re applying
the imported color corrections to. The shot numbers in each project are used to determine
which color correction is copied to which shot. For example, the color correction from
shot 145 in the source project is copied to shot 145 in the destination project.
After using this command, all grades in the destination project are overwritten with those
from the source.
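The matching rule described above (correction 145 goes to shot 145, and both projects must contain the same shots) can be sketched as a simple mapping. The dictionaries below are hypothetical stand-ins for real Color project data.

```python
# Sketch of the shot-number matching described above: corrections are
# copied by shot number, so both projects must contain matching shots.
# The dictionaries are toy stand-ins for real Color project data.

def import_color_corrections(source: dict, destination: dict) -> dict:
    """Copy each correction from source to the same shot number in destination."""
    if sorted(source) != sorted(destination):
        raise ValueError("Projects must contain the same shots for corrections to map.")
    return {shot: source[shot] for shot in destination}

src = {144: "grade A", 145: "grade B"}
dst = {144: "ungraded", 145: "ungraded"}
print(import_color_corrections(src, dst))
```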
To import the color corrections from one project to another
1 Open the Color project into which you want to import the corrections.
2 Choose File > Import > Color Corrections.
3 In the Projects dialog, select the Color project containing the corrections you want to
import, then click Load.
The shots in the currently open project are updated with the color corrections from the
other project file.
Exporting JPEG Images
Color also provides a way of exporting a JPEG image of the frame at the position of the
playhead. JPEG images are exported at the current size of the Preview area of the Scopes
window.
To export a JPEG image of the frame at the current position of the playhead
1 Move the playhead to the frame you want to export.
2 Choose File > Export > JPEG Still.
3 Enter a name in the File field and select a directory using the Save Still As dialog.
Note: This defaults to the Still Store subdirectory inside the project bundle.
4 Click Save.
The frame is saved as a JPEG image to the location you selected. JPEG images are exported
with a frame size that matches the size of the Preview area of the Scopes window.
Chapter 5: Configuring the Setup Room

Before you start working on your project, take a moment to configure your Color working
environment and project settings in the Setup room.
The Setup room serves many purposes. It’s where you import media files, sort and manage
saved grades, organize and search through the shots used in your program, choose your
project’s render and broadcast safe settings, and adjust user preferences.
This chapter covers the following:
• The File Browser
• Using the Shots Browser
• The Grades Bin
• The Project Settings Tab
• The Messages Tab
• The User Preferences Tab
The File Browser
The file browser, occupying the left half of the Setup room, lets you directly navigate the
directory structure of your hard disk. It's like having a miniature Finder right there in the
Setup room. Keep in mind that the file browser is not a bin. The files displayed within the
file browser are not associated with your Color project in any way unless you drag them
into the Timeline manually or relink the shots of an imported project to their associated
media files on disk using the Relink Media or Reconnect Media command.
By default, the file browser displays the contents of the default media directory when
Color opens.
For more information on how to use the file browser, see Importing Media Directly into
the Timeline. For more information on importing project data from other applications,
see Importing and Managing Projects and Media.
File Browser Controls
These two buttons are at the top of the file browser.
• Up Directory button: Moves up one level in the current file path.
• Home Directory button: Moves to the currently specified default media directory.
Media Information and DPX/Cineon Header Metadata
When you click a shot to select it, an enlarged thumbnail appears to the right of the list
of media.
Underneath the thumbnail, information appears about the shot, including its name,
duration, resolution, frame rate, and timecode. If it’s an image sequence, its white point,
black point, and transfer mode metadata also appear. Depending on the type of media,
one or two buttons may appear at the bottom of the file browser.
Fix Headers Button
If the selected shot (or shots) is an image sequence, the Fix Headers button appears.
Clicking it opens the DPX Header Settings window, which lets you change the transfer
mode (Linear or Logarithmic), the Low Reference (black point), and the High Reference
(white point) of DPX and Cineon image sequences that may have incorrect data in their headers.
Change the parameters to the necessary settings and click Fix to rewrite this header data
in all of the currently selected shots.
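To make the header fields concrete: in a DPX file, the transfer characteristic and the low/high reference values live at fixed offsets in the first image element, per the SMPTE 268M layout. The toy below writes and reads an in-memory header rather than a real scan, and is a sketch of the format rather than of Color’s Fix Headers implementation.

```python
# Sketch of the DPX header fields that a "fix headers" operation touches,
# using offsets from the SMPTE 268M layout (first image element). This toy
# builds and parses an in-memory header; it is not Color's implementation.
import struct

TRANSFER = {2: "Linear", 3: "Logarithmic"}  # SMPTE 268M transfer codes

def make_header(transfer_code: int, ref_low: int, ref_high: int) -> bytearray:
    hdr = bytearray(2048)
    hdr[0:4] = b"SDPX"                          # big-endian DPX magic number
    struct.pack_into(">I", hdr, 784, ref_low)   # image element 1: ref low data
    struct.pack_into(">I", hdr, 792, ref_high)  # image element 1: ref high data
    hdr[801] = transfer_code                    # transfer characteristic byte
    return hdr

def read_header(hdr: bytes):
    assert hdr[0:4] == b"SDPX", "not a big-endian DPX header"
    ref_low = struct.unpack_from(">I", hdr, 784)[0]
    ref_high = struct.unpack_from(">I", hdr, 792)[0]
    return TRANSFER.get(hdr[801], "Other"), ref_low, ref_high

hdr = make_header(3, 95, 685)  # a 10-bit log scan: black 95, white 685
print(read_header(hdr))
```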
Import Button
Selecting one or more shots and clicking Import edits the selection into the end of the
current Timeline for an unlocked project. This is useful if you’re using Color to convert
DPX or Cineon image sequences to QuickTime, or vice versa. For more information, see
Importing Media Directly into the Timeline.
Note: You cannot import media into locked projects. This includes any project sent from
Final Cut Pro.
Using the Shots Browser
The Shots browser lists every shot used by the current program that appears in the
Timeline.
This bin can be used for sorting the shots in your program using different criteria, selecting
a group of shots to apply an operation to, or selecting a shot no matter where it appears
in the Timeline. For more information, see:
• Shots Browser Controls
• Column Headers in the Shots Browser
• Customizing the Shots Browser
• Adding Notes to Shots in the Shots Browser
• Selecting Shots and Navigating Using the Shots Browser
Shots Browser Controls
These controls are used to control both what and how items are viewed in the Shots
browser.
• Icon View button: Click to put the shot area into icon view.
• List View button: Click to put the shot area into list view.
• Shots browser: Each shot in your project appears here, either as a thumbnail icon or as
an entry (in list view).
Choosing the Current Shot and Selecting Shots in the Shots Browser
Icons or entries in the Shots browser are colored based on their selected state.
• Dark gray: The shot is not currently being viewed, nor is it selected.
• Light gray: The shot at the current position of the playhead is considered to be the
current shot and is highlighted with gray in both the Timeline (at the bottom of the
screen) and the Shots browser. The current shot is the one that's viewed and that is
corrected when the controls in any room are adjusted.
• Cyan: You can select shots other than the current shot. Selected shots are highlighted
with cyan in both the Timeline and the Shots browser. To save time, you can apply
grades and corrections to multiple selected shots at once.
Goto Shot and Find Fields in the Shots Browser
The Goto Shot and Find fields let you jump to and search for specific shots in your project.
These fields work with the Shots browser in either icon or list view modes.
To go to a specific shot
• Enter a number in the Goto Shot field, then press Enter.
The list scrolls down to reveal the shot with that number, which is automatically selected,
and the playhead moves to the first frame of that shot in the Timeline.
To search for a specific shot
1 Click the header of the column of data you want to search.
2 Enter a name in the Find field.
As soon as you start typing, the Shots browser empties except for those items that match
the search criteria. As you continue to type, the Shots browser dynamically updates to
show the updated list of corresponding items.
Note: All searches are performed from the first character of data in the selected column,
read from left to right. The Find function is not case-sensitive.
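In other words, the Find function is a case-insensitive prefix match on the selected column. The sketch below illustrates that behavior on hypothetical shot records; the data and function are examples only, not Color’s implementation.

```python
# Sketch of the Find behavior described above: matching starts from the
# first character of the selected column and is not case-sensitive.
# The shot records below are hypothetical examples.

def find_shots(shots: list, column: str, query: str) -> list:
    """Filter shot records by a case-insensitive prefix match on one column."""
    q = query.lower()
    return [s for s in shots if str(s[column]).lower().startswith(q)]

shots = [
    {"Number": 1, "Shot Name": "Beach_042"},
    {"Number": 2, "Shot Name": "beach_043"},
    {"Number": 3, "Shot Name": "Interior_001"},
]
print(find_shots(shots, "Shot Name", "BEA"))  # matches both beach shots
```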
To reveal all shots after a Find operation
• Select all of the text in the Find field, then press Delete.
All shots should reappear in the Shots browser.
Column Headers in the Shots Browser
When the Shots browser is in list view, up to nine columns of information are visible.
• Shots Browser Column Headers: These columns appear when the Shots browser is in
list view.
• Number: Lists a shot's position in the edit. The first shot is 1, the second is 2, and so
on.
• Shot Name: The name of that shot, based on its filename.
• Colorist: Lists the name that occupied the Colorist field in the Project Settings when
that shot was last corrected. This column is useful for keeping track of who worked
on which shots when multiple colorists are assigned to a project.
• Status: Shows that shot's rendered status. You can right-click on this column for any
selected shot and choose a new state from the shortcut menu. For more information
on the five possible render states, see Possible Render States in the Status Column.
• Reconform: Lists whether that shot has been affected by a Reconform operation. For
example, you can sort by this column to quickly identify and navigate to new shots
that aren't yet graded because they were added to the Timeline as a result of a
Reconform operation. For more information on reconforming a project, see
Reconforming Projects. For more information on the four possible Reconform flags,
see Possible Flags in the Reconform Column.
• Time Spent: This column appears only when the Show Time button below the Shots
browser is turned on. It shows how much time has been spent grading that particular
shot. Color keeps track of how long you spend working on each shot in each program,
in order to let you track how fast you've been working.
• Notes: The Notes column provides an interface for storing and recalling text notes
about specific shots. Shots with notes appear with a checkmark in this column.
• Match: The Match column only appears when a project has been created by importing
an EDL into Color. This column displays the percentage of confidence that each shot
in the Timeline has been correctly linked to its corresponding DPX, Cineon, or
QuickTime source media. The confidence value is based on the methods used to do
the linking. For more information, see Explanation of Percentages in the Match
Column.
Possible Render States in the Status Column
Each shot has one of five possible render states that appear in the Status column of the
Shots browser:
• Queued: The shot has been added to the Render Queue.
• Rendering: The shot is currently being rendered.
• Rendered: The shot has been successfully rendered.
• To Do: The shot has not yet been corrected in any room.
• Aborted: Rendering of this shot has been stopped.
Possible Flags in the Reconform Column
Each shot that has been affected by a Reconform operation has one of four possible flags
that appear in the Reconform column of the Shots browser:
• Shorten: The shot has been shortened.
• Content Shift: The shot's duration and position in the Timeline are the same, but its
content has been slipped.
• Moved: The shot has been moved to another position in the Timeline.
• Added: This shot has been added to the project.
Explanation of Percentages in the Match Column
The Match column displays the percentage of confidence that each shot in the Timeline
has been correctly linked to the corresponding DPX, Cineon, or QuickTime source media,
based on the methods used to do the linking. The percentages displayed correspond to
the following linking methods:
• 100% confidence means the timecode for that shot in the EDL matched the timecode
found in the header data of the corresponding DPX or Cineon frame, and the EDL reel
number matched the name of the directory in which that frame appears.
• 75% confidence means the timecode for that shot in the EDL matched the frame
number of that DPX or Cineon frame, and the EDL reel number matched the name of
the directory in which that frame appears. For more information on timecode–to–frame
number conversions, see Required Image Sequence Filenaming.
• 50% confidence means the timecode for that shot in the EDL matched the timecode
found in the header data of the corresponding DPX or Cineon frame, but the reel
number could not be matched to the name of the directory in which that frame appears.
• 25% confidence means the timecode for that shot in the EDL matched the frame
number of that DPX or Cineon frame, but the reel number could not be matched to
the name of the directory in which that frame appears. For more information on
timecode–to–frame number conversions, see Required Image Sequence Filenaming.
• 0% confidence means that no media could be found to match the timecode for that
shot in the EDL, and the shot is offline in the Color Timeline.
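The four linking checks above can be summarized as a small scoring function. This is an illustrative sketch of the rules as documented; the actual logic inside Color is not public:

```python
def match_confidence(timecode_matches_header, timecode_matches_frame_number,
                     reel_matches_directory, media_found):
    """Illustrative confidence scoring for EDL-to-media linking,
    mirroring the percentages described above."""
    if not media_found:
        return 0    # no matching media; shot is offline in the Timeline
    if timecode_matches_header and reel_matches_directory:
        return 100  # header timecode and reel/directory both match
    if timecode_matches_frame_number and reel_matches_directory:
        return 75   # frame-number timecode and reel/directory match
    if timecode_matches_header:
        return 50   # header timecode matches, reel cannot be matched
    if timecode_matches_frame_number:
        return 25   # frame-number timecode matches, reel cannot be matched
    return 0
```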
Customizing the Shots Browser
The following procedures describe ways you can sort and modify the Shots browser.
To sort the Shots browser by any column
µ Click a column's header to sort by that column.
Shots are sorted in descending order only. Numbers take precedence over letters, and
uppercase takes precedence over lowercase.
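This precedence mirrors ordinary ASCII ordering, in which digit characters sort before uppercase letters, and uppercase letters before lowercase. A quick illustration:

```python
# ASCII code points: digits (48-57) < uppercase (65-90) < lowercase (97-122),
# so numbers take precedence over letters, and uppercase over lowercase.
names = ["beta_02", "Alpha_01", "2nd_unit"]
print(sorted(names))  # ['2nd_unit', 'Alpha_01', 'beta_02']
```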
To resize a column in the Shots browser
µ Drag the right border of the column you want to resize.
To reveal or hide the Time Spent column
µ Click Show Time, located underneath the Shots browser.
Adding Notes to Shots in the Shots Browser
Color provides an interface for keeping track of client or supervisor notes on specific shots
as you work on a project.
To add a note to a shot, or to read or edit an existing note
1 Open the Setup room, then click the Shots tab.
2 Control-click or right-click the Notes column of the Shots browser, then choose Edit File
from the shortcut menu.
A plain text editing window appears.
3 Enter your text.
4 To save the note and close it, do one of the following:
• Press Command-S, then close the window.
• Close the window and click Save in the dialog that appears.
When you've added a note to a shot, a checkmark appears in the Notes column.
To remove a note from a shot
µ Control-click or right-click the Notes column of the Shots browser, then choose Delete
File from the shortcut menu.
Note: Notes are saved within the subdirectory for that particular shot, within the /shots/
subdirectory inside that project bundle. Removing a note deletes the note file.
Selecting Shots and Navigating Using the Shots Browser
You can use the Shots browser to quickly find and select specific shots—for example, to
apply a single grade to a group of shots at once. You can also use the Shots browser to
quickly navigate to a particular shot in the Timeline. These procedures work whether the
Shots browser is in icon or list view.
To select one or more shots
Do one of the following:
µ Click any shot in the Shots browser to select that shot.
µ Command-click any group of shots to select a noncontiguous group of shots.
µ Click any shot, and then Shift-click a second shot to select a contiguous range of shots
from the first selection to the second.
Selected shots appear with a cyan overlay.
To navigate to a specific shot in the Timeline using the Shots browser
Do one of the following:
µ Double-click any shot.
µ Type a number into the Goto Shot field.
The new current shot turns gray in the Shots browser, and the playhead jumps to the
first frame of that shot in the Timeline. That shot is now ready to be corrected using any
of the Color rooms.
The Grades Bin
The Grades bin in the Setup room lets you save and manage grades that you can use in
your programs.
A grade, as described in Using the Color Interface, can contain one or more of the following
individual corrections:
• Primary
• Secondary
• Color FX
• Primary Out
By applying a grade to one or more shots, you can apply multiple corrections all at once.
Grades saved into the Grades bin are available to all Color projects opened while logged
into that user account. The Grades bin can display grades in either icon or list view, and
shares the same controls as the other bins in Color. For more information on using the
Grades bin controls, see Using Organizational Browsers and Bins.
For more information on saving and applying grades, see Saving Grades into the Grades
Bin.
The Project Settings Tab
The options in the Project Settings (Prjct Settings) tab are saved individually on a
per-project basis. They let you store additional information about that project, adjust how
the project is displayed, and specify how the shots in that project will be rendered.
For more information, see:
• Informational and Render Directory Settings
• Resolution and Codec Settings
• Broadcast Safe Settings
• Handles
Informational and Render Directory Settings
These settings provide information about Color and your project and let you set up the
directory into which media generated by that project is written.
• Project Name: The name of the project. This defaults to the name of the project file on
disk, but you can change it to anything you like. Changing the project name does not
change the name of the project file.
• Render Dir: The render directory is the default directory path where media files rendered
for this project are stored. (For more information about rendering Color projects, see
The Render Queue.) It’s always best to choose the appropriate location for the render
directory before you add items to the Render Queue, to make sure your shots are
rendered in the correct location. If the specified render directory becomes unavailable
the next time you open a project, you will be prompted to choose a new one.
• Project Render Dir button: Clicking this button lets you select a new project render
directory using the Choose Project Render Directory dialog.
• Colorist: This field lets you store the name of the colorist currently working on the
project. This information is useful for identifying who is working on what in multi-suite
post-production facilities, or when moving a project file from one facility to another.
• Client: This field lets you store the name of the client of the project.
Resolution and Codec Settings
These settings let you set up the display and render properties of your project. They affect
how your program is rendered both for display purposes, and when rendering the final
output.
• Display LUT: A display LUT (look up table) is a file containing color adjustment
information that's typically used to modify the monitored image that's displayed on
the preview and broadcast displays. LUTs can be generated to calibrate your display
using hardware probes, and they also let you match your display to other characterized
imaging mediums, including digital projection systems and film printing workflows. If
you've loaded a display LUT as part of a color management workflow, this field lets you
see which LUT file is being used. For more information on LUT management, see
Monitoring Your Project.
• Frame Rate: This field displays the frame rate that the project is set to. Your project's
frame rate is set when the project is created, and it can be changed using a pop-up menu
as long as no shots appear in the Timeline. Once one or more shots have been added
to the Timeline, the project's frame rate cannot be changed.
• Resolution Presets pop-up menu: This pop-up menu lists all of the project resolutions
that Color supports, including PAL and NTSC standard definition, high definition, 2K
and 4K frame sizes. The options that are available in this menu are sometimes limited
by the currently selected QuickTime export codec.
If you change the Resolution Preset to a different frame size than the one the project
was originally set to, how that frame size affects the final graded media that is rendered
depends on the source media you’re using, and the Render File Type you’ve chosen:
• If you’re rendering QuickTime media, each shot in your project is rendered at the
same frame size as the original source media. The new Resolution Preset you choose
only affects the resolution of the sequence that is sent back to Final Cut Pro. Pan &
Scan settings are converted to Motion tab settings when the project is sent back to
Final Cut Pro.
• If your project uses 4K native RED QuickTime media, each shot in your project is
rendered at the new resolution you’ve specified. Any Pan & Scan tab adjustments
you’ve made are also rendered into the final media. (2K native RED QuickTime media
is rendered the same as other QuickTime media.)
• If the Render File Type pop-up menu is set to DPX or Cineon, then each shot in your
project is rendered at the new resolution you’ve specified. Any Pan & Scan tab
adjustments you’ve made are also rendered into the final media.
Important: Whenever you change resolutions, a dialog appears asking “Would you like
Color to automatically scale your clips to the new resolution?” Clicking Yes automatically
changes the Scale parameter in the Pan & Scan tab of the Geometry room to conform
each clip to the new resolution, letterboxing or pillarboxing clips as necessary to avoid
cropping. Clicking No leaves the Scale parameter of each clip unchanged, but may
result in the image being cropped if the new resolution is smaller than the previous
resolution.
If the QuickTime export codec allows custom frame sizes, the width and height fields
below can be edited. Otherwise, they remain uneditable. If these fields are set to a
user-specified frame size, the Resolution Presets pop-up menu displays "custom."
• Width: The currently selected width of the frame size
• Height: The currently selected height of the frame size
• Printing Density pop-up menu: This pop-up menu can only be manually changed when
the Render File Type is set to DPX. It lets you choose how to map 0 percent black and
100 percent white to the minimum and maximum numeric ranges that each format
supports. Additionally, the option you choose determines whether or not super-white
values are preserved. For more information, see Choosing Printing Density When
Rendering DPX Media.
Note: Choosing Cineon as the Render File Type limits the Printing Density to Film (95
Black - 685 White : Logarithmic), while choosing QuickTime as the Render File Type
limits it to Linear (0 Black - 1023 White).
• Render File Type pop-up menu: This parameter is automatically set based on the type
of media your project uses. If you send a project from Final Cut Pro, this parameter is
set to QuickTime, and is unalterable. If you create a Color project from scratch, this
pop-up menu lets you choose the format with which to render your final media. When
working on 2K and 4K film projects using image sequences, you'll probably choose
Cineon or DPX, while video projects will most likely be rendered as QuickTime files.
• Deinterlace Renders: Turning this option on deinterlaces all shots being viewed on the
preview and broadcast displays and also deinterlaces media that's rendered out of
Color.
Note: Deinterlacing in Color is done very simply, by averaging both fields together to
create a single frame. The resulting image may appear softened. There is also a
deinterlacing parameter available for each shot in the Shot Settings tab next to the
Timeline, which lets you selectively deinterlace individual shots without deinterlacing
the entire program. For more information, see The Settings 2 Tab.
• Deinterlace Previews: Turning this option on deinterlaces all shots being viewed on the
preview and broadcast displays but media rendered out of Color remains interlaced.
• QuickTime Export Codecs pop-up menu: If QuickTime is selected in the Render File Type
pop-up menu, this pop-up menu lets you choose the codec with which to render media
out of your project. If this menu is set to Original Format, the export codec will
automatically match the codec specified in the sequence settings of the originating
Final Cut Pro sequence. (This option is only available when using the Send To Color
command or when importing an exported Final Cut Pro XML file.)
The QuickTime Export codec does not need to match the codec used by the source
media. You can use this menu to force Color to upconvert your media to a minimally
compressed or uncompressed format. The options in this pop-up menu are limited to
the QuickTime codecs that are currently supported for rendering media out of Color.
Note: You can render your project out of Color using one of several high-quality
mastering codecs, regardless of the codec or level of compression that is used by the
source media. You can use the QuickTime Export Codecs pop-up menu to facilitate a
workflow where you import compressed media into Color and then export the corrected
output as uncompressed media before sending your project to Final Cut Pro. This way,
you reap the benefits of saving hard disk space and avoiding rerendering times up
front, while preserving all the quality of your high–bit depth adjustments when you
render your output media prior to sending your project back to Final Cut Pro. The
codecs most suitable for mastering include Apple Uncompressed 8-bit 4:2:2, Apple
Uncompressed 10-bit 4:2:2, Apple ProRes 422, and Apple ProRes 422 (HQ). For more
information, see Compatible QuickTime Codecs for Output.
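The field-averaging deinterlace described in the note above can be modeled as follows. This is an illustrative sketch using nested lists as frames; Color's actual implementation is not public:

```python
def deinterlace_average(frame):
    """Blend each scan line with its neighboring field line.

    `frame` is a list of rows (top field = even rows, bottom field =
    odd rows); each row is a list of pixel values. Averaging adjacent
    field lines yields a single progressive frame at the cost of some
    vertical softening, which is why the result may look softer.
    """
    out = []
    for y in range(len(frame)):
        other = y + 1 if y + 1 < len(frame) else y - 1  # paired field line
        out.append([(a + b) / 2 for a, b in zip(frame[y], frame[other])])
    return out
```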
Broadcast Safe Settings
When color correcting any program destined for broadcast, it's important to obtain the
specific quality control (QC) guidelines from the broadcaster. There are varying standards
for the maximum and minimum allowed IRE, chroma, and composite amplitude, and
some broadcasters are more conservative than others.
The Broadcast Safe settings let you set up Color to limit the minimum and maximum
luma, chroma, and composite values of shots in your program. These settings are all
completely customizable to accommodate any QC standard and prevent QC violations.
• Broadcast Safe button: Turning on Broadcast Safe enables broadcast legalization for
the entire project, affecting both how it's displayed on your secondary display and
broadcast monitor and how it's rendered for final output. This button turns the following
settings on and off:
• Ceiling IRE: Specifies the maximum luma that's allowable, in analog IRE units. Signals
with luma above this limit will be limited to match this maximum value.
• Floor IRE: Specifies the minimum luma that's allowable, in analog IRE units. Signals
with luma below this limit will be limited to match this minimum value.
• Amplitude: This is not a limiting function. Instead, it lets you apply an adjustment to
the amplitude of the chroma. The default value of 0 results in no change.
• Phase: Lets you adjust the phase of the chroma. If Amplitude is set to 0, no change
is made.
• Offset: Lets you adjust the offset of a chroma adjustment. If Amplitude is set to 0, no
change is made.
• Chroma Limit: Sets the maximum allowable saturation. The chroma of signals with
saturation above this limit will be limited to match this maximum value.
• Composite Limit: Sets the maximum allowable combination of luma and chroma.
Signals exceeding this limit will be limited to match this maximum value.
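The limiting behavior of the Ceiling IRE, Floor IRE, and Chroma Limit settings amounts to clamping signal values to the configured range. A minimal sketch, with illustrative default limits (not Color's defaults):

```python
def legalize(luma_ire, chroma, ceiling_ire=100.0, floor_ire=0.0,
             chroma_limit=50.0):
    """Sketch of broadcast-safe limiting: clamp luma between the floor
    and ceiling, and clamp chroma amplitude to the chroma limit."""
    luma = min(max(luma_ire, floor_ire), ceiling_ire)
    sat = min(chroma, chroma_limit)
    return luma, sat
```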
Ways of Using Broadcast Safe
The Broadcast Safe parameters can be set to match the required QC guidelines for your
program. When enabled, they ensure that your program will not exceed these standards,
both while you monitor it and when you render the final corrected media. There are three
ways you can limit broadcast levels in your program.
Turn Broadcast Safe On, and Leave It Turned On While You Make Your Adjustments
The safest way to work (and the default behavior of new projects) is to simply turn
Broadcast Safe on at the beginning of your work, and leave it on throughout your entire
color correction pass. With practice, you can tell if a highlight or shadow is being crushed
too much by looking at the image on the monitor and watching for clumping exhibited
at the top and bottom of the graphs in the Waveform scope. If the image is being clipped
more than you prefer, you can make a correction to adjust the signal.
Turn Broadcast Safe Off While Making an Adjustment, Then Turn It Back On to Render
Output
If you leave Broadcast Safe on, illegal portions of the signal are always limited, and it
can be difficult to see exactly how much data is being clipped. When you're color
correcting media that was consistently recorded with super-white levels and high
chroma, you may find that it's sometimes a good idea to turn the Broadcast Safe settings
off while you do an initial color correction pass, so you can more easily see which parts
of the signal are out of bounds and make more careful judgments about how you want
to legalize it.
Turn Enable Clipping On for Individual Shots in Your Program
The Enable Clipping button in the Basic tab of the Primary Out room lets you set ceiling
values for the red, green, and blue channels for individual shots in your program (RGB
clipping). This lets you prevent illegal broadcast values in shots to which you're applying
extreme primary, secondary, or Color FX corrections, without turning on Broadcast Safe
for the entire program. If Enable Clipping and Broadcast Safe are both on, the lowest
standard is applied. For more information, see Ceiling Controls.
Handles
This field lets you specify a duration of extra media to be added to the head and tail of
each media file that's rendered out of Color. When a project is sent back to Final Cut Pro,
handles allow editors to make small adjustments without running out of corrected media.
The default value is 00:00:00:00.
Note: Although Color doesn’t allow you to preview transition effects as you work, shots
that are joined by transitions are automatically rendered with handles in order to provide
the necessary overlap for the transitions to work. This is true whether or not you’ve set
handles greater than zero.
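Extending each shot's render range by the handle duration is simple timecode arithmetic. A hypothetical sketch, assuming non-drop-frame timecode at a fixed frame rate:

```python
def tc_to_frames(tc, fps=24):
    """Convert an HH:MM:SS:FF timecode string to a frame count
    (non-drop-frame; illustrative only)."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def render_range(shot_in, shot_out, handles, fps=24):
    """Return the (start, end) frame range to render, padded by the
    handle duration on both sides."""
    pad = tc_to_frames(handles, fps)
    return tc_to_frames(shot_in, fps) - pad, tc_to_frames(shot_out, fps) + pad
```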
The Messages Tab
The Messages tab contains a running list of all the warnings and error messages that are
generated by Color while it operates. Messages highlighted in yellow are warnings.
Messages highlighted in red signify that an error has occurred (for example, "Directory
not writable trying to re-save a project."). There are no controls in the Messages tab.
The User Preferences Tab
The User Preferences (User Prefs) tab contains settings that affect the operation of Color
with any project you open. It includes options for customizing control surface sensitivity,
Timeline display, playback behavior, video output, and the bit depth that's used for both
display and rendering.
The state of each of these settings is automatically saved whenever they're changed. If
necessary, you can restore the settings to their original defaults.
To reset the default user preferences
µ Click Reset Preferences, at the bottom of the User Preferences tab.
For more information, see:
• Media and Project Directories
• Control Surface Settings
• User Interface Settings
• Grade and Scope Color Controls
• Limit Shadow Adjustments and Show Control Surface Controls
• Using Proxies
• Playback, Processing, and Output Settings
• How Do Bit Depth and Channel Data Correspond?
• Auto Save Settings
Media and Project Directories
The Media and Project directories let you control where new files are saved by default.
• Default Project Dir.: The default directory where all new Color projects are saved. This
is also the default directory that appears in the dialogs for the Import EDL and Import
XML commands. Click the Browse button to choose a new directory.
• Default Media Dir.: The default directory for the file browser. This is also the default
media location used by the Import EDL and Import XML commands. Click the Browse
button to choose a new directory.
• Default Render Dir.: The default directory for media that's rendered by Color for export.
Click the Browse button to choose a new directory.
Control Surface Settings
If you're using a control surface with Color, the following parameters let you adjust how
motion applied to a particular control corresponds to the resulting adjustment that's
made.
• Hue Wheel Angle: This parameter specifies the angle at which colors appear on the
color wheel of color controls in the Color interface and the corresponding angle at
which these colors are adjusted when using the joyballs of a control surface. This is
customizable in order to accommodate colorists who are used to working with different
systems:
• 122 is the default angle of red for DaVinci color correction systems, which corresponds
to the angle at which red appears on a Vectorscope. This is the default Color setting.
• 0 is the default angle of red for Pogle color correction systems, which corresponds
to the orientation of the controls of the older Mk III telecine.
[Figure: hue wheel angle at 122; hue wheel angle at 0]
• Encoder Sensitivity: This parameter controls the speed with which the rotation of knobs
on a control surface changes the value of their associated Color controls.
• Jog/Shuttle Sensitivity: This parameter controls the speed at which the playhead moves
relative to the amount of rotation that's applied to a control surface's Jog/Shuttle wheel.
• Joyball Sensitivity: This parameter controls how quickly color balance controls are
adjusted when using a control surface's joyballs to adjust the Shadow, Midtone, and
Highlight color controls in the Primary In, Secondary, and Primary Out rooms. The
default setting is 1, which is extremely slow. Raise this value to increase the rate at
which corrections are made with the same amount of joyball motion.
User Interface Settings
The following settings let you customize the Color interface.
• UI Saturation: This value controls how saturated the Color user interface controls appear.
Many colorists lower the UI saturation to avoid eye fatigue and the potential for biasing
one's color perception during sessions. UI saturation also affects the intensity of colors
displayed by the Scopes window when the Monochrome Scopes option is turned off.
• Frames/Seconds/Minutes/Hours: These buttons let you choose how time is displayed
in the Timeline ruler. They do not affect how time is represented in the other timecode
fields in Color.
• Show Shot Name: Turning this option on displays each shot's name in the Timeline.
• Show Shot Number: Turning this option on displays the shot number for each shot in
the Timeline.
• Show Thumbnail: With this setting turned on, single frame thumbnails appear within
every shot in the Timeline.
• Loop Playback: Turning this option on loops playback from the current In point to the
Out point of the Timeline. How this affects playback depends on how the Playback
Mode is set. For more information, see Switching the Playback Mode.
• Maintain Framerate: This setting determines whether or not frames are dropped in
order to maintain the project's frame rate during playback.
• If Maintain Framerate is turned on (the default): The current frame rate is maintained
no matter what the current processing workload is. If the currently playing grade is
processor-intensive, then frames will be dropped during playback to maintain the
project's frame rate. If not, playback occurs in real time.
• If Maintain Framerate is turned off: Every frame is always played back. If the currently
playing grade is processor-intensive, playback will slow to avoid dropping frames. If
not, playback may actually occur faster than real time.
• Synchronize Refresh (slower): Turning this option on eliminates video refresh artifacts
in the monitored image. (These may appear as "tearing" of the video image.) It affects
playback performance, but only slightly, resulting in a playback penalty of approximately
1 fps.
Grade and Scope Color Controls
The following parameters use miniature color controls that operate identically to those
described in Color Casts Explained.
• Grade Complete color control: The color that's displayed in the Timeline render bar for
rendered shots. The default color is green.
• Grade Queued color control: The color that's displayed in the Timeline render bar for
shots that have been added to the Render Queue, but that are not yet rendered. The
default color is yellow.
• Grade Aborted color control: The color that's displayed in the Timeline render bar for
shots that have had their rendering stopped. The default color is red.
• Monochrome Scopes: Turning this option on draws the video scope graticules with a
single color (specified by the Scope Color option, below). Many colorists prefer this
display to avoid eye fatigue. On the other hand, it also eliminates the full-color display
in the Vectorscope. Another option for those wishing to have color feedback in the
scopes is to lower the UI Saturation setting to a less vivid intensity.
• Scope Color: This color control lets you adjust the color that's used to draw the video
scope graticules when Monochrome Scopes is turned on.
Limit Shadow Adjustments and Show Control Surface Controls
These controls are used to limit shadow adjustments and display the Control Surface
Startup dialog.
• Limit Shadow Adjustments: When this option is turned on, a falloff is applied to the
Shadows color and contrast adjustments such that 0 percent values (pure black) receive
100 percent of the correction, while 100 percent values (pure white) receive 0 percent
of the correction. When this option is turned off, adjustments made to the Shadows
color and contrast controls are applied uniformly to the entire image.
• Show Control Surface Dialog: Turning this option on immediately opens the Control
Surface Startup dialog, from which you can choose a Color-compatible control surface
with which to work. While this option is turned on, the Control Surface Startup dialog
appears every time you open Color. If you don't have a control surface, turn this option
off.
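The falloff that Limit Shadow Adjustments applies can be modeled as a weight that scales the correction by how dark the pixel is. This sketch assumes a linear falloff; Color's actual falloff curve is not documented:

```python
def shadow_correction(pixel, correction, limit_shadows=True):
    """Apply a shadow correction to a pixel value in the 0.0-1.0 range.

    With limiting on, pure black (0.0) receives 100% of the correction
    and pure white (1.0) receives 0%, with a falloff in between (linear
    here for illustration). With limiting off, the correction is
    applied uniformly to the entire image.
    """
    weight = (1.0 - pixel) if limit_shadows else 1.0
    return pixel + correction * weight
```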
Using Proxies
If you're working with a project that uses Cineon or DPX image sequences, you can use
the Color proxy mechanism to work faster with high-resolution media. The proxy
mechanism in Color is not available to projects using QuickTime media, unless you’re
using native RED QuickTime media. RED QuickTime media is capable of generating proxy
data on the fly depending on how the Render Proxy, Grading Proxy, and Playback Proxy
pop-up menus are set.
• Enable Proxy Support: Turning this button on enables the use of lower-resolution
substitute media, called proxies, in place of the source media in your project. Using
proxies increases playback, grading, and rendering performance, although your shots
are displayed at lower quality. If you’re grading DPX or Cineon media, proxies may only
be used once they've been generated; proxies are generated using the same format
as the source media. (For more information on how to generate proxies, see Generating
and Deleting Proxies.)
If you’re grading native RED QuickTime media, you can turn on proxy resolutions at
any time, without the need to generate proxy media; they’re generated on the fly.
Note: In all cases, while resolution may be reduced, proxies are completely
color-accurate.
• Render Proxy pop-up menu: Lets you choose a proxy resolution with which to render
your output media. This can be useful if you want to quickly render a set of media to
test the return trip of a roundtrip workflow. This menu defaults to Half Resolution and,
in most cases, should be left at that setting.
• Grading Proxy pop-up menu: Lets you choose a proxy resolution to use while adjusting
the controls in any of the rooms. This increases the interactivity of the user interface
and the speed with which the image being worked on updates while you adjust different
grading controls. When you finish making an adjustment, the image goes back to its
full resolution.
• Playback Proxy pop-up menu: Lets you choose a proxy resolution to use during playback,
increasing your playback frame rate by lowering the quality of the image. When playback
stops, the image goes back to its full resolution.
Generating and Deleting Proxies
In order to use proxies while working on projects using DPX and Cineon media, you need
to first generate a set of half- and quarter-resolution proxy media for your project.
To generate a set of proxy media for your project
µ Choose File > Proxies > Generate Proxies.
To delete all the proxies that have been generated for a project
µ Choose File > Proxies > Delete Proxies.
Important: The proxy mechanism is not available for projects using QuickTime files, unless
they’re native RED QuickTime media. Native RED QuickTime media uses the proxy
mechanism, but proxies are generated on the fly, so you don’t have to use the Generate
Proxies command.
Playback, Processing, and Output Settings
The following settings affect playback quality, render quality, and performance.
• Video Output pop-up menu: The options in this pop-up menu correspond to the video
output options available to the broadcast video interface that's installed on your
computer. Choose Disabled to turn off video output altogether.
Note: Currently, Digital Cinema Desktop previews and Apple FireWire output are not
available for monitoring the output from Color.
• Force RGB: This option is disabled for standard definition projects. This setting is meant
to be used when you're working with high definition Y′CBCR source media that you're
monitoring on an external broadcast monitor via a supported broadcast video interface.
It determines how the RGB image data that's calculated internally by Color is converted
to Y′CBCR image data for display:
• If Force RGB is turned off: This conversion is done by Color in software. This consumes
processor resources and may noticeably reduce your real-time performance as a
result.
• If Force RGB is turned on: Color sends RGB image data straight to the broadcast video
interface that's installed on your computer and relies on the interface to do the
conversion using dedicated hardware. This lightens the processing load on your
computer and is recommended to optimize your real-time performance. When
monitoring legalized video between 0 and 100 IRE, there should be a minimal
difference between the image that's displayed with Force RGB turned on or off. When
Force RGB is turned on, super-white and out-of-gamut chroma values will not be
displayed by your broadcast display, nor will they appear on external video scopes
analyzing your broadcast video interface's output. This limitation only affects
monitoring; the internal image processing performed by Color retains this data. As
a result, you will always see super-white image data on the Color software scopes
when it's present, and uncorrected super-white and out-of-gamut chroma levels are
always preserved when you export your final media. If Broadcast Safe is turned on
in the Project Settings, you may not notice any difference in the display of these
"illegal" levels, since they're being limited by Color.
• Disable Vid-Out During Playback: Turning this option on disables video output via your
broadcast interface during playback. While paused, the frame at the position of the
playhead is still output to video. This is useful if your project is so effects-intensive that
video playback is too slow to be useful. With this option turned on, you can make
adjustments and monitor the image while paused and then get a look at the program
in motion via the preview display, which usually plays faster.
• Update UI During Playback: Turning this option on allows selected windows of the Color
interface to update dynamically as the project plays back. This updates the controls
and scopes during playback from grade to grade, but potentially slows playback
performance, so it's off by default. There are two options:
• Update Primary Display: Updates the main interface controls in the Primary In,
Secondaries, Color FX, Primary Out, and Geometry rooms. Turning this option on lets
you see how the controls change from grade to grade and how they animate if you
have keyframed grades.
• Update Secondary Display: Updates the Scopes window. This is the way to get updated
video scopes during playback. With this option turned off, the video preview still
plays, but the video scopes disappear.
• Radial HSL Interpolation: This setting affects how keyframed color adjustments are
interpolated from one hue to another.
• Turning this setting on causes keyframed changes in hue to be animated radially,
with the hue cycling through all hues on the color wheel in between the current and
target hues. This results in visible color cycling if you're animating a change from
one hue to any other that's not directly adjacent on the color wheel. This is the
method that Final Cut Pro uses when animating color adjustments in the Color
Corrector and Color Corrector 3-way filters.
• With this setting turned off (the default state), keyframed changes in hue are animated
linearly, directly from one point on the color wheel to another. This results in the
most direct animated adjustments and minimizes unwanted color cycling. This is the
method that the DaVinci and Pogle systems use to animate color adjustments.
[Figure: animated color control adjustment with Radial HSL Interpolation turned on versus turned off]
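The two behaviors can be modeled numerically. The sketch below is an illustration of the concept only (not Color's actual implementation), treating a color control's position as a hue angle in degrees plus a saturation radius on the color wheel:

```python
import math

def radial_interp(h0, s0, h1, s1, t):
    # "Radial" style: hue angle and saturation interpolate separately,
    # so the hue sweeps through every intermediate hue on the wheel
    # (the visible color cycling described above).
    h = (h0 + ((h1 - h0) % 360.0) * t) % 360.0
    s = s0 + (s1 - s0) * t
    return h, s

def linear_interp(h0, s0, h1, s1, t):
    # "Linear" style: interpolate straight across the wheel between the
    # two points; the path can pass near the desaturated center instead
    # of cycling through unrelated hues.
    x0, y0 = s0 * math.cos(math.radians(h0)), s0 * math.sin(math.radians(h0))
    x1, y1 = s1 * math.cos(math.radians(h1)), s1 * math.sin(math.radians(h1))
    x, y = x0 + (x1 - x0) * t, y0 + (y1 - y0) * t
    return math.degrees(math.atan2(y, x)) % 360.0, math.hypot(x, y)
```

Halfway between fully saturated red (0 degrees) and cyan (180 degrees), the radial path passes through saturated green, while the linear path passes through the neutral center of the wheel.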
• Internal Pixel Format pop-up menu: The options available in this pop-up menu depend
on the graphics card you have installed in your computer. The option you choose from
this pop-up menu determines the bit depth Color uses for the internal processing of
color, both during real-time playback and when rendering the final output. Bit depth
is expressed as the number of bits per color channel and describes the total number
of values used to display the range of color by every pixel of an image. Higher bit depths
result in a higher-quality image, but are more processor-intensive to play back and
render.
Tip: Depending on your system's performance, you may find it advantageous to work
at a lower bit depth in order to maximize real-time performance. Then, you can switch
to the desired bit depth prior to rendering your final output to maximize image quality.
However, if you graded your program with the Internal Pixel Format pop-up menu set
to 8- through 16-bit, changing it to Floating Point may alter how certain Color FX
operations work. If you intend to work at a lower bit depth but render at Floating Point,
it’s a good idea to double-check all shots with Color FX corrections applied to them
prior to rendering to make sure that they look the way you intended.
• 8-bit: The lowest bit depth at which Color can operate, and the least
processor-intensive.
• 10-bit: The minimum recommended bit depth for projects incorporating secondary
color correction and vignetting, regardless of the source.
• 12-bit: A higher bit depth supported by some video cards.
• 16-bit: An extremely high-quality bit depth. It has been suggested that 16-bit is the
best linear equivalent to 10-bit log when working on images scanned from film.
• Floating Point: The highest level of image-processing quality available in Color, and
recommended if your graphics card doesn’t support 10- through 16-bit image
processing. Refers to the use of floating-point math to store and calculate fractional
data. This means that values higher than 1 can be used to store data that would
otherwise be rounded down using the integer-based 8-bit, 10-bit, 12-bit, and 16-bit
depths. Floating Point is a processor-intensive bit depth to work with, so plan for
longer rendering times. Floating Point is not available on systems with 128 MB or
less of VRAM.
How Does Working in Floating Point Affect Image Processing?
Aside from providing a qualitative edge when processing high-resolution, high–bit
depth images, setting the Internal Pixel Format to Floating Point changes how image
data is handed off from one room to the next, specifically in the Color FX and Primary
Out rooms.
At 8- through 16-bit, out-of-range image data (luma or chroma going below the Floor
IRE or above the Ceiling IRE of the Broadcast Safe settings, or below 0 and above 110 if
Broadcast Safe is turned off) is clipped as your image goes from one room to another.
Out-of-range image data is also clipped as the image is handed off from one node to
another in the Color FX room.
If you set the Internal Pixel Format to Floating Point, out-of-range image data is still
clipped as it moves from the Primary In room to the Secondaries room, and from the
Secondaries room to the Color FX room. However, starting with the Color FX room,
out-of-range image values are preserved as image data is handed off from node to
node. Furthermore, out-of-range image data is preserved when the image goes from
the Color FX room to the Primary Out room.
Here’s an example of how this works. At 16-bit, if you raise the highlights of an image
beyond 110 percent in the Color FX room, then lower the highlights in the Primary Out
room, your highlights stay clipped.
[Figure: original image; signal clipped in Color FX room; still clipped in Primary Out room when the signal is compressed]
At Floating Point, if you raise the highlights beyond 110 percent, and then lower them
again in the Primary Out room, all of the image data is retrievable.
[Figure: original image; signal clipped in Color FX room; highlights and shadows preserved in Primary Out room when the signal is compressed]
Because of this, you may occasionally notice differences between images that were
initially corrected at less than 16-bit, and the same images changed to render at Floating
Point. This is particularly true in the Color FX room.
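The principle can be reduced to a toy numeric example. This sketch uses normalized values where 1.0 represents the clipping ceiling; it illustrates the hand-off behavior described above, not Color's internal math:

```python
def boost_then_reduce_clipped(v, gain):
    # 8- through 16-bit behavior: the value is clamped at the ceiling
    # when handed off, so a later reduction cannot recover the detail.
    boosted = min(v * gain, 1.0)
    return boosted / gain

def boost_then_reduce_float(v, gain):
    # Floating Point behavior: the out-of-range value survives the
    # hand-off, so reducing it later restores the original data.
    boosted = v * gain
    return boosted / gain
```

Pushing a 0.9 highlight to 1.8 and then halving it returns 0.5 in the clipped model (everything above the ceiling was flattened), but the original 0.9 in the floating-point model.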
For more information about bit depth, see How Do Bit Depth and Channel Data
Correspond?
How Do Bit Depth and Channel Data Correspond?
The actual range of values used by each channel for every pixel at a given bit depth is
calculated by taking 2 to the nth power, where n is the bit depth itself. For example, the
range of values used for 8-bit color is 2 to the 8th power, or 256 values per channel. The
range of values for 16-bit color is 2 to the 16th power, or 65,536 values per channel.
However, this isn't the whole story. How much of the available numeric range is actually
used depends on how the image data is encoded.
• Full Range: Image data using the RGB color space encodes each color channel using
the full numeric range that's available. This means that 8-bit video color channels use
a value in the range of 0–255 and 10-bit channels use a range of 0–1023.
• Studio Range: 8- and 10-bit video image data that's stored using the Y′CBCR color space
uses a narrower range of values for each channel. This means that a subset of the actual
range of available values is used, in order to leave the headroom for super-black and
super-white that the video standard requires.
For example, the luma of 8-bit Y′CBCR uses the range of 16–235, leaving 1–15 and
236–254 reserved for headroom in the signal. The luma of 10-bit Y′CBCR uses the range
of 64–940, with 4–63 and 941–1019 reserved for headroom.
Furthermore, the lowest and highest values are reserved for non-image data, and the
chroma components (CB and CR) use a wider range of values (16–240 for 8-bit video,
and 64–960 for 10-bit video).
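These ranges can be summarized in a short sketch; the counts follow directly from the 2-to-the-nth-power rule and the standard studio ranges quoted above:

```python
def channel_values(bits):
    # Total code values available per channel at a given bit depth: 2^n.
    return 2 ** bits

# Full-range RGB uses every available code value.
FULL_RANGE = {8: (0, 255), 10: (0, 1023)}

# Studio-range Y'CbCr reserves footroom and headroom around the legal range.
STUDIO_LUMA = {8: (16, 235), 10: (64, 940)}
STUDIO_CHROMA = {8: (16, 240), 10: (64, 960)}
```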
Auto Save Settings
Two settings let you turn on or off automatic saving in Color.
• Auto-Save Projects: Turning this option on enables automatic saving.
• Auto-Save Time (Minutes): Specifies how many minutes pass before the project is saved
again. This is set to 5 minutes by default.
Auto Saving saves only the current project. It does not create an archived copy of the
project. For more information about creating and recalling archives, see Saving and
Opening Archives.
Monitoring Your Project

The equipment and methods with which you monitor your work are critical to producing
an accurate result.
The importance of proper monitoring for color correction cannot be overemphasized.
This chapter covers the monitoring options available in Color, including the configuration
of the Scopes window, options for broadcast video output, the generation and use of
LUTs for calibration and simulation, and how the Still Store is output to video for
monitoring and evaluation.
This chapter covers the following:
• The Scopes Window and Preview Display
• Monitoring Broadcast Video Output
• Using Display LUTs
• Monitoring the Still Store
The Scopes Window and Preview Display
The simplest way to monitor your work in Color is with the Scopes window. This is the
second of the two windows that comprise the Color interface. You can configure Color
to use one or two displays.
With two displays, the Scopes window occupies the second display on its own. With one
display, the Scopes window shares the screen with the Color window.
To switch between the Color and Scopes windows
µ Choose Window > Composer (or press Command-Shift-1) to switch to the Color user
interface.
µ Choose Window > Viewer (or press Command-Shift-2) to switch to the Color Scopes
window.
To switch between single and dual display modes
Do one of the following:
µ Choose Window > Single Display Mode or Dual Display Mode.
µ Press Command-Shift-0 to switch between both modes.
The Scopes window provides a preview display of the image that you’re working on, and
it can also show either two (in single-display mode) or three (in dual-display mode) video
scopes to aid you in image evaluation. For more information, see Analyzing Signals Using
the Video Scopes.
The preview display shows you either the frame at the current position of the playhead
in the Timeline, as it appears with all the corrections you’ve applied in all rooms (unless
you choose Grade > Disable Grade), or the currently enabled Still Store image. Whichever
image is shown in the preview display is mirrored on the broadcast monitor that’s
connected to the video output of your computer. The preview display is also affected by
LUTs that you import into your Color project.
Note: The only other time the current frame is not displayed is when one of the alternate
secondary display methods is selected in the Previews tab of the Secondaries room. For
more information, see Controls in the Previews Tab.
The preview display in the Scopes window can be switched between full- and
partial-screen modes.
To switch the preview image between full- and quarter-screen
Do one of the following:
µ Control-click or right-click the preview image in the Scopes window, then choose Full
Screen from the shortcut menu.
µ Double-click the image preview in the Scopes window.
All video scopes are hidden while the preview display is in full-screen mode.
Using the Preview Display as Your Evaluation Monitor
Whether or not the preview display in the Scopes window is appropriate to use as your
evaluation monitor depends on a number of factors, the most important of which is
the amount of confidence you have in the quality of your preview display.
Many users opt to use the preview display as an evaluation monitor, especially when
grading scanned film in a 2K workflow, but you need to make sure that you’re using a
monitor capable of displaying the range of contrast and color necessary for maintaining
accuracy to your facility’s standards. Also, success depends on proper monitor calibration,
combined with color profiling and simulation of the eventual film output using LUT
management. (See What Is a LUT? for more information.)
Monitoring Broadcast Video Output
For the most accurate monitoring of broadcast programs, Color outputs standard and
high definition video using supported third-party video interfaces. The drivers installed
for the interface you have determine what resolutions, bit depths, and frame rates are
available for outputting to an external monitor.
To turn on external video monitoring
µ Choose an option from the Video Output pop-up menu, in the User Prefs tab of the Setup
room.
To turn off external video monitoring
µ Choose Disabled from the Video Output pop-up menu, in the User Prefs tab of the Setup
room.
For more information about monitoring, see:
• Mixing and Matching Program and Viewing Resolutions
• Bit Depth and Monitoring
• Choose Your Monitor Carefully
• Set Up Your Viewing Environment Carefully
• Calibrate Your Monitor Regularly
• Adjust the Color Interface for Your Monitoring Environment
Mixing and Matching Program and Viewing Resolutions
Ideally, you should monitor your program at its native resolution (in other words, the
resolution of its source media). However, Color will do its best to output the video at
whatever resolution is set in the Video Output pop-up menu of the User Prefs tab. If the
Video Output pop-up menu is set to a different resolution than the currently selected
Resolution Preset, then Color will automatically scale the image up or down as necessary
to fit the image to the display size.
Bit Depth and Monitoring
The working bit depth can have a significant impact on the quality of your monitored
image. The monitored bit depth depends on three factors:
• The bit depth of the source media
• The bit depth selected in the Video Output pop-up menu
• The bit depth selected in the Internal Pixel Format pop-up menu
Aside from the initial choice of shooting or transfer format, the bit depth of the source
media on disk is predetermined (usually 8-bit, 10-bit, or 10-bit log). Since
low bit depths can be prone to banding and other artifacts during the color correction
process (especially when gradients are involved), it’s usually advantageous to process
the video at a higher bit depth than that of the original source media (secondary
corrections and vignettes can especially benefit).
Color will process and output your video at whatever bit depth you select. However, most
broadcast video interfaces max out at 10-bit resolution. For maximum quality while
monitoring, you should set the Internal Pixel Format to the highest bit depth you want
to work at and make sure the Video Output pop-up menu is set to a 10-bit option.
Note: Video noise and film grain often minimize the types of artifacts caused by color
correction operations at low bit depths, so the advantages of working at higher bit depths
are not always obvious to the naked eye.
Monitoring at high bit depths is processor-intensive, however, and can reduce your
real-time performance. For this reason, you also have the option of lowering the bit depth
while you work and then raising it when you’re ready to render the project’s final output.
For more information about the monitoring options available in the User Prefs tab, see
Playback, Processing, and Output Settings.
Choose Your Monitor Carefully
It’s important to choose a monitor that’s appropriate to the critical evaluation of the type
of image you’ll be grading. At the high end of the display spectrum, you can choose from
CRT-based displays, a new generation of flat-panel LCD-based displays, and high-end
video projectors utilizing a variety of technologies.
You should choose carefully based on your budget and needs, but important characteristics
for critical color evaluation include:
• Compatibility with the video formats you’ll be monitoring
• Compatibility with the video signal you’ll be monitoring, such as Y′PBPR, SDI, HD-SDI,
or HDMI
• Suitable black levels (in other words, solid black doesn’t look like gray)
• A wide contrast range
• Appropriate brightness
• User-selectable color temperature
• Adherence to the Rec. 601 (SD) or 709 (HD) color space standards as appropriate
• Proper gamma (also defined by Rec. 709)
• Controls suitable for professional calibration and adjustment
Note: For all these reasons, consumer televisions and displays are not typically appropriate
for professional work, although they can be valuable for previewing how your program
might look in an average living room.
Set Up Your Viewing Environment Carefully
The environment in which you view your monitor also has a significant impact on your
ability to properly evaluate the image.
• There should be no direct light spilling on the front of your monitor.
• Ambient room lighting should be subdued and indirect, and there should be no direct
light sources within your field of view.
• Ambient room lighting should match the color temperature of your monitor (6500K
in North and South America and Europe, and 9300K in Asia).
• There should be indirect lighting behind the viewing monitor that’s between 10–25%
of the brightness of the installed monitor set to display pure white.
• The ideal viewing distance for a given monitor is approximately five times the vertical
height of its screen.
• The color of the room within your working field of vision should be a neutral gray.
These precautions will help to prevent eye fatigue and inadvertent color biasing while
you work and will also maximize the image quality you’ll perceive on your display.
Calibrate Your Monitor Regularly
Make sure you calibrate your monitor regularly. For maximum precision, some monitors
have integrated probes for automatic calibration. Otherwise, you can use third-party
probes and calibration software to make the same measurements. In a purely broadcast
setting, you can also rely on the standard color bars procedure you are used to.
For more information on adjusting a monitor using color bars, see Calibrating Your
Monitor.
Adjust the Color Interface for Your Monitoring Environment
The Color interface is deliberately darkened in order to reduce the amount of light spill
on your desktop. If you want to subdue the interface even further, the UI Saturation
setting in the User Prefs tab of the Setup room lets you lower the saturation of most of
the controls in the Primary In, Secondaries, and Primary Out rooms, as well as the color
displayed by the video scopes.
Using Display LUTs
Color supports the use of 3D look up tables (LUTs) for calibrating your display to match
an appropriate broadcast standard or to simulate the characteristics of a target output
device (for example, how the image you’re correcting will look when printed to film).
Color is represented on CRTs, LCD flat panels, video projectors, and film projectors using
very different technologies. If you show an identical test image on two different types of
displays—for example, a broadcast display and a video projector—you can guarantee
there will be a variation in color between the two. This variation may not be noticeable
to the average viewer, but as a colorist, you need a predictable viewing environment that
adheres to the standards required for your format, and to make sure that you aren’t driven
crazy by changes being requested as a result of someone’s viewing the program on a
display showing incorrect color.
There is also variation within a single category of device:
• CRT monitors from different manufacturers use different phosphor coatings.
• Digital projectors are available using many types of imaging systems.
• Projected film is output using a variety of printing methods and film stocks.
All these variables inevitably result in significant color variation for any image going from
one viewing environment to another. One solution to this is calibration using LUTs.
What Is a LUT?
Simply put, look up tables (LUTs) are precalculated sets of data that are used to adjust
the color of an image being displayed with the gamut and chromaticity of device A to
match how that image would look using the gamut and chromaticity of device B.
The gamut of a particular device represents the total range of colors that can be displayed
on that device. Some types of displays are capable of displaying a greater range of colors
than others. Furthermore, different video and film standards specify different gamuts of
color, such that colors that are easily represented by one imaging medium are out of
bounds for another. For example, film is capable of representing far more color values
than the broadcast video standard.
Chromaticity refers to the exact values a display uses to represent each of the three primary
colors. Different displays use different primary values; this can be seen on a chromaticity
diagram that plots the three primaries as points against a two-dimensional graph
representing hue and saturation within the visible spectrum. Since all colors represented
by a particular display are a mix of the three primaries, if the three primary points vary
from display to display, the entire gamut of color will shift.
While the chromaticity diagram shown above is useful for comparing displays on paper,
to truly represent the hue (color), saturation (intensity of color), and lightness (luminance
from black to white) that defines a complete gamut, you need to use a 3D color space.
When extruded into 3D space, the gamut and chromaticity of different devices create
different shapes. For example, the standard RGB color space can be represented with a
simple cube (as seen in the ColorSync Utility application).
Each corner of the cube represents a different mix of the R, G, B tristimulus values that
represent each color. The black corner is (0,0,0), the opposing white corner is (1,1,1), the
blue corner is (0,0,1), the red corner is (1,0,0), and so forth. The RGB color cube is an
idealized abstraction, however. Actual display devices appear with much different shapes,
defined by their individual gamut and chromaticity.
To accurately transform one device’s gamut to match that of another involves literally
projecting its gamut into a 3D representation and then mathematically changing its shape
to match that of the other device or standard. This process is referred to as characterizing
a device and is the standard method used by the color management industry. Once
calculated, the method of transformation is stored as a 3D LUT file.
Once a device has been characterized and the necessary LUT has been calculated, the
hard computational work is done, and the LUT can be used within Color to modify the
output image without any significant impact on real-time performance.
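Conceptually, applying a 3D LUT at display time is just a lattice lookup: the precalculated table samples the transform at regular grid points, and the playback system interpolates between them. The sketch below uses nearest-neighbor lookup for brevity (real systems interpolate, typically trilinearly); the lattice size and data layout are illustrative assumptions, not the .mga format:

```python
def apply_3d_lut(rgb, lut, size):
    # Nearest-neighbor lookup: snap each 0-1 channel value to the
    # closest lattice coordinate and return the precalculated output.
    i, j, k = (min(int(round(c * (size - 1))), size - 1) for c in rgb)
    return lut[(i, j, k)]

# A hypothetical identity LUT on a 17x17x17 lattice (a common size):
SIZE = 17
IDENTITY = {
    (i, j, k): (i / (SIZE - 1), j / (SIZE - 1), k / (SIZE - 1))
    for i in range(SIZE) for j in range(SIZE) for k in range(SIZE)
}
```

An identity table returns each color unchanged; a calibration LUT would instead store the characterized device-to-device transform at each lattice point.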
When Do You Need a LUT?
The following examples illustrate situations in which you should consider using LUTs:
• If you’re matching multiple displays in a facility: LUTs can be useful for calibrating multiple
displays to match a common visual standard, ensuring that a program doesn’t look
different when you move it to another room.
• If you’re displaying SD or HD video on a nonbroadcast monitor: You can use a LUT to
emulate the Rec. 601 (SD) or 709 (HD) color space and gamma setting that’s appropriate
to the standard of video you’re viewing.
• If you’re displaying video or film images using a video projector: You can use a LUT to
calibrate your device to match, as closely as possible, the gamut of the broadcast or
film standard you’re working to.
• If you’re grading images destined to be printed to film: You can use a LUT to profile the
characteristics of the film printing device and film stock with which you’ll be outputting
the final prints, in order to approximate the look of the final projected image while you
work.
Important: LUTs are no substitute for a high-quality display. In particular, they’ll do nothing
to improve muddy blacks, an inherently low contrast range, or a too-narrow gamut.
When Don’t You Need a LUT?
If you’re color correcting video and monitoring using a properly calibrated broadcast
display that’s compatible with the standard of video that you’re displaying, it’s not
generally necessary to use a LUT.
Generating LUTs
There are several ways you can generate a LUT.
Create One Yourself Using Third-Party Software
There are third-party applications that work in conjunction with hardware monitor probes
to analyze the characteristics of individual displays and then generate a LUT in order to
provide the most accurate color fidelity possible. Because monitor settings and
characteristics drift over time, it’s standard practice to recalibrate displays every one
to two weeks.
If you’re creating a LUT to bring another type of display into line with broadcast standards
(such as a digital projector), you’ll then use additional software to modify the calibration
LUT to match the target display characteristics you require.
Have One Created for You
At the high end of digital intermediate for film workflows, you can work with the lab that
will be doing the film print and the company that makes your monitor calibration software
to create custom LUTs based on profiles of the specific film recorders and film stocks that
you’re using for your project.
This process typically involves printing a test image to film at the lab and then analyzing
the resulting image to generate a target LUT that, together with your display’s calibration
LUT (derived using a monitor probe and software on your system), is used to generate a
third LUT, which is the one that’s used by Color for monitoring your program as you work.
Creating LUTs in Color
In a pinch, you can match two monitors by eye using the controls of the Primary In room
and generating a LUT to emulate your match directly out of Color.
You can also export a grade as a “look” LUT to see how a particular correction will affect
a digitally recorded image while it’s being shot. To do this, the crew must be using a field
monitor capable of loading LUTs in the .mga format.
To create your own LUT
1 Arrange your Color preview display and the target monitor so that both can be seen at
the same time.
2 Load a good evaluation image (such as a Macbeth chart) into the Timeline.
3 Display the same image on the target display using a second reliable video source.
4 Open the Primary In room and adjust the appropriate controls to make the two images
match.
5 Choose File > Export > Display LUT.
6 When the Save LUT As dialog appears, enter a name for that LUT into the File field, choose
a location to save the file, and click Save.
By default, LUTs are saved to the /Users/username/Library/Application Support/Color/LUTs
directory.
Important: If your project is already using a LUT when you export a new one, the currently
loaded LUT is concatenated with your adjustments, and the combination is exported as
the new LUT.
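Concatenation here is simply function composition: the exported file bakes both transforms into a single table. A minimal per-channel sketch (the transforms are hypothetical stand-ins for full 3D LUTs, and the order shown is illustrative):

```python
def concatenate(first, second):
    # Compose two LUT-like transforms into one; the combined transform
    # applies `first`, then `second`.
    return lambda v: second(first(v))

# Hypothetical per-channel transforms standing in for full 3D LUTs:
lift_gamma = lambda v: v ** 0.5          # brighten midtones
legalize = lambda v: min(v * 1.2, 1.0)   # gain with a hard ceiling

combined = concatenate(lift_gamma, legalize)
```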
Using LUTs
All LUTs used and generated by Color are 3D LUTs. Color uses the .mga LUT format
(originally developed by Pandora), which is compatible with software by Rising Sun
Research, Kodak, and others. If necessary, there are also applications available to convert
LUTs from one format into another.
LUTs don’t impact processing performance at all.
To use a LUT
1 Choose File > Import > Display LUT.
2 Select a LUT file using the Load LUT dialog, then click Load.
Note: By default, LUTs are saved to the /Users/username/Library/Application
Support/Color/LUTs directory.
The LUT immediately takes effect, modifying the image as it appears on the preview and
broadcast displays. LUTs that you load are saved in a project’s settings until you specifically
clear the LUT from that project.
To stop using a LUT
µ Choose File > Clear Display LUT.
To share a LUT with other Color users, you must provide them with a copy of the LUT file.
For ease of use, it’s best to place all LUT files into the /Users/username/Library/Application
Support/Color/LUTs directory.
Monitoring the Still Store
The Still Store lets you save and recall images from different parts of your project that
you can use to compare to shots you’re working on. The Still Store is basically an image
buffer that lets you go back and forth between the currently loaded Still Store image and
the current image at the position of the playhead. You have options for toggling between
the full image and a customizable split-screen view that lets you see both images at once.
When you enable the Still Store, the full-screen or split-screen image is sent to both the
preview and broadcast displays. To go back to viewing the frame at the position of the
playhead by itself, you need to disable the Still Store.
Enabled Still Store images are analyzed by the video scopes, and they are affected by
LUTs. For more information on using the Still Store, see The Still Store.
Timeline Playback, Navigation, and Editing

The Timeline provides you with an interface for navigating through your project, selecting
shots to grade, and limited editing.
The Timeline and the Shots browser (in the Setup room) both provide ways of viewing
the shots in your project. The Shots browser gives you a way to nonlinearly sort and
organize your shots, while the Timeline provides a sequential display of the shots in your
program arranged in time. In this chapter, you’ll learn how to use the Timeline to navigate
and play through the shots in your program, as well as how to perform simple edits.
This chapter covers the following:
• Basic Timeline Elements
• Customizing the Timeline Interface
• Working with Tracks
• Selecting the Current Shot
• Timeline Playback
• Zooming In and Out of the Timeline
• Timeline Navigation
• Selecting Shots in the Timeline
• Working with Grades in the Timeline
• The Settings 1 Tab
• The Settings 2 Tab
• Editing Controls and Procedures
Chapter 7: Timeline Playback, Navigation, and Editing

Basic Timeline Elements
The Timeline is divided into a number of tracks that contain the shots, grades, and
keyframes used by your program.
• Render bar: The render bars above the Timeline ruler show whether a shot is
unrendered (red) or has been rendered (green).
• Timeline ruler: Shows a time scale for the Timeline. Dragging within the Timeline ruler
lets you move the playhead, scrubbing through the program.
• Playhead: Shows the position of the currently displayed frame in the Timeline. The
position of the playhead also determines the current shot that’s being worked on.
• Video tracks and shots: Each shot in the program is represented within one of the video
tracks directly underneath the Timeline ruler. Color only allows you to create up to five
video tracks when you’re assembling a project from scratch, but will accommodate
however many superimposed video tracks there are in imported projects.
Note: Color does not currently support compositing operations. During playback,
superimposed clips take visual precedence over clips in lower tracks.
• Track resize handles: The tracks can be made taller or shorter by dragging their resize
handles up or down.
• Lock icon: The lock icon shows whether or not a track has been locked.
• Grades tracks: Color allows you to switch among up to four primary grades applied to
each shot. This option lets you quickly preview different looks applied to the same shot,
without losing your previous work. Each grade is labeled Grade 1–4.
Each of the four grades may include one or more Primary, Secondary, Color FX, and
Primary Out corrections. By default, each grade appears with a single primary grade
bar, but additional correction bars appear at the bottom if you’ve made adjustments
to any of the other rooms for that grade. Each correction bar has a different color.
• P(rimary) bar: Shows whether a primary correction has been applied.
• S(econdary) bar: Shows whether one or more secondary corrections have been
applied.
• CFX (color FX) bar: Shows whether a Color FX correction has been applied.
• PO (primary out) bar: Shows whether a Primary Out correction has been applied.
• Tracker area: If you add a motion tracker to a shot and process it, the tracker’s In and
Out points appear in this area, with a green bar showing how much of the currently
selected tracker has been processed. If no tracker is selected in the Tracking tab of the
Geometry room, nothing appears in this area. For more information, see The Tracking
Tab.
• Keyframe graph: This track contains both the keyframes and the curves that interpolate
the change from one keyframe’s value to another. For more information about
keyframing corrections and effects, see Keyframing.
Customizing the Timeline Interface
There are a number of ways you can customize the visual interface of the Timeline. See
the following sections for specifics:
• Customizing Unit and Information Display
• Resizing Tracks in the Timeline
Customizing Unit and Information Display
The following options in the User Prefs tab of the Setup room let you change how shots
are shown in the Timeline.
To change the units used in the Timeline ruler
Do one of the following:
µ Click the Setup room tab, then click the User Prefs tab and click the Frames, Seconds,
Minutes, or Hours button corresponding to the units you want to use.
µ Press one of the following keys:
• Press F to change the display to frames.
• Press S to change the display to seconds.
• Press M to change the display to minutes.
• Press H to change the display to hours.
To customize the way shots are displayed in the Timeline
µ Click the Setup room tab, then click the User Prefs tab. Turn on or off individual shot
appearance settings to suit your needs.
Three settings in the User Prefs tab of the Setup room let you customize the way shots
appear in the Timeline.
• Show Shot Name: Turning this on displays each shot’s name in the Timeline.
• Show Shot Number: Turning this on displays each shot’s number in the Timeline.
• Show Shot Thumbnail: With this setting turned on, single frame thumbnails appear
within every shot in the Timeline.
Resizing Tracks in the Timeline
You can also resize the tracks in the Timeline, making them taller or shorter, as you prefer.
Video tracks, the grades track, and the keyframe graph are all resized individually.
To resize all video tracks, the grades track, or the keyframe graph
µ Drag the center handle of the gray bar at the bottom of any track in the Timeline until
all tracks are the desired height.
To resize individual tracks
µ Hold down the Shift key, then drag the center handle of the gray bar at the bottom of
the track you want to resize until it’s the desired height.
Note: The next time you resize all video tracks together, individually resized tracks snap
to match the newly adjusted track size.
Working with Tracks
This section describes different ways you can change the state of tracks in the Timeline
as you work.
Note: The tracks of imported XML projects are automatically locked. For the best roundtrip
results, these tracks should not be unlocked.
To lock or unlock a track
µ Control-click or right-click anywhere within a track, then choose one of the following
from the shortcut menu:
• Lock Track: Locks all the shots so that they can’t be moved or edited.
• Unlock Track: Allows shots to be moved and edited.
Note: You can also lock the grades track in the Timeline using the same methods.
To hide or show a track
µ Control-click or right-click anywhere within a track, then choose one of the following:
• Hide Track: Disables a track such that superimposed shots are neither visible nor
selectable when the playhead passes over them.
• Show Track: Makes a track visible again. Superimposed shots take precedence over
shots on lower tracks and are selected by default whenever that track is visible.
Tip: Prior to exporting a project from Final Cut Pro, you can export a self-contained
QuickTime movie of the entire program and superimpose it over the other clips in your
edited sequence. Then, when you export the project to Color, you can turn this “reference”
version of the program on and off using track visibility whenever you want to have a look
at effects or color corrections that were created during the offline edit.
To add a track
µ Control-click or right-click anywhere within a track, then choose New Track from the
shortcut menu.
To remove a track
µ Control-click or right-click anywhere within a track, then choose Remove Track from the
shortcut menu.
Note: You cannot remove the bottom track.
Selecting the Current Shot
Whichever shot you move the playhead to becomes the current shot. The current shot is
the one that’s adjusted whenever you manipulate any of the controls in the Primary In,
Secondary, Color FX, Primary Out, or Geometry room. There can only be one current shot
at a time. It’s the only one that’s highlighted in light gray.
As you move the playhead through the Timeline, the controls and parameters of all rooms
automatically update to match the grade of the current shot at the position of the
playhead.
If there is more than one shot stacked in multiple video tracks at any point in the Timeline,
the topmost shot becomes the current shot except in the following two cases:
• Shots on hidden tracks cannot become the current shot. If there’s a superimposed shot
that doesn’t let you expose the settings of a shot underneath, you can hide the
superimposed track.
• Offline shots are invisible, and any shots appearing underneath in the Timeline
automatically have their settings exposed in the Color interface.
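The precedence rules above can be sketched as a small function. This is a minimal illustration only; the `Shot` and `Track` shapes and all names are hypothetical, not Color’s internals, and it assumes tracks are listed topmost first.

```python
# Illustrative sketch of the current-shot rule: the topmost shot under the
# playhead wins, skipping hidden tracks and offline shots. All names here
# are hypothetical, not Color's actual data model.
from dataclasses import dataclass, field

@dataclass
class Shot:
    name: str
    project_in: int    # first frame occupied in the Timeline
    project_out: int   # last frame occupied in the Timeline
    offline: bool = False

@dataclass
class Track:
    shots: list = field(default_factory=list)
    hidden: bool = False

def current_shot(tracks, playhead):
    """Return the shot that would become current, or None.

    `tracks` is ordered top track first, matching the rule that
    superimposed shots take precedence over shots on lower tracks.
    """
    for track in tracks:
        if track.hidden:
            continue  # shots on hidden tracks can't become the current shot
        for shot in track.shots:
            if shot.offline:
                continue  # offline shots are invisible; lower shots show through
            if shot.project_in <= playhead <= shot.project_out:
                return shot
    return None
```

For example, hiding the superimposed track exposes the shot underneath, exactly as described for the “reference movie” workflow earlier in this chapter.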
To make a shot in the Timeline the current shot
Do one of the following:
µ Double-click any shot in the Timeline.
µ Move the playhead to a new shot.
Note: When you double-click a shot, the Timeline moves so that the shot is centered in
the Timeline, and it becomes the current shot.
Timeline Playback
In general, the purpose of playback in Color is to preview how your various corrections
look when the shot you’re working on is in motion or how the grades that are variously
applied to a group of clips look when they’re played together. For this reason, playback
works somewhat differently than in applications like Final Cut Pro.
In Color, playback is always constrained to the area of the Timeline from the In point to
the Out point. If the playhead is already within this area, then playback begins at the
current position of the playhead, and ends at the Out point. If the playhead happens to
be outside of this area, it automatically jumps to the In point when you next initiate
playback. This makes it faster to loop the playback of a specific shot or scene in the
Timeline, which is a common operation during color correction sessions. For more
information, see:
• Starting and Stopping Playback
• Switching the Playback Mode
• Loop Playback
• Maintain Framerate
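The constrained playback behavior can be summarized as a per-frame playhead update. This is an illustrative sketch of the rules stated above (jump to the In point when outside the range; loop or hold at the Out point), not Color’s implementation.

```python
def next_playhead(playhead, in_point, out_point, loop):
    """Advance the playhead one frame under In/Out-constrained playback.

    Hypothetical sketch: frame numbers are integers, and the In and Out
    points are inclusive.
    """
    # Outside the In/Out range, playback first jumps to the In point.
    if playhead < in_point or playhead > out_point:
        return in_point
    if playhead == out_point:
        # With Loop Playback on, jump back to the In point;
        # otherwise hold at the Out point (playback ends there).
        return in_point if loop else out_point
    return playhead + 1
```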
Starting and Stopping Playback
The following controls let you play and stop your program.
Important: When you start playback, you enter a mode in which you’re unable to work
with the Color controls until you stop playback.
To play the program
Do one of the following:
µ Press the Space bar.
µ Press J to play backward, or L to play forward.
µ Click the Play Forward or Play Backward button.
To stop the program
Do one of the following:
µ Press the Space bar while the program is playing.
µ Press Escape.
µ Press K.
Color and JKL
Color has a partial implementation of the JKL playback controls found in other editing
applications. However, the finer points of JKL, such as slow-motion and frame-by-frame
playback, are not implemented.
Switching the Playback Mode
The playback mode lets you choose whether the In and Out points are automatically
changed to match the duration of the current shot whenever you move the playhead or
whether they remain set to a larger portion of your program.
Shot Mode
Shot mode is the default playback method. Whenever the playhead moves to a new shot,
the Timeline In and Out points are automatically changed to match that shot’s Project In
and Project Out points. As a result, playback is constrained to just that shot. If Loop
Playback is turned on, the playhead will loop repeatedly over the current shot until
playback is stopped.
Note: You can still click other shots in the Timeline to select them, but the In and Out
points don’t change until the playhead is moved to intersect another shot.
Movie Mode
When you first enter movie mode, the Timeline In point is set to the first frame of the
first shot in the Timeline, and the Out point is set to the last frame of the last shot. This
allows you to play through as many shots as you like, previewing whole scenes of your
project. While in movie mode, you can also set your own In and Out points wherever you
want, and they won’t update when you move the playhead to another shot.
Placing Your Own In and Out Points
Regardless of what playback mode you’ve chosen, you can always manually set new In
and Out points wherever you want to. When you set your own In and Out points, the
playback mode changes to movie mode automatically.
To switch the playback mode
Do one of the following:
µ Choose Timeline > Toggle Playback Mode.
µ Press Shift-Control-M.
To customize the playback duration
1 Move the playhead to the desired In point, then press I.
2 Move the playhead to the desired Out point, then press O.
Loop Playback
If Loop Playback is turned on, the playhead jumps back to the In point whenever it reaches
the Out point during playback.
To turn on loop playback
1 Click the Setup room tab, then click the User Prefs tab.
2 Click the Loop Playback button to turn it on.
Maintain Framerate
The Maintain Framerate setting in the User Prefs tab of the Setup room determines
whether or not frames are dropped in order to maintain the project’s frame rate during
playback.
• If Maintain Framerate is turned on (the default): The current frame rate is maintained no
matter what the current processing workload is. If the currently playing grade is
processor-intensive, then frames will be dropped during playback to maintain the
project’s frame rate. If not, playback occurs in real time.
• If Maintain Framerate is turned off: Every frame is always played back. If the currently
playing grade is processor-intensive, playback will slow in order to avoid dropping
frames. If not, playback may actually occur at faster than real time.
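The trade-off between the two settings can be sketched as follows. This is purely illustrative: it assumes a simple fixed-stride frame-dropping pattern, which is an assumption, not a description of how Color actually schedules playback.

```python
def frames_to_play(num_frames, frame_budget_ms, cost_ms, maintain_framerate):
    """Which source frames are shown during playback (illustrative only).

    frame_budget_ms: time available per frame at the project frame rate.
    cost_ms: time needed to process one frame of the current grade.
    """
    if not maintain_framerate or cost_ms <= frame_budget_ms:
        # Every frame plays; with Maintain Framerate off, heavy grades
        # simply slow playback down instead of dropping frames.
        return list(range(num_frames))
    # Maintain Framerate on and the grade is too heavy: skip frames so
    # playback keeps pace with the clock (ceiling division for the stride).
    step = -(-cost_ms // frame_budget_ms)
    return list(range(0, num_frames, step))
```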
Zooming In and Out of the Timeline
The following controls let you zoom in and out of your program in the Timeline, changing
how many shots are visible at once.
How far you can zoom in to the Timeline depends on what units the Timeline ruler is set
to display. The larger the units the Timeline is set to display, the farther you can zoom
out. For example, in order to view more shots in the Timeline simultaneously, you can
zoom out farther when the Timeline ruler is set to Minutes than when it’s set to Frames.
Note: Zooming using the mouse allows you to zoom in or out as far as you want to go;
the Timeline ruler’s units change automatically as you zoom.
To zoom in to and out of the Timeline
1 Move the playhead to a position in the Timeline where you want to center the zooming
operation.
2 With the pointer positioned within the Timeline, do one of the following:
• Choose Timeline > Zoom In, or press Minus Sign (–) to zoom in.
• Choose Timeline > Zoom Out, or press Equal Sign (=) to zoom out.
Note: You can also use the Plus Sign (+) and Minus Sign (–) keys in the numeric keypad
to zoom in to or out of the Timeline.
To zoom in to and out of the Timeline using the mouse
µ Right-click in the Timeline ruler, then drag right to zoom in, or left to zoom out.
To fit every shot of your program into the available width of the Timeline
µ Press Shift-Z.
Timeline Navigation
The following procedures let you navigate around your program in the Timeline, scrolling
through it, and moving the playhead from shot to shot.
To move the playhead from shot to shot
Do one of the following:
µ Drag within the Timeline ruler to scrub the playhead from shot to shot.
µ Press Up Arrow to move to the first frame of the next shot to the left.
µ Press Down Arrow to move to the first frame of the next shot to the right.
µ Click the Next Shot or Previous Shot buttons.
To move from frame to frame
Do one of the following:
µ Press Left Arrow to go to the previous frame.
µ Press Right Arrow to go to the next frame.
To go to the first or last frame of your project
µ Press Home to go to the first frame.
µ Press End to go to the last frame.
To go to the current In or Out point
µ Press Shift-I to go to the In point.
µ Press Shift-O to go to the Out point.
When there are more tracks than can be displayed within the Timeline at once, small
white arrows appear at the top, the bottom, or both, to indicate that there are
hidden tracks in that direction.
When this happens, you can scroll vertically in the Timeline using the middle mouse
button.
To scroll around the Timeline horizontally or vertically without moving the playhead
Do one of the following:
µ Middle-click and drag the contents of the Timeline left, right, up, or down.
µ To scroll more quickly, hold down the Option key while middle-clicking and dragging.
Selecting Shots in the Timeline
There are certain operations, such as copying primary corrections, that you can perform
on selected groups of shots. Color provides standard methods of selecting one or more
shots in the Timeline.
Note: You can also select shots using the Shots browser. For more information, see Using
the Shots Browser.
To select a shot in the Timeline
µ Click any shot.
Selected shots appear with a cyan highlight in the Timeline.
To select a contiguous number of shots
1 Click the first of a range of shots you want to select.
2 Shift-click another shot at the end of the range of shots.
All shots in between the first and second shots you selected are also selected.
To select a noncontiguous number of shots
µ Command-click any number of shots in the Timeline.
Note: Command-clicking a selected shot deselects it.
To select all shots in the Timeline
µ Choose Edit > Select All (or press Command-A).
To deselect all shots in the Timeline
Do one of the following:
µ Choose Edit > Deselect All (or press Command-Shift-A).
µ Select a previously unselected shot to clear the current selection.
µ Click in an empty area of the Timeline.
Important: If the current shot at the position of the playhead is not selected, it will not
be automatically included in the selection when you apply saved corrections or grades
from a bin.
Working with Grades in the Timeline
Each shot in the Timeline can be switched among up to four different grades, shown in
the grades track.
These four grades let you store different looks for the same shot. For example, if you’ve
created a satisfactory grade, but you or your client would like to try “one other thing,”
you can experiment with up to three different looks, knowing that you can instantly recall
the original, if that’s what’s ultimately preferred.
Only one grade actually affects a shot at a time—whichever grade is selected in the
Timeline is the grade you will see on your preview and broadcast displays. All unselected
grades are disabled. For more information on creating and managing grades, see Managing
Corrections and Grades.
By default, each shot in a new project starts off with a single empty grade, but you can
add another one at any time.
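The grade-slot behavior described above can be modeled as a small class. The class and field names are hypothetical; this is only a sketch of the rules (four numbered slots, one selected grade, new slots start empty).

```python
class GradedShot:
    """Hypothetical model of one shot's grade slots (not Color's internals)."""
    MAX_GRADES = 4

    def __init__(self):
        self.grades = {1: {}}   # each new shot starts with a single empty grade
        self.selected = 1       # only the selected grade affects the shot

    def select_grade(self, n):
        """Switch to grade n, creating it as a clean slate if it's new."""
        if not 1 <= n <= self.MAX_GRADES:
            raise ValueError("grades are numbered 1 through 4")
        self.grades.setdefault(n, {})
        self.selected = n

    def reset_grade(self, n):
        """Clear a grade's corrections (Primary In, Secondary, Color FX,
        Primary Out); the Geometry room would be unaffected."""
        if n in self.grades:
            self.grades[n] = {}
```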
To add a new grade to a shot
Do one of the following:
µ Move the playhead to the shot you want to add a new grade to, then press Control-1
through Control-4.
µ Control-click or right-click the grade you want to switch to, then choose Add New Grade
from the shortcut menu.
If there wasn’t already a grade corresponding to the number of the grade you entered,
one will be created. Whenever a new grade is added, the grades track expands, and the
new grade becomes the selected grade. New grades are clean slates, letting you begin
working from the original state of the uncorrected shot.
To select the current grade
1 Move the playhead to the shot you want to switch the grade of.
2 Do one of the following:
• Click the grade you want to switch to.
• Press Control-1 through Control-4.
• Control-click or right-click the grade you want to switch to, then choose Select Grade
[x] from the shortcut menu, where x is the number of the grade you’re selecting.
That shot in the Timeline is updated with the newly selected grade.
To reset a grade in the Timeline
1 Move the playhead to the shot you want to switch the grade of.
2 Control-click or right-click the grade you want to reset in the grades track of the
Timeline, then choose Reset Grade [x] from the shortcut menu, where x is the number of
the grade.
When you reset a grade, every room associated with that grade is reset, including the
Primary In, Secondary, Color FX, and Primary Out rooms. The Geometry room is unaffected.
For more information, see Managing Corrections and Grades.
To delete a grade in the Timeline
1 Move the playhead to the shot you want to remove the grade from.
2 Control-click or right-click the grade you want to remove in the grades track of the
Timeline, then choose Remove Grade [x] from the shortcut menu, where x is the number
of the grade.
Note: If the grades track is locked, you cannot delete grades.
The Settings 1 Tab
The timing properties listed in the Settings 1 tab are not editable. Instead, they reflect
each shot’s position in the Timeline and the properties of the source media that each
shot is linked to.
• Project In and Project Out: Defines the location of the shot in the Timeline.
• Trim In and Trim Out: Defines the portion of source media that’s actually used in the
project, relative to the total available duration of the source media file on disk. The
Trim In and Trim Out timecodes cannot be outside the range of Source In and Source
Out parameters.
• Source In and Source Out: Defines the start and end points of the original source media
on disk. If Trim In is equal to Source In and Trim Out is equal to Source Out, there are
no unused handles available in the source media on disk—you are using all available
media.
• Frame Rate pop-up menu: This pop-up menu lets you set the frame rate of each clip
individually. This setting overrides the Frame Rate setting in the Project Settings tab.
For most projects using source media in the QuickTime format, this should be left at
the default settings. For projects using DPX image sequences as the source media, this
pop-up menu lets you change an incorrect frame rate in the DPX header data.
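The relationships among these timing properties can be expressed as a sanity check. This is an illustrative sketch, with hypothetical names, of the constraints stated above: the trim range must sit inside the source range, and the trimmed duration must match the shot’s duration in the Timeline.

```python
def check_timing(source_in, source_out, trim_in, trim_out,
                 project_in, project_out):
    """Validate a shot's timing properties and report its unused handles.

    Illustrative only; all values are frame numbers.
    """
    assert source_in <= trim_in <= trim_out <= source_out, \
        "Trim In/Out cannot be outside the Source In/Out range"
    assert trim_out - trim_in == project_out - project_in, \
        "Timeline duration must match the trimmed media duration"
    # Handles are the unused media on either side of the trim points;
    # zero on both sides means all available media is in use.
    return {"head_handle": trim_in - source_in,
            "tail_handle": source_out - trim_out}
```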
The Settings 2 Tab
The Settings 2 tab contains additional settings that let you modify the header data of
DPX and Cineon image files.
• Override Header Settings: Selecting this button enables the Printing Density pop-up
menu so that you can manually override the printing density settings
in the DPX header for the current shot.
• Printing Density pop-up menu: This pop-up menu is initially disabled, displaying the
numeric range of values that 0 percent black and 100 percent white are mapped to in
the source media. There are three options:
• Film (95 Black - 685 White : Logarithmic)
• Video (65 Black - 940 White : Linear)
• Linear (0 Black - 1023 White)
If you’re working with logarithmic DPX and Cineon film scans, the default black point
is typically 95, and the default white point is typically 685. When you first load a project
that uses scanned film media, it’s important to make sure that the Black Point and
White Point settings aren’t filled with spurious data. Check with your lab to verify the
appropriate settings, and if the settings in your source media don’t match, turn on
Override Header Settings, and then choose a new printing density from this pop-up
menu. For more information, see Choosing Printing Density When Rendering DPX
Media.
• DeInterlace: Selecting this button lets you individually deinterlace clips. This setting
overrides the Deinterlace Renders and Deinterlace Previews settings in the Project
Settings tab. When DeInterlace is turned on, both video fields are averaged together
to create a single frame.
• Copy To All: Copies the current header settings to every single shot in the Timeline.
This is useful if you find that the header data for all of the film scan media your program
uses is incorrect. Use this with extreme caution.
• Copy To Selected: Copies the current header settings to all currently selected shots in
the Timeline. Useful if your project consists of a variety of scanned media from different
sources with different header values.
Editing Controls and Procedures
Color is not intended to be an editing environment, and as a result its editing tool set
isn’t as complete as that of an application like Final Cut Pro. In fact, most of the time you
want to be careful not to make any editorial changes at all to your project in Color, for a
variety of reasons:
• If you unlock the tracks of projects that were imported via XML or sent from Final Cut Pro
and that will be returning to Final Cut Pro, you risk disrupting the project data, which
will prevent you from successfully sending the project back to Final Cut Pro.
• If you make edits to a project that was sent from Final Cut Pro, you’ll only be able to
send a simplified version of that project back to Final Cut Pro which contains only the
shots and transitions in track V1, and the Pan & Scan settings in the Geometry room.
• If you import an EDL and make edits, you can export an EDL from Color that incorporates
your changes; however, that EDL will only contain the shots and transitions in track V1.
• If the project you’ve imported is synchronized to an audio mix, making any editorial
changes risks breaking the audio sync.
However, if you’re working on a project where these issues aren’t important, you can use
editing tools and commands in Color to edit shots in unlocked tracks in the Timeline.
Tip: If you need to make an editorial change, you can always reedit the original sequence
in Final Cut Pro, export a new XML file, and use the Reconform command to update the
Color Timeline to match the changes you made.
Select Tool
The Select tool is the default state of the pointer in Color. As the name implies, this tool
lets you select shots in the Timeline, move them to another position in the edit, or delete
them.
It’s a good idea to reselect the Select tool immediately after making edits with any of the
other tools, to make sure you don’t inadvertently continue making alterations in the
Timeline that you don’t intend.
To reposition a shot in the Timeline
µ Drag the shot to another position in the Timeline.
When you move a shot in the Timeline, where it ends up depends on how its In point
relates to shots that are already there. Shots you move in Color never overwrite other shots.
Instead, the other shots in the Timeline are moved out of the way to make way for the
incoming shot, and the program is rippled as a result.
• If the In point of the moved shot overlaps the first half of another shot, nothing is
changed.
• If the In point of the moved shot overlaps the second half of another shot, the shot
you’re moving will be insert edited, and all other shots in the Timeline will be rippled
to the right to make room.
• If you’re moving a shot into an area of the Timeline where it doesn’t overlap with any
other shot, it’s simply moved to that area of the Timeline without rippling any other
shots.
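The three drop rules above can be sketched as a function. This is a simplified, hypothetical model: shots live on one track as (name, in, out) tuples with inclusive frame ranges, and it assumes the insert lands immediately after the overlapped shot, which the text does not specify.

```python
def move_shot(shots, moved, new_in):
    """Sketch of the drop rules above for a single, sorted, gap-free track.

    Illustrative only; returns the list of (name, in, out) after the move.
    """
    dur = moved[2] - moved[1]
    others = [s for s in shots if s is not moved]
    hit = next((s for s in others if s[1] <= new_in <= s[2]), None)
    if hit is not None:
        mid = (hit[1] + hit[2]) / 2
        if new_in <= mid:
            return shots            # overlaps the first half: nothing changes
        new_in = hit[2] + 1         # second half: insert edit after that shot
        # Ripple every shot at or beyond the insert point to the right.
        others = [(n, i + dur + 1, o + dur + 1) if i >= new_in else (n, i, o)
                  for (n, i, o) in others]
    # No overlap: the shot is simply placed at the new position.
    return sorted(others + [(moved[0], new_in, new_in + dur)],
                  key=lambda s: s[1])
```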
To delete a shot in the Timeline
1 Select one or more shots in the Timeline.
2 Do one of the following:
• Press Delete.
• Press Forward Delete.
The result is a lift edit, which leaves a gap in the Timeline where that shot used to be. No
other shots move as a result of deleting a shot.
Roll Tool
The Roll tool lets you adjust the Out point and In point of two adjacent shots
simultaneously. If you like where two shots are placed in the Timeline, but you want to
change the cut point, you can use the Roll tool. No shots move in the Timeline as a result;
only the edit point between the two shots moves. This is a two-sided edit, meaning that
two shots’ edit points are affected simultaneously; the first shot’s Out point and the next
shot’s In point are both adjusted by a roll edit. However, no other shots in the sequence
are affected.
Note: When you perform a roll edit, the overall duration of the sequence stays the same,
but both shots change duration. One gets longer while the other gets shorter to
compensate. This means that you don’t have to worry about causing sync problems
between linked shot items on different tracks.
[Figure: shots A, B, and C shown before and after the roll edit]
In the example above, shot B gets shorter while shot C becomes longer, but the combined
duration of the two shots stays the same.
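A roll edit can be sketched as a single cut-point move between two adjacent shots. The function and its frame-pair representation are hypothetical; the invariant it illustrates is the one stated above: the combined duration of the two shots never changes.

```python
def roll_edit(left, right, delta):
    """Move the cut point between two adjacent shots by `delta` frames.

    Shots are (in, out) pairs of inclusive frame numbers. One shot gets
    longer while the other gets shorter; total duration is preserved.
    Illustrative sketch only.
    """
    new_cut = left[1] + delta
    # The cut can't move past either shot's far edit point.
    if not left[0] <= new_cut < right[1]:
        raise ValueError("roll would reduce a shot past zero length")
    return (left[0], new_cut), (new_cut + 1, right[1])
```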
To perform a roll edit
1 Do one of the following to choose the Roll edit tool:
• Choose Timeline > Roll Tool.
• Press Control-R.
2 Move the pointer to the edit point between the two shots that you want to roll, and drag
it either left or right to make the edit.
The Timeline updates to reflect the edit you’re making.
Ripple Tool
A ripple edit adjusts a shot’s In or Out point, making that shot longer or shorter, without
leaving a gap in the Timeline. The change in duration of the shot you adjusted ripples
through the rest of the program in the Timeline, moving all shots that are to the right of
the one you adjusted either earlier or later in the Timeline.
A ripple edit is a one-sided edit, meaning that you can only use it to adjust the In or Out
point of a single shot. All shots following the one you’ve adjusted are moved—to the left
if you’ve shortened it or to the right if you’ve lengthened it. This is a significant operation
that can potentially affect the timing of your entire program.
[Figure: shots A, B, and C shown before and after the ripple edit]
Important: Ripple edits can be dangerous if you are trying to maintain sync between
your program in Color and the original audio in the Final Cut Pro sequence or source EDL
that is being mixed somewhere else entirely, since the shots in your Color project may
move forward or backward while the externally synced audio doesn’t.
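The ripple behavior can be sketched as follows: one shot’s duration changes at its Out point, and every later shot shifts by the same amount. Names and the (in, out) representation are hypothetical; this only illustrates the one-sided, program-shifting nature of the edit described above.

```python
def ripple_edit(shots, index, delta):
    """Lengthen (delta > 0) or shorten (delta < 0) shot `index` at its
    Out point, then shift every following shot by the same amount.

    Shots are (in, out) pairs of inclusive frames; illustrative only.
    """
    result = []
    for i, (s_in, s_out) in enumerate(shots):
        if i == index:
            result.append((s_in, s_out + delta))          # the adjusted shot
        elif i > index:
            result.append((s_in + delta, s_out + delta))  # rippled later shots
        else:
            result.append((s_in, s_out))                  # earlier shots untouched
    return result
```

Note how every shot after the adjusted one moves: this is why ripple edits can break sync with externally mixed audio.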
To perform a ripple edit
1 Do one of the following to choose the Ripple edit tool:
• Choose Timeline > Ripple Tool.
• Press Control-T.
2 Move the pointer to the In or Out point of the shot you want to shorten or lengthen, then
drag it either left or right to make the edit.
The Timeline updates to reflect the edit you’re making, with all the shots following the
one you’re adjusting moving to the left or right to accommodate the change in timing.
Slip Tool
Performing a slip edit doesn’t change a shot’s position or duration in the Timeline; instead
it changes what portion of that shot’s media appears in the Timeline by letting you change
its In and Out points simultaneously.
This means that the portion of the shot that plays in the Timeline changes, while its
position in the Timeline stays the same. No other shots in the Timeline are affected by a
slip edit, and the overall duration of the project remains unaffected.
[Figure: shots A, B, and C before and after the slip edit; shot B’s In and Out points
change from 00:00:10:00–00:00:30:00 to 00:00:17:00–00:00:37:00]
In the example above, the slip edit changes the In and Out points of shot B, but not its
duration or position in the sequence. When the sequence plays back, a different portion
of shot B’s media will be shown.
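A slip edit can be sketched as shifting the trim range by one offset, clamped to the available source media. This is an illustrative sketch with hypothetical names; the clamping-to-handles behavior is an assumption, since the text only states that position and duration are unchanged.

```python
def slip_edit(trim_in, trim_out, source_in, source_out, delta):
    """Shift a shot's Trim In and Trim Out together by `delta` frames.

    Duration and Timeline position are unchanged; the slip is clamped so
    the trim range stays inside the source media (assumed behavior).
    """
    # Clamp delta to the available head and tail handles.
    delta = max(source_in - trim_in, min(delta, source_out - trim_out))
    return trim_in + delta, trim_out + delta
```

With the example values in the figure above, a 7-second slip moves a 10:00–30:00 trim to 17:00–37:00 while the duration stays 20 seconds.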
To perform a slip edit
1 Move the playhead to the shot you want to adjust, in order to be able to view the change
you’re making as you work.
2 Do one of the following to choose the Slip edit tool:
• Choose Timeline > Slip Tool.
• Press Control-Y.
3 Move the pointer to the shot you want to slip, then drag it either left or right to make
the edit.
Unlike Final Cut Pro, Color provides no visual feedback showing the frames of the new
In and Out points you’re choosing with this tool. The only image that’s displayed is the
frame at the current position of the playhead being updated as you drag the shot back
and forth. This is why it’s a good idea to move the playhead to the shot you’re adjusting
before you start making a slip edit.
Split Tool
The Split tool lets you add an edit point to a shot by cutting it into two pieces. This edit
point is added at the frame you click in the Timeline. This can be useful for deleting a
section of a shot or for applying an effect to a specific part of a shot.
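Splitting can be sketched as cutting one frame range into two at the clicked frame. Hypothetical names and inclusive frame ranges; both halves continue to play the same media, separated by a through edit.

```python
def split_shot(shot, frame):
    """Cut one (in, out) shot into two at `frame` (illustrative sketch).

    The first half ends one frame before the cut; the second half begins
    at the cut frame, so no frames are lost or duplicated.
    """
    s_in, s_out = shot
    if not s_in < frame <= s_out:
        raise ValueError("split point must fall inside the shot")
    return (s_in, frame - 1), (frame, s_out)
```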
To split one shot into two
1 Do one of the following to choose the Split tool:
• Choose Timeline > Split Tool.
• Press Control-X.
2 Move the pointer to the Timeline ruler, and when the split overlay appears (a vertical
white line intersecting the shots in the Timeline), drag it to the frame of the shot where
you want to add an edit point.
3 Click to add an edit point.
The Timeline updates to reflect the edit you’ve made, with a new edit point appearing
at the frame you clicked.
Splice Tool
Whenever you cut a shot with the Split tool, the original shot is split into two shots
separated by a through edit. There is no visual indication of through edits in the Color
Timeline, but any edit point that splits an otherwise contiguous range of frames is
considered to be a through edit, which can be joined back together with the Splice tool.
Joining two shots separated by a through edit merges them back into a single shot. You
cannot join two shots that aren’t separated by a through edit; if you try you’ll simply get
a warning message.
Important: When you splice two shots that have different grades and corrections, the
grades and corrections of the shot to the left overwrite those of the shot to the right.
To splice two shots into one
1 Do one of the following to choose the Splice tool:
• Choose Timeline > Splice Tool.
• Press Control-Z.
2 Move the pointer to the Timeline ruler, and when the splice overlay appears (a vertical
white line intersecting the shots in the Timeline), drag it to the edit point you want to
splice.
3 Click to splice that edit point.
The Timeline updates to reflect the edit you’ve made, and the two shots that were
previously separated by a through edit are spliced into one.
Create an Edit Command
The Create an Edit command in the Timeline menu (Control-V) is similar to the Split tool.
It cuts a single shot in the Timeline into two at the current position of the playhead. Using
this command eliminates the need to choose a tool.
To create an edit point
1 Move the playhead to the frame where you want to add an edit point.
2 Do one of the following:
• Choose Timeline > Create an Edit.
• Press Control-V.
The Timeline updates to reflect the edit you’ve made, with a new edit point appearing
at the position of the playhead.
Merge Edits Command
The Merge Edits command (Control-B) is similar to the Splice tool. It joins two shots
separated by a through edit at the current position of the playhead into a single shot.
Using this command eliminates the need to choose a tool.
To merge two shots into one at a through edit point
1 Move the playhead to the frame at the through edit you want to merge.
2 Do one of the following:
• Choose Timeline > Merge Edits.
• Press Control-B.
The Timeline updates to reflect the edit you’ve made, and the two shots that were
previously separated by a through edit are merged into one.
Important: When you merge two shots that have different grades and corrections, the
grades and corrections of the shot to the left overwrite those of the shot to the right.
Snapping
When snapping is on, clips “snap to” the 00:00:00:00 time value in the Timeline.
To turn snapping on or off
µ Choose Timeline > Snapping.
Analyzing Signals Using the Video Scopes

In addition to a well-calibrated broadcast display, video scopes provide a fast and accurate
way to quantitatively evaluate and compare images.
Color provides most of the video scope displays that you’d find in other online video and
color correction suites and includes a few that are unique to software-based image
analysis. Together, these scopes provide graphic measurements of the luma, chroma,
and RGB levels of the image currently being monitored, helping you to unambiguously
evaluate the qualities that differentiate one shot from another. This feature lets you make
more informed decisions while legalizing or comparing shots in Color.
This chapter covers the following:
• What Scopes Are Available? (p. 183)
• Video Scope Options (p. 185)
• Analyzing Images Using the Video Scopes (p. 187)
What Scopes Are Available?
The following video scopes are available in the Scopes window:
• The Waveform Monitor
• The Parade Scope
• The Overlay Scope
• The Red/Green/Blue Channels Scopes
• The Luma Scope
• The Chroma Scope
• The Y′CBCR Scope
• The Vectorscope
• The Histogram
• The RGB Histogram
• The R, G, and B Histograms
• The Luma Histogram
• The 3D Scope
• The RGB Color Space
• The HSL Color Space
• The Y′CBCR Color Space
• The IPT Color Space
The location where the video scopes appear depends on whether Color is configured to
single- or dual-display mode:
• In single-display mode: Two video scopes are displayed underneath the video preview
in the Scopes window, which is positioned to the left of the Color interface window.
• In dual-display mode: Up to three video scopes are displayed in the Scopes window, in
addition to the video preview.
The Accuracy of Color Video Scopes
To create a real-time analysis of the video signal (even during adjustment and playback),
Color downsamples the current image to a resolution of 384 x 192. The downsampled
image is then analyzed and the resulting data displayed by the currently selected scopes.
This same downsampled resolution is used regardless of the original resolution of the
source media.
Using this method, every pixel contributes to the final analysis of the image. In tests,
the graphs produced by the Color video scopes closely match those produced by
dedicated video scopes and are extremely useful as an aid to evaluating and matching
shots while you work in Color. However, you should be aware that the Color analysis is
still an approximation of the total data. Dedicated video scopes are still valuable for
critical evaluation.
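For readers who think in code, the idea of an analysis raster to which every source pixel contributes can be sketched as a block average. This is an illustration only, not Color’s implementation; the 384 x 192 target comes from the text above, but the averaging method and the `downsample` helper are assumptions.

```python
# Sketch of a block-average downsample in which every source pixel
# contributes to the fixed-size analysis raster. Illustrative only;
# Color's actual resampling filter is not documented here.
def downsample(luma_rows, out_h, out_w):
    """luma_rows: list of rows of luma values in 0..1.
    Returns an out_h x out_w grid of block averages."""
    h, w = len(luma_rows), len(luma_rows[0])
    bh, bw = h // out_h, w // out_w   # assumes h and w divide evenly
    return [[sum(luma_rows[by * bh + i][bx * bw + j]
                 for i in range(bh) for j in range(bw)) / (bh * bw)
             for bx in range(out_w)]
            for by in range(out_h)]
```

A full-resolution frame would be reduced with `downsample(frame, 192, 384)` before any scope drew its graph.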
Note: If you’re concerned about catching stray out-of-gamut pixels while you make
adjustments for QC purposes, you can turn on the Broadcast Safe settings to protect
yourself from QC violations. For more information, see Broadcast Safe Settings.
Video Scope Options
You can modify the display and behavior of the video scopes in a number of ways.
To turn on real-time video scope updates
1 Open the User Prefs tab located inside the Setup room.
2 Select Update UI During Playback.
3 To set the video scopes to update during playback, select Update Secondary Display.
Tip: You can turn off Update Primary Display to improve playback performance.
Some scopes can be switched among different modes.
To change a scope to a different mode
µ Click the button corresponding to the mode you want at the top of that scope.
Any quadrant containing a video scope can also be switched to a different kind of scope.
To switch the layout of the Scopes window
Do one of the following:
µ Control-click or right-click within any scope, then choose a different scope from the
shortcut menu.
µ Move the pointer within any region of the Scopes window, and press W (Waveform), V
(Vectorscope), H (Histogram), or C (3D scopes) to change scopes.
You can zoom in to all scopes to get a closer look at the graph.
To zoom a scope’s display
Do one of the following:
µ Roll the scroll wheel or scroll ball of your mouse down to zoom in to a particular scope’s
display, and up to zoom out.
µ Click one of the percentage buttons in the upper-left corner of the Vectorscope to scale
the scope’s display.
The 3D video scopes can also be rotated in space so that you can view the analysis from
any angle.
To reposition any 3D scope
Do one of the following:
µ Drag horizontally or vertically to rotate the scope model in that direction.
µ Hold down the middle mouse button and drag to reposition the scope model in that
direction.
To reset any scope to its original scale and orientation
µ Control-click or right-click within any scope, then choose Reset from the shortcut menu.
Some scopes can be displayed in color.
To turn video scope color on and off
1 Open the User Prefs tab, located inside the Setup room.
2 Click Monochrome Scopes to turn scope color on or off.
Scope color is affected by the following customizable parameters:
• When Monochrome Scopes is turned off: The UI Saturation parameter determines how
intense the scope colors are.
• When Monochrome Scopes is turned on: The Scope Color control, directly underneath,
determines the color of the scope graticules.
Analyzing Images Using the Video Scopes
The following sections describe the use of each scope that Color provides:
• The Waveform Monitor
• The Vectorscope
• The Histogram
• The 3D Scope
• Sampling Color for Analysis
The Waveform Monitor
The Waveform Monitor is actually a whole family of scopes that shows different analyses
of luma and chroma using waveforms.
What Is a Waveform?
To create a waveform, Color analyzes lines of an image from left to right, with the resulting
values plotted vertically on the waveform graticule relative to the scale that’s used—for
example, –20 to 110 IRE (or –140 to 770 mV) on the Luma graph. In the following image,
a single line of the image is analyzed and plotted in this way.
To produce the overall analysis of the image, the individual graphs for each line of the
image are superimposed over one another.
Because the waveform’s values are plotted in the same horizontal position as the portion
of the image that’s analyzed, the waveform mirrors the image to a certain extent. This
can be seen if a subject moves from left to right in an image while the waveform is playing
in real time.
With all the waveform-style scopes, high luma or chroma levels show up as spikes on the
waveform, while low levels show up as dips. This makes it easy to read the measured
levels of highlights or shadows in the image.
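The line-by-line plotting described above can be sketched in a few lines of Python. This is an illustration of the concept, not Color’s code: the `luma_waveform` name is hypothetical, and the Rec. 709 luma weights are an assumption based on the conversion standard mentioned later in this chapter.

```python
# Illustrative sketch only: build a luma waveform by plotting each
# pixel's level at its column's horizontal position.
def luma_waveform(rgb_rows, bins=256):
    """rgb_rows: list of rows, each a list of (r, g, b) tuples in 0..1.
    Returns waveform[level][x]: how many pixels in column x sit at
    that luma level."""
    width = len(rgb_rows[0])
    waveform = [[0] * width for _ in range(bins)]
    for row in rgb_rows:
        for x, (r, g, b) in enumerate(row):
            luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 weights
            level = min(bins - 1, max(0, round(luma * (bins - 1))))
            waveform[level][x] += 1
    return waveform
```

Superimposing every row this way is why bright areas of the picture appear as spikes directly above their horizontal position in the frame.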
Changing the Graticule Values
The Waveform Monitor is the only scope in which you can change the numeric values
used to measure the signal. By default, the Waveform Monitor is set to measure in IRE,
but you can also switch the scope to measure using millivolts (mV) instead by clicking
one of the buttons to the right of the waveform selection buttons.
Waveform Analysis Modes
The Waveform Monitor has eight different modes. For more information, see:
• The Parade Scope
• The Overlay Scope
• The Red/Green/Blue Channels Scopes
• The Luma Scope
• The Chroma Scope
• The Y′CBCR Scope
The Parade Scope
The Parade scope displays separate waveforms for the red, green, and blue components
of the image side by side. If Monochrome Scopes is turned off, the waveforms are tinted
red, green, and blue so you can easily identify which is which.
Note: To better illustrate the Parade scope’s analysis, the examples in this section are
shown with Broadcast Safe disabled so that image values above 100 percent and below
0 percent won’t be clipped.
The Parade scope makes it easy to spot color casts in the highlights and shadows of an
image, by comparing the contours of the top and the bottom of each waveform. Since
whites, grays, and blacks are characterized by exactly equal amounts of red, green, and
blue, neutral areas of the picture should display three waveforms of roughly equal height
in the Parade scope. If not, the correction is easy to make by making adjustments to level
the three waveforms.
Before color correction
After color correction
The Parade scope is also useful for comparing the relative levels of reds, greens, and blues
between two shots. If one shot has more red than another, the difference shows up as
an elevated red waveform in the one and a depressed red waveform in the other, relative
to the other channels. In the first shot, the overall image contains quite a bit of red. By
comparison, the second shot has substantially less red and far higher levels of green,
which can be seen immediately in the Parade scope. If you needed to match the color
of these shots together, you could use these measurements as the basis for your correction.
An elevated red channel betrays the degree of the color cast.
An elevated green channel reveals a different correction to be made.
The Parade scope also lets you spot color channels that are exceeding the chroma limit
for broadcast legality, if the Broadcast Safe settings are turned off. This can be seen in
waveforms of individual channels that either rise too high or dip too low.
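The comparison the Parade scope invites, judging the height of each channel’s waveform, can be reduced to a single number per channel. The sketch below is a stand-in for that visual judgment, not anything Color exposes; the `channel_means` helper is hypothetical.

```python
# Sketch: average level per channel, a numeric stand-in for comparing
# the heights of the red, green, and blue parade waveforms.
def channel_means(pixels):
    """pixels: list of (r, g, b) tuples in 0..1."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))
```

An elevated red mean relative to green and blue corresponds to the elevated red waveform described above.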
The Overlay Scope
The Overlay scope presents information that’s identical to that in the Parade scope, except
that the waveforms representing the red, green, and blue channels are superimposed
directly over one another.
This can make it easier to spot the relative differences or similarities in overlapping areas
of the three color channels that are supposed to be identical, such as neutral whites,
grays, or blacks.
Another feature of this display is that when the video scopes are set to display color (by
turning off the Monochrome Scopes parameter), areas of the graticule where the red,
green, and blue waveforms precisely overlap appear white. This makes it easy to see
when you’ve eliminated color casts in the shadows and highlights by balancing all three
channels.
The Red/Green/Blue Channels Scopes
These scopes show isolated waveforms for each of the color channels. They’re useful
when you want a closer look at a single channel’s values.
The Luma Scope
The Luma scope shows you the relative levels of brightness within the image. Spikes or
drops in the displayed waveform make it easy to see hot spots or dark areas in your
picture.
The difference between the highest peak and the lowest dip of the Luma scope’s graticule
shows you the total contrast ratio of the shot, and the average thickness of the waveform
shows its average exposure. Waveforms that are too low are indicative of images that are
dark, while waveforms that are too high may indicate overexposure.
Overexposed waveform
Underexposed waveform
Well-exposed waveform
If you’re doing a QC pass of a program with the Broadcast Safe settings turned off, you
can also use the scale to easily spot video levels that are over and under the recommended
limits.
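The cues you read off the Luma scope, black point, white point, contrast ratio, and average exposure, can be expressed as simple statistics. This sketch is an aid to understanding the scope, not a Color feature; the `exposure_stats` helper is hypothetical.

```python
# Sketch: the same cues the Luma scope shows, as numbers.
def exposure_stats(luma_values):
    """luma_values: flat list of luma samples in 0..1."""
    lo, hi = min(luma_values), max(luma_values)
    return {"black": lo, "white": hi,
            "range": hi - lo,                             # overall contrast
            "mean": sum(luma_values) / len(luma_values)}  # average exposure
```

A low "range" value corresponds to a thin, compressed waveform; a low "mean" corresponds to the dark, underexposed waveform shown above.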
The Chroma Scope
This scope shows the combined CB and CR color difference components of the image. It’s
useful for checking whether or not the overall chroma is too high, and also whether it’s
being limited too much, as it lets you see the result of the Chroma Limit setting being
imposed when Broadcast Safe is turned on.
For example, the following graph shows extremely saturated chroma within the image:
When you turn Broadcast Safe on with the default Chroma Limit value of 50, you can see
that the high chroma spikes have been limited to 50.
The Y′CBCR Scope
This scope shows the individual components of the Y′CBCR encoded signal in a parade
view. The leftmost waveform is the luma (Y′) component, the middle waveform is the CB
color difference component, and the rightmost waveform is the CR color difference
component.
The Vectorscope
The Vectorscope shows you the overall distribution of color in your image against a
circular scale. The video image is represented by a graph consisting of a series of connected
points that all fall at about the center of this scale. For each point within the analyzed
graph, its angle around the scale indicates its hue (which can be compared to the color
targets provided), while its distance from the center of the scale represents the saturation
of the color being displayed. The center of the Vectorscope represents zero saturation,
and the farther from the center a point is, the higher its saturation.
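The angle-and-distance plotting described above can be sketched for a single pixel. This is a conceptual illustration, not Color’s math: the Rec. 709 color-difference scaling and the `vectorscope_point` helper are assumptions.

```python
import math

# Sketch: angle = hue, radius = saturation, plotted from Rec. 709
# color-difference components (the scaling chosen here is an assumption).
def vectorscope_point(r, g, b):
    """Returns (hue angle in degrees, saturation radius) for one pixel."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return math.degrees(math.atan2(cr, cb)) % 360.0, math.hypot(cb, cr)
```

Neutral grays land at radius zero (the center of the scale), while saturated colors plot far from the center at the angle of their hue.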
If the Monochrome Scopes option is turned off in the User Prefs tab of the Setup room,
then the points of the graph plotted by the Vectorscope will be drawn with the color
from that part of the source image. This can make it easier to see which areas of the graph
correspond to which areas of the image.
Comparing Saturation with the Vectorscope
The Vectorscope is useful for seeing, at a glance, the hue and intensity of the various
colors in your image. Once you learn to identify the colors in your shots on the graph in
the Vectorscope, you will be better able to match two images closely because you can
see where they vary. For example, if one image is more saturated than another, its graph
in the Vectorscope will be larger.
Spotting Color Casts with the Vectorscope
You can also use the Vectorscope to spot whether there’s a color cast affecting portions
of the picture that should be neutral (or desaturated). Crosshairs in the Vectorscope
graticule indicate its center. Since desaturated areas of the picture should be perfectly
centered, an off-center Vectorscope graph representing an image that has portions of
white, gray, or black clearly indicates a color imbalance.
The Color Targets
The color targets in the Vectorscope line up with the traces made by the standard color
bar test pattern, and can be used to check the accuracy of a captured video signal that
has recorded color bars at the head.
These targets also correspond to the angles of hue in the color wheels surrounding the
Color Balance controls in the Primary In and Out and Secondaries rooms. If the hues of
two shots you’re trying to match don’t match, the direction and distance of their offset
on the Vectorscope scale give you an indication of which direction to move the balance
control indicator to correct for this.
At a zoom percentage of 75 percent, the color targets in the Vectorscope are calibrated
to line up for 75 percent color bars. Zooming out to 100 percent calibrates the color
targets to 100 percent color bars. All color is converted by Color to RGB using the Rec.
709 standard prior to analysis, so color bars from both NTSC and PAL source video will
hit the same targets.
Note: If Broadcast Safe is turned on, color bars’ plots may not align perfectly with these
targets.
The I Bar
The –I bar (negative I bar) shows the proper angle at which the hue of the dark blue box
in the color bars test pattern should appear. This dark blue box, which is located to the
left of the 100-percent white reference square, is referred to as the Inphase signal, or I for
short.
The I bar (positive I bar) overlay in the Vectorscope is also identical to the skin tone line
in Final Cut Pro. It’s helpful for identifying and correcting the skin tones of actors in a shot.
When recorded to videotape and measured on a Vectorscope, the hues of human skin
tones, regardless of complexion, fall along a fairly narrow range (although the saturation
and brightness vary). When there’s an actor in a shot, you’ll know whether or not the skin
tones are reproduced accurately by checking to see if there’s an area of color that falls
loosely around the I bar.
If the skin tones of your actors are noticeably off, the offset between the most likely nearby
area of color in the Vectorscope graph and the skin tone target will give you an idea of
the type of correction you should make.
The Q Bar
The Q bar shows the proper angle at which the hue of the purple box in the color bars
test pattern should appear. This purple box, which is located at the right of the 100-percent
white reference square, is referred to as the +Quadrature signal, or Q for short.
When troubleshooting a video signal, the correspondence between the Inphase and
+Quadrature components of the color bars signal and the position of the –I and Q bars
shows you whether or not the components of the video signal are being demodulated
correctly.
The Histogram
The Histogram provides a very different type of analysis than the waveform-based scopes.
Whereas waveforms have a built-in correspondence between the horizontal position of
the image being analyzed and that of the waveform graph, histograms provide a statistical
analysis of the image.
Histograms work by calculating the total number of pixels of each color or luma level in
the image and plotting a graph that shows the number of pixels there are at each
percentage. It’s really a bar graph of sorts, where each increment of the scale from left
to right represents a percentage of luma or color, while the height of each segment of
the histogram graph shows the number of pixels that correspond to that percentage.
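The counting described above can be sketched directly: one bucket per percentage, where the height of each bucket is the number of pixels at that level. The `luma_histogram` name is hypothetical.

```python
# Sketch of the histogram's statistical analysis: count how many
# pixels fall at each luma percentage (0..100 with bins=101).
def luma_histogram(luma_values, bins=101):
    """luma_values: flat list of luma samples in 0..1."""
    counts = [0] * bins
    for v in luma_values:
        counts[min(bins - 1, max(0, round(v * (bins - 1))))] += 1
    return counts
```

Note that, unlike a waveform, the result carries no information about where in the frame those pixels sit, only how many there are at each level.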
The RGB Histogram
The RGB histogram display shows separate histogram analyses for each color channel.
This lets you compare the relative distribution of each color channel across the tonal
range of the image.
For example, images with a red color cast have either a significantly stronger red histogram,
or conversely, weaker green and blue histograms. In the following example, the red cast
in the highlights can be seen clearly.
The R, G, and B Histograms
The R, G, and B histograms are simply isolated versions of each channel’s histogram graph.
The Luma Histogram
The Luma histogram shows you the relative strength of all luminance values in the video
frame, from black to super-white. The height of the graph at each step on the scale
represents the number of pixels in the image at that percentage of luminance, relative
to all the other values. For example, if you have an image with few highlights, you would
expect to see a large cluster of values in the Histogram display around the midtones.
The Luma histogram can be very useful for quickly comparing the luma of two shots so
you can adjust their shadows, midtones, and highlights to match more closely. For
example, if you were matching a cutaway shot to the one shown above, you can tell just
by looking that the image below is underexposed, but the Histogram gives you a reference
for spotting how far.
The shape of the Histogram is also good for determining the amount of contrast in an
image. A low-contrast image, such as the one shown above, has a concentrated clump
of values nearer to the center of the graph. By comparison, a high-contrast image has a
wider distribution of values across the entire width of the Histogram.
The 3D Scope
This scope displays an analysis of the color in the image projected within a 3D area. You
can select one of four different color spaces with which to represent the color data.
The RGB Color Space
The RGB color space distributes color in space within a cube that represents the total
range of color that can be displayed:
• Absolute black and white lie at two opposing diagonal corners of the cube, with the
center of the diagonal being the desaturated grayscale range from black to white.
• The three primary colors—red, green, and blue—lie at the three corners connected to
black.
• The three secondary colors—yellow, cyan, and magenta—lie at the three corners
connected to white.
In this way, every color that can be represented in Color can be assigned a point in three
dimensions using hue, saturation, and lightness to define each axis of space.
The sides of the cube represent color of 100-percent saturation, while the center diagonal
from the black to white corners represents 0-percent saturation. Darker colors fall closer
to the black corner of the cube, while lighter colors fall closer to the diagonally opposing
white corner of the cube.
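The geometry described above can be sketched numerically: saturation corresponds to a point’s distance from the black-to-white diagonal of the cube. This is an illustration of the cube model, not Color’s code; the `gray_axis_distance` helper is hypothetical.

```python
import math

# Sketch: saturation as distance from the RGB cube's black-to-white
# diagonal. Neutral grays sit on the diagonal, so their distance is zero.
def gray_axis_distance(r, g, b):
    m = (r + g + b) / 3.0     # nearest point on the gray diagonal
    return math.sqrt((r - m) ** 2 + (g - m) ** 2 + (b - m) ** 2)
```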
The HSL Color Space
The HSL (Hue, Saturation, and Luminance) color space distributes a graph of points within
a two-pointed cone that represents the range of color that can be displayed:
• Absolute black and white lie at two opposing points at the top and bottom of the
shape.
• The primary and secondary colors are distributed around the familiar color wheel, with
100-percent saturation represented by the outer edge of the shape, and 0-percent
saturation represented at the center.
In this way, darker colors lie at the bottom of the interior, while lighter colors lie at the
top. More saturated colors lie closer to the outer sides of the shape, while less saturated
colors fall closer to the center of the interior.
The Y′CBCR Color Space
The Y′CBCR color space is similar to the HSL color space, except that the outer boundary
of saturation is represented with a specifically shaped six-sided construct that shows the
general boundaries of color in broadcast video.
The outer boundary does not identify the broadcast-legal limits of video, but it does
illustrate the general range of color that’s available. For example, the following image
has illegal saturation and brightness.
If you turn on the Broadcast Safe settings, the distribution of color throughout the Y′CBCR
color space becomes constricted.
The IPT Color Space
The IPT color space is a perceptually weighted color space, the purpose of which is to more
accurately represent the hues in an image distributed on a scale that appears uniformly
linear to your eye.
While the RGB, HSL, and Y′CBCR color spaces present three-dimensional analyses of the
image that are mathematically accurate, and allow you to see how the colors of an image
are transformed from one gamut to another, they don’t necessarily show the distribution
of colors as your eyes perceive them. A good example of this is a conventionally calculated
hue wheel. Notice how the green portion of the hue wheel presented below seems so
much larger than the yellow or red portion.
The cones of the human eye that are sensitive to color have differing sensitivities to each
of the primaries (red, green, and blue). As a result, a mathematically linear distribution of
analyzed color is not necessarily the most accurate way to represent what we actually
see. The IPT color space rectifies this by redistributing the location of hues in the color
space according to tests where people chose and arranged an even distribution of hues
from one color to another, to define a spectrum that “looked right” to them.
In the IPT color space, I corresponds to the vertical axis of lightness (desaturated black
to white) running through the center of the color space. The horizontal plane is defined
by the P axis, which is the distribution of red to green, and the T axis, which is the
distribution of yellow to blue.
Here’s an analysis of the test image within this color space.
Sampling Color for Analysis
The 3D video scope also provides controls for sampling and analyzing the color of up to
three pixels within the currently displayed image. Three swatches at the bottom of the
video scope let you sample colors for analysis by dragging one of three correspondingly
numbered crosshairs within the image preview area. A numerical analysis of each sampled
color appears next to the swatch control at the bottom of the 3D video scope.
The color channel values that are used to analyze the selected pixel change depending
on which color space the 3D scope is set to. For example, if the 3D scope is set to RGB,
then the R, G, and B values of each selected pixel will be displayed. If the 3D scope is
instead set to Y′CBCR, then the Y′, CB, and CR values of the pixel will be displayed.
You can choose different samples for each shot in the Timeline, and the position of each
shot’s sampling crosshairs is saved as you move the playhead from clip to clip. This makes
it easy to compare analogous colors in several different shots to see if they match.
This analysis can be valuable in situations where a specific feature within the image needs
to be a specific value. For example, you can drag swatches across the frame if you’re
trying to adjust a black, white, or colored background to be a uniform value, or if you
have a product that’s required to be a highly specific color in every shot in which it
appears.
Note: These controls are visible only when the 3D scope is occupying an area of the
Scopes window.
To sample and analyze a color
1 Click one of the three color swatch buttons at the bottom of the 3D scope.
2 Click or drag within the image preview area to move the color target to the area you
want to analyze.
As you drag the color target over the image preview, four things happen:
• The color swatch updates with that color.
• The H, S, and L values of the currently analyzed pixel are displayed to the right of the
currently selected swatch.
• Crosshairs identify that value’s location within the three-dimensional representation
of color in the 3D scope itself.
Each color target is numbered to identify its corresponding color swatch.
• A vertical line appears within the Hue, Sat, and Lum curves of the Secondaries room,
showing the position of the sample pixels relative to each curve.
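The per-pixel readout described above can be sketched as follows. This uses Python’s standard HLS conversion as a stand-in for Color’s own H, S, and L calculation; the `sample_pixel` helper is hypothetical.

```python
import colorsys

# Sketch of the sampled-pixel readout shown next to each swatch,
# using the standard library's HLS conversion as a stand-in.
def sample_pixel(pixel):
    """pixel: (r, g, b) tuple in 0..1. Returns (H degrees, S, L)."""
    r, g, b = pixel
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return h * 360.0, s, l
```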
The Primary In Room

The Primary In room provides your main interface for color correcting each shot. For every
shot, this is where you begin, and in many cases this may be all you need.
Simply speaking, primary corrections are color corrections that affect the entire image at
once. The Primary In room provides a variety of controls that will be familiar to anyone
who’s worked with other image editing and color correction plug-ins and applications.
Each of these controls manipulates the contrast and color in the image in a different way.
Note: Many of the controls in the Primary In room also appear in the Secondaries and
Primary Out rooms, in which they have identical functionality.
This chapter covers the following:
• What Is the Primary In Room Used For? (p. 207)
• Where to Start in the Primary In Room? (p. 208)
• Contrast Adjustment Explained (p. 210)
• Using the Primary Contrast Controls (p. 212)
• Color Casts Explained (p. 222)
• Using Color Balance Controls (p. 224)
• The Curves Controls (p. 234)
• The Basic Tab (p. 245)
• The Advanced Tab (p. 249)
• Using the Auto Balance Button (p. 251)
• The RED Tab (p. 252)
What Is the Primary In Room Used For?
Typically, you'll use the Primary In room to do tasks such as the following:
• To adjust image contrast, so that the shadows are deep enough, the highlights are
bright enough, and the overall lightness of the image is appropriate to the scene.
• To adjust color in the highlights and midtones to correct for unwanted color casts due
to a video camera's incorrect white balance settings, or lighting that was inappropriate
for the type of film stock that was used.
• To make changes to the overall color and contrast of an image in order to change the
apparent time of day. For example, you might need to alter a shot that was
photographed in the late afternoon to look as if it were shot at high noon.
• To adjust the color and contrast of every shot in a scene so that there are no irregularities
in exposure or color from one shot to the next.
All these tasks and more can be performed using the tools that are available in the Primary
In room. In fact, when working on shows that require relatively simple corrections, you
may do all your corrections right here, including perhaps a slight additional adjustment
to warm up or cool down the image for purely aesthetic purposes. (On the other hand,
you can also perform different stages of these necessary corrections in other rooms for
organizational purposes. For more information about how to split up and organize
corrections in different ways, see Managing a Shot’s Corrections Using Multiple Rooms.)
The Primary In room also lets you make specific adjustments. Even though the Primary
In room applies corrections to the entire image, you can target these corrections to specific
aspects of the picture. Many of the controls in the Primary In room are designed to make
adjustments to specific regions of tonality. In other words, some controls adjust the color
in brighter parts of the picture, while other controls only affect the color in its darker
regions. Still other types of controls affect specific color channels, such that you can lower
or raise the green channel without affecting the red or blue channels.
Where to Start in the Primary In Room?
Many colorists use the tools in the Primary In room in a specific order, and the sections
of this document are organized to match, providing a workflow with which to get started.
In general, you'll probably find that you work on most images using the following steps.
• Stage 1: Adjusting the Contrast of the Image
• Stage 2: Adjusting the Color Balance of the Image
• Stage 3: Adjusting the Saturation of the Image
• Stage 4: Making More Specific Adjustments
Stage 1: Adjusting the Contrast of the Image
Most colorists always begin by correcting the contrast of an image before moving on to
adjusting its color. This adjustment can be made using the primary contrast controls, the
Luma curve control, and the Master Lift, Master Gain, and Master Gamma controls in the
Basic tab.
Stage 2: Adjusting the Color Balance of the Image
Once the black and white points of the image have been determined, the color balance
is tackled. Fast adjustments to the color balance in the shadows, midtones, and highlights
can be made using the primary color balance controls. More detailed adjustments can
be made using the red, green, and blue curves controls, and specific numeric adjustments
can be made using the Red, Green, and Blue Lift, Gamma, and Gain controls in the
Advanced tab.
Stage 3: Adjusting the Saturation of the Image
Once you're happy with the quality of the color, you can make adjustments to raise or
lower the saturation, or intensity, of the colors in the image. The Saturation, Highlight
Sat., and Shadow Sat. controls in the Basic tab let you adjust the overall saturation or only
the saturation within specific tonal regions.
Stage 4: Making More Specific Adjustments
If you still feel that there are specific aspects of the image that need further adjustment
after Stages 1 through 3, you can turn to the curves controls, which let you make targeted
adjustments to the color and contrast of the image within specifically defined zones of
tonality. Past a certain point, however, it may be easier to move on to the Secondaries
room, covered in The Secondaries Room.
Contrast Adjustment Explained
If you strip away the color in an image (you can do this by setting the Saturation control
to 0), the grayscale image that remains represents the luma component of the image,
which controls the image's lightness. As explained in The Y′CBCR Color Model Explained,
the luma of an image is derived from a weighted sum of its red, green, and blue channels
that corresponds to the eye's sensitivity to each color.
Although luma was originally a video concept, you can manipulate the luma component
of images using the contrast controls in Color no matter what the originating format.
These controls let you adjust the lightness of an image more or less independently of its
color.
Note: Extreme adjustments to image contrast will affect image saturation.
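The weighted sum described above can be sketched in a few lines. This is a simplified illustration using the Rec. 709 weights (Rec. 601 material uses 0.299, 0.587, and 0.114 instead), not Color's internal implementation:

```python
# Luma (Y') as a weighted sum of the red, green, and blue channels,
# with channel values normalized to the range 0.0-1.0.
def luma(r, g, b):
    """Return the Rec. 709 luma of a pixel; result is also 0.0-1.0."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Pure green contributes roughly ten times as much luma as pure blue,
# reflecting the eye's much greater sensitivity to green.
print(luma(0.0, 1.0, 0.0))  # 0.7152
print(luma(0.0, 0.0, 1.0))  # 0.0722
```

Note that equal red, green, and blue values (a neutral gray) always produce a luma equal to that gray level, which is why desaturating an image leaves its lightness intact.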
What Is the Contrast Ratio of a Shot?
One of the most important adjustments you can make to an image is to change its contrast
ratio. The contrast ratio of an image is the difference between the darkest pixel in the
shadows (the black point) and the lightest pixel in the highlights (the white point). The
contrast ratio of an image is easy to quantify by looking at the Waveform Monitor or
Histogram set to Luma. High-contrast images have a wide distribution of values from the
black point to the white point.
Low-contrast images, on the other hand, have a narrower distribution of values from the
black point to the white point.
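In numeric terms, the contrast ratio is simply the spread between the darkest and lightest luma values. A rough sketch, assuming luma values normalized to 0.0–1.0 (this is an illustration, not how Color's scopes compute their graphs):

```python
def contrast(luma_values):
    """Return the black point, white point, and their spread
    for a list of normalized (0.0-1.0) luma values."""
    black_point = min(luma_values)
    white_point = max(luma_values)
    return black_point, white_point, white_point - black_point

# A high-contrast shot spans most of the range; a low-contrast shot doesn't.
high = contrast([0.02, 0.3, 0.5, 0.7, 0.98])  # spread is about 0.96
low = contrast([0.25, 0.35, 0.45, 0.55])      # spread is about 0.30
```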
The Shadows, Midtones, and Highlights contrast sliders let you make individual
adjustments to each of the three defining characteristics of contrast.
Note: Contrast adjustments made with the primary contrast sliders can affect the saturation
of the image. Raising luma by a significant amount can reduce saturation, while reducing
luma can raise image saturation. This behavior is different from that of the Color Corrector
3-way filter in Final Cut Pro, in which changes to contrast have no effect on image
saturation.
Using the Primary Contrast Controls
The primary contrast controls consist of three vertical sliders that are used to adjust the
black point, the distribution of midtones, and the white point of the image.
[Figure: the three primary contrast sliders, labeled Shadow, Midtone, and Highlight, which adjust the black point, the distribution of midtones, and the white point. The Output display beneath each shows its h, s, and l values, for example 0.00h 0.00s 0.50l for the Midtone slider.]
Each slider is a vertical gradient. Dragging down lowers its value, while dragging up raises
its value. A blue bar shows the current level at which each slider is set, while the third
number in the Output display (labeled L) below each color control shows that slider's
numeric value. Contrast adjustment is a big topic. For more information, see:
• Adjusting the Black Point with the Shadow Slider
• Adjusting the Midtones with the Midtone Slider
• Adjusting the White Point with the Highlight Slider
• Expanding and Reducing Image Contrast
• Contrast Affects Color Balance Control Operation
Using Contrast Sliders with a Control Surface
In the Primary In, Secondaries, and Primary Out rooms, the three contrast sliders usually
correspond to three contrast rings, wheels, or knobs on compatible control surfaces.
Whereas you can adjust only one contrast slider at a time using the onscreen controls
with a mouse, you can adjust all three contrast controls simultaneously using a hardware
control surface.
When you’re using a control surface, the Encoder Sensitivity parameter in the User Prefs
tab of the Setup room lets you customize the speed with which these controls make
adjustments. For more information, see Control Surface Settings.
Adjusting the Black Point with the Shadow Slider
The behavior of the Shadow contrast slider depends on whether or not the Limit Shadow
Adjustments preference (in the User Prefs tab of the Setup room) is turned on. (For more
information, see User Interface Settings.)
• If Limit Shadow Adjustments is turned off: Contrast adjustments with the Shadow slider
are performed as a simple lift operation. The resulting correction uniformly lightens or
darkens the entire image, altering the shadows, midtones, and highlights by the same
amount. This can be seen most clearly when adjusting the black point of a linear
black-to-white gradient, which appears in the Waveform Monitor as a straight diagonal
slope. Notice how the entire slope of the gradient in the Waveform Monitor moves up.
• If Limit Shadow Adjustments is turned on: The black point is raised, but the white point
remains at 100 percent. This means that when you make any adjustments with the
Shadow contrast slider, all midtones in the image are scaled between the new black
point and 100 percent. Notice how the top of the slope in the Waveform Monitor stays
in place while the black point changes.
You'll probably leave the Limit Shadow Adjustments control turned on for most of your
projects, since this setting gives you the most control over image contrast (and color, as
you'll see later) in your programs.
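The two behaviors can be sketched numerically. This is a simplified model of the operation described above, not Color's actual implementation; luma values are normalized to 0.0–1.0:

```python
def shadow_slider(pixel, new_black, limit_shadow_adjustments=True):
    """Apply a black-point adjustment to a single normalized luma value."""
    if not limit_shadow_adjustments:
        # Simple lift: shadows, midtones, and highlights all move
        # by the same amount, so the whole slope shifts.
        return pixel + new_black
    # Limited: the white point stays pinned at 1.0, and the midtones
    # are rescaled between the new black point and 1.0.
    return new_black + pixel * (1.0 - new_black)

# Raising the black point to 0.1: with the limit off, a 1.0 white is
# pushed to 1.1 (out of range); with the limit on, white stays at 1.0.
```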
Contrast adjustments to the shadows are one of the most frequent operations you'll
perform. Lowering the blacks so that the darkest shadows touch 0 percent (seen in the
bottom of the Waveform Monitor's graph or on the left of the Histogram's graph when
either is set to Luma) deepens the shadows of your image. Deeper shadows can enrich
the image and accentuate detail that was being slightly washed out before.
Lowering the blacks even more (called crushing the blacks, because no pixel can be darker
than 0 percent) creates even higher-contrast looks. Crushing the blacks comes at the
expense of losing detail in the shadows, as larger portions of the image become uniformly
0 percent black. This can be seen clearly in the black portion of the gradient at the bottom
of the image.
Note: Even if Limit Shadow Adjustments is turned on, you can still make lift adjustments
to the image using the Master Lift parameter in the Basic tab. See Master Contrast Controls.
Adjusting the Midtones with the Midtone Slider
The Midtone contrast slider lets you make a nonlinear adjustment to the distribution of
midtones in the image (sometimes referred to generically as a gamma adjustment). What
this means is that you can adjust the middle tones of the image without changing the
darkness of the shadows or the lightness of the highlights.
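A gamma adjustment of this kind is conventionally modeled as a power function, which leaves 0.0 and 1.0 in place while moving everything in between. A sketch under that assumption (not Color's exact internal math):

```python
def midtone_slider(pixel, gamma):
    """Nonlinear midtones adjustment on a normalized (0.0-1.0) luma value.
    The endpoints 0.0 and 1.0 are unchanged; values in between move most,
    tapering off toward the black and white points."""
    return pixel ** gamma

# gamma > 1.0 darkens the midtones; gamma < 1.0 lightens them.
# Either way, black stays black and white stays white.
```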
Here are two examples of using the Midtone contrast slider. The midtones have been
lowered in the following image. Notice how the overall image has darkened, with more
of the picture appearing in the shadows; however, the highlights are still bright, and the
shadow detail has not been lost. The top and bottom of the gradient's slope in the
Waveform Monitor remain more or less in place, and the slope itself curves downward,
illustrating the nonlinear nature of the adjustment.
Next, the Midtone slider is raised. The image has clearly lightened, and much more of the
picture is in the highlights. Yet the deepest shadows remain rich and dark, and the detail
in the highlights isn't being lost since the highlights are staying at their original level.
Again, the top and bottom of the gradient's slope in the Waveform Monitor remain more
or less in place, but this time the slope curves upward.
No matter what contrast ratio you decide to employ for a given shot, the Midtone slider
is one of your main tools for adjusting overall image lightness when creating mood,
adjusting the perceived time of day, and even when simply ensuring that the audience
can see the subjects clearly.
Note: Even though midtones adjustments leave the black and white points at 0 and 100
percent respectively, extreme midtones adjustments will still crush the blacks and flatten
the whites, eliminating detail in exchange for high-contrast looks.
Adjusting the White Point with the Highlight Slider
The Highlight slider is the inverse of the Shadow slider. Using this control, you can raise
or lower the white point of the image, while leaving the black point relatively untouched.
All the midtones of the image are scaled between your new white point and 0 percent.
If the image is too dark and the highlights seem lackluster, you can raise the Highlight
slider to brighten the highlights, while leaving the shadows at their current levels. Notice
that the black point of the gradient's slope in the Waveform Monitor remains at 0 percent
after the adjustment.
Note: In this example, Broadcast Safe has been turned off, and you can see the white
level of the gradient clipping at the maximum of 109 percent.
If the highlights are too bright, you can lower the Highlight slider to bring them back
down, without worrying about crushing the blacks.
Overly bright highlights are common in images shot on video, where super-white
levels above the broadcast legal limit of 100 percent frequently appear in the source
media (as seen in the previous example). If left uncorrected, highlights above 100 percent
will be clipped by the Broadcast Safe settings when they're turned on, resulting in a loss
of highlight detail when all pixels above 100 percent are set to 100 percent.
By lowering the white point yourself, you can bring clipped detail back into the image.
Note: Values that are clipped or limited by Color are preserved internally and may be
retrieved in subsequent adjustments. This is different from overexposed values in source
media, which, if clipped at the time of recording, are lost forever.
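The white-point operation described above can be sketched as a simple scale, the inverse of the shadow lift. This is a simplified model (not Color's implementation), with luma normalized so that 1.0 is 100 percent and super-white peaks exceed 1.0:

```python
def highlight_slider(pixel, new_white):
    """White-point adjustment on a normalized luma value: the black point
    stays at 0.0, and the midtones are rescaled between 0.0 and the
    new white point."""
    return pixel * new_white

# Bringing a 109 percent super-white peak down to broadcast-legal levels:
# scaling by 1.0 / 1.09 puts the peak back at 1.0, while 0.0 black
# is left exactly where it was.
```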
While modest adjustments made with the Highlight slider won't affect the black point,
they will have an effect on the midtones that is proportional to the amount of your
adjustment. The influence of the Highlight slider falls off toward the shadows, but it's fair
to say that adjustments made with the Highlight slider have a gradually decreasing effect
on approximately the brightest 80 percent of the image.
For this reason, you may find yourself compensating for a Highlight slider adjustment's
effect on the midtones of your image by making a smaller inverse adjustment with the
Midtone slider.
The suitable white point for your particular image is highly subjective. In particular, just
because something is white doesn't mean that it's supposed to be up at 100 percent.
Naturally bright features such as specular highlights, reflected glints, and exposed light
sources are all candidates for 100 percent luma. (Chances are these areas are at super-white
levels already, so you'll be turning the brightness down if broadcast legality is an issue.)
On the other hand, if you're working on an interior scene with none of the previously
mentioned features, the brightest subjects in the scene may be a wall in the room or the
highlights of someone's face, which may be inappropriately bright if you raise them to
100 percent. In these cases, the brightness at which you set the highlights depends largely
on the kind of lighting that was used. If the lighting is subdued, you'll want to keep the
highlights lower than if the lighting is intentionally bright.
Expanding and Reducing Image Contrast
For a variety of reasons, it's often desirable to stretch the contrast ratio of an image so
that it occupies the widest range of values possible, without introducing unwanted noise.
(This can sometimes happen in underexposed images that require large contrast
adjustments.)
Most images don't start out with the highest contrast ratio possible for the shot. For
example, even in well-exposed shots, video cameras often don't record black at 0 percent,
instead recording black levels at around 3 to 4 percent. For this reason alone, small
adjustments to lower the black point often improve the image without the need to do much more.
In other cases, an image that is slightly over- or underexposed may appear washed out
or muddy, and simple adjustments to lower the darkest pixels in the image and raise the
brightest pixels in the image to widen the contrast ratio have an effect similar to “wiping
a layer of grime off the image” and are often the first steps in simply optimizing a shot.
In other cases, you may choose to deliberately widen the contrast ratio even further to
make extreme changes to image contrast. This may be because the image is severely
underexposed, in which case you need to adjust the Highlight and Midtone sliders in an
effort to simply make the subjects more visible. You might also expand the contrast ratio
of an otherwise well-exposed shot to an extreme, crushing the shadows and clipping the
highlights to create an extremely high-contrast look.
Important: When you expand the contrast of underexposed shots, or make other extreme
contrast adjustments, you may accentuate film grain and video noise in the image. This
is particularly problematic when correcting programs that use video formats with low
chroma subsampling ratios. For more information, see Chroma Subsampling Explained.
Of course, you also have the option to lower the contrast ratio of an image. This might
be done as an adjustment to change the apparent time of day (dulling shadows while
maintaining bright highlights for a noon-time look) or simply as a stylistic choice (lighter
shadows and dimmer highlights for a softer look).
What Exactly Is Image Detail?
Image detail is discussed frequently in this and other chapters, mainly within the context
of operations that enhance perceived detail, and those that result in the loss of image
detail. Simply put, image detail refers to the natural variation in tone, color, and contrast
between adjacent pixels.
Because they occur at the outer boundaries of the video signal, the shadows and
highlights of an image are most susceptible to a loss of image detail when you make
contrast adjustments. This results in the "flattening" of areas in the shadows or highlights
when larger and larger groups of pixels in the picture are set to the same value (0 in
the shadows and 100 in the highlights).
It's important to preserve a certain amount of image detail in order to maintain a natural
look to the image. On the other hand, there's no reason you can't discard a bit of image
detail to achieve looks such as slightly crushed blacks, or widely expanded contrast for
a "high-contrast look" with both crushed blacks and clipped whites. Just be aware of
what, exactly, is happening to the image when you make these kinds of adjustments.
Contrast Affects Color Balance Control Operation
There's another reason to expand or otherwise adjust the contrast ratio of an image before
making any other color corrections. Every adjustment you make to the contrast of an
image changes which portions of that image fall into which of the three overlapping
tonal zones the color balance controls affect (covered in Using Color Balance Controls).
For example, if you have a low-contrast image with few shadows, and you make an
adjustment with the Shadow color balance control, the resulting correction will be small,
as you can see in the following gradient.
If, afterward, you adjust the Shadow or Midtone contrast sliders to lower the shadows,
you'll find more of the image becoming affected by the same color correction, despite
the fact that you've made no further changes to that color control.
This is not to say that you shouldn't readjust contrast after making other color corrections,
but you should keep these interactions in mind when you do so.
Color Casts Explained
A color cast is an unwanted tint in the image due to the lighting, the white balance of
the video camera, or the type of film stock used given the lighting conditions during the
shoot. Color casts exist because one or more color channels is inappropriately strong or
weak. Furthermore, color casts aren't usually uniform across an entire image. Often, color
casts are stronger in one portion of the image (such as the highlights) and weaker or
nonexistent in others (the shadows, for example).
If you examine an image with a color cast in the Waveform Monitor set to Parade, you
can often see the disproportionate levels of each channel that cause the color cast when
you examine the tops of the waveforms (representing the highlights) and the bottoms
of the waveforms (representing the shadows).
Note: For clarity, the Parade scope is shown with the tinted red, green, and blue waveforms
that appear when Monochrome Scopes is turned off in the User Prefs tab.
When Is a Color Cast a Creative Look?
It's important to bear in mind that color casts aren't always bad things. In particular, if
the director of photography is being creative with the lighting, there may in fact be
color casts throughout the tonal range of the image. It's important to distinguish between
color casts that are there either accidentally or because of conditions of the shoot and
the stylistic choices made when lighting each scene. In all cases, clear communication
between the director of photography and the colorist is essential.
Using Color Balance Controls
The color balance controls (which are sometimes referred to as hue wheels) work as virtual
trackballs on the screen. Each is grouped with several related controls:
[Figure: a color balance control with its Hue slider, Saturation slider, Output display, Luma reset button, and color balance reset button labeled.]
• Color Balance wheel: A virtual trackball that lets you adjust the hue (set by the handle's
angle about the center) and saturation (set by the handle's distance from the center)
of the correction you're using to rebalance the red, green, and blue channels of the
image relative to one another. A handle at the center of the crosshairs within the wheel
shows the current correction. When the handle is centered, no change is made.
• Hue slider: This slider lets you change the hue of the adjustment without affecting the
saturation.
• Saturation slider: This slider lets you change the saturation of the adjustment without
affecting the hue. Drag up to increase the saturation, and down to decrease it.
• H, S reset button: Clicking the H, S reset button resets the color balance control for that
tonal zone. If you're using a control surface, this corresponds to the color reset control
for each zone. (These are usually one of a pair of buttons next to each color balance
trackball.)
• L reset button: Clicking the L reset button resets the contrast slider for that tonal zone.
If you're using a control surface, this corresponds to the contrast reset control for each
zone. (These are usually one of a pair of buttons next to each color balance trackball.)
• Output display: The output display underneath each color control shows you the current
hue and saturation values of the color balance control and the lightness value of the
contrast slider for that zone.
Note: The color balance controls can be accelerated to 10x their normal speed by
pressing the Option key while you drag.
Using Color Balance Controls with a Control Surface
The three color balance controls correspond to the three trackballs, or joyballs, on
compatible control surfaces. Whereas you can only adjust one color balance control at
a time using the onscreen controls with a mouse, you can adjust all three color balance
controls simultaneously using a hardware control surface.
When you’re using a control surface, the Hue Wheel Angle and Joyball Sensitivity
parameters in the User Prefs tab of the Setup room let you customize the operation of
these controls. For more information on adjusting these parameters, see Control Surface
Settings.
Rebalancing a Color Cast
By dragging the handle of a color balance control, you can rebalance the strength of the
red, green, and blue channels of an image to manipulate the quality of light in order to
either correct such color casts or introduce them for creative purposes. The color balance
controls always adjust all three color channels simultaneously.
In the following example, the image has a red color cast in the highlights, which can be
confirmed by the height of the top of the red channel in the Parade scope.
To correct this, you need to simultaneously lower the red channel and raise the blue
channel, which you can do by dragging the Highlight color balance control. The easy
way to remember how to make a correction of this nature is to drag the color balance
control handle toward the secondary of the color that's too strong. In this case, the color
cast is a reddish/orange, so dragging the color control in the opposite direction, toward
bluish/cyan, rebalances the color channels in the appropriate manner. The Midtone color
balance control is used because the majority of the image that's being adjusted lies
between 80 and 20 percent.
If you watch the Parade scope while you make this change, you can see the color channels
being rebalanced, while you also observe the correction affecting the image on your
broadcast display.
There are three color balance controls in the Primary In, Secondaries, and Primary Out
rooms. Each one lets you make adjustments to specific tonal regions of the image.
About Shadows, Midtones, and Highlights Adjustments
Like many other color correction environments, Color provides a set of three color balance
controls for the specific adjustment of color that falls within each of three overlapping
zones of image tonality. These tonal zones are the shadows, midtones, and highlights of
the image. If you were to reduce the tonality of an image into these three zones, it might
look something like the following illustration.
[Figure: the original color image beside a simulation of its tonal zones, indicating the areas most affected by the Shadow, Midtone, and Highlight color balance controls.]
Three zone controls allow you to make targeted adjustments to the color that falls within
the highlights of an image, without affecting color in the shadows. Similarly, they allow
you to make separate adjustments to differently lit portions of the image to either make
corrections or achieve stylized looks.
To prevent obvious banding or other artifacts, adjustments to the three tonal zones
overlap broadly, with each color balance control's influence over the image diminishing
gradually at the edges of each zone. This overlap is shown in the following graph.
[Figure: graph showing the overlapping influence of the Shadow, Midtone, and Highlight controls across the tonal range.]
The ways in which these zones overlap are based on the OpenCDL standard, and their
behavior is described below.
Important: If you're used to the way the Color Corrector 3-way filter works in Final Cut Pro,
you'll want to take some time to get used to the controls of the Primary In room, as they
respond somewhat differently. Also, unlike adjustments using the Color Corrector 3-way
filter in Final Cut Pro, adjustments made using the color balance control affect the luma
of the image, altering its contrast ratio.
Shadows Color Adjustments
The behavior of the Shadow color balance control depends on whether or not the Limit
Shadow Adjustments preference is turned on. (For more information, see User Interface
Settings.)
• If Limit Shadow Adjustments is turned off: Color adjustments made using the Shadow
control are performed as a simple add operation. (The color that's selected in the
Shadow color control is simply added to that of every pixel in the image.) The resulting
correction affects the entire image (and can be seen clearly within the gradient at the
bottom of the image), producing an effect similar to a tint.
• If Limit Shadow Adjustments is turned on: A linear falloff is applied to color adjustments
made with the Shadow control such that black receives 100 percent of the adjustment
and white receives 0 percent of the adjustment. This is the method to use if you want
to be able to selectively correct shadows while leaving highlights untouched.
Note: To better illustrate the effect of the Shadow color control, the previous examples
were shown with Broadcast Safe turned off so that image values below 0 percent wouldn't
be clipped.
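The two behaviors described in the bullets above can be sketched per channel. This is a simplified model of the add operation and its linear falloff, not Color's actual implementation; channel values are normalized to 0.0–1.0:

```python
def shadow_color(pixel, offset, limit_shadow_adjustments=True):
    """Apply a Shadow color adjustment to one channel of one pixel."""
    if not limit_shadow_adjustments:
        # Simple add: the selected color is added to every pixel
        # uniformly, producing an effect similar to a tint.
        return pixel + offset
    # Linear falloff: black receives 100 percent of the adjustment
    # and white receives 0 percent, so highlights are untouched.
    return pixel + offset * (1.0 - pixel)

# With the limit on, adding 0.2 to a channel moves black from 0.0
# to 0.2, but leaves a 1.0 white exactly where it was.
```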
Midtones Color Adjustments
Adjustments made with the Midtone color balance control apply the correction using a
power operation (the new pixel value = old pixel value ^ adjustment). The result is that
midtones adjustments have the greatest effect on color values at 50 percent lightness
and fall off as color values near 0 and 100 percent lightness.
This lets you make color adjustments that exclude the shadows and highlights in the
image. For example, you could add a bit of blue to the midtones to cool off an actor's
skin tone, while leaving your shadows deep and untinted and your highlights clean and
pure.
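The power operation given above (new pixel value = old pixel value ^ adjustment) can be sketched per channel. The specific exponents below are hypothetical, chosen only to illustrate a gentle cooling adjustment; how the onscreen control maps to an exponent is not specified here:

```python
def midtone_color(pixel, exponent):
    """Power operation on one normalized (0.0-1.0) channel value:
    0.0 and 1.0 are unchanged, values near 0.5 move the most."""
    return pixel ** exponent

# Hypothetical cooling adjustment: for values between 0.0 and 1.0,
# an exponent below 1.0 raises a channel and one above 1.0 lowers it,
# so red comes down slightly while blue comes up slightly.
r, g, b = 0.6, 0.5, 0.4
cooled = (midtone_color(r, 1.05), midtone_color(g, 1.0), midtone_color(b, 0.95))
```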
Highlights Color Adjustments
Adjustments made using the Highlight color balance control apply a multiply operation
to the image—the color that's selected in the Highlight color control is simply multiplied
with that of every pixel in the image. By definition, multiply color correction operations
fall off in the darker portions of an image and have no effect whatsoever in regions of 0
percent black.
The Highlight color control is extremely useful for correcting color balance problems
resulting from the dominant light source that's creating the highlights, without
inadvertently tinting the shadows. In the following example, a bit of blue is added to the
highlights to neutralize the orange from the tungsten lighting.
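The multiply operation is the simplest of the three to sketch. The gain values below are hypothetical, illustrating only the shape of the correction, not a specific Color setting:

```python
def highlight_color(pixel, gain):
    """Multiply operation on one normalized (0.0-1.0) channel value:
    full effect at white, diminishing toward the shadows, and no
    effect whatsoever at 0 percent black."""
    return pixel * gain

# Hypothetical tungsten correction: pulling red down by 5 percent
# removes far more red from a bright pixel (0.9 loses 0.045) than
# from a dark one (0.1 loses only 0.005).
bright_red = highlight_color(0.9, 0.95)
dark_red = highlight_color(0.1, 0.95)
```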
Color Balance Control Overlap Explained
The broadly overlapping nature of color correction adjustments made with the three
color balance controls is necessary to ensure a smooth transition from adjustments made
in one tonal zone to another, in order to prevent banding and other artifacts. In general,
adjustments made to the color in one tonal zone also affect other tonal zones in the
following ways:
• Adjustments made to the Shadow color controls overlap the midtones and the darker
portion of the highlights but exclude areas of the image at the highest percentages.
• Adjustments made to the midtones affect the broadest area of the image but don't
affect the lowest percentages of the shadows or the highest percentages of the
highlights.
• Adjustments made to the highlights affect the midtones as well, but not the lowest
percentages of the shadows.
Controlling Color Balance Control Overlap
While the tonal zones that are affected by the three color balance controls are predefined
by the mathematical operations they perform, it is possible to exert some control over
what areas of an image are being affected by the corrections of a particular color balance
control. This is done by applying opposing corrections with other color balance controls.
The following example shows this principle in action. If you adjust the Highlight color
balance control to add blue to a linear gradient, you'll see the following preview.
As you can see, this change affects both the whites and midtones. If you want to restrict
the correction that's taking place in the midtones, while leaving the correction at the
upper portion of the whites, you can take advantage of the technique of using
complementary colors to neutralize one another, making a less extreme, opposite
adjustment with the Midtone color balance control.
The result is that the highlights correction that had been affecting the midtones has been
neutralized in the lower portion of the midtones.
Although making opposing adjustments to multiple color balance controls may seem
contradictory, it's a powerful technique. With practice, you'll find yourself instinctively
making adjustments like this all the time to limit the effect of corrections on neighboring
zones of tonality.
The Curves Controls
The curves controls, located underneath the color controls in the Primary In room, provide
an additional method for adjusting the color and contrast of your images. If you're familiar
with image editing applications such as Photoshop, chances are you've used curves
before.
The three main differences between the curves controls and the color balance controls
are:
• The curves controls let you make adjustments to as many specific tonal ranges as
you choose to define, while the color balance controls affect three predefined tonal
ranges.
• Each curves control affects only a single color channel, while the color balance controls
let you quickly adjust all three color channels simultaneously.
• Curves cannot be animated with keyframes, although every other parameter in the
Primary In and Primary Out rooms can be.
Color balance controls are usually faster to use when making broad adjustments to the
shadows, midtones, and highlights of the image. Curves, on the other hand, often take
more time to adjust, but they allow extremely precise adjustments within narrow tonal
zones of the image, which can border on the kinds of operations typically performed
using secondary color correction.
Important: While the power of curves can be seductive, be wary of spending too much
time finessing your shots using the curves controls, especially in client sessions where
time is money. It's easy to get lost in the minutiae of a single shot while the clock is ticking,
and such detail work may be faster to accomplish with other tools.
How Curves Affect the Image
Curves work by remapping the original color and luma values to new values that you
choose, simply by changing the height of the curve. The x axis of the graph represents
the source values that fall along the entire tonal range of the original image, from black
(left) to white (right). The y axis of the graph represents the tonal range available for
adjustment, from black (bottom) to white (top).
Without any adjustments made, each curve control is a flat diagonal line; in other words,
each source value equals its adjustment value, so no change is made.
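Conceptually, a curve is a transfer function: each source value on the x axis maps to an adjustment value on the y axis. The following sketch uses simple piecewise-linear interpolation to stand in for Color's B-splines; the control points are hypothetical, not values from the application.

```python
def apply_curve(value, points):
    """Remap a 0-1 value through a curve defined by (source, adjustment)
    control pairs. Linear interpolation stands in for Color's B-splines."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return points[-1][1]

identity = [(0.0, 0.0), (1.0, 1.0)]             # flat diagonal: no change
raised = [(0.0, 0.0), (0.5, 0.65), (1.0, 1.0)]  # midtones lifted

print(apply_curve(0.5, identity))   # 0.5 -- unchanged
print(apply_curve(0.5, raised))     # 0.65 -- midtones lightened
```

Lowering the middle control pair below the diagonal would darken the midtones instead.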
If part of a curve is raised by one or more control points, then the tonal area of the image
that corresponds to that part of the curve is adjusted to a higher value. In other words,
that part of the image is lightened.
Effect of raising midtones using the Luma curve
If part of a curve is lowered with one or more control points, then the tonal area of the
image that corresponds to that part of the curve is adjusted to a lower value. In other
words, that part of the image is darkened.
Effect of lowering midtones using the Luma curve
Curve Editing Control Points and B-Splines
By default, each curve has two control points. The bottom-left control point is the black
point and the top-right control point is the white point for that channel. These two control
points anchor the bottom and top of each curve.
Curves in Color are edited using B-Splines, which use control points that aren't actually
attached to the curve control to "pull" the curve into different shapes, like a strong magnet
pulling thin wire. For example, here's a curve with a single control point that's raising the
highlights disproportionately to the midtones:
The control point hovering above the curve is pulling the entire curve upward, while the
ends of the curve are pinned in place.
The complexity of a curve is defined by how many control points are exerting influence
on the curve. If two control points are added to either side and moved down, the curve
can be modified as seen below.
To make curves sharper, move their control points closer together. To make curves more
gentle, move the control points farther away from one another.
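The "magnet" behavior of B-spline control points can be sketched with the standard uniform cubic B-spline basis; this is a generic formulation for illustration, not Color's implementation, and the control points are hypothetical.

```python
def bspline_segment(p0, p1, p2, p3, t):
    """Evaluate one uniform cubic B-spline segment at t in [0, 1].
    Control points pull the curve toward themselves without the
    curve actually passing through them."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# One control point raised well above the others "pulls" the segment
# upward, but the evaluated curve stays below the raised point.
pts = [(0.0, 0.0), (0.33, 0.33), (0.66, 0.9), (1.0, 1.0)]
x, y = bspline_segment(*pts, 0.5)
print(round(y, 3))   # pulled upward, yet below the control point's 0.9
```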
The following procedures describe how to create, remove, and adjust the control points
that edit curves controls.
To add control points to a curve
µ Click anywhere on the curve itself.
To adjust a control point
µ Drag it anywhere within the curve control area.
To remove control points from a curve
µ Drag a point up or down until it's outside the curve control area.
To remove all control points from a curve
µ Click the reset button (at the upper-left side of each curve graph) for the curve from which
you want to clear control points.
Using Curves to Adjust Contrast
One of the most easily understood ways of using curves is to adjust contrast with the
Luma curve. The Luma curve actually performs a simultaneous adjustment to the red,
green, and blue channels of the image (as you can see if you take a look at the Parade
scope while making Luma curve adjustments), so the overall effect is to adjust the lightness
of the image.
Note: Adjustments made to the Luma curve may affect the image's saturation. Raising
luma by a significant amount can visibly reduce saturation.
You can draw a general correspondence between the controls described in Contrast
Adjustment Explained and the black point, midtones, and white point of the Luma curve.
For example, moving the black point of the curve up raises the black point.
Moving the white point of the curve down lowers the white point of the image.
These two control points roughly correspond to the Shadow and Highlight contrast
controls. If you add a third control point to the Luma curve somewhere in the center, you
can adjust the distribution of midtones that fall between the black and white points. This
adjustment is similar to that of using the Midtone contrast control. Moving this middle
control point up raises the distribution of midtones, lightening the image while leaving
the white and black points pinned in place.
Moving the same control point down lowers the distribution of midtones, darkening the
image while leaving the white and black points pinned in place.
While these three control points can mimic the functionality of the Shadow, Midtone,
and Highlight contrast controls, the true power of curves comes from the ability to add
several control points to make targeted adjustments to the lightness of specific tonal
regions in the image.
The Luma Curve Limits the Range of the Primary Contrast Sliders
One important aspect of the curves controls is that they can limit the range of subsequent
adjustments with the primary contrast sliders in the same room. This can be clearly seen
when you make an adjustment to lower the white point of the image using the Luma
curve. Afterward, you'll find yourself unable to use the Highlight contrast slider to raise
the image brightness above the level that's set by the Luma curve. You can still make
additional contrast adjustments in other rooms.
An Example of the Luma Curve in Use
The following example illustrates how to make very specific changes to the contrast of
an image using the Luma curve. In this shot, the sky is significantly brighter than the rest
of the image. In order to bring viewer attention more immediately to the subject sitting
at the desk, you need to darken the sky outside the window, without affecting the
brightness of the rest of the image.
To make adjustments to a Luma curve
1 Before making any actual adjustments, pin down the midtones and shadows of the image
by adding a control point to the curve without moving it either up or down.
Adding control points to a portion of a curve that you don't want to adjust, and leaving
them centered, is a great way to minimize the effect of other adjustments you're making
to specific areas of an image. When you add additional control points to adjust the curve,
the unedited control points you placed will help to limit the correction.
Tip: When adding multiple control points to a curve, you can use the grid to identify
where to position parts of a curve you want to be at the original, neutral state of the
image. At its uncorrected state, each curve passes through the diagonal intersections of
the background grid.
2 To make the actual adjustment, drag the white point at the upper-right corner down to
darken the sky.
You want to make sure that you don't drag the new control point down too far, since it's
easy to create adjustments that look unnatural or solarized using curves, especially when
part of a curve is inverted.
That was a very targeted adjustment, but you can go further. Now that the sky is more
subdued, you may want to brighten the highlights of the man's face by increasing the
contrast in that part of the image.
3 Add a control point below the first control point you created, and drag it up until the
man's face lightens.
The man's face is now brighter, but the shadows are now a bit washed out.
4 Add one last control point underneath the last control point you created, and drag it
down just a little bit to deepen the shadows, without affecting the brighter portions of
the image.
As you can see, the Luma curve is a powerful tool for making extremely specific changes.
Using Curves to Adjust Color
Unlike the color balance controls, which adjust all three color channels simultaneously,
each of the color curves controls affects a single color channel. Additionally, the red,
green, and blue color curves let you make adjustments within specific areas of tonality
defined by the control points you add to the curve. This means that you can make very
exact color adjustments that affect regions of the image that are as narrow or broad as
you define.
What Is Color Contrast?
Contrast in this documentation usually describes the differences between light and dark
tones in the image. There is another way to describe contrast, however, and that is the
contrast between different colors in an image. Color contrast is a complex topic, touching
upon hue, color temperature, lightness, and saturation. Greatly simplified, color contrast
pragmatically refers to the difference in color that exists between different regions of
the image.
In the following example, the image starts out with an indiscriminate color cast; in other
words, there is red in the shadows, red in the midtones, and red in the highlights, so
there aren’t many clearly contrasting colors in different areas of the image. By removing
this color cast from some parts of the image, and leaving it in others, you can enhance
the color contrast between the main subject and the background. In images for which
this is appropriate, color contrast can add depth and visual sophistication to an otherwise
flat image.
Correcting a Color Cast Using Curves
In the following example, you'll see how to make a targeted correction to eliminate a
color cast from the lower midtones, shadows, and extreme highlights of an image, while
actually strengthening the same color cast in the lower highlights.
The following image has a distinct red color cast from the shadows through the highlights,
as you can see by the elevated red waveform in the Parade scope.
Note: For clarity, Broadcast Safe has been turned off so you can better see the bottoms
of the waveforms in the Parade scope.
In this particular shot, you want to keep the red fill light on the woman's face, as it was
intentionally part of the look of the scene. However, to deepen the shadows of the scene
and make the subject stand out a little more from the background, you'd like to remove
some of the red from the shadows.
To make a targeted color cast correction
1 Add a control point to the red curve near the bottom of the curve, and pull down until
the red color cast becomes subdued.
This should coincide with the bottom of the red waveform in the Parade scope lining up
with the bottoms of the green and blue waveforms.
This operation certainly neutralizes the red in the shadows; unfortunately, because this
one control point is influencing the entire curve, the correction also removes much of
the original red from the midtones as well.
Tip: If you're wondering where you should place control points on a curve to make an
alteration to a specific area of the image, you can use the height of the corresponding
graphs in the Waveform Monitor set to either Parade (if you're adjusting color) or Luma
(if you're adjusting the Luma curve). For example, if you want to adjust the highlights of
the image, you'll probably need to place a control point in the curve at approximately
the same height at which the highlights appear in the Waveform graph.
2 Add another control point near the top of the red curve, and drag it up until some red
"fill" reappears on the side of the woman's face.
This adjustment adds the red back to the woman's face, but now you've added red to
the highlights of the key light source, as well.
Since the key light for this shot is the sun coming in through the window, this effect is
probably inappropriate and should be corrected.
3 Drag the control point for the white point in the red curve control down until the red in
the brightest highlights of the face is neutralized, but not so far that the lighting begins
to turn cyan.
At this point, the correction is finished. The red light appears in the fill light falling on the
woman's face, while the shadows and very brightest highlights from the sun are nice and
neutral, enhancing the color contrast of the image.
Here is a before-and-after comparison so you can see the difference.
The Basic Tab
The Basic tab contains the controls for Saturation, as well as Master Lift, Gamma, and Gain
parameters that let you make additional adjustments to the contrast of your image.
For more information, see:
• Saturation Controls
• Master Contrast Controls
Saturation Controls
Saturation describes the intensity of the color in an image. Image saturation is controlled
using three parameters which, similar to the other controls in the Primary In room, let
you make individual adjustments to different tonal zones of an image. Like the contrast
and color controls, tonality-specific saturation adjustments fall off gently at the edges of
each correction to ensure smooth transitions.
• Saturation: This parameter controls the saturation of the entire image. The default value
of 1 makes no change to image saturation. Reducing this value lowers the intensity of
the color of every pixel in the image; at 0 the image becomes a grayscale monochrome
image showing only the luma. Raising the saturation increases the intensity of the color.
The maximum saturation you can obtain by adjusting the “virtual slider” of this
parameter with the mouse is 4. However, you can raise this parameter to even higher
values by entering a number directly into this field.
Saturation reduced by more than half
Original image
Beware of raising image saturation too much; this can result in colors that start to
"bleed" into one another and a signal that's illegal for broadcast.
A dramatically oversaturated image
If the Broadcast Safe settings are turned on, the legality of the image will be protected,
but you may see some flattening in particularly colorful parts of the image that results
from the chroma of the image being limited at the specified value. You can see this in
the Vectorscope by the bunching up at the edges of the graph. Even if you're not
working on a project for video, severely oversaturated colors can cause problems and
look unprofessional.
• Highlight Sat.: This parameter controls the saturation in the highlights of your image.
You can selectively desaturate the highlights of your image, which can help legalize
problem clips, as well as restore some white to the brightest highlights in an image.
Highlight saturation turned all the way down
Highlight saturation turned up
• Shadow Sat.: This parameter controls the saturation in the shadows of your image. You
can selectively desaturate the shadows on your image to create deeper looking blacks
and to eliminate inappropriate color in the shadows of your images for a more cinematic
look.
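A common way to model saturation adjustments like these is to scale each pixel's distance from its own luma. This is a hedged sketch of that idea, assuming Rec. 709 luma weights; it is not Color's documented internal math.

```python
def saturate(r, g, b, amount):
    """Scale color intensity by lerping each channel toward luma.
    amount=1 is unity, 0 yields grayscale, and values above 1
    intensify color (and can push channels out of legal range)."""
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 weights (assumed)
    return tuple(luma + amount * (c - luma) for c in (r, g, b))

print(saturate(0.8, 0.4, 0.2, 0.0))   # grayscale: all channels equal luma
print(saturate(0.8, 0.4, 0.2, 2.0))   # red channel exceeds 1.0 -- illegal
```

Highlight Sat. and Shadow Sat. would apply the same operation weighted by a tonal-zone mask, which is why those adjustments fall off gently at the zone boundaries.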
Shadow saturation turned up
Shadow saturation turned all the way down
Master Contrast Controls
Three additional parameters also affect image contrast. For more information on contrast
adjustments, see Contrast Adjustment Explained.
• Master Lift: Unlike the primary Shadow contrast slider, the Master Lift parameter only
functions as an add or subtract operator, making an overall luma adjustment to the
entire image regardless of how the Limit Shadow Adjustments control is set. For more
information on lift adjustments, see Adjusting the Black Point with the Shadow Slider.
• Master Gain: This parameter works exactly the same as the primary Highlight contrast
slider, adjusting the white point while leaving the black point at its current level and
scaling all the midtones in between the two.
• Master Gamma: This parameter works exactly the same as the primary Midtone contrast
slider, adjusting the distribution of midtones between 0 and 100 percent.
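As a rough sketch, these master operations on a normalized 0–1 luma value can be modeled as follows; the order of operations here is an assumption, not Color's exact internal math.

```python
def master_adjust(value, lift=0.0, gain=1.0, gamma=1.0):
    """Hypothetical lift/gain/gamma model on a 0-1 value:
    lift adds or subtracts uniformly across the whole image,
    gain scales the white point while pinning black at 0,
    gamma redistributes midtones while pinning 0 and 1."""
    v = value * gain + lift
    v = min(max(v, 0.0), 1.0)        # clamp to the legal 0-1 range
    return v ** (1.0 / gamma)

print(master_adjust(0.0, lift=0.1))    # blacks raised by the offset
print(master_adjust(0.5, gain=1.2))    # midtone scaled with the white point
print(master_adjust(0.5, gamma=1.5))   # midtone lightened; 0 and 1 unchanged
```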
The Advanced Tab
This tab contains another set of parameters for adjusting each of the three primary color
channels within each of the three tonal zones. Additionally, there is a set of Printer Points
controls for colorists who are used to optical color grading for film.
For more information, see:
• RGB Controls
• Printer Points Controls
RGB Controls
These parameters provide per-channel control over contrast and color. These are not
numerical representations of any of the other controls in the Primary In room. Like the
parameters in the Basic tab, they're available as an additional set of controls.
Typically, these parameters are adjusted when the Auto Balance button is used to
automatically adjust a shot. (For more information, see Using the Auto Balance Button.)
However, you can use them as you see fit.
• Red, Green, and Blue Lift: These parameters work exactly the same as the Master Lift
parameter, but affect the individual color channels.
• Red, Green, and Blue Gain: These parameters work exactly the same as the Master Gain
parameter, but affect the individual color channels.
• Red, Green, and Blue Gamma: These parameters work exactly the same as the Master
Gamma parameter, but affect the individual color channels.
Printer Points Controls
These parameters are available for colorists who are used to working with the printer
points system for color timing film. Employed by film printing machines, the printer points
system allows color correction to be performed optically, by shining filtered light through
the conformed camera negatives to expose an intermediate positive print, in the process
creating a single reel of film that is the color-corrected print.
The process of controlling the color of individual shots and doing scene-to-scene color
correction is accomplished using just three controls to individually adjust the amount of
red, green, and blue light that exposes the film, using a series of optical filters and shutters.
This method of making adjustments can be reproduced digitally using the Printer Points
parameters.
Tip: These parameters are controllable using knobs on most compatible control surfaces.
What Is a Printer Point?
Each of the Red, Green, and Blue parameters is adjusted in discrete increments called
printer points (with each point being a fraction of an ƒ-stop, the scale used to measure
film exposure). Color implements a standard system employing a total range of 50 points
for each channel, where point 25 is the original neutral state for that color channel.
Technically speaking, each point represents 1/12 of an ƒ-stop of exposure (one ƒ-stop
represents a doubling of light), so each full stop of exposure equals 12 printer points.
Making Adjustments Using Printer Points
Unlike virtually every other control in the Primary In room, the Red, Green, and Blue Printer
Points parameters make a uniform adjustment to the entire color channel, irrespective
of image tonality.
Also unique is the way in which adjustments are made. To emulate the nature of the
filters employed by these kinds of machines, raising a parameter such as the Printer Points
Red parameter doesn’t actually boost the red; instead, it removes red, causing the image
to shift to cyan (the secondary of green and blue). To increase red, you actually need to
decrease the Printer Points Red parameter.
Increasing or decreasing all three Printer Points parameters together darkens the image
(by raising all three parameters) or lightens it (by lowering all three parameters). Making
disproportionate adjustments to the three channels changes the color balance of the
image relative to the adjustment, altering the color of the image and allowing for the
correction or introduction of color casts.
The Printer Points Parameters
These parameters control calibration and individual printer points for each color channel.
• Printer Points Calibration: This value calibrates the printer points system according to
the film gamma standard you wish to use. The default value of 7.8 is derived by
multiplying the value 12 (points per ƒ-stop) by a value of 0.65 (the default film gamma
standard used). 0.65 * 12 = 7.8. To recalibrate for a different film gamma value, insert
your own gamma value into the equation.
• Printer Points Red: The value with which to raise or lower the red channel.
• Printer Points Green: The value with which to raise or lower the green channel.
• Printer Points Blue: The value with which to raise or lower the blue channel.
Note: There is also a printer points node available in the Color FX room, which works
identically to the parameters covered in this section.
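The arithmetic described above can be sketched as follows, using the stated 12-points-per-stop scale and the 0.65 × 12 calibration. The conversion from a point setting to a linear gain is an interpretation for illustration, not Color's documented internals.

```python
POINTS_PER_STOP = 12                          # stated scale: 12 points per f-stop
FILM_GAMMA = 0.65                             # default film gamma standard
CALIBRATION = FILM_GAMMA * POINTS_PER_STOP    # = 7.8, the default value

def printer_point_gain(point, neutral=25):
    """Hypothetical conversion of a printer point setting to linear gain.
    The scale is subtractive: raising the Red parameter REMOVES red,
    shifting the image toward cyan."""
    stops = (neutral - point) / POINTS_PER_STOP
    return 2.0 ** stops              # one f-stop doubles the light

print(printer_point_gain(25))   # 1.0 -- neutral, no change
print(printer_point_gain(37))   # 0.5 -- one stop of red removed
print(printer_point_gain(13))   # 2.0 -- one stop of red added
```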
Using the Auto Balance Button
The Auto Balance button performs an automatic analysis of the current shot, based on
the frame at the position of the playhead. This is useful for quickly bringing a problem
shot with a subtle, easily overlooked color cast to a neutral state, prior to performing
further color correction.
When you click this button, Color automatically samples the darkest and lightest 5 percent
of the image’s Luma channel in order to determine how to make shadow and highlight
adjustments to neutralize any color casts that are present in the image. In addition, the
black and white points of the image are adjusted to maximize image contrast, so that
the shot occupies the widest available range from 0 to 100.
Note: Unlike the Auto Balance controls in the Final Cut Pro Color Corrector 3-way filter,
the Auto Balance button is completely automatic, and does not require you to select
individual areas of the image for analysis.
To use the Auto Balance button
1 Move the playhead in the Timeline to a representative frame of the shot you want to
automatically color balance.
2 Click Auto Balance.
Once the analysis has been performed, the Red, Green, and Blue Lift and Gain parameters
in the Advanced tab of the Primary In room are automatically set to contain the results
of these adjustments. The result should render whites, grays, and blacks in the image
completely neutral.
Since the necessary adjustments are made to the Lift and Gain parameters in the Advanced
tab, the main Shadow, Midtone, Highlight, and Curves controls remain unused and remain
available to you for further adjustment of the image.
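The analysis described above can be approximated per channel like this; it is a hedged sketch of the sampling logic, not Color's implementation.

```python
def auto_balance_channel(samples, pct=0.05):
    """Average the darkest and lightest 5% of a channel's values, then
    return (lift, gain) that map those averages to 0.0 and 1.0,
    neutralizing the cast and maximizing contrast in that channel."""
    vals = sorted(samples)
    n = max(1, int(len(vals) * pct))
    black = sum(vals[:n]) / n        # darkest 5% average
    white = sum(vals[-n:]) / n       # lightest 5% average
    gain = 1.0 / (white - black)
    return -black * gain, gain

lift, gain = auto_balance_channel([0.1, 0.2, 0.5, 0.7, 0.9])
print(lift + gain * 0.1, lift + gain * 0.9)   # endpoints now span 0.0 to 1.0
```

Repeating this independently for the red, green, and blue channels yields different Lift and Gain values per channel, which is what pulls a color cast back to neutral.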
The RED Tab
When native RED QuickTime media is sent to or imported into Color, a RED tab appears
in the Primary In room, next to the Basic and Advanced tabs. There is no corresponding
RED tab in the Primary Out room.
Important: This tab only appears if you’ve installed the appropriate RED supporting
software for Final Cut Studio.
The RED camera writes raw, linear light image data to the R3D files that are recorded. The
controls found in the RED camera’s Audio/Video menus in no way alter the way the image
data is written within each R3D file. Instead, whatever settings were chosen at the time
are stored within each recorded clip as metadata (similar to a LUT) that determines how
these media files are displayed by compatible software. This metadata can be overridden
during the Log and Transfer process in Final Cut Pro.
For clips that were imported with native color metadata, the RED tab provides access to
the clip Color, Color Temp, and View metadata originally written by the RED camera.
However, this metadata can also be overwritten during ingest using a custom color
processing option in the Log and Transfer window. These parameters are provided so
that you can begin grading each clip in the state at which it was originally monitored
during the shoot, or at which it was ingested using the Final Cut Pro Log and Transfer
window.
Note: Although there is functional overlap between the controls found in this tab and
those found elsewhere in Color, the Kelvin and Tint controls are specially calibrated to
provide the most photometrically accurate white balance adjustments for RED QuickTime
media.
• Enabled: Turns all of the parameters found within the RED tab on or off. Turning Enabled
off suspends the effect of these parameters on the final rendered image in Color.
• Saturation: This parameter is available in the RED camera’s Color submenu, and adjusts
the color intensity of the image. The overall range is 0 (monochrome) through 5.0
(extremely high), where 1 is unity.
• Kelvin: This value is set by options in the RED camera’s Color Temp menu, along with
Tint. This setting is designed to compensate for the “warmth” of the available lighting
to keep white elements of the scene looking neutral. Low Kelvin values will compensate
for “warmer” lighting (such as tungsten), while higher Kelvin values compensate for
“cool” lighting (such as noon-day sun or overcast days). Two user-selectable options
set Kelvin to predetermined values: Tungsten (3,200K), and Daylight (5,600K). The Auto
WB option automatically chooses a custom value for this parameter based on analysis
of a white card, while Manual WB lets the operator choose any value. The correction
made by this parameter is designed to work specifically with RED linear light image
data to provide the most photometrically correct result.
• Tint: This value is adjustable within the RED camera’s Color Temp menu, along with
Kelvin. Tint is designed as an additional white balance compensation for light sources
with a green or magenta component, such as fluorescent or sodium vapor bulbs. The
correction made by this parameter is designed to work specifically with RED linear light
image data to provide the most photometrically correct result.
• Exposure: Available in the RED camera’s Color menu. Raises or lowers image lightness
in increments calibrated to ƒ-stops. When raising the signal up to 100 or lowering it
down to 0, the image is clipped at the boundaries of broadcast legality. The overall
range is –7 to +7, where 0 is unity.
• Red, Green, and Blue Gain: Available in the RED camera’s Gain submenu. Allows individual
adjustment of each color channel. Raising any of these gain parameters boosts the
maximum value of the corresponding color channel and scales the midtones while
pinning the bottom of the channel to 0 percent; lowering does the opposite. The overall
range is 0 to 10, where 1 is unity.
• Contrast: Available in the RED camera’s Color menu. Raising the contrast boosts the
highlights and lowers the shadows, while leaving the midtones centered around 50
percent unaffected. As the video signal reaches the boundaries of 100 and 0 percent,
it’s compressed rather than clipped. The overall range is –1 to +1, where 0 is unity.
• Brightness: Available in the RED camera’s Color menu. Raises and lowers image lightness.
When raising the signal close to 100 or lowering it down to 0, the image is compressed
rather than clipped. The overall range is –10 to +10, where 0 is unity.
• Gamma pop-up menu: In-camera, the Gamma setting is determined by the Color Space
option that’s selected in the RED Camera’s View menu. (It’s not available as an
individually adjustable parameter.) There are six options for gamma available in Color.
• Linear: No gamma adjustment is applied, linear-to-light as captured by the Mysterium
sensor.
• Rec. 709: The standard Gamma curve as specified by the Rec. 709 standard for video
gamma.
• REDspace: Similar to Rec. 709, but tweaked to be perceptually more appealing, with
higher contrast and lighter midtones.
• REDlog: A nonlinear, logarithmic gamma setting that maps the native 12-bit RED
image data to a 10-bit curve. The blacks and midtones that occupy the lowest 8 bits
of the video signal maintain the same precision as in the original 12-bit data, while
the highlights that occupy the highest 4 bits are compressed. While this reduces the
precision of detail in the highlights, this is a relative loss as the linearly encoded data
has an overabundance of precision.
• PDLOG 685: Another logarithmic gamma setting that maps the native 12-bit RED
image data into the linear portion of a Cineon or film transfer curve.
• Color Space pop-up menu: These options are available in the RED Camera’s View menu.
(In-camera, these options are tied to corresponding gamma settings.)
• CameraRGB: Identified on the camera as RAW, this mode bypasses the RED camera
matrix and represents the original, uncorrected sensor data.
• REDspace: Fits the raw RED image data into a color space that’s larger than that of
Rec. 709. Appropriate for digital cinema mastering and film output.
• Rec. 709: Fits the raw RED image data into the standard color space specified by the
Rec. 709 standard for high definition video. Appropriate for HD video mastering.
• ISO pop-up menu: A gain operation (similar to Exposure), which pins the black point at
0 while raising or lowering the white point of the image, linearly scaling everything in
between. The range is 100–2000; 320 is the default unity gain setting (no change is
made). Raising the signal too much can result in clipping.
Important: Changing the ISO setting of your RED camera does not alter the recorded
data. However, since it changes the lightness of the image you’re monitoring during
the shoot, it will influence how you light the scene and adjust the camera’s iris.
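Taken together, the Exposure and ISO descriptions above imply a simple gain model on linear light data. This sketch is an interpretation of those descriptions, not RED's or Color's published math.

```python
def red_gain(exposure_stops=0.0, iso=320):
    """Hypothetical linear-light gain: each Exposure stop doubles the
    signal, and ISO acts as a linear gain with unity at 320."""
    return (2.0 ** exposure_stops) * (iso / 320.0)

print(red_gain())                   # 1.0 -- unity
print(red_gain(exposure_stops=1))   # 2.0 -- one stop brighter
print(red_gain(iso=640))            # 2.0 -- double the unity ISO
```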
Secondary color correction controls let you isolate a portion of an image and selectively
adjust it without affecting the rest of the picture.
Once you’ve made your initial corrections using the Primary In room, the next step in
adjusting any shot is to move on to the Secondaries room to make more targeted
adjustments.
This chapter covers the following:
• What Is the Secondaries Room Used For? (p. 258)
• Where to Start in the Secondaries Room? (p. 259)
• The Enabled Button in the Secondaries Room (p. 260)
• Choosing a Region to Correct Using the HSL Qualifiers (p. 261)
• Controls in the Previews Tab (p. 268)
• Isolating a Region Using the Vignette Controls (p. 270)
• Adjusting the Inside and Outside of a Secondary Operation (p. 277)
• The Secondary Curves Explained (p. 278)
• Reset Controls in the Secondaries Room (p. 283)
Chapter 10 The Secondaries Room
What Is the Secondaries Room Used For?
The Secondaries room has been designed for maximum flexibility. While its central purpose
is to facilitate targeted corrections to specific features of the image, it can be used for a
variety of tasks.
• Isolating areas for targeted corrections: This is the primary purpose of the Secondaries
room. Using a variety of techniques, you can perform functions such as isolating the
highlights in an image to change the quality of light; targeting the color of an overly
bright sweater to desaturate it without affecting the rest of the image; or selecting an
actor’s face to create a post-production sunburn. Once you master the ability to
selectively adjust portions of the image, the possibilities are endless.
Before After
• Creating vignetting effects: Traditionally, a vignette used for creative purposes is a
darkening around the edges of the image, created with mattes or lens filters. You can
create any type of vignette you need using either preset or custom
shapes, to darken or otherwise flag areas of the image. Vignettes can be used to focus
viewer attention by highlighting a subject in the foreground or by shading background
features that you don’t want sticking out.
Before After
• Digitally relighting areas of the image: The same feature can be used in a different way,
drawing custom shapes to isolate regions of the image and add beams or pools of light
where previously there were none. This can come in handy in situations where the
lighting is a bit flat, and you want to add some interest to a feature in the scene.
Before After
• Making modifications on top of the Primary In correction: A somewhat unconventional
use of the Secondaries room is to apply an additional correction to the entire image
on top of the original correction you made with the Primary In room. When all three
secondary qualifiers are set to include the entire image (which is the default setting),
adjustments made with the color balance, contrast, and saturation controls affect
everything in the frame, just as they do in the Primary In room. You can use this to
keep stylized adjustments separate from the baseline corrections you’re making in the
Primary In room. For more information on this type of workflow, see Managing a Shot’s
Corrections Using Multiple Rooms.
Where to Start in the Secondaries Room?
The process of secondary color correction is fairly straightforward and involves the
following steps.
• Stage 1: Isolating the Region You Need to Adjust
• Stage 2: Making Color Balance, Contrast, and Saturation Adjustments
• Stage 3: Moving Through the Eight Tabs to Make More Corrections
Stage 1: Isolating the Region You Need to Adjust
There are three basic methods you can use to isolate, or qualify, features or areas within
an image in the Secondaries room:
• Key on a range of color, saturation, or brightness.
• Use a shape as a mask.
• Use one of the secondary curves to selectively adjust a portion of the spectrum.
All these methods are described in this chapter. Once you’ve selected a region of the
image to work on, the Control pop-up menu lets you apply separate operations to the
inside and outside of the selection.
Stage 2: Making Color Balance, Contrast, and Saturation Adjustments
After you’ve qualified an area for correction, you can use the same controls that are
available in the Primary In room: the color balance controls, primary contrast sliders,
and Saturation and Lift/Gain/Gamma parameters in the Basic tab, as well as the RGB
parameters in the Advanced tab.
For more information about these controls, see The Primary In Room.
Note: There is one additional correction parameter available in the Secondaries room
that’s not available in the Primary In and Out rooms, and that is the Global Hue parameter.
Using Global Hue, you can rotate the hue of every single color in the image at once. Unlike
the other parameters in the Secondaries room, Global Hue affects every pixel of the image,
and is not limited by the HSL qualifiers or the vignette controls.
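Conceptually, a global hue rotation adds a fixed offset to every pixel’s hue angle around the color wheel. The following Python sketch illustrates the idea; it is not Color’s actual implementation, and the function name is invented for the example:

```python
import colorsys

def rotate_hue(rgb, degrees):
    """Rotate a pixel's hue by the given angle; r, g, b are in 0.0-1.0."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    h = (h + degrees / 360.0) % 1.0   # hue wraps around the color wheel
    return colorsys.hls_to_rgb(h, l, s)

# Rotating pure red by 120 degrees yields pure green.
print(rotate_hue((1.0, 0.0, 0.0), 120))   # (0.0, 1.0, 0.0)
```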
Stage 3: Moving Through the Eight Tabs to Make More Corrections
Once you’ve completed the correction at hand, you can move on to the next secondary
operation you need to perform. The Secondaries room supports up to eight separate
secondary operations (although you may only have seven if you’re in single display mode).
In the next few sections, you’ll learn how to isolate areas of the image in different ways.
The Enabled Button in the Secondaries Room
The Enabled button, at the top left of the Secondaries control area, is one of the most
important controls in this room. Each of the eight tabs in the Secondaries room has its
own Enabled button.
Whenever you make an adjustment to any parameter or control in the Secondaries room,
this button is automatically turned on.
This button can be used to disable any Secondaries tab. For example:
• You can turn the Enabled button off and on to get a before-and-after preview of how
the secondary is affecting the image.
• You can turn the Enabled button off to disable a secondary effect without resetting it,
in case you want to bring it back later.
The state of the Enabled button is also keyframable. This means you can use keyframes
to control this button to turn a secondary effect on and off as the shot plays. For more
information on keyframing, see Keyframing Secondary Corrections.
Choosing a Region to Correct Using the HSL Qualifiers
One of the most common ways of isolating a feature for targeted correction is to use the
HSL qualifiers (so named because they qualify part of the image for correction) to key on
the portion you want to color correct. HSL stands for hue, saturation, and lightness, which
are the three properties of color that together define the entire range of color that can
be represented digitally.
HSL qualification is often one of the fastest ways to isolate irregularly shaped subjects,
or subjects that are moving around in the frame. However, as with any chroma or luma
key, the subject you’re trying to isolate should have a color or level of brightness that’s
distinct from the surrounding image. Fortunately, this is not unusual, and reddish skin
tones, blue skies, richly saturated clothing or objects, and pools of highlights and shadows
are often ideal subjects for secondary correction.
If you’re familiar with the Limit Effect controls of the Color Corrector 3-way filter in
Final Cut Pro, you’ll find that the Secondaries room HSL controls work more or less the
same way.
The HSL controls work as a chroma keyer. By selecting ranges of hue, saturation, and
lightness, you create a matte that is then used to define the region to which corrections
are applied. Everything outside the matte remains unaffected (although you can also
specify which portion of the matte you want to adjust, the inside or the outside).
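The keying logic can be sketched conceptually: convert each pixel to HSL and test it against whichever qualifier ranges are enabled. This simplified Python illustration produces a hard matte with no tolerance falloff; the function and parameter names are invented for the example:

```python
import colorsys

def hsl_matte(pixels, h_range=None, s_range=None, l_range=None):
    """Return a hard matte (1.0 selected, 0.0 not) for RGB pixels in 0.0-1.0.
    A qualifier passed as None is disabled and does not limit the key."""
    matte = []
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        selected = True
        if h_range is not None:
            selected = selected and h_range[0] <= h <= h_range[1]
        if s_range is not None:
            selected = selected and s_range[0] <= s <= s_range[1]
        if l_range is not None:
            selected = selected and l_range[0] <= l <= l_range[1]
        matte.append(1.0 if selected else 0.0)
    return matte

# Luma-key style: hue and saturation disabled, keying only on lightness.
pixels = [(0.9, 0.9, 0.9), (0.2, 0.2, 0.2)]
print(hsl_matte(pixels, l_range=(0.5, 1.0)))   # [1.0, 0.0] -- bright pixel only
```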
The HSL Qualifier controls always sample image data from the original, uncorrected image.
This means that no matter what adjustments have been made in the Primary In room,
the original image values are actually used to pull the key. For example, even if you
completely desaturate the image in the Primary In room, you can still pull a chroma key
in the Secondaries room.
Tip: It is not necessary to use all three qualifiers when keying on a region of the image.
Each qualifier has a checkbox and can be turned on and off individually. For example, if
you turn off the H (hue) and S (saturation) controls, you can use the L (lightness) control
by itself as a luma keyer. This is a powerful technique that lets you isolate areas of an
image based solely on image brightness.
Creating Fast Secondary Keys Using the HSL Eyedropper
The eyedropper, at the top-left corner of the Basic tab, provides a quick and easy way to
sample color values from images you’re correcting.
To use the eyedropper to pull a secondary key
1 Click the eyedropper.
The eyedropper becomes highlighted, and crosshairs appear superimposed over the
image in the preview and broadcast monitors. You use these crosshairs to sample the
HSL values from pixels in the image.
2 Move the mouse to position the crosshairs on a pixel with the color you want to key on,
then click once to sample color from a single pixel.
The crosshairs disappear, and the HSL controls are adjusted to include the sampled values
in order to create the keyed matte. In addition, the Enabled button turns on automatically
(which turns on the effect of the secondary operation in that tab). The Previews tab
becomes selected in the middle of the Secondaries room, showing the keyed matte that’s
being created by the HSL qualifiers. (For more information, see Controls in the Previews
Tab.)
Once you’ve created the keyed matte, the next step is to use the color correction controls
at the top of the Secondaries room to actually make the correction. For more information,
see The Primary In Room.
In addition to sampling individual color values, you can also use the eyedropper to sample
an entire range of values.
To use the eyedropper to sample a range of values
• Click the eyedropper, then drag the crosshairs over the range of pixels you want to sample.
The HSL controls expand to include the entire range of hues, saturation, and lightness in
the pixels you sampled. As a result, the keyed matte in the Previews tab is much more
inclusive.
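Conceptually, drag-sampling expands each qualifier just far enough to cover the minimum and maximum H, S, and L values found among the sampled pixels. A rough Python sketch of that assumption (ignoring hue wraparound):

```python
import colorsys

def ranges_from_samples(pixels):
    """Return (hue, saturation, lightness) ranges covering every sampled
    pixel, as (min, max) pairs. Hue wraparound is ignored for simplicity."""
    hs, ss, ls = [], [], []
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        hs.append(h)
        ss.append(s)
        ls.append(l)
    return ((min(hs), max(hs)), (min(ss), max(ss)), (min(ls), max(ls)))

# Dragging across two reddish pixels widens the saturation range to cover both.
h_rng, s_rng, l_rng = ranges_from_samples([(0.8, 0.2, 0.2), (0.9, 0.3, 0.1)])
print(s_rng)
```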
To expand the HSL selection using the eyedropper
• Click the eyedropper, then hold down the Shift key and either click a single pixel or drag
over a range of pixels with the crosshairs.
The crosshairs disappear, and the HSL controls expand to include the additional range of
values you sampled, widening the keyed matte in the Previews tab.
Note: When selecting a range of multiple HSL values, you can only select a contiguous
range of values. You cannot, for example, exclude yellow if you’ve included both red and
green, since yellow falls in between. If you need to select noncontiguous HSL ranges, use
multiple secondary operations: for example, choose red in Secondaries tab 1 and green
in Secondaries tab 2.
The HSL Controls
You don’t have to use the eyedropper to select a range of HSL values. You can also use
the HSL controls at the top of the Basic tab to select specific ranges of hue, saturation,
and lightness directly.
Each of these qualifiers can be turned on and off individually. Each qualifier that’s turned
on contributes to the keyed matte. Turning a qualifier off means that aspect of color is
not used.
Each qualifier has three sets of handles—center, range, and tolerance—which correspond
to three knobs on compatible control surfaces. These handles can also be manipulated
directly onscreen using the mouse.
HSL Qualifiers Explained
To make HSL adjustments efficiently, you should have an in-depth understanding of the
nature of each type of adjustment.
• H (hue): Defines the range of colors that contribute to the key. Using hue by itself to
define a keyed matte can yield similar results to using the Hue, Sat, and Lum secondary
curves. Because the visible spectrum is represented by a wraparound gradient, the H
handles are the only ones that wrap around the ends of this control, allowing you to
select a complete range of blue to green, when necessary.
• S (saturation): Defines the range of saturation that contributes to the key. Using
saturation by itself to define a keyed matte can be effective for manually limiting
oversaturated colors. Using saturation and hue, but excluding lightness, lets you
manually limit specific colors throughout the image regardless of their lightness.
• L (lightness): Defines the range of lightness that contributes to the key. Using lightness
by itself to define a keyed matte is an extremely powerful technique that lets you quickly
isolate regions of the highlights, midtones, or shadows to perform specific adjustments
such as increasing or reducing the specific lightness of shadows, or manipulating the
color within highlights.
• Reset button: Resets all three qualifiers to the default state, which is an all-inclusive
selection.
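Because hue is circular, testing whether a value falls inside the Hue qualifier has to account for ranges that cross the wrap point. A small illustrative sketch in Python (the function name is invented for the example):

```python
def hue_in_range(hue, lo, hi):
    """Test hue membership on a circular 0.0-1.0 scale; when lo > hi,
    the range wraps around the red end of the spectrum."""
    if lo <= hi:
        return lo <= hue <= hi
    return hue >= lo or hue <= hi   # wrapped range

# A range from 0.9 back around to 0.1 covers hues near the wrap point.
print(hue_in_range(0.95, 0.9, 0.1))   # True
print(hue_in_range(0.50, 0.9, 0.1))   # False
```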
HSL Qualifier Controls
This section describes the HSL qualifier controls.
• Center: A single handle defines the middle of the selected range of values.
• Range: An inner pair of handles to the left and right of the center handle defines the
initial range of values that contribute to the keyed matte. These are the solid white
pixels seen in the matte.
• Tolerance: An outer pair of handles defines a range of values surrounding the Range
handles that creates falloff, giving a soft edge to the keyed matte. These are the lighter
gray pixels seen in the matte.
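Together, the three handle pairs behave like a trapezoidal weighting function: full strength inside the range, a linear falloff across the tolerance zone, and zero beyond it. An illustrative Python sketch (the exact falloff math Color uses is an assumption here):

```python
def qualifier_weight(value, center, range_w, tol_w):
    """Trapezoidal matte weight for one qualifier.
    range_w: half-width of the full-strength plateau around center.
    tol_w:   additional half-width over which the weight falls to zero."""
    d = abs(value - center)
    if d <= range_w:
        return 1.0                          # solid white in the matte
    if d <= range_w + tol_w:
        return 1.0 - (d - range_w) / tol_w  # soft gray falloff
    return 0.0                              # outside the key

print(qualifier_weight(0.50, 0.5, 0.1, 0.1))  # inside the plateau: 1.0
print(qualifier_weight(0.65, 0.5, 0.1, 0.1))  # midway through the falloff (about 0.5)
print(qualifier_weight(0.80, 0.5, 0.1, 0.1))  # outside the key: 0.0
```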
Adjusting the HSL Controls
This section explains how to adjust the HSL controls.
To adjust the center point for any qualifier
• Drag anywhere between the two Range handles.
To make a symmetric adjustment to the Range handles
• Drag the Range handles directly, or drag anywhere between the Range and Tolerance
handles (if the tolerance is wide enough) to widen or narrow the range.
To make an asymmetric adjustment to the Range handles
• Hold down the Shift key and drag the handle you want to adjust; the opposing handle
remains fixed in place.
When you make an asymmetric adjustment, the center point also readjusts to match the
new range.
Note: You cannot make asymmetric adjustments using knobs on a control surface.
To adjust the Tolerance handles
• Drag anywhere outside of the Center, Range, and Tolerance handles to widen or narrow
the tolerance.
You can also make asymmetric adjustments to tolerance by holding down the Shift key
while dragging.
The Color Swatches
A set of six swatches underneath the HSL qualifiers lets you automatically set the Hue
qualifier to a narrow range centered on one of the primary colors (red, green, or blue)
or secondary colors (cyan, magenta, or yellow).
The swatches can be useful when you need to quickly make a hue selection for a feature
in the image that corresponds to one of these colors. When you choose one of these
swatches, the Saturation and Lightness controls remain completely unaffected.
To adjust the Hue qualifier using one of the color swatches
• Shift-click any of the swatches.
The Hue qualifier resets itself to select the corresponding range of color.
Key Blur
The Key Blur parameter lets you apply a uniform blur to the keyed matte in order to soften
it. This can go a long way toward making an otherwise noisy or hard-to-pull key usable.
This parameter defaults to 0, with a maximum possible value of 8.
Note: You can manually set the key blur to even higher values by typing them directly
into the Key Blur field.
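Key Blur softens the matte itself, not the image. A one-dimensional box-blur sketch in Python shows the effect on a hard key edge (illustrative only; Color’s blur kernel is not documented):

```python
def blur_matte(matte, radius):
    """Uniformly blur a 1D matte by averaging each sample with its
    neighbors, softening hard key edges (clamped at the borders)."""
    if radius <= 0:
        return list(matte)
    out = []
    for i in range(len(matte)):
        lo = max(0, i - radius)
        hi = min(len(matte), i + radius + 1)
        out.append(sum(matte[lo:hi]) / (hi - lo))
    return out

hard = [0.0, 0.0, 1.0, 1.0, 1.0]   # a hard key edge
print(blur_matte(hard, 1))          # the edge now ramps smoothly
```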
One of the nice things about keying for color correction is that, unlike keying to create a
visual effects composite, you don’t always have to create keyed mattes with perfect edges
or completely solid interiors. Often an otherwise mediocre key will work perfectly well,
especially when the adjustment is subtle, so long as the effect doesn’t call attention to
itself by adding noise, or by causing vibrating “chatter” around the edges of the matte.
For example, holes in a keyed matte often correspond to shadows that are falling on the
subject you’re isolating. If you’re making a naturalistic adjustment to the highlights of
the image, you probably don’t want to include such shadowed areas in the correction,
so there’s no need to make further adjustments to the matte.
Check Your Secondary Keys During Playback
It’s always important to double-check to see how the secondary keys you pull look
during playback. Sometimes a secondary operation that looked perfectly good while
you were making the correction exhibits flickering or “chatter” at the edges that is the
result of noise, or of including a range of marginal values that are just at the edge of
the selected range. (This happens frequently for “hard-to-key” features in an image.) In
these cases, additional adjustments may be necessary to eliminate the problem.
Also, secondary keys that work well in one part of a shot may not work as well a couple
of seconds later if the lighting changes. Before moving on, it’s always a good idea to
see how a secondary operation looks over the entire duration of a shot.
Controls in the Previews Tab
The Previews tab is a two-part display that helps you guide your adjustments while you
use the HSL qualifiers and the vignette controls. Two reduced-resolution images show
you different views of the operation you’re performing.
Note: The Matte Preview Mode and Vignette Outline appear in the preview display of
the Scopes window only when the Previews tab in the Secondaries room is selected.
• Vignette preview: The image on the left (above) shows you the position and size of the
currently selected vignette shape, when the Vignette button is enabled. When you use
the square or circle vignette, this window also contains an onscreen control you can
use to move, resize, and soften the vignette. If you’ve selected a user shape in the
Geometry room instead, you’ll see a noneditable outline of that shape. For more
information, see Isolating a Region Using the Vignette Controls.
• HSL qualifier preview: The image on the right shows you the matte that’s being generated
by the HSL qualifiers. This window does not include the mask that’s generated by the
vignette controls, nor does it display the HSL matte as it appears when the Key Blur
parameter is used. (The final HSL matte as it’s modified by both vignetting and key blur
is visible in the preview display only when the Matte Preview Mode is set to Matte
Only.)
The white areas of the mask indicate the parts of the image that are selected by the
current qualification settings and that will be affected by the adjustments you make. The
black areas of the image are the parts of the picture that remain unaffected.
• Matte Preview Mode buttons: These buttons control what is visible in the preview display
in the Scopes window. There are three modes:
• Final image: Shows a preview of how the final effect looks. This is similar to the
ordinary preview that’s displayed in the Scopes window, except that it also shows
the vignette outline, when the Vignette button is enabled.
• Desaturated preview: The areas of the image that are selected with the current
qualification settings appear in color, while the areas of the image that remain
unaffected are desaturated and appear monochrome.
• Matte only: Shows the actual matte being used to limit the effect. This is similar to
the image displayed in the HSL Qualifier preview display, except that it shows the
sum of the vignette mask and the HSL mask, as well as the results of the mask as it’s
modified by the Key Blur parameter.
• Vignette outline button: When the Vignette button is turned on, the Vignette outline
button lets you display or hide the vignette outline that appears in the Preview window.
Isolating a Region Using the Vignette Controls
The vignette controls give you an extremely fast way to isolate areas of an image that
are geometrically round or rectangular, such as the face of someone in close-up, or a
window in the background. Vignettes are also useful for isolating subjects that are too
hard to key using the HSL qualifiers.
On the other hand, if the subject you’re vignetting moves, you need to either keyframe
the shape to move along with it (see Keyframing) or use motion tracking to automatically
create a path for the shape to follow. (For more information, see The Tracking Tab.)
Vignettes can also be used to select large regions of the frame for brightening or
darkening. One common example of this is to use a shape to surround a region of the
image you want to draw the viewer’s attention to, switch the Control pop-up menu to
Outside, and darken the background outside of this shape using the contrast sliders to
make the subject “pop out” more, visually.
Lastly, if the square or circle vignettes aren’t sufficient for isolating an irregularly shaped
subject, you can create a custom User Shape in the Shapes tab of the Geometry room,
and use that to limit the correction. You could go so far as to rotoscope (the process of
tracing something frame by frame) complex subjects in order to create highly detailed
adjustments that are too difficult to isolate using the HSL qualifiers.
User Shapes can be edited and animated only in the Geometry room, but the mattes
they create can be used to isolate adjustments in any of the eight Secondaries tabs.
The Vignette Controls
The vignette controls are located underneath the Previews tab. Some of these controls
can also be manipulated using the onscreen controls in the Previews tab.
Note: If you have a compatible control surface, you can also use its controls to customize
the vignette. See Setting Up a Control Surface for more information.
• Vignette button: This button turns the vignette on or off for that tab.
• Use Tracker pop-up menu: If you’ve analyzed one or more motion trackers in the current
project, you can choose which tracker to use to automatically animate the position of
the vignette using this pop-up menu. To disassociate a vignette from the tracker’s
influence, choose None.
Note: When Use Tracker is assigned to a tracker in your project, the position of the
vignette (the center handle) is automatically moved to match the position of the
keyframes along that tracker’s motion path. This immediately transforms your vignette,
and you may have to make additional position adjustments to move the vignette into
the correct position. This is especially true if the feature you’re vignetting is not the
feature you tracked.
• Shape pop-up menu: This pop-up menu lets you choose a shape to use for the vignette.
• Square: A user-customizable rectangle. You can use the onscreen controls in the
Previews tab or the other vignette parameters to modify its position and shape. For
more information, see Using the Onscreen Controls to Adjust Vignette Shapes.
• Circle: A user-customizable oval. You can either use the onscreen controls in the
Previews tab, or the other vignette parameters to modify its position and shape.
• User Shape: Choosing User Shape from the Shape pop-up menu automatically moves
you to the Shapes tab of the Geometry room, where you can click to add points to
draw a custom shape to use for the vignette. When you finish, click the Attach button,
and then go back to the Secondaries room to make further adjustments. When you
use a User Shape as the vignette, the rest of the vignette parameters become
unavailable; you can modify and animate that shape only from the Shapes tab of the
Geometry room. For more information, see The Shapes Tab.
Parameters That Adjust Square or Circle Vignettes
The following parameters are only available when you use the Square or Circle options
in the Shape pop-up menu.
• Angle: Rotates the current shape.
• X Center: Adjusts the horizontal position of the shape.
• Y Center: Adjusts the vertical position of the shape.
• Softness: Blurs the edges of the shape.
• Size: Enlarges or shrinks the shape.
• Aspect: Adjusts the width-to-height ratio of the shape.
Using the Onscreen Controls to Adjust Vignette Shapes
The Angle, X Center, Y Center, Softness, Size, and Aspect parameters can all be adjusted
via onscreen controls in the image on the left of the Previews tab.
Note: Although you can also view the outlines that correspond to these onscreen controls
in the preview display of the Scopes window when you turn the Vignette Outline button
on, this outline has no onscreen controls that you can manipulate. You can only make
these adjustments in the Previews tab.
To move the vignette
• Drag anywhere inside or outside the shape in the Previews tab to move the vignette in
that direction.
The X Center and Y Center parameters are simultaneously adjusted. Color uses the same
coordinate system as Final Cut Pro to define position.
To resize the vignette
Do one of the following:
• Drag any of the four corners of the vignette to resize the vignette relative to the opposite
corner, which remains locked in position.
• Option-drag to resize the vignette relative to its center. (The center of a vignette is visible
as green crosshairs.)
• Shift-drag to resize the vignette while locking its aspect, enlarging or reducing the shape
without changing its width-to-height ratio.
Depending on the operation you perform, the X and Y Center, Size, and Aspect parameters
may all be adjusted.
To rotate the vignette
• Right-click or Control-click any of the four corners of the vignette and drag to rotate it to
the left or right.
To adjust the softness of the vignette
• Middle-click and drag to blur the edges of the vignette.
This adjustment modifies the Softness parameter. The degree of softness is visualized in
the Previews tab with a pair of concentric circles. The inner circle shows where the edge
blurring begins, and the outer circle shows where the edge blurring ends, along with the
shape.
Animating Vignettes
One of the most common operations is to place an oval over someone’s face and then
either lighten the person, or darken everything else, to draw more attention to the
subject’s face. If the subject is standing still, this is easy, but if the subject starts to shift
around or move, you need to animate the vignette using keyframes so that the lighting
effect follows the subject. For more information on keyframing, see Keyframing.
Another option is to use the motion tracker to automatically track the moving subject,
and then apply the analyzed motion to the vignette. For more information, see The
Tracking Tab.
Creating a User Shape for Vignetting
The following procedure outlines how you use the User Shape option in the Shape pop-up
menu of the vignette controls.
To use a user shape for vignetting
1 Open the Secondaries room, click one of the eight Secondaries tabs to select which
secondary operator to work on, and then select the Vignette checkbox to enable the
vignette controls.
2 Choose User Shape from the Shape pop-up menu.
The Shapes tab of the Geometry room opens, with a new shape in the shapes list to the
right, ready for you to edit.
3 Click in the Geometry preview area to add control points outlining the feature you want
to isolate, then click the first control point you created to close the shape and finish
adding points.
The shapes you draw in the Geometry room default to B-Spline shapes, which use control
points that are unattached to the shape they create to push and pull the shape into place
(similar to the B-Splines used by the curves controls in the Primary In and Out rooms).
You can also change these shapes to simple polygons if you need a shape with hard
angles rather than curves, by clicking the Polygon button in the Shapes tab. For more
information on working with shapes, see The Shapes Tab.
Tip: If you’re not sure how many control points to add to create the shape you want,
don’t hesitate to create a few more than you think you’ll need. It’s easy to reposition
them later, but you can’t add or remove control points from a shape after it’s been
created.
4 If necessary, edit the shape to better fit the feature you’re trying to isolate by dragging
the control points to manipulate the shape.
5 To feather the edge of the shape, increase the value of the Softness parameter.
Two additional editable shapes appear to the inside and outside of the shape you drew.
The inner shape shows where the feathering begins, while the outer shape shows the
very edge of the feathered shape. If necessary, each border can be independently adjusted.
6 As an optional organizational step, you can type an identifying name into the Shape
Name field, and press Return to accept the change.
7 Click Attach, at the top of the Shapes tab, to attach the shape you’ve created to the
Secondaries room tab you were in. (The number of the secondary tab should be displayed
in the Current Secondary field at the top of the Shapes tab.)
8 If necessary, you can also add keyframes or motion tracking to animate the shape to
match the motion of the camera or subject, so the shape you created matches the action
of the shot.
9 When you finish with the shape, open the Secondaries room.
You’ll see the shape you created within the vignette area of the Previews tab. At this
point, the matte that’s created by the shape can be used to limit the corrections you
make, as with any other secondary matte.
When you use a user shape, the vignette controls in the secondary tab to which it’s
assigned become disabled. If at any point you need to edit the shape, you must do so in
the Geometry room; the secondary corrections that use that shape will automatically
update to reflect your changes.
Using Secondary Keying and Vignettes Together
When you turn on the vignette controls while also using the HSL qualifiers to create a
secondary key, the vignette limits the matte that’s created by the key. This can be
extremely helpful when the best-keyed matte you can produce to isolate a feature in the
frame results in unwanted selections in the background that you can’t eliminate without
reducing the quality of the matte. In this case, you can use the vignette as a garbage
matte, to eliminate parts of the keyed matte that fall outside the vignette shape.
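In effect, garbage matting multiplies the two mattes together, so a pixel survives only where both the HSL key and the vignette select it. A minimal Python sketch of that assumption:

```python
def combine_mattes(key_matte, vignette_matte):
    """Use the vignette as a garbage matte: a pixel survives only where
    both the HSL key and the vignette select it."""
    return [k * v for k, v in zip(key_matte, vignette_matte)]

key = [1.0, 0.8, 0.0, 1.0]        # HSL key, with an unwanted hit at the end
vignette = [1.0, 1.0, 1.0, 0.0]   # the vignette excludes the last pixel
print(combine_mattes(key, vignette))   # [1.0, 0.8, 0.0, 0.0]
```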
Adjusting the Inside and Outside of a Secondary Operation
You can choose whether the color, contrast, and saturation adjustments you make affect
the inside or the outside of the isolated feature using the Control pop-up menu.
One of the most powerful features of the Secondaries room is the ability to apply separate
corrections to the inside and outside of a secondary matte in the same tab. This means
that each of the eight secondary tabs can actually hold two separate corrections.
Whenever you choose another region to work on, the controls update to reflect those
settings.
• Control pop-up menu: The Control pop-up menu also provides additional commands
for modifying these settings.
• Inside: The default setting. When set to Inside, all adjustments you make affect the
interior of the secondary matte (the area in white, when looking at the mask itself).
• Outside: When set to Outside, all adjustments you make in that tab affect the exterior
of the secondary matte (the area in black). Making a darkening adjustment to the
outside of a softly feathered circle matte that surrounds the entire frame is one way
of creating a traditional vignette effect.
• Copy Inside to Outside: Copies the correction that’s currently applied to the inside of
the matte to the outside as well. This is a handy operation if you want to copy the
same correction to the outside as a prelude to making a small change, so that the
difference between the corrections applied to the inside and the outside is not so
large.
• Copy Outside to Inside: Copies the correction that’s applied to the outside to the
inside.
• Swap: Switches the corrections that are applied to the inside and outside of the
secondary matte, so that they’re reversed.
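The inside/outside behavior can be modeled as two independent corrections blended through the matte: white matte areas receive the inside grade, black areas the outside grade, and gray falloff areas a proportional mix. An illustrative Python sketch (the function names and grades shown are hypothetical):

```python
def apply_secondary(pixels, matte, inside, outside):
    """Blend two per-pixel corrections through the secondary matte:
    matte 1.0 -> inside grade, 0.0 -> outside grade, gray -> a mix."""
    out = []
    for p, m in zip(pixels, matte):
        graded_in, graded_out = inside(p), outside(p)
        out.append(tuple(m * a + (1.0 - m) * b
                         for a, b in zip(graded_in, graded_out)))
    return out

# Hypothetical grades: brighten the inside, darken the outside.
brighten = lambda p: tuple(min(1.0, c * 1.5) for c in p)
darken = lambda p: tuple(c * 0.5 for c in p)
print(apply_secondary([(0.4, 0.4, 0.4)], [1.0], brighten, darken))
```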
The Secondary Curves Explained
The secondary curves are a deceptively powerful set of controls that allow you to make
very small or large adjustments to the hue, saturation, and luminance of an image based
solely on regions of hue that you specify using control points on a curve.
Important: Curves cannot be animated with keyframes, although just about every other
parameter in the Secondaries room can be.
These curves work quite differently from the curves controls of the Primary In room. Each
of the secondary curves controls defaults to a flat horizontal line running halfway through
the graph area.
The visible spectrum is represented along the surface of the curve by a wrap-around
gradient, the ends of which wrap around to the other side of the curve. The control points
at the left and right of this curve are linked, so that moving one moves the other, to
ensure a smooth transition if you make any adjustments to red, which wraps around the
end of the curve.
Tip: If you’re having a hard time identifying the portion of the curve that affects the part of
the image you want to adjust, you can use the color swatches in the 3D scopes to sample
a pixel from the preview, and a horizontal indicator will show the point on the curve that
corresponds to the sampled value. For more information, see Sampling Color for Analysis.
Adding points to the surface of this curve lets you define regions of hue that you want
to adjust. Raising the curve in these regions increases the value of the particular aspect
of color that’s modified by a specific curve, while lowering the curve decreases the value.
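A secondary curve is essentially a function from hue to an adjustment amount. The following Python sketch uses linear interpolation between control points in place of Color’s B-Splines (illustrative only; the function names are invented for the example):

```python
import bisect
import colorsys

def curve_value(points, hue):
    """Piecewise-linear curve lookup; points are (hue, value) pairs sorted
    by hue. The default flat curve would be [(0.0, 0.0), (1.0, 0.0)]."""
    hues = [h for h, _ in points]
    i = bisect.bisect_right(hues, hue)
    if i == 0:
        return points[0][1]
    if i == len(points):
        return points[-1][1]
    (h0, v0), (h1, v1) = points[i - 1], points[i]
    t = (hue - h0) / (h1 - h0)
    return v0 + t * (v1 - v0)

def apply_sat_curve(rgb, points):
    """Scale a pixel's saturation by (1 + curve value at its hue)."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    s = max(0.0, min(1.0, s * (1.0 + curve_value(points, h))))
    return colorsys.hls_to_rgb(h, l, s)

# Pulling the green-through-blue region down to -1.0 fully desaturates it.
sat_curve = [(0.0, 0.0), (0.25, 0.0), (0.33, -1.0), (0.70, -1.0),
             (0.80, 0.0), (1.0, 0.0)]
print(apply_sat_curve((0.2, 0.2, 0.8), sat_curve))   # blue becomes gray: (0.5, 0.5, 0.5)
```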
For example, if you add four control points to the Saturation curve to lower the
green-through-blue range of the curve, you can smoothly desaturate everything that’s
blue and green throughout the frame, while leaving all other colors intact.
One of the nicest aspects of these controls is that they allow for extremely specific
adjustments to narrow or wide areas of color, with exceptionally smooth transitions from
the corrected to the uncorrected areas of the image. In many instances, the results may
be smoother than might be achievable with the HSL qualifiers.
Another key advantage these controls have over the HSL qualifiers is that you can make
simultaneous adjustments to noncontiguous ranges of hue. In other words, you can boost
or lower values in the red, green, and blue areas of an image while minimizing the effect
of this adjustment on the yellow, cyan, and magenta portions of the image.
The secondary curves use B-Splines, just like the primary curves controls. In fact, you add
and edit control points on the secondary curves in exactly the same way. For more
information, see Curve Editing Control Points and B-Splines.
Important: Adjustments made using the secondary curves cannot be limited using the
vignette or HSL controls.
Using the Secondary Curves
This section provides examples of how to use each of the three kinds of secondary curves.
Important: Curves cannot be animated with keyframes, although just about every other
parameter in the Secondaries room can be.
The Hue Curve Tab
When you raise or lower part of the secondary Hue curve, you make a hue adjustment
similar to the one you make when you use the Global Hue control, except that you only
rotate the hue value for the selected range of hue specified by the curve. Raising the
curve shifts the values toward red, while lowering the curve shifts the values toward blue.
Before/after: Hue curve adjustment
This control can be valuable for making narrow, shallow adjustments to the reddish/orange
section of the spectrum that affects skin tones, in order to quickly and smoothly add or
remove warmth.
The Sat Curve Tab
Raising the Saturation curve increases the saturation in that portion of the spectrum,
while lowering it decreases the saturation. This is a powerful tool for creating stylized
looks that enhance or subdue specific colors throughout the frame.
Before/after: Sat curve adjustment
The Lum Curve Tab
Raising the Luminance curve lightens the colors in that portion of the spectrum, while
lowering it darkens them. This is a good tool to use when you need to make contrast
adjustments to specific regions of color.
Before/after: Lum curve adjustment
Reset Controls in the Secondaries Room
The Secondaries room has two reset buttons, which are used to reset adjustments made
in the secondary tabs.
• Reset Secondary button: Resets only the currently open secondary tab.
• Reset All Secondaries button: Resets every secondary tab in the Secondaries room. Use
this button with care.
When the primary and secondary color correction controls aren’t enough to achieve the
look you need, Color FX lets you create sophisticated effects using a node-based interface.
The Color FX room is a node-based effects environment. It’s been designed as an
open-ended toolkit that you can use to create your own custom looks by processing an
image with combinations of operations that take the form of nodes. Each node is an
individual image processing operation, and by connecting these nodes into combinations,
called node trees, you can create sophisticated effects of greater and greater complexity.
This chapter covers the following:
• The Color FX Interface Explained (p. 286)
• How to Create Color FX (p. 286)
• Creating Effects in the Color FX Room (p. 294)
• Using Color FX with Interlaced Shots (p. 300)
• Saving Favorite Effects in the Color FX Bin (p. 301)
• Node Reference Guide (p. 302)
The Color FX Room

The Color FX Interface Explained
The Color FX room is divided into four main areas.
The functionality of these areas is as follows:
• Node list: A list at the left of the Color FX room contains every image processing
operation that you can add. Some of these nodes are single input, performing that
operation to whatever image is input into them, while others are multi-input, taking
multiple versions of the image and combining them using different methods. All nodes
are alphabetically organized.
• Node view: The Node view, at the center of the Color FX room, is the area where nodes
that you create appear and are connected together and arranged into the node trees
that create the effect.
• Parameters tab: When you select a node in the Node view, its parameters appear in
this tab so that you can adjust and customize them.
• Color FX bin: This bin works similarly to the corrections and Grades bins, giving you a
way of saving effects that you create for future use.
How to Create Color FX
The Color FX room is not a compositing environment in which you combine multiple
images together. The only image you can bring into this room for processing is that of
the current shot. You create effects by assembling one or more image processing nodes
into node trees; these work together to reprocess the image in different ways. For more
information, see:
• How Node Trees Work
• Node Inputs and Outputs Explained
• Creating and Connecting Nodes
• Adjusting Node Parameters
• Bypassing Nodes
• Cutting, Copying, and Pasting Nodes
How Node Trees Work
In the Color image processing pipeline, the Color FX room processes the image as it
appears after whatever corrections have been applied in the Primary In and Secondaries
rooms. Unattached node inputs automatically connect to the state of the image as it’s
affected by the Primary In and Secondaries rooms. This is how each node tree begins,
with an empty input that’s automatically connected to the corrected image.
Note: The sole exception to this is the Color node, which generates a frame of solid color
that you can use with multi-input math nodes to tint an image in different ways.
To perform more operations on an image, you simply add more nodes, connecting the
outputs of previously added nodes to the inputs of new nodes using noodles.
You can think of a node tree as a waterfall of image processing data. Image processing
operations begin at the top and cascade down, from node to node. Each node exerts its
effect on the image that’s output from the node above it, until the bottom is reached, at
which point the image is at its final state.
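The waterfall model described above can be sketched as function composition: each node transforms the image output by the node above it. The following is an illustrative analogy only, not Color's actual implementation; the node names, the single-pixel model, and the toy math are all hypothetical stand-ins.

```python
def bleach_bypass(pixel):
    # Toy stand-in for a contrast-raising node, operating on a 0-1 pixel value
    return min(1.0, max(0.0, (pixel - 0.5) * 1.5 + 0.5))

def blur_stub(pixel):
    # A real blur needs neighboring pixels; this is just a pass-through stub
    return pixel

def output(pixel):
    # The Output node hands the final result back to the image pipeline
    return pixel

# The node tree, top to bottom
node_tree = [bleach_bypass, blur_stub, output]

def render(pixel, tree):
    """Cascade the value down the tree, node by node, until the bottom."""
    for node in tree:
        pixel = node(pixel)
    return pixel

result = render(0.8, node_tree)
```

Each function's output feeds the next function's input, exactly like a noodle connecting one node's output to the next node's input.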
The very last node in any node tree must be the Output node. This is the node that sends
the image that’s been processed by the Color FX room back into the Color image
processing pipeline. If there is no Output node, or if the Output node is disconnected,
then the node tree will have no effect on that shot, and its effect will not be rendered by
the Render Queue.
Note: A CFX bar will only appear in the grades track of the Timeline for clips with
connected Output nodes. For more information on correction bars in the Timeline, see
Basic Timeline Elements.
Node Inputs and Outputs Explained
Single input nodes take the image and perform an operation upon it. Single input nodes
can only process one incoming image at a time, so you can only connect a single noodle
to any one input.
Multi-input nodes are designed to combine multiple variations of the image in different
ways, in order to produce a single combined effect. These nodes provide multiple inputs
so that you can connect multiple noodles.
Any node’s output, on the other hand, can be connected to multiple nodes in order to
feed duplicate versions of the image as it appears at that point in the tree to multiple
operations.
When you position the pointer over any node’s input, a small tooltip appears that displays
its name. This helps you to identify which input to connect a node to so you can achieve
the result you want.
Creating and Connecting Nodes
In this section, you’ll learn how to add, delete, and arrange the nodes of a tree
to create an effect.
To add a node to the Node view along with an automatically attached Output node
µ Drag the first node you create from the Node list into the Node view.
The first node you drag into the Node view from the Node list always appears with an
Output node automatically connected to it.
To add a new node to the Node view
Do one of the following:
µ Double-click any node in the Node list.
µ Select a node from the Node list, then click Add.
µ Drag a node from the Node list into the Node view.
New nodes always appear disconnected in the Node view.
To insert a new node between two nodes that are already connected
µ Drag a node from the Node list on top of the noodle connecting any two nodes, and
drop it when the noodle turns blue.
To automatically attach a new node to the input or output of a previously created
node
µ Drag a node from the Node list so that the hand pointer is directly on top of a disconnected
input or output, then drop it.
The new node appears with a noodle connecting it to the node input or output you
dropped it onto.
To delete one or more nodes from the Node view
µ Select one or more nodes in the Node view, then press Delete or Forward Delete.
The node disappears, and any noodles that were connected to it are disconnected.
To connect the output of one node to the input of another
µ Drag a noodle from the output of one node to the input of another.
Noodles are green while they’re being created, but turn gray once they’re connected.
To disconnect a node from the one above it
Do one of the following:
µ Click the input of any node with a connected noodle to disconnect it.
µ Drag a noodle from the input of the node you want to disconnect to any empty area of
the Node view.
Tip: If you want to eliminate the effect a node is having without deleting or disconnecting
it, you can turn on its Bypass button, at the top of the Parameters tab. For more
information, see Bypassing Nodes.
When you’re working on large node trees, it pays to keep them organized so that their
operation is clear.
To rearrange nodes in the Node view
Do one of the following:
µ Drag a single node in any direction.
µ Drag a selection box over a group of nodes, then drag any of the selected nodes in any
direction to move them all together.
Adjusting Node Parameters
The operation of most nodes can be customized using parameters that vary from node
to node, depending on a node’s function. All node parameters appear in the Parameters
tab, to the left of the Color FX bin.
To show any node’s parameters in the Parameters tab
µ Click once on the node you want to edit.
Selected nodes appear highlighted in cyan, and if a selected node has any parameters,
they appear to the right, ready for editing. You can edit node parameters the same way
you edit parameters in any other room.
You can also choose the point in a node tree at which you want to view the image.
To show the image being processed at any node in the Node view
µ Double-click the node you want to view.
The currently viewed node appears highlighted in yellow, and the image as it appears at
that node in the tree appears in the onscreen preview and broadcast output displays.
Note: Because double-clicking a node loads its image and opens its parameters in the
Parameters tab, it appears with a blue outline as well.
For more information on making adjustments to a node while viewing the effect on
another node downstream in the node tree, see Viewing a Node’s Output While Adjusting
Another’s Parameters.
Viewing a Node’s Output While Adjusting Another’s Parameters
When you’re creating multinode effects, it’s often valuable to view a node that appears
at the bottom of the node tree while you’re adjusting a node that’s farther up the tree.
This way you can adjust any parameter while viewing its effect on the entire tree’s
operation.
In the following example, a high-contrast gauzy look is created with a series of nodes
consisting of the B&W, Curve, and Blur nodes on one side (to create a gauzy overlay), and
a Bleach Bypass on the other (providing high contrast), with both sides connected to a
Multiply node to create the gauzy combination.
As you fine-tune this effect, you want to adjust the amount the black-and-white image
contributes to the final effect by adjusting the Curve node, but you need to view the
output of the Multiply node in order to see how far to make the adjustment. In this case,
you double-click the Multiply node so that it becomes the viewed node (highlighted in
yellow).
Then, click the Curve node once to load its parameters into the Parameters tab. (The node
becomes highlighted in cyan.)
Bypassing Nodes
Each node has a Bypass button that appears at the top of its list of parameters. Click
Bypass to turn off the effect that node has on the tree without deleting the node from
the Node view.
Bypassed nodes are outlined with an orange dotted line.
If you want to suspend the effect of an entire node tree without deleting it or individually
turning on each node’s Bypass button, you must disconnect the Output node entirely.
Cutting, Copying, and Pasting Nodes
You can cut, copy, and paste selected nodes in the Color FX room. Using the Copy and
Paste operations, you can duplicate one or more nodes whenever necessary. This can be
especially useful when creating color effects for projects using interlaced media. (For
more information, see Using Color FX with Interlaced Shots.)
To cut one or more selected nodes
µ Choose Edit > Cut (or press Command-X).
The selected nodes are removed from the Node view, and are copied to the Clipboard.
To copy one or more selected nodes
µ Choose Edit > Copy (or press Command-C).
The selected nodes are copied to the Clipboard.
To paste nodes that you’ve previously cut or copied
µ Choose Edit > Paste (or press Command-V).
New instances of whichever nodes were previously cut or copied to the Clipboard appear
in the Node view.
Creating Effects in the Color FX Room
This section outlines some of the most common operations you’ll perform in the Color
FX room. For more information, see:
• Using Single Input Nodes
• Using Layering Nodes
• Math Layering Nodes Explained
• Creating Layered Effects Using Mattes
Using Single Input Nodes
The simplest use of this room is to apply one or two single-input nodes to create a stylized
effect. In this case, all you need to do is add the nodes you want to use, connect them
together in the order in which you want them applied, and then add an Output node to
the very end.
In the following example, a Bleach Bypass node (which alters the saturation and contrast
of an image to simulate a chemical film process) is followed by a Curve node (to further
alter image contrast), which is followed by the Output node that must be added to the
end of all node trees.
Using Layering Nodes
A more sophisticated use of nodes is to use multi-input nodes to combine two or more
separately processed versions of the image for a combined effect.
In one of the simplest examples, you can tint an image by attaching a Color node (which
generates a user-definable color) to one input of a Multiply layering node.
This adjustment multiplies the color with the corrected image. (Remember, disconnected
inputs always link to the corrected image data.) Because of the way image multiplication
works, the lightest areas of the image are tinted, while progressively darker areas are less
tinted, and the black areas stay black.
In a slightly more complicated example, the image is processed using three nodes: a
Duotone node (which desaturates the image and remaps black and white to two
customizable colors), a Curve node (to darken the midtones), and a Blur node. The result
is connected to one input of an Add node (with both Bias parameters set to 1).
The Duotone, Curve, and Blur nodes tint, darken, and blur the image prior to adding it
to the corrected image (coming in via input 2), and the result is a diffusion effect with
hot, glowing highlights.
Math Layering Nodes Explained
The layering nodes shown in Using Layering Nodes use simple math to combine two
differently modified versions of the image together. These mathematical operations rely
on the following numerical method of representing tonality in each of the three color
channels of an image:
• Black = 0 (so black for RGB = 0, 0, 0)
• Midtone values in each channel are fractional, falling between 0 and 1
• White = 1 (so white for RGB = 1, 1, 1)
Bear these values in mind when you read the following sections.
Add
The pixels from each input image are added together. Black pixels have a value of 0, so
black added to any other color results in no change to the image. Any other overlapping
values are summed, producing a brighter result. The order in which the inputs are
connected doesn’t matter.
Add operations are particularly well suited to creating aggressive glowing effects, because
they tend to raise levels very quickly depending on the input images. Bear in mind that
the best way of controlling which areas of the image are being affected when using an
Add operation is to aggressively control the contrast of one of the input images. The
darker an area is, the less effect it will have.
Note: By default, the Bias parameters of the Add node divide each input image’s values
by half before adding them together. If the results are not as vivid as you were hoping
for, change the Source 1 and Source 2 Bias parameters to 1.
Difference
The pixels from the image that’s connected to Source 2 are subtracted from the pixels
from the image that’s connected to Source 1. Black pixels have a value of 0, so any color
minus black results in no change to the Source 1 image. Because this is subtraction, the
order in which the inputs are connected matters.
This node is useful for darkening the Source 1 image based on the brightness of the
Source 2 image.
Multiply
The pixels from each input image are multiplied together. White pixels have a value of
1, so white multiplied with any other color results in no change to the other image.
However, when black (0) is multiplied with any other color, the result is black.
When multiplying two images, the darkest parts of the images remain unaffected, while
the lightest parts of the image are the most affected. This is useful for tinting operations,
as seen previously, as well as for operations where you want to combine the darkest
portions of two images.
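The arithmetic behind the Add, Difference, and Multiply operations can be sketched on normalized (0–1) pixel values. This is an illustrative approximation of the math described above, not Color's actual code; the default Bias values of 0.5 on the Add node are taken from the Note above.

```python
def add(s1, s2, bias1=0.5, bias2=0.5):
    # Add node: each input is scaled by its Bias before summing
    # (the default biases halve each input), then clamped at white
    return min(1.0, s1 * bias1 + s2 * bias2)

def difference(s1, s2):
    # Difference: one input is subtracted from the other, so input
    # order matters; the result is clamped at black
    return max(0.0, s1 - s2)

def multiply(s1, s2):
    # Multiply: white (1) leaves the other input unchanged,
    # black (0) always yields black
    return s1 * s2

assert add(0.5, 0.5) == 0.5            # default biases halve each input
assert add(0.5, 0.5, 1.0, 1.0) == 1.0  # full-strength biases sum to white
assert difference(0.8, 0.0) == 0.8     # subtracting black changes nothing
assert multiply(0.8, 1.0) == 0.8       # multiplying by white changes nothing
assert multiply(0.8, 0.0) == 0.0       # multiplying by black yields black
```

The assertions illustrate why darkening one input is the most direct way to limit the effect of an Add, and why Multiply leaves black areas untouched.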
Creating Layered Effects Using Mattes
An extremely important method of creating layered effects involves using a grayscale
matte to control where in an image two inputs are added together. The Alpha Blend
node has three inputs that work together to create exactly this effect.
This node blends the Source 2 input into the Source 1 input in all the areas where the
Source 3 Alpha input image is white. Where the Alpha input image is black, only the
Source 1 input is shown.
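The matte-driven mix described above can be sketched as a per-pixel linear interpolation; this is an illustrative approximation of the Alpha Blend behavior, not Color's implementation.

```python
def alpha_blend(s1, s2, alpha):
    # Where the Alpha input is white (1), show Source 2; where it is
    # black (0), show Source 1; gray matte values mix proportionally.
    return s1 * (1.0 - alpha) + s2 * alpha

assert alpha_blend(0.2, 0.9, 0.0) == 0.2  # black matte: Source 1 only
assert alpha_blend(0.2, 0.9, 1.0) == 0.9  # white matte: Source 2 only
```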
Any grayscale image can be used to create a matte that you can connect to the Alpha
input, for a variety of effects. In the following example, a Curve node is used to manipulate
the contrast of an image so that an Edge Detector node can better isolate the edges to
create a grayscale matte; a Blur node is used to soften the result, and an Invert node is
used to reverse the black and white areas of the matte so that the edges of the face
become the areas of the matte that are transparent, or not to be adjusted.
This matte is connected to the Alpha input of the Alpha Blend node (the third input). A
Blur node is then connected to the Source 2 input.
The Blur node blurs the corrected image, but the matte image that’s connected to the
Alpha input limits its effect to the areas of the image that don’t include the image detail
around the edges that were isolated using the Edge Detector node.
As you can see, the image that’s connected to the Alpha input of the Alpha Blend node
limits the way the Source 1 and Source 2 inputs are combined. This is but one example
of the power of the Alpha Blend node. You can use this node to limit many different
effects.
Using Color FX with Interlaced Shots
One of the limitations of the Color FX room is that many effects need to be specially
assembled when you’re working on interlaced video.
When you’re creating an effect for an interlaced shot, you need to separate each field at
the beginning of the node tree with two Deinterlace nodes, one set to Even and one set
to Odd. Once that’s done, you need to process each individual field using identical node
trees.
When you’re finished with the effect, you need to reassemble the fields into frames using
the Interlace node, connecting the Even branch of the node tree to the Even input on
the left and the Odd branch of the node tree to the Odd input on the right. The Output
node is attached to the Interlace node, and you’re finished.
If you don’t process each field separately, you may encounter unexpected image artifacts,
especially when using filtering and transform nodes such as Blur, Sharpen, Stretch, and
Translate.
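The deinterlace, process-each-field, reinterlace round trip described above can be sketched with a frame modeled as a list of scanline rows. This is purely illustrative; the field-order convention and the toy "effect" are assumptions, not Color's internals.

```python
def deinterlace(frame, field):
    # "even" keeps rows 0, 2, 4, ...; "odd" keeps rows 1, 3, 5, ...
    start = 0 if field == "even" else 1
    return frame[start::2]

def interlace(even_rows, odd_rows):
    # Reassemble the two fields back into full frames, row by row
    frame = []
    for e, o in zip(even_rows, odd_rows):
        frame.extend([e, o])
    return frame

def effect(field_rows):
    # The identical node tree applied to each field; here a toy brighten
    return [min(1.0, r + 0.1) for r in field_rows]

frame = [0.1, 0.5, 0.2, 0.6]  # four scanlines, values 0-1
result = interlace(effect(deinterlace(frame, "even")),
                   effect(deinterlace(frame, "odd")))
```

The key point mirrored here is that `effect` runs once per field, and the fields are only recombined at the very end, before the Output node.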
Saving Favorite Effects in the Color FX Bin
When you’ve created a Color FX effect you really like, you can save it for future use using
the Color FX bin. This bin works the same way as the corrections bins in every other room.
To save an effect in the Color FX bin
1 Move the playhead to a shot with a node tree you want to save.
2 Type a name for the effect into the File field underneath the bin. (This step is optional,
but recommended.)
3 Click Save.
The effect is saved with a thumbnail taken from the shot it was saved from. Entering a
custom name is optional, but recommended, to help you keep track of all your corrections.
If you don’t enter a name, saved corrections (and grades) are automatically named using
the default Effect.Date.Time.cfx convention.
To apply a saved effect or grade to a single shot
1 Move the playhead to the shot you want to apply the effect to.
2 Do one of the following:
• Double-click the effect you want to apply.
• Select an effect, then click the Load button underneath the bin.
• Drag the effect onto the shot you want to apply it to.
The selected effect is applied to the shot at the position of the playhead. You can also
apply a saved effect to multiple shots.
To apply a saved effect to multiple shots
1 Select all of the shots you want to apply the correction to in the Timeline.
2 Do one of the following:
• Double-click the effect in the bin.
• Select a saved effect, then click the Load button underneath the bin.
• Drag the saved effect onto the selected shots in the Timeline.
The effect is then applied to all selected shots in the Timeline.
For more information on saving and managing corrections, see Managing Corrections
and Grades.
Node Reference Guide
This node reference guide contains a brief description of each node that appears in the
Node list. It’s broken down into three sections:
• Layer Nodes
• Effects Nodes
• Utility Nodes
Layer Nodes
The following nodes have multiple inputs and are used to combine two or more differently
processed versions of the corrected image in different ways.
Add
Mathematically adds each pixel from the two input images together. Add operations are
particularly well suited to creating aggressive glowing effects, because they tend to raise
levels very quickly depending on the input images. Bear in mind that the best way of
controlling which areas of the image are being affected when using an Add operation is
to aggressively control the contrast of one of the input images. The darker an area is, the
less effect it will have.
The order in which the inputs are connected does not matter. Add has two parameters:
• Source 1 Bias: Controls how much of the Source 1 image is added to create the final
result by multiplying the value in each channel by the specified value. Defaults to 0.5.
• Source 2 Bias: Controls how much of the Source 2 image is added to create the final
result by multiplying the value in each channel by the specified value. Defaults to 0.5.
Alpha Blend
This node blends the Source 2 input into the Source 1 input (similar to the Blend node)
in all the areas where the Source 3 Alpha input image is white. Where the Alpha input
image is black, only the Source 1 input is shown. The order in which the inputs are
connected affects the output.
Blend
This node mixes two inputs together based on the Blend parameter. The order in which
the inputs are connected does not matter. Blend has one parameter:
• Blend: When set to 0, only Input 1 is output. When set to 0.5, Input 1 and Input 2 are
blended together equally and output. When set to 1, only Input 2 is output.
Darken
Emphasizes the darkest parts of each input. Overlapping pixels from each image are
compared, and the darkest pixel is preserved. Areas of white from either input image
have no effect on the result. The order in which the inputs are connected does not matter.
Difference
The pixels from the image that’s connected to Source 2 are subtracted from the pixels
from the image that’s connected to Source 1. Black pixels have a value of 0, so any color
minus black results in no change to the image from Source 1. Since this is subtraction,
the order in which the inputs are connected matters.
Interlace
The images connected to each input are interlaced. The Left input is for the Even field,
and the Right input is for the Odd field. This node is used at the end of node trees that
begin with Deinterlace nodes to process effects for projects using interlaced media.
Lighten
Lighten emphasizes the lightest parts of each input. Overlapping pixels from each image
are compared, and the lightest pixel is preserved. The order in which the inputs are
connected does not matter.
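The Darken and Lighten comparisons above reduce to per-pixel minimum and maximum on normalized values; this sketch is an illustrative approximation, not Color's implementation.

```python
def darken(s1, s2):
    # Darken keeps the darker of each overlapping pixel pair,
    # so white (1) from either input has no effect on the result
    return min(s1, s2)

def lighten(s1, s2):
    # Lighten keeps the lighter of each overlapping pixel pair
    return max(s1, s2)

assert darken(0.3, 0.7) == 0.3
assert darken(1.0, 0.7) == 0.7   # white from one input has no effect
assert lighten(0.3, 0.7) == 0.7
assert lighten(0.0, 0.7) == 0.7  # black from one input has no effect
```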
Multiply
The pixels from each input image are multiplied together. White pixels have a value of
1, so white multiplied with any other color results in no change to the other image.
However, when black (0) is multiplied with any other color, the result is black.
When multiplying two images, the darkest parts of the images remain unaffected, while
the lightest parts of the image are the most affected. This is useful for tinting operations,
as well as for operations where you want to combine the darkest portions of two images.
RGB Merge
The three inputs are used to insert individual channels into the red, green, and blue color
channels. You can split the three color channels apart using the RGB Split node, process
each grayscale channel individually, and then reassemble them into a color image again
with this node.
Effects Nodes
The following nodes have a single input and are used to apply a single correction or effect
to an image.
B&W
Desaturates the image to produce a monochrome image consisting of only the Luma
component. This is done using very specific math, adding together 0.299 of the red
channel, 0.587 of the green channel, and 0.114 of the blue channel to arrive at the final
monochrome result.
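The weighted sum quoted above (these are the standard Rec. 601 luma coefficients) can be written directly; this sketch only illustrates the arithmetic, not Color's code.

```python
def bw(r, g, b):
    # Weighted sum of the channels using the coefficients from the
    # node description: 0.299 red + 0.587 green + 0.114 blue
    return 0.299 * r + 0.587 * g + 0.114 * b

assert abs(bw(1.0, 1.0, 1.0) - 1.0) < 1e-9  # white stays white
assert bw(0.0, 0.0, 0.0) == 0.0             # black stays black
```

Because the weights sum to 1, full white maps to full white; the green channel dominates, reflecting the eye's greater sensitivity to green.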
Bleach Bypass
Raises the contrast and desaturates the image. Simulates laboratory silver-retention
processes used to raise image contrast in film by skipping the bleaching stage of film
development, leaving exposed silver grains on the negative which boost contrast, increase
grain, and reduce saturation.
Blur
Blurs the image. Blur has one parameter:
• Spread: The amount of blur. Can be set to a value from 0 (no blur) to 40 (maximum
blur).
Clamp
Two parameters clip the minimum and maximum values in the image. Clamp has two
parameters:
• Min: The minimum level in the image. Any levels below this value are set to this value.
• Max: The maximum level in the image. Any levels above this value are set to this value.
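The Min/Max clipping described by the two parameters can be sketched per pixel; an illustrative approximation, not Color's implementation.

```python
def clamp(pixel, min_level, max_level):
    # Levels below Min are raised to Min; levels above Max are lowered to Max
    return max(min_level, min(max_level, pixel))

assert clamp(0.2, 0.3, 0.9) == 0.3   # below Min: raised to Min
assert clamp(0.95, 0.3, 0.9) == 0.9  # above Max: lowered to Max
assert clamp(0.5, 0.3, 0.9) == 0.5   # in range: unchanged
```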
Curve
A curve that affects image contrast similar to the Luma curve in the Primary In room.
Selecting this node displays a curve control in the Parameters tab that works identically
to those found in the Primary In room. Four buttons below let you choose which channel
the curve operates upon:
• Luma: Sets the curve to adjust the luma component of the image.
• Red: Sets the curve to adjust the red color channel of the image.
• Green: Sets the curve to adjust the green color channel of the image.
• Blue: Sets the curve to adjust the blue color channel of the image.
Duotone
Desaturates the image, mapping the black and white points of the image to two
user-customizable colors to create tinted images with dual tints from white to black.
Duotone has two parameters:
• Light Color: The color that the white point is mapped to.
• Dark Color: The color that the black point is mapped to.
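The black-to-white remapping described above can be sketched as a per-channel interpolation between the two colors, driven by the pixel's luma. This is an illustrative approximation of the Duotone behavior; the sample colors are hypothetical.

```python
def duotone(luma, dark, light):
    # Map black (luma 0) to the Dark Color and white (luma 1) to the
    # Light Color, interpolating each RGB channel in between
    return tuple(d + luma * (l - d) for d, l in zip(dark, light))

dark_color = (0.1, 0.05, 0.0)   # hypothetical sepia shadows
light_color = (1.0, 0.9, 0.7)   # hypothetical sepia highlights

shadows = duotone(0.0, dark_color, light_color)
highlights = duotone(1.0, dark_color, light_color)
```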
Edge Detector
A Convolution filter that boosts image contrast in such a way as to reduce the image to
the darkest outlines that appear throughout. Edge Detector has three parameters:
• B&W: Desaturates the resulting image. Useful when using this node to generate mattes.
• Scale: Adjusts the white point. Lowering Scale helps increase contrast and crush midtone
values to emphasize the outlines.
• Bias: Adjusts overall contrast. Lowering Bias increases contrast, while raising it lowers
contrast.
Exposure
Raises the highlights or crushes the shadows, depending on whether you raise or lower
the Exposure parameter. This node has one parameter:
• Exposure: Raising this parameter raises the highlights while keeping the black point
pinned. Setting this parameter to 0 results in no change. Lowering this parameter scales
the image levels down, crushing the shadows while lowering the highlights by a less
severe amount.
Film Grain
Adds noise to the darker portions of an image to simulate film grain or video noise due
to underexposure. Highlights in the image are unaffected. This node is useful if you have
to match a clean, well-exposed insert shot into a scene that’s noisy due to underexposure.
Also useful for creating a distressed film look. This node has three parameters:
• Grain Intensity: Makes the noise more visible by raising its contrast ratio (inserting both
light and dark pixels of noise) as well as the saturation of the noise.
• Grain Size: Increases the size of each “grain” of noise that’s added. Keep in mind that
the size of the film grain is relative to the resolution of your project. Film grain of a
particular size applied to a standard definition shot will appear “grainier” than the
same-sized grain applied to a high definition shot.
• Monochrome: Turning this button on results in the creation of monochrome, or
grayscale, noise, with no color.
Film Look
An “all-in-one” film look node. Combines the Film Grain operation described above with
an “s-curve” exposure adjustment that slightly crushes the shadows and boosts the
highlights. Contrast in the midtones is stretched, but the distribution of the midtones
remains centered, so there’s no overall lightening or darkening. This node has three
parameters:
• Grain Intensity: Makes the noise more visible by raising its contrast ratio (inserting both
light and dark pixels of noise) as well as the saturation of the noise.
• Grain Size: Increases the size of each “grain” of noise that’s added. Keep in mind that
the size of the film grain is relative to the resolution of your project. Film grain of a
particular size applied to a standard definition shot will appear “grainier” than the
same-sized grain applied to a high definition shot.
• Contrast: Makes an “s-curve” adjustment to contrast, which crushes the shadows and
boosts the highlights, while leaving the midtones centered. A value of 0 preserves the
original contrast of the corrected image, while a value of 1 is the maximum contrast
expansion that is possible with this node.
Gain
Adjusts contrast by raising or lowering the white point of the image while leaving the
black point pinned in place, and scaling the midtones between the new white point and
the black point. This node has four parameters:
• Gain: Adjusts the red, green, and blue channels simultaneously, for an overall change
to image highlights and midtones.
• Red Gain: Adjusts the red channel only, enabling color correction based on a white
point adjustment for that channel.
• Green Gain: Adjusts the green channel only, enabling color correction based on a white
point adjustment for that channel.
• Blue Gain: Adjusts the blue channel only, enabling color correction based on a white
point adjustment for that channel.
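The pinned-black-point behavior described above can be sketched numerically — a minimal illustration, not Color's actual implementation; that the master Gain multiplies with each channel's gain is an assumption:

```python
import numpy as np

def apply_gain(rgb, gain=1.0, red=1.0, green=1.0, blue=1.0):
    """Scale pixel values per channel. Because 0 (the black point) is a
    fixed point of multiplication, blacks stay pinned while the white
    point and midtones scale between it and the new white point."""
    # Assumption: the master Gain multiplies with each channel's own gain.
    out = rgb * np.array([gain * red, gain * green, gain * blue])
    return np.clip(out, 0.0, 1.0)
```

With gain=2.0, a midtone of 0.25 becomes 0.5 while 0.0 stays 0.0, matching the description of a pinned black point with scaled midtones.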
Gamma
Makes a standard gamma adjustment: a nonlinear adjustment that raises or lowers the
distribution of midtones of the image while leaving the black and white points pinned
in place. This is a power function, f(x) = x^a. This node has four parameters:
• Gamma: Adjusts the red, green, and blue channels simultaneously, for an overall change
to image midtones.
• Red Gamma: Adjusts the red channel only, enabling color correction based on a gamma
adjustment for that channel.
• Green Gamma: Adjusts the green channel only, enabling color correction based on a
gamma adjustment for that channel.
• Blue Gamma: Adjusts the blue channel only, enabling color correction based on a
gamma adjustment for that channel.
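The power function above can be sketched as follows; how the slider value maps to the exponent (directly, or as a reciprocal) is an assumption:

```python
import numpy as np

def apply_gamma(rgb, gamma=1.0, red=1.0, green=1.0, blue=1.0):
    """Nonlinear midtone adjustment f(x) = x**a per channel. Because
    0**a == 0 and 1**a == 1, the black and white points stay pinned
    while the midtones shift."""
    # Assumption: the master Gamma multiplies with each channel's value
    # to form the exponent.
    exponents = np.array([gamma * red, gamma * green, gamma * blue])
    return np.clip(rgb, 0.0, 1.0) ** exponents
```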
Grain Reduction
Reduces grain and noise in an image by averaging adjacent pixels in that frame according
to the values specified in the Master, Red, Green, and Blue Scale parameters. Edge
detection can be used to preserve sharpness in areas of high-contrast detail via the Edge
Retention parameter, and a sharpening operation can be applied after grain reduction
to boost overall detail. Because some shots have noise that’s more apparent in specific
color channels, you can make independent adjustments to each channel. This node has
six parameters:
• Master Scale: Averages the adjacent pixels of every color channel in the image to reduce
grain and noise, at the expense of a certain amount of image softness.
• Red Scale: Selectively averages pixels in the red channel.
• Green Scale: Selectively averages pixels in the green channel.
• Blue Scale: Selectively averages pixels in the blue channel.
• Edge Retention: Uses edge detection to isolate areas of high-contrast detail in the image
(such as hair, eyes, and lips in an actor’s close-up), and excludes those areas of the
image from the Grain Reduction operation to preserve the most valuable image detail
from softening. Higher values preserve more of the original image in these areas.
• Post Sharpening: Applies a Sharpening Convolution filter after the Grain Reduction
operation to try to restore some lost detail once the grain has been softened. Use
this parameter sparingly—if you set this too high, you’ll end up reintroducing the grain
you’re trying to reduce.
Hue
Rotates the hue of every pixel in the entire image. This node has one parameter:
• Shift: The amount by which you want to shift the hue. The shift is not specified in
degrees, as it is on the Vectorscope. Instead, you use a value from –1 to 1, where –1, 0,
and 1 all leave the hue at its original value.
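The wraparound behavior of the –1 to 1 range can be sketched with the standard-library colorsys module — a per-pixel illustration, not how Color implements it:

```python
import colorsys

def shift_hue(rgb, shift):
    """Rotate a color's hue by `shift` in [-1, 1]. Hue wraps around a
    full circle, so -1, 0, and 1 all return the original color."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb((h + shift) % 1.0, l, s)
```

A shift of 1/3 rotates pure red to pure green; a shift of 1.0 is a full rotation back to the original color.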
Invert
Inverts the image. Useful for creating “positives” from the image negative. Also useful for
reversing a grayscale image that you’re using as a matte with the Alpha Blend node, to
reverse the portions of the matte that will be solid and transparent.
Lift
Lift uniformly lightens or darkens the entire image, altering the shadows, midtones, and
highlights by the same amount. This node has four parameters:
• Lift: Adjusts the red, green, and blue channels simultaneously, for an overall change
to image brightness.
• Red Lift: Adjusts the red channel only, enabling color correction based on a lift
adjustment for that channel.
• Green Lift: Adjusts the green channel only, enabling color correction based on a lift
adjustment for that channel.
• Blue Lift: Adjusts the blue channel only, enabling color correction based on a lift
adjustment for that channel.
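The uniform lighten/darken behavior can be sketched as an additive offset — a simple model of the description above; that the master Lift adds to each channel's own lift is an assumption:

```python
import numpy as np

def apply_lift(rgb, lift=0.0, red=0.0, green=0.0, blue=0.0):
    """Uniform additive offset: shadows, midtones, and highlights all
    move by the same amount, unlike Gain (black pinned) or Gamma
    (black and white pinned)."""
    # Assumption: the master Lift adds to each channel's own lift value.
    out = rgb + np.array([lift + red, lift + green, lift + blue])
    return np.clip(out, 0.0, 1.0)
```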
Maximum
Averages adjacent pixels together (how many is based on the Brush Size parameter) to
produce a single, larger pixel based on the brightest value in that pixel group. Larger
values result in flattened, almost watercolor-like versions of the image. This node is also
useful for expanding the white areas and smoothing out grayscale images that you’re
using as mattes. This node has one parameter:
• Brush Size: Defines how many pixels are averaged into a single, larger pixel. Extremely
large values result in progressively larger, overlapping square pixels of uniform color,
emphasizing lighter pastel-like tones in the image.
Minimum
Averages adjacent pixels together (how many is based on the Brush Size parameter) to
produce a single, larger pixel based on the darkest value in that pixel group. Larger values
result in flattened, darkened versions of the image. This node is also useful for expanding
the black areas and smoothing out grayscale images that you’re using as mattes. This
node has one parameter:
• Brush Size: Defines how many pixels are averaged into a single, larger pixel. Extremely
large values result in progressively larger, overlapping square pixels of uniform color,
emphasizing darker, muddier tones in the image.
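Both nodes can be sketched as neighborhood filters over a square brush; treating the brush as a strict max/min window (rather than a weighted average) is an assumption about the node's internals:

```python
import numpy as np

def brush_filter(img, brush_size, reducer):
    """Replace each pixel with the brightest (np.max, like the Maximum
    node) or darkest (np.min, like the Minimum node) value in a
    brush_size x brush_size neighborhood of a 2D grayscale image."""
    pad = brush_size // 2
    padded = np.pad(img, pad, mode="edge")  # repeat edge pixels at borders
    out = np.empty_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = reducer(padded[y:y + brush_size, x:x + brush_size])
    return out
```

For a grayscale matte, `brush_filter(matte, 3, np.max)` expands the white areas and `np.min` expands the black areas, as described above.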
Printer Lights
Provides Red, Green, and Blue parameters for color correction that work identically to the
printer points controls in the Advanced tab of the Primary In room. For more information,
see Printer Points Controls.
Saturation
Raises or lowers overall image saturation, making the image more or less colorful. If you
use the Saturation node to completely desaturate an image, all three color channels are
blended together equally to create the final monochrome result, which looks different
from the result of the B&W node. This node has one parameter:
• Saturation: The default value of 1 produces no change. 0 is a completely desaturated
image, while the maximum value of 10 produces an excessively saturated, hyper-stylized
version of the image.
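The "blended together equally" behavior can be sketched as interpolation toward the per-pixel channel average (in contrast to a luma-weighted B&W conversion) — a model of the description, not Color's actual code:

```python
import numpy as np

def apply_saturation(rgb, sat=1.0):
    """sat=1 leaves the image unchanged, 0 fully desaturates, and values
    up to 10 push colors away from neutral. Per the description, the
    neutral axis is the plain average of the three channels."""
    avg = rgb.mean(axis=-1, keepdims=True)
    return np.clip(avg + (rgb - avg) * sat, 0.0, 1.0)
```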
Scale RGB
Expands or contracts the overall contrast ratio of a shot, from the black point to the white
point, centering the midpoint of this operation at a percentage of image tonality that
you specify. This node has two parameters:
• Scale: The amount by which to expand or contract the overall contrast ratio in the shot.
This is a multiplicative operation, so a value of 1 produces no change, while larger
values increase the contrast ratio, and smaller values decrease the contrast ratio.
• Center: Specifies the percentage of image tonality upon which the expansion and
contraction is centered, so the original image values at this percentage remain at that
percentage. The default value of 0.5 adjusts the white and black points equally in both
directions (the white point goes up, the black point goes down, and whatever values
are at 50 percent remain at 50 percent). A value of 0 pins the black point while applying
the entire adjustment to the white point, and a value of 1 pins the white point while
applying the entire adjustment to the black point.
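The Scale and Center parameters can be sketched as a multiplicative pivot — a minimal model of the behavior described above:

```python
import numpy as np

def scale_rgb(rgb, scale=1.0, center=0.5):
    """Expand (scale > 1) or contract (scale < 1) contrast around a
    pivot: values at `center` are unchanged, and everything else moves
    away from it or toward it."""
    return np.clip(center + (rgb - center) * scale, 0.0, 1.0)
```

With center=0.0 the black point stays pinned and the entire adjustment lands on the white point; with center=1.0 the reverse, matching the parameter description.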
Sharpen
Applies a Sharpen Convolution filter that selectively enhances contrast in areas of image
detail to provide the illusion of sharpness. Should be used sparingly as this operation also
increases the sharpness of film grain and video noise. This node has one parameter:
• Sharpen: Higher values increase image detail contrast. A value of 0 does no sharpening.
Smooth Step
Applies a nonadjustable “s-curve” adjustment to slightly crush the blacks and boost the
whites, leaving the black and white points pinned at 0 and 100 percent. Designed to
emulate the exposure tendencies of film at the “toe” and “shoulder” of the image. This
is a similar contrast adjustment to that made by the Film Look node.
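A curve with exactly these properties is the classic smoothstep polynomial; whether Color uses this precise function is an assumption:

```python
def smooth_step(x):
    """S-curve 3x^2 - 2x^3 on [0, 1]: pins 0 and 1 in place, darkens
    values below 0.5 (crushing shadows), lightens values above it
    (boosting highlights), and leaves 0.5 itself unchanged."""
    x = min(max(x, 0.0), 1.0)
    return x * x * (3.0 - 2.0 * x)
```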
Stretch
Provides separate vertical and horizontal scaling operations that let you “squeeze” and
“stretch” the image. You can change the center pixel at which this scaling is performed.
This node has four parameters:
• Horizontal Center: The pixel at which horizontal scaling is centered. The center pixel
doesn’t move; instead, the scaling of the image is relative to this position.
• Vertical Center: The pixel at which vertical scaling is centered. The center pixel doesn’t
move; instead, the scaling of the image is relative to this position.
• Horizontal Scale: Specifies how much to stretch the image, horizontally. Higher values
stretch the image outward, while lower values squeeze the image inward. The default
value at which the image is unchanged is 1.
• Vertical Scale: Specifies how much to stretch the image, vertically. Higher values stretch
the image outward, while lower values squeeze the image inward. The default value
at which the image is unchanged is 1.
Translate
Offsets the image relative to the upper-right corner. This node has two parameters:
• Horizontal Offset: Moves the image left.
• Vertical Offset: Moves the image down.
Utility Nodes
The following nodes don’t combine images or create effects on their own. Instead, they
output color channel information or extract matte imagery in different ways. All these
nodes are meant to be used in combination with other layering and effects nodes to
create more complex interactions.
Color
Produces a frame of solid color. This can be used with different layering nodes to add
colors to various operations. This node has one control:
• Color: A standard color control lets you choose the hue, saturation, and lightness of
the color that’s generated.
Deinterlace
Removes the interlacing of a shot in one of three ways, corresponding to three buttons.
You can use this node to remove interlacing by blending the fields together, or you can
use two Deinterlace nodes to separate the Even and Odd fields of an interlaced shot,
process each field separately, and then reassemble them using the Interlace node. This
node has three buttons:
• Merge: Outputs the blended combination of both fields.
• Even: Outputs only the Even field, line-doubled to preserve the current resolution.
• Odd: Outputs only the Odd field, line-doubled to preserve the current resolution.
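On a frame stored as an array of rows, the three buttons can be sketched like this; which field counts as "even" (rows 0, 2, 4, …) is an assumption:

```python
import numpy as np

def deinterlace(frame, mode="merge"):
    """Merge blends both fields; Even/Odd keep one field and line-double
    it so the output keeps the original frame height."""
    even, odd = frame[0::2], frame[1::2]   # assumes an even row count
    if mode == "merge":
        field = (even + odd) / 2.0         # blended combination of both fields
    else:
        field = even if mode == "even" else odd
    return np.repeat(field, 2, axis=0)     # line-double back to full height
```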
HSL Key
An HSL keyer that outputs a grayscale matte which you can use to isolate effects using
the Alpha Blend node, or simply to combine with other layering nodes in different ways.
This keyer works identically to the one found in the Secondaries room. For more
information, see Choosing a Region to Correct Using the HSL Qualifiers.
Output
This must be the last node in any node tree. It outputs the effect created within the Color
FX room to the main Color image processing pipeline for rendering. If an Output node
is not connected to the node tree, that effect will not be rendered by the Render Queue.
RGB Split
Outputs the red, green, and blue color channels individually, depending on which button
you click. Each grayscale color channel can then be independently manipulated with
different node tree branches, before being reassembled using the RGB Merge node. This
node has three checkboxes:
• Red: Outputs the red channel.
• Green: Outputs the green channel.
• Blue: Outputs the blue channel.
Vignette
Creates a simple square or circle vignette. This vignette appears as a color-against-grayscale
preview if the Vignette node is viewed directly. When the results are viewed “downstream,”
by viewing a different node that’s processing its output, the true grayscale image is seen.
This node has the following parameters:
• Use Tracker: If you’ve analyzed one or more motion trackers in the current project, you
can choose which tracker to use to automatically animate the position of the vignette
from this pop-up menu. To disassociate a vignette from the tracker’s influence, choose
None.
• Shape Type: Lets you choose the type of vignette, either Circle or Square.
• Invert: Click this button to make the white area black, and the black area white.
• X Center: Adjusts the horizontal position of the shape.
• Y Center: Adjusts the vertical position of the shape.
• Size: Enlarges or shrinks the shape.
• Aspect: Adjusts the width-to-height ratio of the shape.
• Angle: Rotates the current shape.
• Softness: Blurs the edges of the shape.
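The geometry parameters for a circular vignette can be sketched as a distance field; Angle and the tracker are omitted, and the exact softness falloff is an assumption:

```python
import numpy as np

def circle_vignette(width, height, x_center, y_center, size,
                    aspect=1.0, softness=0.0, invert=False):
    """Grayscale matte: white inside the circle, black outside, with
    `softness` feathering the edge over that many pixels."""
    ys, xs = np.mgrid[0:height, 0:width]
    # Aspect widens (> 1) or narrows (< 1) the shape horizontally.
    dist = np.hypot((xs - x_center) / max(aspect, 1e-6), ys - y_center)
    if softness > 0:
        # Linear falloff from white at `size` to black at `size + softness`.
        matte = np.clip((size + softness - dist) / softness, 0.0, 1.0)
    else:
        matte = (dist <= size).astype(float)
    return 1.0 - matte if invert else matte
```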
The Primary Out Room

The Primary Out room provides an additional set of controls for overall color correction,
but it can also be used as a tool to trim the grades applied to a selected group of shots.
This chapter covers the different uses of the Primary Out room, which shares the same
controls as the Primary In room. For more information about primary color correction
controls, see The Primary In Room.
This chapter covers the following:
• What Is the Primary Out Room Used For?
• Making Extra Corrections Using the Primary Out Room
• Understanding the Image Processing Pipeline
• Ceiling Controls
What Is the Primary Out Room Used For?
The controls and functionality of the Primary Out room duplicate those of the Primary In
room. This even extends to saved corrections: the Primary In and Primary Out rooms
access the same saved corrections in their bins.
The Primary Out room is valuable for three main reasons:
• It provides an extra room that you can use to make additional modifications to a shot’s
grade, without changing the adjustments applied using the Primary In room.
• The Primary Out room comes after the Primary In, Secondaries, and Color FX rooms in
the image processing pipeline, so you can apply adjustments to the overall image after
the corrections and effects have been added in the other rooms.
• There are three additional controls in the Primary Out room that don’t exist in the
Primary In room. The Ceiling parameters give you one more way to limit the color
values in a shot to legalize or stylize them.
Making Extra Corrections Using the Primary Out Room
The Color interface was designed for flexibility. The functionality of each of the color
correction rooms overlaps broadly, and although each room has been arranged to optimize
certain types of operations, you can perform corrections using whichever controls you
prefer.
In many cases, colorists like to split up different steps of the color correction process
among different rooms. This is detailed in Managing a Shot’s Corrections Using Multiple
Rooms.
Using this approach, you might perform a shot’s main correction using the Primary In
room, use the Secondaries room for stylized “look” adjustments, and then apply one of
your previously saved “secret sauce” Color FX room effects to give the shot its final grade.
Once your client has had the opportunity to screen the program, you’ll no doubt be given
additional notes and feedback on your work. It’s at this time that the value of the Primary
Out room becomes apparent.
Up until now, this room has remained unused, but because of that, it’s a great place to
easily apply these final touches. Because you can apply these final corrections in a
completely separate room, it’s easy to clear them if the client changes his or her mind.
Furthermore, it’s easy to use the Primary Out room to apply changes that affect an entire
scene to multiple clips at once (sometimes referred to as trimming other grades).
To trim one or more selected grades using the Primary Out room
1 Move the playhead to the shot you want to adjust, then click the Primary Out room.
2 Make whatever adjustments are required using the color and contrast controls.
3 Select all the shots in the Timeline that you want to apply these adjustments to.
4 Click Copy To Selected.
The corrections you made in the Primary Out room of the current shot are applied to
every shot you’ve selected.
Note: The Copy To Selected command overwrites any previous settings in the Primary
Out room of each selected clip, so if you need to make a different adjustment, you can
simply repeat the procedure described above to apply it to each selected shot again.
Understanding the Image Processing Pipeline
Another use of the Primary Out room is to apply corrections to clips after the corrections
that have been applied in each of the previous rooms.
As the processed image makes its way from the Primary In to the Secondaries to the Color
FX rooms, the corrections in each room are applied to the image that’s handed off from
the previous room. Since the Primary Out room is the very last correction room in every
grade, it processes the image that’s output from the Color FX room. You can take
advantage of this to apply overall corrections to the post-processed image.
In the following example, a series of corrections that affect saturation are made in each
of the rooms, but the Primary Out room is used to reduce the saturation of the end result.
You can see that the final correction modifies the collective output from every other
room.
[Figure: the same shot after each stage — Primary In Adjustment (original image
adjustment), Secondary Adjustment (boost orange saturation), Color FX Adjustment
(add blue vignette), and Primary Out Adjustment (saturation adjustment affects the
sum of all corrections).]
Ceiling Controls
Lastly, the Primary Out room has a single group of controls that aren’t found in the Primary
In room. The Enable Clipping button in the Basic tab of the Primary Out room lets you
turn on the effect of the three individual ceiling parameters for the red, green, and blue
color channels of the current shot.
This option lets you prevent illegal broadcast values in shots to which you’re applying
extreme Primary In, Secondary, or Color FX corrections if you don’t want to turn on
Broadcast Safe for the entire program.
The Ceiling parameters can also be used to perform RGB limiting for hard-to-legalize clips.
Note: If Enable Clipping and Broadcast Safe are both on, the lower of the two standards
is applied.
These controls are used to adjust color channel ceiling settings:
• Enable Clipping: Enables the Ceiling Red/Green/Blue controls.
• Ceiling Red: Sets the maximum allowable chroma in the red channel. All values above
this level will be set to this level.
• Ceiling Green: Sets the maximum allowable chroma in the green channel. All values
above this level will be set to this level.
• Ceiling Blue: Sets the maximum allowable chroma in the blue channel. All values above
this level will be set to this level.
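The clipping behavior of these controls can be sketched per channel — a minimal model of the description above:

```python
import numpy as np

def apply_ceiling(rgb, red=1.0, green=1.0, blue=1.0, enabled=True):
    """Clamp each channel to its ceiling: values above the ceiling are
    set to the ceiling; values below pass through unchanged."""
    if not enabled:
        return rgb
    return np.minimum(rgb, np.array([red, green, blue]))
```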
Managing Corrections and Grades

Color provides many tools for managing the corrections and grades that you've applied.
You can work even faster by saving, copying, and applying corrections and grades you've
already created to multiple shots at once.
There are three areas of the Color interface where you can save, organize, copy, apply,
and otherwise manage corrections and grades: the corrections bin inside each room, the
Grades bin and the Shots browser in the Setup room, and the grades track in the Timeline.
This chapter describes the use of all these areas of the interface in more detail.
This chapter covers the following:
• The Difference Between Corrections and Grades
• Saving and Using Corrections and Grades
• Managing Grades in the Timeline
• Using the Copy To Buttons in the Primary Rooms
• Using the Copy Grade and Paste Grade Memory Banks
• Setting a Beauty Grade in the Timeline
• Disabling All Grades
• Managing Grades in the Shots Browser
• Managing a Shot’s Corrections Using Multiple Rooms
The Difference Between Corrections and Grades
There is a distinct difference between corrections and grades in Color. Understanding
the difference is key to managing each set of adjustments correctly.
Corrections are adjustments that are made within a single room. You have the option to
save individual corrections into the bins available in the Primary In and Out, Secondaries,
and Color FX rooms. Once saved, corrections can be applied to one or more shots in your
project without changing the settings of any other rooms. For example, if there are five
shots in a scene to which you want to apply a previously saved secondary correction, you
can do so without affecting the primary corrections that have already been made to those
shots. Each room has its own corrections bin for saving and applying individual corrections,
although the Primary In and Primary Out rooms share the same saved corrections.
A grade, on the other hand, encompasses multiple corrections across several rooms,
saving every primary, secondary, and Color FX correction together as a single unit. When
you save a group of corrections as a grade, you can apply them all together as a single
preset. Applying a saved grade overwrites any corrections that have already been made
to the shot or shots you're applying it to. Saved grades are managed using the Grades
bin, located in the Setup room.
Saving and Using Corrections and Grades
You can save any correction or grade in order to apply one shot's settings to others at
a later time. Examples of the use of saved corrections and grades include:
• Saving the finished grade of a shot in your program in order to apply it to other shots
that are also from the same angle of coverage
• Saving a correction to a shot from a specific problem reel of tape (for example, a reel
with a uniformly incorrect white balance) that you'll want to apply to every other shot
from the same reel
• Saving a stylistic "look" correction in the Primary, Secondaries, or Color FX room that
you want to apply to other scenes or programs
For more information, see:
• Saving Corrections into Corrections Bins
• Saving Grades into the Grades Bin
• Deleting Saved Corrections and Grades
• Organizing Saved Corrections and Grades with Folders
• Applying Saved Corrections and Grades to Shots
Saving Corrections into Corrections Bins
The Primary In, Secondaries, Color FX, and Primary Out rooms all have corrections bins
where you can save corrections that are specific to those rooms for future use. When you
save corrections in any room, they're available to every project you open in Color.
To save a correction from the current shot into the current room’s bin
1 Move the playhead to the shot with a correction you want to save.
2 Click in the File field underneath the corrections bin, enter a name for the saved correction,
and press Return. (This step is optional.)
3 Click Save.
The correction is saved into the current room's bin with a thumbnail of the shot it was
saved from.
To save any shot’s correction into the current room’s bin
1 Click in the File field underneath the corrections bin, enter a name for the saved correction,
and press Return. (This step is optional.)
2 Drag the correction bar (in the grades track of the Timeline) of the shot you want to save
to the corrections bin.
Tip: To overwrite a previously saved correction with a new one using the same name,
select the correction you want to overwrite before saving the new grade, then click
Replace when a warning appears. This is useful when you’ve updated a grade that you
previously saved.
Entering a custom name for your saved correction is optional, but recommended, to help
you keep track of all your corrections during extensive grading sessions. If you don't enter
a name, saved corrections (and grades) are automatically named using the following
method:
CorrectionType.Day Month Year Hour.Minute.Second TimeZone.extension
The date and time used correspond to the exact second the correction is saved. For
example, a saved secondary correction might have the following automatic name:
Secondary.01 May 2007 10.31.47EST.scc
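The naming pattern can be reproduced with Python's datetime formatting — an illustration of the pattern only, not Color's actual code; the exact timezone abbreviation depends on the timestamp's tzinfo:

```python
from datetime import datetime, timedelta, timezone

def auto_correction_name(correction_type, extension, when):
    """Builds CorrectionType.Day Month Year Hour.Minute.Second TimeZone.ext
    from an aware datetime, matching the pattern described above."""
    return f"{correction_type}.{when.strftime('%d %B %Y %H.%M.%S%Z')}.{extension}"
```

For the example timestamp above (May 1, 2007, 10:31:47 EST), this produces Secondary.01 May 2007 10.31.47EST.scc.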
Corrections from each room are saved into corresponding directories in the
/Users/username/Library/Application Support/Color directory. For more information, see
How Are Grades and Corrections Saved and Organized?.
Saving Grades into the Grades Bin
Saved grades store the corrections that are applied in the Primary In, Secondaries, Color
FX, and Primary Out rooms all at once, so there's one more step.
To save a grade from the current shot
1 Click the Grades tab in the Setup room.
2 Move the playhead to the shot with a grade you want to save.
3 Select the grade that you want to save by clicking it in the Timeline.
4 Click in the File field underneath the corrections bin, enter a name for the saved correction,
and press Return. (This step is optional.)
5 Click the Save button (in the bottom-right corner of the Grades bin).
The grade is saved to the Grades bin.
The grade is saved with a thumbnail from the shot it was saved from. Once you've saved
a grade, deleting, organizing, and applying grades is identical to deleting, organizing,
and applying saved corrections.
Grades are saved to the /Users/username/Library/Application Support/Color/Grades
directory.
To save any shot’s grade
1 Click in the File field underneath the corrections bin, enter a name for the saved correction,
and press Return. (This step is optional.)
2 Drag the grade bar of any shot you want to save into the Grades bin.
Tip: To overwrite a previously saved grade with a new one using the same name, select
the grade you want to overwrite before saving the new grade, then click Replace when
a warning appears. This is useful when you’ve updated a grade that you previously saved.
Deleting Saved Corrections and Grades
You can delete saved corrections and grades you no longer need.
To delete a saved correction or grade
1 Select a correction or grade in any bin.
2 Press Delete or Forward Delete.
3 When a warning appears, click Yes.
The selected correction or grade is deleted, both from Color and from disk. This operation
cannot be undone.
Organizing Saved Corrections and Grades with Folders
Saved corrections and grades are available to every project you open. For this reason,
you may find it useful to save your corrections and grades into folders within each room's
bin. There are a number of different ways you can use folders to organize your saved
corrections and grades:
• You can create a folder for each new project you work on, saving all the corrections
that are specific to a particular project within the corresponding folder.
• You can also create folders for grades that you have saved for use with any project. For
example, you may create a library of your own stylistic "looks" that you can apply
instantly to present your clients with different options.
Note: You can only save corrections and grades in a folder after that folder has been
created.
To create a new folder inside a bin
1 Click New Folder.
2 Enter a name for the new folder in the New Folder dialog, then click Create.
A new folder with the name you entered is created inside the corrections bin of that
room.
Every time you create a folder in a bin, you also create a subdirectory within the saved
correction directory for that room within the /Users/username/Library/Application
Support/Color directory.
To save a correction or grade into a folder
1 Move the playhead to the shot with a correction or grade you want to save.
2 Double-click a folder in the corrections or Grades bin to open it.
The Directory pop-up menu updates to display the directory path in the Finder of the
currently open folder.
3 Enter a name for the saved correction or grade in the File field underneath the corrections
or Grades bin. (This step is optional.)
4 Click Save.
The correction or grade is saved within that folder.
Important: Once a correction has been saved, there is no way to move it into a folder
using the Color interface.
Reorganizing Saved Corrections and Grades in the Finder
Since each corrections bin simply mirrors the contents of the corresponding
subdirectories in the /Users/username/Library/Application Support/Color directory, you
can also use the Finder to reorganize your saved corrections and grades. For more
information, see Reorganizing Saved Corrections and Grades in the Finder.
Applying Saved Corrections and Grades to Shots
Once you've saved a correction or grade, applying it to one or more shots in your project
is easy.
To apply a saved correction or grade from a bin to a single shot
1 Move the playhead to the shot you want to apply the correction or grade to.
2 Do one of the following:
• Double-click the correction or grade you want to apply.
• Select a correction or grade to apply, then click the Load button underneath the bin.
• Drag the correction or grade onto the shot you want to apply it to.
The selected grade is applied to the shot at the position of the playhead.
To apply a saved correction or grade from a bin to multiple shots
1 In the Timeline, select all of the shots you want to apply the correction to.
Important: If the current shot at the position of the playhead is not selected, it will not
be included in the selection when you apply a saved correction from a bin.
2 Do one of the following:
• In the Grades or corrections bin, double-click the correction or grade you want to apply.
• Select a saved correction or grade in the Grades or corrections bin, then click the Load
button underneath the bin.
• Drag the saved correction or grade from the Grades or corrections bin, then drop it
onto the selected shots in the Timeline.
The correction or grade is then applied to all selected shots in the Timeline.
Managing Grades in the Timeline
Each shot can have up to four alternate grades, shown with different colors in the grades
tracks that are located underneath the video track. The currently selected grade for each
shot is blue, while unselected grades are gray. The bars showing the individual corrections
that contribute to the currently selected grade are shown in other colors, underneath
each shot's grade bars.
You can use the grades and correction bars in the grades tracks to add, switch, and copy
grades directly in the Timeline. For more information, see:
• Adding and Selecting Among Multiple Grades
• Resetting Grades in the Timeline
• Copying Corrections and Grades in the Timeline
Adding and Selecting Among Multiple Grades
Each shot in the Timeline can be set to use one of up to four alternate grades. Only the
currently selected grade actually affects a shot. The other unused grades let you store
alternate corrections and looks, so that you can experiment with different settings without
losing the original.
By default, each shot in a project has a single primary grade applied to it, although you
can add more at any time.
To add a new grade to a shot
Do one of the following:
• Control-click or right-click a grade, then choose Add New Grade from the shortcut menu.
• Move the playhead to the shot you want to add a new grade to, then press Control-1
through Control-4.
If a grade corresponding to the number of the grade you entered doesn't already exist,
one will be created. Whenever a new grade is added, the grades track expands, and the
new grade becomes the selected grade. New grades are clean slates, letting you begin
working from the original state of the uncorrected shot.
To change the selected grade
1 Move the playhead to the shot you want to change the grade of.
2 Do one of the following:
• Click the grade you want to switch to.
• Press Control-1 through Control-4.
• Control-click or right-click a grade, then choose Select Grade [x] from the shortcut
menu, where x is the number of the grade you're selecting.
The shot is updated to use the newly selected grade.
Resetting Grades in the Timeline
If necessary, you can reset any of a shot's four grades.
To reset a grade in the Timeline
1 Move the playhead to the shot whose grade you want to reset.
2 In the grades track of the Timeline, Control-click or right-click the grade you want to
reset, then choose Reset Grade [x] from the shortcut menu, where x is the number of
the grade.
Resetting a grade clears all settings from the Primary In, Secondaries, Color FX, and Primary
Out rooms, bringing that shot to its original state. Pan & Scan settings in the Geometry
room are left intact.
Copying Corrections and Grades in the Timeline
You can drag a correction or grade from one shot to another to copy it in the Timeline.
To copy a correction from one shot to another
µ Drag a single correction bar in the grades track of the Timeline to the shot you want to
copy it to.
The shot you drag the correction onto becomes highlighted, and after you drop it, the
same correction bar appears in that shot's current grade.
Note: When you copy individual corrections, secondary corrections overwrite other
secondary corrections of the same number.
To copy a grade from one shot to another
µ Drag a shot's grade bar in the grades track of the Timeline to a second shot you want to
copy it to.
The shot you drag the grade onto becomes highlighted, and after you drop it, every
correction in the current grade for that shot is overwritten with those of the grade you
copied.
You can also copy a grade to another grade within the same shot. This is useful for
duplicating a grade to use as a starting point for creating variations on that grade.
To copy a grade to another grade in the same shot
µ Drag a grade bar in the grades track of the Timeline onto another grade bar for the same
shot.
The copied grade overwrites all previous corrections.
Tip: This is a great way to save a shot's grade at a good state before continuing to
experiment with it. If you don't like your changes, you can easily switch back to the original
grade.
You can also drag a correction or grade bar to copy it to multiple selected shots.
To copy a correction or grade to multiple selected shots in the Timeline
1 Select the shots you want to copy a correction or grade to.
Tip: You can select multiple clips directly in the Timeline, or you can select them in the
Shots browser of the Setup room if that’s easier. Shots that you select in the Shots browser
are also automatically selected in the Timeline.
2 Drag the correction or grade that you want to copy onto the grade bar of any of the
selected shots, either from the Timeline, or from a bin.
The grade bars of the shots you’re about to copy to should highlight in cyan.
Note: When dragging corrections or grades from one shot to another in the Timeline,
you should always drop them inside of the grades track of the Timeline. Dropping a
correction or grade onto an item in a video track may only copy it to the shot you drop
it onto.
3 Release the mouse button to copy the correction or grade to the selected shots.
Keep in mind the following rules when dragging corrections and grades onto multiple
selected shots:
• Dragging onto one of several selected shots copies that correction or grade to the
currently selected grade of each shot in the selection.
• Dragging onto an alternate grade of one of several selected shots copies that correction
or grade into the alternate grade of the shot you dropped it onto, but it’s copied into
the currently selected grade of every other shot in the selection.
• Dragging onto a shot that’s not part of the current selection only copies that correction
or grade to that shot.
• The current shot at the position of the playhead is not included in a multishot Copy
operation unless it’s specifically selected (with a cyan highlight).
Using the Copy To Buttons in the Primary Rooms
The Copy To Selected and Copy To All buttons in the Primary In and Primary Out rooms
are powerful tools for applying Primary In room or Primary Out room corrections to other
shots in your project.
To copy a primary correction to all currently selected shots in the Timeline
1 Move the playhead to a shot with a grade you want to copy to other shots in your program.
2 Set the grade used by that shot to the one you want to copy.
3 Select all the shots in the Timeline you want to copy the current grade to, being careful
not to move the playhead to another shot.
4 Click Copy To Selected.
The grade at the current position of the playhead is copied to all selected shots.
To copy a primary correction to every single shot in the Timeline
1 Move the playhead to a shot with a grade you want to copy to other shots in your program.
2 Set the grade used by that shot to the one you want to copy.
3 Click Copy To All.
The grade at the current position of the playhead is copied to every shot in your program.
Note: The Secondaries and Color FX rooms don’t have Copy To Selected or Copy To All
buttons. However, you can accomplish the same task in one of two ways: select the shots
you want to copy a correction to and then drag and drop within the Timeline (see Copying
Corrections and Grades in the Timeline); or save a Secondaries or Color FX correction to
that room’s bin, then select the shots you want to apply that correction to and drag it
onto one of the selected shots. For more information, see Applying Saved Corrections
and Grades to Shots.
Using the Copy Grade and Paste Grade Memory Banks
You can use the Copy Grade and Paste Grade commands to copy grades from one shot
and paste them into others. Five memory banks are available for copying and pasting
grades. This means that you can copy up to five different grades—with one in each
memory bank—and then paste different grades into different shots as necessary.
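The five memory banks behave like a small clipboard array: copying snapshots a grade into a slot, and pasting retrieves a copy of it. A hypothetical sketch of this behavior (the class and method names are invented for illustration):

```python
import copy

class GradeMemoryBanks:
    """Hypothetical sketch of the five Copy Grade / Paste Grade banks."""
    def __init__(self):
        self.banks = [None] * 5        # Mem-Bank 1 through 5

    def copy_grade(self, bank, grade):
        # Shift-Option-Control-1 through 5: store a snapshot of the grade.
        self.banks[bank - 1] = copy.deepcopy(grade)

    def paste_grade(self, bank):
        # Shift-Option-1 through 5: return a copy, so the bank stays intact
        # for pasting into additional shots later.
        if self.banks[bank - 1] is None:
            raise ValueError(f"Mem-Bank {bank} is empty")
        return copy.deepcopy(self.banks[bank - 1])

banks = GradeMemoryBanks()
banks.copy_grade(1, {"primary_in": {"saturation": 1.2}})
banks.copy_grade(2, {"primary_in": {"saturation": 0.8}})
print(banks.paste_grade(2))  # {'primary_in': {'saturation': 0.8}}
```

The deep copies matter: each paste hands out an independent copy, so editing one pasted grade doesn't alter the bank or other shots that received the same grade.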
To copy a grade into one of the five memory banks
1 Move the playhead to the shot you want to copy a grade from.
2 Make the grade you want to copy the currently selected grade.
3 Choose Grade > Copy Grade > Mem-Bank 1 through 5 (or press Shift–Option–Control–1
through 5).
To paste a grade from one of the five memory banks
1 Move the playhead to the shot you want to copy a grade to.
2 Set the currently selected grade to the grade you want to paste into.
3 Choose Grade > Paste Grade > Mem-Bank 1 through 5 (or press Shift–Option–1 through
5).
The grade is applied to the shot at the position of the playhead.
Note: You cannot paste a grade from one of the five memory banks to multiple selected
shots at once.
You can also use the Copy and Paste memory banks feature via a supported control
surface. For more information, see Setting Up a Control Surface.
Setting a Beauty Grade in the Timeline
When you've set up a project with multiple grades for each shot, it may become difficult
to keep track of the grade you like best for any given shot. Marking a particular grade as
the beauty grade lets you keep track of the currently preferred grade for each shot.
While the beauty grade setting is primarily intended as a visual marker for your reference,
there is a command available from the Render Queue menu to add all beauty grades to
the Render Queue. (For more information, see How to Render Shots in Your Project.) This
means that you can use the beauty grade designation to control which shots are added
to the Render Queue. For example, you might use the beauty grade to keep track of
which clips you’ve changed during a revisions session, making it easy to render only the
changed shots at the end of the day.
The beauty grade does not have to be the currently selected grade, although if you begin
using the beauty grade designation, it’s best to keep it up-to-date for each shot in your
project to avoid confusion.
To mark a grade as the beauty grade
1 Move the playhead to the shot on which you want to set a beauty grade.
2 Select the grade you want to set as the beauty grade.
3 Do one of the following:
• Choose Grade > Set Beauty Grade.
• Press Shift-Control-B.
• Move the pointer into the Timeline area, then press B.
The currently selected grade turns rust red to show that it's the beauty grade.
You can change which grade is set as the beauty grade at any time, or you can clear
beauty grade designations altogether.
To clear the beauty grade designation of one or more shots
1 Select one or more shots in the Timeline.
2 Choose Grade > Clear Selected Beauty Grades.
The beauty grade color is removed from all selected shots.
To clear the beauty grade designation from all shots
µ Choose Grade > Clear All Beauty Grades.
The beauty grade color is removed from all shots in the Timeline.
Disabling All Grades
It's often valuable to disable every single correction you've applied to a shot, in order to
see a before-and-after view of the current state of your grade.
To disable and reenable rendering for all grades
µ Press Control-G.
All corrections made with the Primary In, Secondaries, Color FX, and Primary Out rooms
are disabled, and the grades track of the Timeline turns red to indicate that all grades are
currently disabled.
Note: Pan & Scan settings in the Geometry room remain enabled even when grades are
disabled.
Managing Grades in the Shots Browser
The Shots browser provides a different way to navigate and organize the shots in your
program, in a more nonlinear fashion than the Timeline allows. For example, you can use
the Find field in list view to search for groups of shots with common names.
You can also use icon view as an organizational tool to rearrange the shots in your program
into groups based not on their position in the program, but on the angle of coverage
they're from or the type of grade you'll be applying, to give but two examples. For more
information, see Using the Shots Browser.
Navigating and Arranging Shots in Icon View
When you're working on a project with many shots, it can help to scroll around and zoom
in and out to find the shots you're looking for.
To scroll around the Shots browser when in icon view
µ Middle-click anywhere within the Shots browser, then drag in the direction you want to
scroll.
To zoom in to or out of the Shots browser when in icon view
Do one of the following:
µ Press the Control key and drag with the left mouse button.
µ Right-click and drag up to zoom out, or down to zoom in.
You can also rearrange shots freely when the Shots browser is in icon view. Rearranging
the order of shots in icon view does nothing to change the shot order in the Timeline,
but it can help you to organize shots visually so they’re faster to find, select, and work
with later.
To move a shot in icon view
µ Drag the name bar of a shot to another location in the Shots browser.
Choosing Grades in Icon View
You can show all the alternate grades that are available to a shot and select the grade
that is currently in use.
To show all of a shot's available grades
µ Double-click the name bar underneath a shot's icon.
All the grades available to that shot appear as bars underneath, connected to the shot
with blue connection lines. Once the grades are revealed, you can change which one is
selected.
To select the grade used by a shot
µ Double-click the grade you want to select.
The selected grade turns blue, while the unselected grades remain dark gray.
Note: Grades that have been rendered are colored green.
Selecting Shots in the Shots Browser in Icon View
When in icon view, you can select one or more shots in the Timeline just as you can when
in list view. Additionally, you can select which grade a shot uses by expanding a shot to
reveal all its grades.
To change the current shot in icon view
µ Click the arrow to the right of a shot's name bar.
The current shot's name bar appears gray, and the playhead moves to that shot's first
frame in the Timeline.
To select a shot
µ Click the shot's name bar, underneath its icon.
Selected shots appear with a cyan highlight over their name bars, and are simultaneously
selected in the Timeline.
To select multiple shots
µ Command-click the name bars of all the shots you want to select.
Grouping and Ungrouping Shots
A group is an organizational construct that's available in the Shots browser only when
it's in icon view. The purpose of groups is simple: they provide targets onto which you
can drag a correction or grade to copy it to multiple shots at once.
Some examples of ways you might use groups include:
• You can organize all shots in a particular scene in a single group to facilitate applying
and updating stylized corrections to every shot in that scene at the same time.
• You could also organize only those shots within a scene that are from the same angle
of coverage (and so may be able to share the same corrections), so that you can apply
and update the same grade to every shot at once.
• Every shot of a certain type (for example, all head shots of a specific speaker) can be
grouped together to similarly let you apply corrections or grades to all those shots
simultaneously.
The uses of groups are endless. In short, any time you find yourself wanting to apply a
single correction or grade to an entire series of shots, you should consider using groups.
Note: Shots can only belong to one group at a time.
There are several different ways you can select shots you want to group together.
To select shots in the Timeline or Shots browser (in list view) and create a group
1 Select the shots you want to include in the group by doing one of the following:
• Shift-click or Command-click to select a range of contiguous or noncontiguous shots
in the Timeline.
• Set the Shots browser to list view, then Shift-click or Command-click to select a range
of contiguous or noncontiguous shots.
• Use the Find field, or click a column header to sort the list view, to help you identify
the shots you want to group together.
2 Set the Shots browser to icon view.
3 Press G.
A group is created, and a group node appears with blue connection lines showing all the
shots that belong to that group. Once the group is created, you can rearrange the shot
icons as necessary to clean up the browser.
To create a group in icon view
1 Open the Shots browser in the Setup room.
2 Set the Shots browser to icon view.
3 Rearrange the shots you want to group within the Shots browser area (optional).
Even though this step is not strictly necessary, it can be visually helpful to see which
shots you're grouping together as a spatially arranged set of icons.
4 Select all the shots you want to group by Command-clicking their name bars.
5 Press G.
A group is created, and a group node appears with blue connection lines showing all the
shots that belong to that group.
To add a shot to an already existing group
µ Right-click anywhere on a shot's name bar, then drag a connection line to the node of
the group you want to add it to.
To ungroup a collection of grouped clips
µ Select the group node you want to delete, then press Delete or Forward Delete.
The node and its connection lines disappear, leaving the shots ungrouped.
To remove a single shot from a group
µ Right-click anywhere on a shot's name bar, then drag a connection line to an empty area
of the Shots browser.
When you release the mouse button, that shot will no longer be connected to the group.
For more information on working with groups once you’ve created them, see Working
with Groups.
Working with Groups
Once you've created one or more groups of shots, you can use the group node to show
and hide the shots that are connected to the group, and to copy grades and corrections
to every shot that's connected to that group.
When a group is collapsed, the shots that are connected to that group are hidden.
Double-clicking a collapsed group makes all the hidden shots visible again.
To collapse or expand a group
µ Double-click any group's node.
Once you've created a group, copying a correction or grade to the group is easy.
To copy a correction to a group
µ Drag the correction bar you want to copy from the Timeline onto any group node.
The correction you dragged overwrites the settings in the same room of every shot in
that group.
Important: You can only copy corrections and grades from the Timeline to groups in the
Shots browser.
To copy a grade to a group
µ Drag a grade bar from the Timeline onto any group node.
The grade you dragged overwrites the currently selected grade of every shot in that
group. Unselected grades are not affected.
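The copy-to-group behavior just described, where only each shot's currently selected grade is overwritten, can be sketched as follows. This is a hypothetical illustration; the function and dictionary layout are invented:

```python
import copy

def copy_grade_to_group(grade, group_shots):
    """Hypothetical sketch: copying a grade to a group overwrites only each
    shot's currently selected grade; unselected grades are untouched."""
    for shot in group_shots:
        shot["grades"][shot["selected"]] = copy.deepcopy(grade)

# Two grouped shots, each with a different grade currently selected:
shots = [
    {"selected": 1, "grades": {1: {}, 2: {"look": "warm"}}},
    {"selected": 2, "grades": {1: {}, 2: {}}},
]
copy_grade_to_group({"look": "cool"}, shots)
print(shots[0]["grades"])  # {1: {'look': 'cool'}, 2: {'look': 'warm'}}
```

Note that the first shot's unselected grade 2 keeps its "warm" look, matching the rule that only the selected grade of each group member is replaced.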
Managing a Shot’s Corrections Using Multiple Rooms
Color's interface for correcting and manipulating the color of your shots is extremely
flexible. While each room has individual controls that are tailored to specific kinds of
operations, some functions do overlap, and the Primary In, Secondaries, Color FX, and
Primary Out rooms collectively contribute to the final appearance of your piece. How you
use these rooms is entirely up to you.
At minimum, the grading of every project involves the following steps:
Stage 1: Optimizing the Exposure and Color of Each Shot
See Stage 1: Correcting Errors in Color Balance and Exposure for more information.
Stage 2: Balancing Every Shot in a Scene to Have Similar Contrast and Color Balance
See Stage 3: Balancing All the Shots in a Scene to Match for more information.
Stage 3: Applying a Creative Look to the Scene
See Stage 5: Achieving a “Look” for more information.
Stage 4: Making Modifications in Response to Client Feedback
See Stage 8: Making Digital Lighting Adjustments for more information.
These steps can all be performed within a single room, or they can be broken up among
several rooms.
Doing Everything in One Room
• Excluding special operations such as secondary color corrections and Color FX, each
of these steps in the grading process can be performed via a single set of adjustments
within the Primary In room. In fact, for simple programs that don't require extensive
corrections, this may be the only room you use.
This is especially true for projects where the director of photography and the crew
worked to achieve the desired look during the shoot, leaving you with the tasks of
balancing the shots in each scene and making whatever adjustments are necessary to
simply expand and perfect the contrast and color that you’ve been provided.
Grading Across Multiple Rooms
• You can also distribute the different color correction steps outlined above among
multiple rooms. This technique lets you focus your efforts during each stage of the
color correction process and also provides a way of discretely organizing the adjustments
you make, making each change easier to adjust later. For more detailed information,
see Grading a Shot Using Multiple Rooms.
Grading a Shot Using Multiple Rooms
One common color correction strategy is to break up the various stages of correction you
apply to a shot among several rooms in Color, instead of trying to do everything within
the Primary In room. This can focus your efforts during each step of the color correction
process, and it also provides a way of discretely organizing the adjustments you make,
making them easier to adjust later once the client has notes.
This section suggests but one out of countless ways in which the different rooms in Color
can be used to perform the steps necessary to grade your projects.
Stage 1: Optimizing the Exposure and Color of Each Shot
You might start by optimizing each shot's exposure and color in the Primary In room. As
a way of prepping the project in advance of working with the client in a supervised
session, you might restrict your adjustments to simply making each shot look as good as
possible on its own by optimizing its exposure and balancing the color, regardless of the
later steps you'll perform.
Stage 2: Balancing Every Shot in a Scene to Have Similar Contrast and Color Balance
After optimizing each clip, you can balance the contrast and color of each shot to match
the others in that scene using the first tab in the Secondaries room. If you select the
Enable button of the Secondaries room without restricting the default settings of the HSL
qualifiers, the adjustments you make are identical to those made in one of the Primary
rooms.
Important: If you're using a secondary tab to affect the entire image, make sure the
Previews tab is not the selected tab while you work. If the Previews tab is selected, the
monitored image is modified by the selected Matte Preview Mode and may exhibit a
subtle color shift as a result while the Secondaries tab is selected. Clicking the Hue, Sat,
or Lum Curve tabs, even though you're not using them, lets you monitor the image
correctly.
Stage 3: Applying a Creative Look to the Scene
Now that the shots have been optimized and the scenes balanced, you can focus on
specific creative issues using tabs 2 through 8 in the Secondaries room. You might use
these tabs to apply a creative look, or you could go further and make specific digital
relighting adjustments. At this point in the process, you can also use the Color FX room
to further extend your creative possibilities.
Stage 4: Making Modifications in Response to Client Feedback
Once your client has had the opportunity to screen the nearly finished grade of the
program, you'll no doubt be given additional notes and feedback on your work. You can
use the Primary Out room, which up until now has remained unused, to easily apply these
final touches.
Moreover, because each step of the color grading process was performed in a specific
room of the Color interface, it will hopefully be easier to identify which client notes
correspond to the adjustments needing correction.
The steps outlined above are simply suggestions. With time, you'll undoubtedly develop
your own way of managing the different processes that go into grading programs in
Color.
Chapter 14 Keyframing
You can create animated grades and other effects using keyframes in the Timeline.
The keyframing mechanism in Color is simple, but effective. It’s designed to let you quickly
animate color corrections, vignettes, Color FX nodes, Pan & Scan effects, and user shapes
with a minimum number of steps.
This chapter covers the following:
• Why Keyframe an Effect?
• Keyframing Limitations
• How Keyframing Works in Different Rooms
• Working with Keyframes in the Timeline
• Keyframe Interpolation
Why Keyframe an Effect?
In many cases, you may work on entire projects where there's no need to keyframe any
of your corrections. However, keyframed primary corrections will often let you compensate
for dynamic changes in exposure or color in shots that might otherwise be unusable. You
can also use keyframes to create animated lighting and color effects to further extend a
scene's original lighting.
Here are some common examples of ways you can use animated keyframes:
• Correct an accidental exposure change in the middle of a shot.
• Create an animated lighting effect, such as a light being turned off or on.
• Correct an accidental white balance adjustment in the middle of a shot.
• Move a vignette to follow the movement of a subject.
• Animate a user shape to rotoscope a subject for an intensive correction.
Keyframing Limitations
There are three major limitations to the use of keyframes in Color.
You Can’t Keyframe Clips That Use Speed Effects
While color correcting projects that were sent from Final Cut Pro, there’s a limitation to
shots with speed effects applied to them. While they can be adjusted in any of the rooms
in Color like any other shot, speed-effected shots cannot be keyframed in Color.
If you’re prepping a project in Final Cut Pro that you want to send to Color, you can avoid
this limitation by exporting all clips with speed effects as self-contained QuickTime files
and reediting them into the Timeline of your Final Cut Pro sequence to replace the original
effects before you send the sequence to Color.
Tip: If you’re exporting clips with speed effects in order to make them self-contained
QuickTime files, you may want to try sending slow motion clips to Motion, where you
can set the clip’s Frame Blending parameter to Optical Flow for smoother effects
processing. After you’ve processed your slow motion clips in Motion, it’s best to export
self-contained QuickTime files from Motion, which you can then reedit into your
Final Cut Pro sequence to replace the original effects.
You Can’t Keyframe Curves in the Primary or Secondaries Room
Curves in the Primary In and Out rooms, or in the Secondaries room, can’t be animated
with keyframes. The other parameters in these rooms can still be animated, but curves
remain static throughout the shot.
Pan & Scan Room Keyframes Can’t Be Sent Back to Final Cut Pro
Pan & Scan keyframes that are created in Color cannot be translated into corresponding
motion effect keyframes in Final Cut Pro. All Color keyframes are removed when you send
your project back to Final Cut Pro, with the settings at the first frame of each clip being
used for translation.
Note: Keyframed Scale, Rotation, Center, and Aspect Ratio Motion tab parameters in
Final Cut Pro do not appear and are not editable in Color, but these keyframes are
preserved and reappear when you send your project back to Final Cut Pro. If a clip has
Motion tab keyframes from Final Cut Pro, it appears in Color with the geometry of the
last keyframe that’s applied to the clip. If necessary, you can reset the Geometry room
to see the entire clip; this has no effect on the keyframes, which are internally preserved
and returned to Final Cut Pro.
How Keyframing Works in Different Rooms
You can keyframe effects in the Primary In, Secondaries, Color FX, Primary Out and
Geometry rooms. Each room has its own separate set of keyframes, stored in individual
tracks of the keyframe graph of the Timeline. These tracks are hidden until you start
adding keyframes within a particular room, which makes that room's keyframe track
visible.
Keyframes created in each room are visible in the Timeline all at once, but you can edit
and delete only the keyframes of the room that's currently open. All other keyframes are
locked until you open their associated rooms.
Although the ways you create, edit, and remove keyframes are identical for every room,
keyframes have different effects in each room. For more information, see:
• Keyframing Corrections in the Primary In and Out Rooms
• Keyframing Secondary Corrections
• Keyframing Color FX
• Keyframing Pan & Scan Effects
• Keyframing User Shapes
Keyframing Corrections in the Primary In and Out Rooms
You can keyframe every control and parameter in the Primary In and Out rooms. This lets
you correct inappropriately shifting lighting and color caused by automatic camera
settings, as well as create animated effects of your own. There are two caveats to
keyframing corrections in the Primary In and Out rooms:
• Keyframes in the Primary rooms record the state of all controls and parameters at once.
It's not possible to independently keyframe individual parameters.
• Curves cannot be animated with keyframes, although every other parameter in the
Primary In and Primary Out rooms can be.
Note: How color adjustments are animated depends on the Radial HSL Interpolation
setting in the User Prefs tab of the Setup room. In nearly all cases, you'll get the best
results by leaving this option turned off. For more information, see The User Preferences
Tab.
Keyframing Secondary Corrections
Like parameters and controls in the Primary In and Out rooms, most of the color correction
parameters and controls in the Secondaries room can be animated. Each of the eight
secondary tabs has its own keyframe track. Furthermore, each secondary tab's Inside and
Outside settings are individually keyframed.
In addition to the color and contrast controls, the following secondary controls can also
be animated using keyframes:
• The Enable button that turns each secondary correction off and on
• The qualifiers for the secondary keyer
• The Vignette button that turns vignetting off and on
• All vignette shape parameters
Note: Secondary curves cannot be animated with keyframes.
The ability to keyframe all these controls means you can automate secondary color
correction operations in extremely powerful ways. For example, you can adjust the
qualifiers of the secondary keyer to compensate for a change of exposure in the original
shot that's causing an unwanted change in the area of isolation.
Keyframing the vignette shape parameters lets you animate vignettes to follow a moving
subject, or to create other animated spotlight effects.
Keyframing Color FX
You can keyframe node parameters in the Color FX room to create all sorts of effects.
Even though the Color FX room only has a single keyframe track, each node in your node
tree has its own keyframes. You can record the state of every parameter within a node
using a single set of keyframes; however, a node's parameters cannot be individually
keyframed.
The only keyframes that are displayed in the Color FX room's keyframe track are those of
the node that's currently selected for editing. All other node keyframes are hidden. This
can be a bit confusing at first, as keyframes appear and disappear in the Timeline
depending on which node is currently being edited.
Keyframing Pan & Scan Effects
You can keyframe all the adjustments you make using the Pan & Scan parameters and
onscreen controls in the Geometry room, creating animated Pan & Scan effects and
geometric transformations. All parameters are keyframed together.
Keyframing User Shapes
You can keyframe user shapes created in the Shapes tab of the Geometry room to
rotoscope (isolate by tracing frame by frame) moving subjects and areas of the frame for
detailed correction in the Secondaries room.
Note: You can only keyframe shapes after they have been assigned to a tab in the
Secondaries room.
Working with Keyframes in the Timeline
It takes a minimum of two keyframes to animate an effect of any kind. Each keyframe
you create stores the state of the room you're in at that frame. When you've added two
keyframes with two different corrections to a room, Color automatically animates the
correction that's applied to the image from the correction at the first keyframe to the
correction at the last.
Once you add a keyframe to a shot in a particular room, you can edit the controls and
parameters in that room only when the playhead is directly over a keyframe. If you want
to make further adjustments to a keyframed shot, you need to move the playhead to the
frame at which you want to make an adjustment and add another keyframe. Then you
can make the necessary adjustments while the playhead is over the new keyframe.
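The behavior described above amounts to storing a parameter value at each keyframe and interpolating between the surrounding pair for every in-between frame. A minimal sketch using linear interpolation (Color's actual interpolation depends on the keyframe's interpolation type; the function and data layout here are invented for illustration):

```python
def interpolate(keyframes, frame):
    """Linearly interpolate a single parameter between keyframes.

    keyframes: {frame_number: value}, with at least one entry.
    Frames before the first keyframe or after the last hold that
    keyframe's value, so a single keyframe yields a constant setting.
    """
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    # Find the surrounding pair of keyframes and blend proportionally.
    for left, right in zip(frames, frames[1:]):
        if left <= frame <= right:
            t = (frame - left) / (right - left)
            return keyframes[left] + t * (keyframes[right] - keyframes[left])

# A parameter keyframed at frame 0 (value 1.0) and frame 100 (value 2.0):
print(interpolate({0: 1.0, 100: 2.0}, 50))   # 1.5
print(interpolate({0: 1.0, 100: 2.0}, 150))  # 2.0
```

This also illustrates why at least two keyframes with different values are needed for any animation: with one keyframe, every frame evaluates to the same stored value.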
To add a keyframe for the currently open room
µ Choose Timeline > Add Keyframe (or press Control-9).
Once you've added one or more keyframes, you can use a pair of commands to quickly
move the playhead to the next keyframe to the right or left.
To move the playhead from one keyframe to the next in the currently open room
Do one of the following:
µ Press Option–Left Arrow to move to the next keyframe to the left.
µ Press Option–Right Arrow to move to the next keyframe to the right.
µ Control-click in the keyframe graph of the Timeline, then choose Next Keyframe or Previous
Keyframe from the shortcut menu.
Keyframes at the current position of the playhead are highlighted.
You can delete keyframes you don't need.
To delete a single keyframe
1 Move the playhead to the frame with the keyframe you want to delete.
2 Choose Timeline > Remove Keyframe (or press Control-0).
You can also delete every keyframe applied to a shot in a particular room all at once.
When you remove all the keyframes from a particular effect, the entire effect is changed
to match the values of the frame at the current position of the playhead.
To delete every keyframe in a single room
1 Click the tab of the room with the keyframes you want to remove.
2 Move the playhead to a frame where the effect is at a state you want applied to the entire
shot.
3 Control-click the keyframe you want to delete in the Timeline, then choose Remove All
Keyframes from the shortcut menu.
Every keyframe applied to that room or secondary tab is deleted, and the keyframe graph
for that room disappears from the Timeline. When you delete all a shot's keyframes at
once, the correction or effects settings of the frame at the position of the playhead become
the settings for the entire shot.
Important: The Remove All Keyframes command removes all the keyframes in the currently
selected room, regardless of which area in the Timeline's keyframe graph you Control-click.
You can easily adjust the timing of keyframes that you've already created.
To move a keyframe and change its timing
µ Drag it to the left or right.
You can also adjust the timing of a keyframe while previewing the frame you're moving
it to.
To move a keyframe while updating the previewed image
µ Press Option while dragging a keyframe to the left or right.
If you need to, you can also make the keyframe graph in the Timeline taller, to make it
easier to see what you're doing. For more information, see Customizing the Timeline
Interface.
You can also use the keyframe graph to navigate to a room with keyframed effects.
To open the room corresponding to a keyframe track
µ Double-click any keyframe track in the Timeline.
Keyframe Interpolation
The interpolation method that a keyframe is set to determines how settings are animated
from one keyframe to the next. There are three possible types of interpolation:
• Smooth: Smooth keyframes begin the transition to the next keyframed state slowly,
reaching full speed in the middle of the transition and then slowing down to a stop at
the next keyframe. This "easing" from one keyframe to the next creates transitions
between color corrections, animated Color FX node parameters, Pan & Scan settings,
and animated user shapes that look and move smoothly and naturally. However, if you
have more than two keyframes, your effect will seem to pause for one frame as the
playhead passes over each keyframe, which may or may not be desirable.
• Linear: Linear keyframes make a steady transition from one keyframed state to the
next, with no acceleration and no slowing down. If you use linear keyframes to animate
an effect that happens somewhere in the middle of a shot, the animated effect may
appear to begin and end somewhat abruptly. On the other hand, if you are keyframing
an animated effect that begins at the first frame and ends at the last frame of the shot,
the appearance will be of a consistent rate of change.
• Constant: Constant keyframes perform no interpolation whatsoever. All effects change
abruptly to the next keyframed state when the playhead reaches the next constant
keyframe. Constant keyframes are useful when you want an effect to change
immediately to another state, such as increasing the contrast to simulate a sudden
lightning strike flashing through a window.
By default, all new keyframes that you create are smooth, although you can change a
keyframe's interpolation at any time. Changing a keyframe's interpolation affects only
the way values are animated between it and the next keyframe to the right.
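The three interpolation types can be sketched as follows. The smoothstep curve used for the Smooth case is a generic ease-in/ease-out assumption, not necessarily the exact curve Color applies.

```python
def keyframe_value(v1, v2, t, mode="smooth"):
    """Value between two keyframed states at normalized time t in [0, 1].

    Models the three interpolation types: smooth (ease in and out),
    linear (steady rate of change), and constant (hold, then jump).
    """
    if mode == "constant":
        # Hold the first state until the next keyframe is reached.
        return v1 if t < 1.0 else v2
    if mode == "smooth":
        # Smoothstep ease: slow start, full speed mid-transition, slow stop.
        t = t * t * (3.0 - 2.0 * t)
    return v1 + t * (v2 - v1)
```

At the midpoint both smooth and linear give the same value, but early in the transition the smooth curve lags behind the linear one, which is the "easing" behavior described above.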
To change a keyframe's interpolation
1 Move the playhead to the keyframe you want to change.
2 Choose Timeline > Change Keyframe (or press Control-8).
Chapter 15 The Geometry Room
The Geometry room provides a way to zoom in to shots, create pan and scan effects,
draw custom mattes for vignetted secondary operations, and track moving subjects to
automate the animation of vignettes and shapes.
The Geometry room is divided into an image preview (which contains the onscreen
controls for all of the functions in this room) and three tabs to the right. Each tab has
different tools to perform specific functions. The Pan & Scan tab lets you resize, rotate,
flip, and flop shots as necessary. The Shapes tab lets you create custom masks to use with
secondary corrections. Finally, the Tracking tab provides an interface for creating and
applying motion tracking, to use with vignettes and custom shapes in your project.
This chapter covers the following:
• Navigating Within the Image Preview
• The Pan & Scan Tab
• The Shapes Tab
• The Tracking Tab
Navigating Within the Image Preview
Each of the tabs in the Geometry room relies upon onscreen controls in the image preview
area to the left of the controls tabs. You can zoom in or out and scroll around this area
to get a better look at your image while you work, and you can even zoom and pan
around while you’re in the middle of drawing a shape.
To zoom in to or out of the image preview
µ Right-click and drag up to zoom out, and down to zoom in to the image preview.
To pan around the image preview
µ Middle-click to drag the image preview in any direction.
To reframe the image preview to fit to the current size of the screen
µ Press F.
The Pan & Scan Tab
The Pan & Scan tab lets you apply basic transformations to the shots in your projects.
You can use these transformations to blow images up, reposition them to crop out
unwanted areas of the frame, and rotate shots to create canted angles. You can also use
pan and scan effects to reframe each shot when you’re downconverting a high-resolution
widescreen project to a standard definition 4:3 frame. For more information, see:
• Exchanging Geometry Settings with Final Cut Pro
• Working with the Pan & Scan Tab
• Animating Pan & Scan Settings with Keyframes and Trackers
• Copying and Resetting Pan & Scan Settings
Exchanging Geometry Settings with Final Cut Pro
When you send a sequence from Final Cut Pro to Color, the following Motion tab
parameters are translated into their equivalent Color parameters.
Motion tab parameters in Final Cut Pro    Pan & Scan parameters in Color
Scale                                     Scale
Rotation                                  Rotation
Center                                    Position X, Position Y
Aspect Ratio                              Aspect Ratio
While you grade your program, you can preview the effect these transformations have
on each shot and make further adjustments as necessary.
Once you finish working on your project in Color, whether or not Color processes Pan &
Scan adjustments when you render each shot from the Render Queue depends on what
kind of source media you’re using, and how you’re planning on rendering it:
• When projects are sent to Color from Final Cut Pro or imported via XML files, all the
Pan & Scan transformations that are applied to your shots in Color are translated back
into their equivalent Final Cut Pro Motion tab settings. You then have the option to
further customize those effects in Final Cut Pro prior to rendering and output.
• Keyframed Scale, Rotation, Center, and Aspect Ratio Motion tab parameters do not
appear and are not editable in Color, but these keyframes are preserved and reappear
when you send your project back to Final Cut Pro.
• Pan & Scan keyframes created in Color cannot be translated into corresponding Motion
tab keyframes in Final Cut Pro. All Color keyframes are removed when you send your
project back to Final Cut Pro, with the settings at the first frame of each clip being used
for translation.
• When outputting 2K and 4K Cineon and DPX image sequences, Pan & Scan
transformations are processed within Color along with your color corrections when
rendering the output media.
• If your project uses 4K native RED QuickTime media, then Pan & Scan transformations
are processed within Color, whether you’re rendering DPX/Cineon image sequences
for film output, or QuickTime media to send back to Final Cut Pro. Projects using 2K
native RED QuickTime media work similarly to other projects using QuickTime clips.
Motion Tab Keyframes Are Preserved In Roundtrips
If any clips are animated using Scale, Rotate, Center, or Aspect Ratio parameter keyframes
in Final Cut Pro, these keyframes do not appear and are not editable in Color, but they
are preserved and reappear when you send your project back to Final Cut Pro.
If a clip has Motion tab keyframes from Final Cut Pro, it appears in Color with the
geometry of the last keyframe that is applied to the clip. If necessary, you can reset the
clip’s geometry to see the entire clip while you make corrections in Color; doing so has
no effect on the keyframes, which are internally preserved and returned to Final Cut Pro.
Working with the Pan & Scan Tab
You can transform shots in your program using two sets of controls. To the left, onscreen
controls appear within the image preview area, while to the right, numeric parameters
mirror these adjustments.
Using the Onscreen Controls
The onscreen controls for the Pan & Scan tab consist of an outer bounding box that
represents the scaled output with four handles at each corner and a pair of action safe
and title safe indicators within. By default, the onscreen control is the same size as the
resolution of your project.
The onscreen controls are designed to work in conjunction with the image that’s displayed
by the preview and broadcast displays. In other words, you use the onscreen controls to
isolate the portion of the image you want to output, and you view the actual
transformation on the preview and broadcast displays.
To resize a shot
µ Drag any of the four corners of the onscreen control to resize the shot relative to its center.
The onscreen control shrinks or expands to include less or more of the image, and the
preview and broadcast displays show the result. This also adjusts the Scale parameter.
To rotate a shot
µ Drag just outside the four corner handles, right to rotate left, and left to rotate right.
Because the onscreen control works by selecting a portion of the static source image,
the onscreen control rotates in the opposite direction of the effect, but the preview and
broadcast displays show the correct result.
To reposition a shot
µ Drag anywhere within the red bounding box.
The onscreen control moves to select a different portion of the shot, and the preview
and broadcast displays show the result.
Note: There are no onscreen controls for the Aspect Ratio, Flip, and Flop controls.
Using the Pan & Scan Parameters
Each of the adjustments you make using the onscreen controls is mirrored and recorded
numerically by the parameters in the Pan & Scan tab to the right. If you want, you can
directly manipulate these parameters by either entering new values into the fields or by
holding down the middle mouse button and dragging within a field to adjust it using
the virtual slider.
• Position X and Y: Controls the portion of the image that’s viewed when you reposition
the onscreen control. These parameters translate to the two dimensions of the Center
parameter in Final Cut Pro.
• Scale: Controls the size of the image.
• Aspect Ratio: Lets you change the width-to-height ratio of shots to either squeeze or
stretch them. This parameter has no onscreen control.
• Rotation: Lets you spin the shot about the center of the onscreen control.
• Flip Image: Lets you reverse the image horizontally. Right and left are reversed.
• Flop Image: Lets you reverse the image vertically. Top and bottom are reversed.
Important: The Flip Image and Flop Image parameters are disabled when you’re working
with an XML project from Final Cut Pro because there are no equivalent parameters in
the Motion tab.
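As a rough sketch of how these parameters combine geometrically, the function below computes the four corners of the source region selected by a set of Pan & Scan values. The order of operations, the units, and the rotation direction are assumptions for illustration; they are not Color's documented math.

```python
import math

def pan_scan_corners(width, height, pos_x=0.0, pos_y=0.0,
                     scale=1.0, rotation=0.0, aspect=1.0):
    """Corners of the source region selected by Pan & Scan settings.

    A geometric sketch only; parameter conventions are assumed, not
    taken from Color's documentation.
    """
    hw, hh = width / 2.0, height / 2.0
    cos_r = math.cos(math.radians(rotation))
    sin_r = math.sin(math.radians(rotation))
    corners = []
    for cx, cy in ((-hw, -hh), (hw, -hh), (hw, hh), (-hw, hh)):
        # Aspect Ratio squeezes or stretches the width-to-height ratio.
        x, y = cx * aspect, cy
        # Scale enlarges the image: a smaller sampled region is a blow-up.
        x, y = x / scale, y / scale
        # Rotation spins the sampled region about its center.
        x, y = x * cos_r - y * sin_r, x * sin_r + y * cos_r
        # Position X and Y reposition the region within the source frame.
        corners.append((x + hw + pos_x, y + hh + pos_y))
    return corners
```

With all parameters at their defaults the selected region is the full frame; doubling Scale halves the sampled region, which enlarges the image in the output.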
Animating Pan & Scan Settings with Keyframes and Trackers
Animation of the Pan & Scan parameters is primarily intended for projects which will be
rendered out of Color as DPX or Cineon image sequences. Animating Pan & Scan
parameters is not recommended for projects you’ll be sending back to Final Cut Pro, since
neither keyframes nor tracker data can be sent back.
If necessary, you can animate Pan & Scan effects in one of two ways:
• Using keyframes: You can keyframe all the Pan & Scan transform controls. For more
information on keyframing in Color, see Keyframing.
• Using a tracker: You can also use motion tracking to automatically animate a Pan &
Scan effect; for example, to follow a character who is walking across the screen.
Once you create a tracker and analyze the shot (in the Tracking tab), you simply choose
the number of the tracker you want to use from the Use Tracker pop-up menu, and
the Position X and Y parameters are automatically animated. If Use Tracker is set to
None, no trackers are applied. For more information, see The Tracking Tab.
Copying and Resetting Pan & Scan Settings
Three buttons at the bottom of the Pan & Scan tab let you copy and reset the adjustments
you make with these controls.
• Copy To Selected button: Select one or more shots in the Timeline, then click this button
to copy the current Pan & Scan settings to all the selected shots.
• Copy To All button: Copies the Pan & Scan settings to all the shots in the program. This
is useful if you’re making a global adjustment when changing the format of a program.
• Reset Geometry button: Resets all the Pan & Scan parameters to the default scale for
your project.
The Shapes Tab
The Shapes tab lets you draw custom shapes to use as vignettes in the Secondaries room
for feature isolation, vignetting, or digital relighting. The Shapes tab is not meant to be
used by itself, nor are you meant to begin operations in the Shapes tab. Instead, shapes
are initially created by choosing the User Shape option from the Shape pop-up menu of
the Vignette controls in the Secondaries room.
When you choose this option, you are immediately taken to the Shapes tab of the
Geometry room, which provides the controls for drawing and editing your own custom
shapes. For a more thorough explanation of this workflow, see Creating a User Shape for
Vignetting.
Note: User Shapes can only be used with secondary operations in the Secondaries room.
They cannot be used in the Color FX room.
Controls in the Shapes Tab
The Shapes tab has the following controls:
• Current Secondary pop-up menu: Lists which of the eight available tabs in the Secondaries
room is the currently selected secondary operation, but you can choose any secondary
tab from this pop-up menu prior to making an assignment. When you click the Attach
button, this is the secondary tab that the currently selected shape will be attached to.
• Attached Shape: When you select a shape that has been attached to a shot’s secondary
tab, this field shows the selected shape’s name and the grade to which it’s been attached
using the following format: shapeName.gradeNumber
• Attach button: Once you’ve drawn a shape you want to use to limit a secondary
operation, click Attach to attach it to the currently open secondary tab in the Secondaries
room (shown in the Current Secondary field).
• Detach button: Click Detach to break the relationship between a shape and the
secondary tab to which it was previously assigned. Once detached, a shape no longer
has a limiting effect on a secondary operation.
• Shapes list: This list shows all the unattached shapes that are available in a project, as
well as the shapes that have been assigned to the current shot. Clicking a shape in this
list displays it in the image preview area and updates all the parameters in the Shapes
tab with the selected shape’s settings.
• Name column: The name of the shape, editable in the Shape Name field.
• ID column: An identification number for the shape. ID numbers start at 0 for the first
shape and are incremented by one every time you create a new shape.
• Grade column: When a shape is attached, this column shows the grade to which it’s
been attached.
• Sec column: When a shape is attached, this column shows which of the eight
secondary tabs the shape has been attached to.
• Hide Shape Handles: Click Hide Shape Handles to hide the control points of shapes in
the image preview. The outline of the shape remains visible.
• Reverse Normals: When a shape is feathered using the Softness parameter, this button
reverses which shape defines the inner and outer edges of feathering.
• Use Tracker pop-up menu: If you’ve analyzed one or more Motion Trackers in the current
project, you can choose which tracker to use to automatically animate the position of
the vignette from this pop-up menu. To disassociate a vignette from the tracker’s
influence, choose None.
• Softness: A global feathering operation for the entire shape. When set to 0, the shape
has a hard (but anti-aliased) edge. When set to any value above 0, inner and outer
softness shapes appear along with their own control points. The inner shape shows
where the feathering begins, while the outer shape shows the very edge of the feathered
shape. If necessary, each border can be independently adjusted.
• Shape Name: This field defaults to “untitled”; however, you can enter your own name
for the currently selected shape in order to better organize the shapes list.
• New button: Click New to create a new, unassigned shape.
• Remove button: Choose a shape and click Remove to delete a shape from the Shapes
list.
• Close Shape/Open Shape button: This button switches the currently selected shape
between a closed and open state.
• Save button: Saves the currently selected shape to the Shape Favorites directory.
• Load button: Loads all shapes that are currently saved in the Shape Favorites directory
into the Shapes list of the current shot.
• B-spline/Polygon buttons: Switches the currently selected shape between B-Spline mode,
which allows for curved shapes, and Polygon mode, in which shapes only have angled
corners.
• Main/Inner/Outer buttons: These buttons let you choose which points you want to
select when dragging a selection box in the image preview, without locking any of the
other control points. You can always edit any control point, no matter what this control
is set to.
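The effect of the Softness parameter can be sketched as an opacity ramp between the inner and outer softness shapes. The linear falloff below is an assumption for illustration; it simply models the behavior described above, where the matte is fully opaque inside the inner boundary and fully transparent beyond the outer one.

```python
def feather_alpha(distance, inner, outer):
    """Matte opacity at a given distance from the shape center.

    Inside the inner softness boundary the matte is fully opaque;
    beyond the outer boundary it is fully transparent; between the
    two it ramps off. A linear ramp is assumed here.
    """
    if outer <= inner:
        # Softness of 0: a hard (but conceptually anti-aliased) edge.
        return 1.0 if distance <= inner else 0.0
    t = (distance - inner) / (outer - inner)
    return 1.0 - min(max(t, 0.0), 1.0)
```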
About the Shapes List
The Shapes list contains an entry for every unattached shape in the current project, as
well as for all of the attached shapes used by the shot at the current position of the
playhead. Clicking a shape in this list displays it in the image preview area and updates
all of the parameters in the Shapes tab with the selected shape’s settings.
• Name column: The name of the shape, editable in the Shape Name field.
• ID column: An identification number for the shape. ID numbers start at 0 for the first
shape and are incremented by one every time you create a new shape.
• Grade column: When a shape is attached, this column shows the grade to which it’s
been attached.
• Sec column: When a shape is attached, this column shows which of the eight secondary
tabs the shape has been attached to.
Saving and Loading Favorite Shapes
You can create a collection of custom shapes to use in other projects by using the Save
and Load buttons. When you select an unattached shape in the Shapes list and click
Save, it’s saved to the following directory:
/Users/username/Library/Application Support/Color/BShapes/
Click Load to load all the shapes that are saved within this directory into the Shapes list
of the current shot. Once you decide which shape you want to use, you can remove the
others.
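Because favorite shapes are stored as files in the directory above, a small script can inventory them. The helper below is a hypothetical sketch; the file-naming scheme inside the BShapes folder is not documented here.

```python
import os

# The Shape Favorites directory described above; expanduser resolves
# the current user's home folder.
FAVORITES = os.path.expanduser("~/Library/Application Support/Color/BShapes")

def list_saved_shapes(directory=FAVORITES):
    """Return the names of items saved in the Shape Favorites directory.

    A hypothetical helper: returns an empty list if the directory does
    not exist (for example, if no shape has ever been saved).
    """
    if not os.path.isdir(directory):
        return []
    return sorted(os.listdir(directory))
```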
Drawing Shapes
Drawing and editing shapes works in much the same way as other compositing
applications. Color uses B-Splines to draw curved shapes, which are fast to draw and edit.
These splines work similarly to those used in the curves in the Primary and Secondaries
rooms.
B-Splines use control points that aren’t actually attached to the shape’s surface to “pull”
the shape into different directions, like a strong magnet pulling thin wire. For example,
here’s a curve with three control points:
The control point hovering above the shape is pulling the entire shape toward itself,
while the surrounding control points help to keep other parts of the shape in place.
The complexity of a shape is defined by how many control points are exerting influence
on that shape. If two control points are added to either side, and moved down, the curve
can be modified as seen below.
To make curves in a shape sharper, move their control points closer together. To make
curves more gentle, move the control points farther away from one another.
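The way B-Spline control points "pull" a curve without lying on it can be illustrated with the standard uniform cubic B-spline basis. This is the general technique, not a description of Color's drawing code.

```python
def bspline_point(p0, p1, p2, p3, t):
    """Point on a uniform cubic B-spline segment at t in [0, 1].

    Each segment is pulled toward, but does not pass through, its
    control points, which is why moving one point tugs the nearby
    curve like a magnet acting on thin wire.
    """
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    x = b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0]
    y = b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1]
    return x, y

def sample_closed_bspline(points, steps=8):
    """Approximate a closed B-spline shape by sampling each segment."""
    n = len(points)
    samples = []
    for i in range(n):
        seg = [points[i], points[(i + 1) % n],
               points[(i + 2) % n], points[(i + 3) % n]]
        for s in range(steps):
            samples.append(bspline_point(*seg, s / steps))
    return samples
```

Because the basis weights are non-negative and sum to one, the sampled curve always stays inside the hull of its control points; the curve rounds off a square of control points rather than touching its corners.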
The following procedures describe how to create, remove, and adjust the control points
that edit curve controls.
To draw a shape
1 Click one of the eight tabs in the Secondaries room to use it to make a secondary
correction, turn on the Enable and Vignette buttons, then choose User Shape from the
Shape pop-up menu.
The Shapes tab in the Geometry room opens, and you’re ready to draw a shape.
2 Click anywhere within the image preview area to add the first control point.
3 Continue clicking within the image preview area to add more points.
4 When you’re ready to finish, close the shape by clicking the first control point you created.
5 Enter a name into the Shape Name field, then press Return. (This step is optional.)
6 Click the Attach button to use the shape in the secondary tab.
A duplicate of the shape you just drew appears in the list, which shows the number of
the grade and the secondary tab to which it’s attached. (The original shape you drew
remains in the list above, ready to be recycled at a future time.) At this point, you’re ready
to use that shape in the Secondaries tab to which it’s been attached.
To adjust a shape
µ Drag any of its control points in any direction.
Unlike Bezier splines, B-Splines have no tangents to adjust. The only adjustments you can
make are to the number and position of control points relative to one another.
To reposition a shape
µ Drag its green center handle in any direction.
The center handle is the point around which keyframing and motion tracking
transformations are made.
To resize a shape
1 Make sure the Main button is selected in the Shapes tab.
2 Drag a selection box around every control point you want to resize.
Selected control points turn green.
You don’t have to select every control point in the shape; you can make a partial selection
to resize only a portion of the overall shape. The center of all selected control points
displays a small green crosshairs box that shows the position of the selected control
points relative to the center handle.
3 Do one of the following:
• Drag any of the four corners of the selection box to resize the shape relative to the
opposite corner, which remains locked in position.
• Option-drag the selection box to resize the shape relative to its center control (visible
as green crosshairs).
• Shift-drag the selection box to resize the shape while locking its aspect ratio, enlarging
or reducing the shape without changing its width-to-height ratio.
To toggle a shape between a curved B-Spline and an angled polygon
µ Click either B-Spline or Polygon in the Shapes tab to change the shape to that type of
rendering.
To feather the edge of a shape
1 Increase its Softness value.
The Softness parameter applies a uniform feathering around the entire shape. This also
reveals a pair of inside and outside shapes that represent the inner and outer boundaries
of the feathering effect that’s applied to the shape.
2 If necessary, adjust the shape’s inner and outer shape to create the most appropriate
feathering outline around the perimeter of the shape.
This lets you create irregularly feathered outlines when you’re isolating a feature where
one edge should be hard, and another feathered.
To add control points to a previously existing shape
1 Select a shape to edit in the Shapes list.
2 Click Open Shape.
3 Click within the image preview area to add control points to the end of the selected
shape.
4 Click the first control point of the shape when you finish adding more control points.
Animating Shapes with Keyframes and Trackers
If necessary, you can animate shapes in one of two ways:
• Using keyframes: You can keyframe shapes. For more information on keyframing in
Color, see Keyframing.
• Using a tracker: You can also use motion tracking to automatically animate a shape;
for example, to follow a feature that’s moving because the camera is panning.
Once you create a tracker and analyze the shot (in the Tracking tab), you simply select
a shape from the Shapes list and choose the number of the tracker you want to use
from the Use Tracker pop-up menu, and the shape is automatically animated. If the
Use Tracker pop-up menu is set to None, no trackers are applied. For more information,
see The Tracking Tab.
The Tracking Tab
Motion tracking is the process of analyzing a shot in order to follow the motion of a
specific feature in the image to create a motion path. Once you’ve done this, you can use
these motion-tracked camera paths to animate secondary vignettes, Pan & Scan operations,
user shapes, and the Vignette node in the Color FX room to follow these motion paths.
This way, the corrections you make appear to follow moving subjects or the motion of
the camera.
There are actually two kinds of tracking:
• Automatic Tracking: Automatic tracking is ideal, as the computer analyzes part of the
image that you specify to follow a moving subject. This method creates a motion path
with a minimum of user input, but some shots may be difficult to track. When you
create an Automatic Tracker, a single onscreen control appears that consists of a pair
of boxes with crosshairs at the center.
When you process a tracker, Color analyzes an area of pixels specified by the outer
orange Search Region box of the onscreen control, over the range of frames specified
by the Mark In and Mark Out buttons. The tracker attempts to “follow” the feature
you’ve identified (using the inner red Reference Pattern box of the onscreen control)
as it moves across the frame. Angular, high-contrast features are ideal reference patterns
that will give you the best results.
• Manual Tracking: Manual tracking uses you as the computer, providing a streamlined
interface for you to follow a moving subject by clicking it with your mouse, frame by
frame from the In point to the Out point, until you’ve constructed a motion path by
hand. This method can be tedious, but it can also yield the best results for shots that
are difficult to track automatically.
You can use either one or both of these methods together to track a subject’s motion.
Note: Color can only use one-point motion tracking. Two- and four-point tracking are
not supported.
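The matching idea behind the Reference Pattern and Search Region boxes can be sketched as a brute-force template match: on each frame, the tracker looks for the offset within the search region where the reference pattern fits best. The sum-of-squared-differences criterion below is a standard assumption, not Color's actual tracking engine.

```python
def track_feature(search, ref):
    """Locate a reference pattern inside a search region (one-point track).

    search and ref are 2D lists of pixel luminance values; the function
    returns the (row, col) offset with the lowest sum of squared
    differences. A brute-force sketch of the matching idea only.
    """
    rh, rw = len(ref), len(ref[0])
    sh, sw = len(search), len(search[0])
    best, best_pos = None, (0, 0)
    for y in range(sh - rh + 1):
        for x in range(sw - rw + 1):
            # How poorly the reference pattern matches at this offset.
            ssd = sum((search[y + j][x + i] - ref[j][i]) ** 2
                      for j in range(rh) for i in range(rw))
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```

This also illustrates why angular, high-contrast features track best: a flat or repetitive pattern produces many offsets with nearly identical scores, so the minimum is ambiguous.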
Will Motion Tracking Solve All Your Problems?
With shots that have a clearly defined target (something high-contrast and angular,
preferably), automatic motion tracking is usually the fastest way to accurately animate
a vignette to follow the motion of the subject or camera in a shot, but not
always.
If you’re working on a shot where automatic tracking is almost usable, but has a few
errors, you might be able to use manual tracking on top of the automatic track to correct
the most egregious mistakes, and then increase Tracking Curve Smoothness to get an
acceptable result. For more information about manual tracking, see Using the Tracking
Tab.
However, if actors or other subjects in the shot pass in front of the feature you’re tracking,
or if the motion of a shot is so fast that it introduces motion blur, or if there’s excessive
noise, or if there’s simply not a feature on the subject you need to track that’s
well-enough defined, you may need to resort to manual tracking for the entire shot,
which can be tedious if it’s a long shot. In many cases, manual keyframing may well be
the most efficient solution. For more information on keyframing, see Keyframing.
Using Motion Tracking to Animate Vignettes and Shapes
After you’ve processed a tracker, you can use that tracker’s analysis to animate the
following:
• A vignette in the Secondaries room
• A user shape in the Geometry room
• X and Y positions in the Pan & Scan tab of the Geometry room
• The Vignette node in the Color FX room
When applied to a vignette or a user shape, the animation of the Motion Tracker is added
to the X and Y positioning of the shape. For this reason, it’s most efficient to track a subject
and assign that tracker to the vignette, shape, or setting first, and adjust the positioning
later.
For example, suppose you’ve used a tracker to follow the movement of someone’s eye,
and you want to apply that motion to a vignette that highlights that person’s face. You
should choose the tracker from the Use Tracker pop-up menu first. As soon as you choose
a tracker, the vignette or shape you’re animating moves so that it’s centered on the
tracked feature. At that point, you can position the center, angle, and softness of the
shape to better fit the person’s face. This way, the vignette starts out in the correct position
and goes on to follow the path created by the tracker. Because the tracker is applied as
an additional transformation, you can still reposition the vignette using the X and Y center
parameters or the onscreen control in the Previews tab.
If you track a limited range of a shot’s total duration by setting In and Out points for the
tracker that are shorter than the length of the shot, the vignette stays at the initial position
you drag it to until the playhead reaches the tracker’s In point, at which time the vignette
begins to follow the tracker’s motion path. When the playhead reaches the Out point,
the vignette stops and remains at the last tracked frame’s position until the end of the
shot.
Note: If you apply a tracker to the Pan & Scan settings for any shot in a project that was
sent from Final Cut Pro, the tracking data will be lost when the project is sent back to
Final Cut Pro. However, if it’s for a project that’s being rendered as a DPX or Cineon image
sequence, the animated Pan & Scan settings will be rendered into the final image.
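The additive behavior described above, including the hold before the In point and after the Out point, can be sketched as follows. The names and the frame-to-position data layout are illustrative, and the sketch assumes every frame between the In and Out points was analyzed.

```python
def tracked_position(base, path, frame):
    """Shape position at a frame, with the tracker's motion added on.

    base is the (x, y) offset you set by hand; path maps analyzed frame
    numbers to tracked (x, y) positions. The tracker's movement is
    applied as a delta from its first analyzed frame, and the position
    holds at the In and Out boundaries, as described above.
    """
    frames = sorted(path)
    first, last = frames[0], frames[-1]
    # Before the In point or after the Out point, hold the nearest
    # tracked frame's position.
    frame = min(max(frame, first), last)
    tx, ty = path[frame]
    ox, oy = path[first]
    return (base[0] + tx - ox, base[1] + ty - oy)
```

Because only the delta is added, repositioning the vignette by hand (changing `base`) never fights the tracker's motion path.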
Controls in the Tracking Tab
Motion tracking is accomplished by creating a tracker in the Tracker list in the Tracking
tab of the Geometry room. You can create as many trackers for a shot as you like, but
you can only use one at a time to animate a vignette or shape. The Tracker list shows
every tracker you’ve created and analyzed for a given shot, and each tracker has an ID
number (they’re numbered in the order in which they’re created). These ID numbers
appear in the Use Tracker pop-up menu for any vignette or shape that can be animated
using a tracker.
The Tracking tab has the following controls:
• Tracker list: A list of all the trackers that have been created for the shot at the current
position of the playhead. This list has three columns:
• Name column: The name of that tracker. All trackers are named in the following
manner: tracker.idNumber
• ID number: The ID number that corresponds to a particular tracker. This is the number
you choose from any Use Tracker pop-up menu to pick a tracker to use to animate
that adjustment.
• Status column: A progress bar that shows whether or not a tracker has been processed.
Red means that a tracker is unprocessed, while green means processed.
• Manual Tracker: Click to enter Manual Tracking mode, where you use the pointer to
click on a feature in the preview area that you want to track. Each click positions the
onscreen tracker control manually to create a tracking keyframe, and then advances
the playhead one frame, until you reach the end of the shot. Using this feature, you
can rapidly hand-track features in shots that automatic tracking can’t resolve.
• Tracking Curve Smoothness: Smooths the tracking data to eliminate uneven or irregular
motion. Higher values smooth the tracked motion path more. You can smooth both
automatic and manual tracking data.
Note: The original tracking data is retained and is never modified by the smoothing.
• Process: Once you’ve adjusted the onscreen controls to identify a reference pattern
and search area, click Process to perform the analysis.
• New: Creates a new tracker in the Tracker list.
• Remove: Deletes the currently selected tracker in the Tracker list.
• Mark In: Marks an In point in the current shot at which to begin processing. When you
create a new tracker, the In point is automatically created at the current position of the
playhead.
• Mark Out: Marks an Out point in the current shot at which to end processing. When
you create a new tracker, the Out point is automatically created at the end of the last
frame of the shot.
Using the Tracking Tab
This section describes how to use the Tracking tab to create motion paths with which to
animate vignettes, shapes, and Pan & Scan settings.
To automatically track a feature
1 Move the playhead to the shot you want to track.
Since a new In point will be created at the position of the playhead, make sure to move
it to the first frame of the range you want to track.
2 Open the Tracking tab in the Geometry room, then click New.
A new, unprocessed tracker appears in the Tracker list, and its onscreen controls appear
in the image preview area. A green In point automatically appears at the playhead in a
new track of the Timeline, and a green Out point appears at the end.
In many cases, the In and Out points will include the whole shot. However, if the feature
you’re tracking is not visible or only moves for a small portion of the shot, you may want
to set In and Out points only for that section of the clip. If the In point was incorrectly
placed, you can always move the playhead to the correct frame and click Mark In.
3 Drag anywhere within the center box of the onscreen control to move it so that the
crosshairs are centered on the feature you want to track.
In this example, the Reference Pattern box is being centered on the man’s eye.
4 Adjust the handles of the inner box (the Reference Pattern box) to fit snugly around this
feature.
The bigger the box, the longer the track will take.
5 Next, adjust the outer box (the Search Region box) to include as much of the
surrounding image as you judge necessary to analyze the shot.
Tip: For a successful track, the feature you’ve identified using the Reference Pattern box
should never move outside the search region you’ve defined as the shot proceeds from
one frame to the next. If the motion in the shot is fast, you’ll want to make the outer box
larger, even though this increases the length of time required for the analysis. If the
motion in the shot is slow, you can shrink the Search Region box to a smaller size to
decrease the time needed for analysis.
6 Move the playhead to the last frame of the range you want to track, then click Mark Out.
A green Out point appears in the Timeline.
In many cases, this will be the last frame of the shot. However, if the feature you’re tracking
becomes obscured, you’ll want to set the Out point to the last frame where the feature
is visible.
7 Click Process.
Color begins analyzing the shot at the In point, and a green progress bar moves
from the In point to the Out point to show how much of the clip has been analyzed.
When processing is complete, that tracker appears with a green bar in the Status column
of the Tracker list, and that tracker is ready to be used in your project. That tracker’s motion
path appears in the image preview area whenever that tracker is selected.
If necessary, the tracker is ready to be refined with smoothing, manual repositioning of
individual control points in the motion path, or manual tracking. When you’re finished,
the tracker is ready to be used to animate a vignette or shape.
If the resulting motion path from an Automatic Tracker has a few glitches, you can drag
individual keyframes around to improve it.
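Conceptually, automatic tracking compares the reference pattern against every candidate position inside the search region on each new frame and keeps the best match. The sketch below illustrates that idea with a simple sum-of-squared-differences search; the function name and the matching metric are illustrative assumptions, not Color’s actual (undocumented) matcher.

```python
import numpy as np

def best_match(search_region: np.ndarray, pattern: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) offset within search_region where the reference
    pattern differs least, scored by sum of squared differences."""
    sh, sw = search_region.shape
    ph, pw = pattern.shape
    best_score, best_pos = float("inf"), (0, 0)
    # Slide the pattern over every position it can occupy in the region.
    for r in range(sh - ph + 1):
        for c in range(sw - pw + 1):
            window = search_region[r:r + ph, c:c + pw]
            score = float(np.sum((window - pattern) ** 2))
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

This also shows why a larger Search Region box slows the analysis: the number of candidate positions grows with the region’s area, while a larger Reference Pattern box makes each comparison more expensive.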
To manually adjust a tracked motion path
1 If necessary, set the Tracking Curve Smoothness to 0 so you can more accurately see and
position the tracked keyframes.
2 Drag the playhead in the Timeline through the tracked range of the shot, and identify
keyframes that stick out incorrectly, or that drift from the proper direction of the subject’s
motion.
3 Drag the offending control point in the preview area so that it better fits the overall
motion path.
You can drag any control point in the motion path to a new position, not just the keyframe
at the position of the playhead.
If there’s a shot in which the motion is too difficult to track automatically, you might try
manually tracking the feature. You can turn on the Manual Tracker option either to correct
mistakes in an automatically tracked motion path, or you can use manual tracking on its
own to create an entire motion path from scratch.
To manually track a feature
1 Move the playhead to the shot you want to track.
2 Open the Tracking tab in the Geometry room, and do one of the following:
• Click an existing tracker in the Tracker list to modify it.
• Click New to create a new motion path from scratch.
3 Click Manual Tracker to enter Manual Tracking mode.
When you turn on manual tracking, the onscreen tracker control disappears.
4 Move the playhead to the first frame of the range you want to track, then click Mark In.
5 Now that everything’s set up, simply click a feature in the preview area that you want to
track.
For example, if you were tracking someone’s face for vignetting later on, you might click
the nose. Whatever feature you choose, make sure it’s something that you can easily and
clearly click on, in the same place, on every frame you need to track.
Each click creates a keyframe manually, and then advances the playhead one frame.
6 Click the same feature you clicked in the previous frame, as each frame advances, until
you reach the Out point, or the end of the shot.
As you add more manual tracking points, a motion path slowly builds following the trail
of the feature you’re tracking.
7 When you’ve finished manually tracking, stop clicking.
That tracker is ready to be assigned to a parameter elsewhere in your project.
Note: Turning off the Manual Tracker does not turn off your manually tracked keyframes.
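The behavior described above, where each click records a keyframe and then advances the playhead, can be modeled with a toy structure like the following. The class and its fields are purely illustrative and are not Color’s internal API.

```python
class ManualTracker:
    """Toy model of Manual Tracking mode: one keyframe per click,
    with the playhead advancing one frame after each click."""

    def __init__(self, in_point: int):
        self.playhead = in_point
        self.keyframes: dict[int, tuple[float, float]] = {}

    def click(self, x: float, y: float) -> None:
        # Record the clicked feature position at the current frame...
        self.keyframes[self.playhead] = (x, y)
        # ...then advance the playhead one frame automatically.
        self.playhead += 1
```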
Sometimes a motion track is successful, but the resulting motion path is too rough to
use in its original state. Irregular motion can expose an animated effect that you’re
trying to keep invisible, often appearing as a jagged motion path.
In these cases, you can use the Tracking Curve Smoothness slider to smooth out the
motion path that’s created by the tracker.
To smooth a track
1 Select a tracker in the Tracker list.
2 Adjust the Tracking Curve Smoothness slider, dragging it to the right until the motion
tracking path is smooth enough for your needs.
The Tracking Curve Smoothness slider is nondestructive. This means that the original
tracking data is preserved, and you can raise or lower the smoothing that’s applied to
the original data at any time if you need to make further adjustments. Lowering the
Tracking Curve Smoothness to 0 restores the tracking data to its originally analyzed state.
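One common way to implement this kind of nondestructive smoothing is a moving average whose window grows with the smoothness value, applied on demand to a preserved copy of the raw track. The mapping from slider value to window size below is an assumption for illustration only.

```python
def smooth_track(raw: list[float], smoothness: int) -> list[float]:
    """Average each tracked value with its neighbors; a smoothness of 0
    returns the raw data unchanged, matching the slider's behavior."""
    if smoothness == 0:
        return list(raw)
    smoothed = []
    for i in range(len(raw)):
        # Window of up to `smoothness` samples on each side.
        lo = max(0, i - smoothness)
        hi = min(len(raw), i + smoothness + 1)
        smoothed.append(sum(raw[lo:hi]) / (hi - lo))
    return smoothed
```

Because the function never modifies `raw`, the smoothing can be raised, lowered, or removed at any time, exactly as the slider allows.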
The Still Store

The Still Store provides an interface with which to compare shots to one another while
you do scene-to-scene color correction.
Using the Still Store interface, you can save images from different shots in a project to
use as reference stills for comparison to shots you’re correcting to match. This is a common
operation in scene-to-scene color correction, when you’re balancing all the shots in a
scene to match the exposure and color of one another, so they all look as if they were
shot at the same place, at the same time.
Using the Still Store, you can save reference stills from any shot in your project, for
comparison to any other shot. This means if you’re working on a documentary where a
particular style of headshot is interspersed throughout the program, you can save a
reference still of the graded master headshot, and recall it for comparison to every other
headshot in the program.
This chapter covers the following:
• Saving Images to the Still Store (p. 381)
• Saving Still Store Images in Subdirectories (p. 383)
• Removing Images from the Still Store (p. 383)
• Recalling Images from the Still Store (p. 384)
• Customizing the Still Store View (p. 384)
Saving Images to the Still Store
To use the Still Store, you must first save one or more images for later recall.
To add an image to the Still Store
1 Move the playhead to a frame you want to save to the Still Store.
You should choose a graded image that contains the subjects you need to compare and
that is representative of the lighting and color you’re trying to match.
2 If the Still Store is currently turned on, turn it off to make sure you don’t accidentally save
a still of the currently displayed split screen.
3 Optionally, if you want to save the still with a custom name, you can click the Still Store
tab and type a name in the File field below the Still Store bin.
If you don’t enter a custom name, each still image you save will be automatically named
in the following manner:
Still.Day_Month_Year_Hour_Minute_SecondTimezone.sri
The date and time reflect exactly when the still image was saved.
Note: If you load a still image into the Still Store immediately prior to saving another one,
the newly saved still image will use the name of the still you loaded, overwriting the
previously saved still as a result.
4 To save the still, do one of the following:
• From any room, choose Still Store > Store (or press Control-I).
• Click the Still Store tab, then click Save.
A still image of the frame at the position of the playhead is saved as an uncompressed
DPX file in the /StillStore/ subdirectory within the project bundle itself. It also appears
within Color as an item in the Still Store bin. When the Still Store is set to icon view, each
saved still appears with a thumbnail for reference.
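The automatic naming convention described earlier can be sketched as a date format string. The exact fields Color uses, and how the timezone suffix is rendered, are assumptions based on the pattern shown above.

```python
from datetime import datetime, timezone

def default_still_name(when: datetime) -> str:
    """Build Still.Day_Month_Year_Hour_Minute_SecondTimezone.sri
    from the moment the still was saved."""
    return when.strftime("Still.%d_%m_%Y_%H_%M_%S%Z") + ".sri"

# For example, a still saved at 14:30:15 UTC on March 5, 2009:
print(default_still_name(datetime(2009, 3, 5, 14, 30, 15, tzinfo=timezone.utc)))
# → Still.05_03_2009_14_30_15UTC.sri
```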
Still Store images are saved at the native resolution of the source media from which
they’re derived, but they’re not saved with the currently applied LUT correction. That
way, if your project were using a LUT when you saved the images in the Still Store, and
you clear that LUT from your project, the saved still images will continue to match the
shots they originated from.
Important: Still Store images aren’t updated if the shot they originated from is regraded.
This means that if you save a Still Store image from a shot, and then later regrade that
shot to have a different look, the saved Still Store image will no longer be representative
of that shot and should be removed. If there is any question whether or not a still image
correctly reflects a shot’s current grade, the date and time the still image was saved might
provide a hint.
Why Is Your Project Getting So Big?
Because all still images are saved within the “StillStore” subdirectory inside your project
bundle, you may notice that your project takes longer to back up than it used to if you
save a lot of still images. If you need to reduce the size of the project file, you should
delete as many unused Still Store images as you can.
Saving Still Store Images in Subdirectories
By default, whenever you save a still image, it’s saved in your project’s internal “StillStore”
subdirectory and appears in the Still Store bin along with all the other stills you saved.
All stills in the Still Store bin appear in the order in which they were created, with the
newest stills appearing last.
You can also organize your saved stills into subdirectories. You might create individual
subdirectories based on the date of work, the scene stills are saved from, or any other
organizational means of your own devising.
To create a custom subdirectory in the Still Store bin
1 Click the Still Store tab.
2 Click New Folder.
3 When the New Folder dialog appears, enter a name in the “Name of new folder” field,
then click Create.
A new subdirectory appears inside of the “StillStore” directory within your project bundle
and becomes the currently open directory to which all new still images are saved.
Important: You cannot move still images into subdirectories once they’ve been created.
To save new stills in a subdirectory, you need to navigate the Still Store bin to that directory
before saving any new stills.
Removing Images from the Still Store
Saved images can stack up pretty quickly in the Still Store, so you want to make sure you
regularly remove all unnecessary stills.
To remove an image from the Still Store
1 Click the Still Store tab.
2 Select the still image you want to remove.
3 Press the Delete or Forward Delete key.
4 Click Yes in the warning dialog that appears, to confirm that you really do want to delete
the selected still image.
You cannot undo the deletion of a still from the Still Store.
Recalling Images from the Still Store
Once an image has been added to the Still Store, it can be recalled at any time. To display
a saved still image, you need to load it into the Still Store and then enable the Still Store
to view the image.
To load an image into the Still Store
1 Click the Still Store tab.
2 Do one of the following:
• Select the still image you want to load, then click Load.
• Double-click the still image you want to load.
Once a still is loaded, you still have to turn on Display Loaded Still to make the image
visible.
To display an image that’s loaded into the Still Store
Do one of the following:
• Choose Still Store > Display Loaded Still (or press Control-U).
• Click the Still Store tab, then select Display Loaded Still.
The currently loaded still image appears both in the preview display and on your broadcast
monitor. By default, still images appear as a left-to-right split-screen comparison, but this
can be customized.
Customizing the Still Store View
Different colorists use the Still Store in different ways. Some prefer to flip between two
full-screen images as they make their comparisons, while others like to create a split
screen so they can compare the Still Store and the shot being graded side by side. Color
lets you work either way. For more information, see:
• Still Store View Settings
• Controls in the Still Store Bin
Still Store View Settings
Each still image has its own settings for how that image will appear when it’s recalled.
These settings can be found on the right side of the Still Store room.
• Enable: Makes the currently loaded Still Store image visible in the preview and video
output monitors. Identical to the Still Store > Enable (Control-U) command.
• Transition: This parameter determines how much of the loaded still is visible onscreen.
When set to 0, the loaded still is not visible at all. When set to 1, the loaded still fills the
entire screen. Any value in between creates a split-screen view.
• Angle: Changes the angle along which the border of a split screen is oriented. The
orientation buttons below automatically change the Angle parameter, but the only
way to create a diagonal split screen is to customize this control yourself.
• Left to Right: Changes the Angle parameter to 180 degrees, to create a vertical split
screen with the still to the left.
• Right to Left: Changes the Angle parameter to 0 degrees, to create a vertical split screen
with the still to the right.
• Top to Bottom: Changes the Angle parameter to –90 degrees, to create a horizontal
split screen with the still at the top.
• Bottom to Top: Changes the Angle parameter to 90 degrees, to create a horizontal split
screen with the still at the bottom.
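The interaction between Transition and Angle can be pictured geometrically: each pixel is projected onto the split direction, and the still is shown on one side of a threshold set by Transition. The formula below is a hypothetical reconstruction from the behavior described above, not Color’s documented math.

```python
import math

def shows_still(nx: float, ny: float, angle_deg: float, transition: float) -> bool:
    """nx, ny are normalized pixel coordinates in 0..1 (y increases downward).
    Returns True when the pixel falls on the loaded still's side of the split."""
    dx = math.cos(math.radians(angle_deg))
    dy = math.sin(math.radians(angle_deg))
    # Project the pixel onto the split direction...
    p = nx * dx + ny * dy
    # ...and normalize that projection to the 0..1 range the frame spans.
    p_min = min(0.0, dx) + min(0.0, dy)
    p_max = max(0.0, dx) + max(0.0, dy)
    p01 = (p - p_min) / (p_max - p_min)
    # Transition 0 hides the still entirely; 1 fills the screen with it.
    return p01 > 1.0 - transition
```

With this mapping, the four orientation buttons fall out of the same formula: 180 degrees puts the still on the left, 0 degrees on the right, –90 at the top, and 90 at the bottom, with a Transition of 0.5 splitting the frame in half.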
Controls in the Still Store Bin
The Still Store bin has the following controls:
• Up Directory button: Clicking this button takes you to the next directory up the current
path. You cannot exit the project bundle. To keep your project organized, you should
make sure that you save all your stills within the “StillStore” directory of your project
bundle.
• Home Directory button: Changes the directory path to the “StillStore” directory within
your project bundle.
• Icon View: Changes the Still Store bin to icon view. Each saved still image is represented
by a thumbnail, and all stills are organized according to the date and time they were
saved, with the oldest stills appearing first (from left to right).
• List View: In list view, all still images and directories are represented by two columns;
the still image file’s name appears to the left, and the date of its creation appears to
the right. All stills are organized according to the date and time they were saved, with
the oldest appearing at the top and the newest at the bottom.
• Icon Size slider: When the Still Store bin is in icon view, this slider lets you increase and
decrease the size of the thumbnails that are displayed for each still.
• File field: This field does double duty. When you load a still image, this field displays
the still image’s name. However, if you enter a custom name and then save another
still, the new still will be created with the name you entered.
• Directory pop-up menu: This pop-up menu shows you the current directory path and
lets you navigate farther up the current directory structure, if you wish.
• New Folder button: Creates a new subdirectory inside the StillStore directory of your
project bundle.
• Save button: Saves the frame at the current position of the playhead as a still image,
for later recall.
• Load button: Loads a still so that it’s available for comparison using the Enable button,
or the Enable command in the Still Store menu (Control-U).
The Render Queue

Once you’ve finished color correcting your program, the controls in the Render Queue
let you render the appropriate set of media files for the final output of your program,
either to Final Cut Pro or for delivery to other compatible systems.
This chapter covers the following:
• About Rendering in Color (p. 389)
• The Render Queue Interface (p. 395)
• How to Render Shots in Your Project (p. 396)
• Rendering Multiple Grades for Each Shot (p. 400)
• Managing Rendered Shots in the Timeline (p. 401)
• Examining the Color Render Log (p. 401)
• Choosing Printing Density When Rendering DPX Media (p. 402)
• Gather Rendered Media (p. 403)
About Rendering in Color
Rendering has a different purpose in Color than it does in an application like Final Cut Pro.
In Color, all effects-processing for playback is done on the fly, either dropping frames or
slowing down as necessary to display your color-corrected output at high quality for
evaluation purposes. Playback in Color is not cached to RAM, and there is no way to
“pre-render” your project for playback while you work.
In Color, rendering is treated as the final step in committing your corrections to disk by
generating a new set of media files. The Render Queue lets you render some or all of the
shots in your project once they’ve been corrected in Color.
You can use the Render Queue to render your project either incrementally or all at once.
For example, if you’re working on a high-resolution project with a multi-day or multi-week
schedule, you may choose to add each scene’s shots to the Render Queue as they’re
approved, preparing them for an overnight render at the end of each day’s session. This
distributes the workload over many days and eliminates the need for a single
time-consuming render session to output the entire program at once.
The Graphics Card You’re Using Affects the Rendered Output
Color uses the GPU of the graphics card that’s installed in your computer to render the
color correction and geometry adjustments that you’ve applied to the shots in your
program. Different video cards have GPU processors with differing capabilities, so it’s
entirely possible for the same Color project to look slightly different when rendered on
computers with different graphics cards. To ensure color accuracy, it’s best to render
your project on a computer using the same graphics card that was used when color
correcting that program.
Which Effects Does Color Render?
Projects that are imported from XML and EDL project files may have many more effects
than Color is capable of processing. These include transitions, geometric transformations,
superimpositions, and speed effects. When rendering your finished program, your
import/export workflow determines which effects Color renders.
In particular, if you render out 2K or 4K DPX or Cineon image sequences to be printed to
film, Color renders the shots in your project very differently than if you’ve rendered
QuickTime files to be sent in a return trip back to Final Cut Pro.
In all cases, the corrections you’ve made using the Primary In, Secondary, Color FX, and
Primary Out rooms are always rendered.
Effects That Aren’t Rendered in a Color–to–Final Cut Pro Roundtrip
• When you shepherd a project through an XML-based Final Cut Pro–to–Color roundtrip,
all transitions, filters, still images, generators, speed effects, Motion tab keyframes and
superimposition settings, and other non-Color-compatible effects from the original
Final Cut Pro project are preserved within your Color project, even if those effects aren’t
visible.
• Color Corrector 3-way filters are the exception. The last Color Corrector 3-way filter
applied to any clip is converted into a Primary In correction in Color. When you send
the project back to Final Cut Pro, all Color Corrector 3-way filters will have been removed
from your project.
• When you’ve finished grading your program in Color and you render that project as a
series of QuickTime movies in preparation for returning to Final Cut Pro, any of the
previously mentioned effects that have been invisibly preserved are not rendered.
Instead, when you send the finished Color project back to Final Cut Pro, such effects
reappear in the resulting Final Cut Pro sequence. At that point you have the option of
making further adjustments and rendering the Final Cut Pro project prior to outputting
it to tape or as a QuickTime master movie file.
Effects That Are Only Rendered for 2K and 4K Output
• When rendering out DPX or Cineon image sequences, all clips are rendered at the
resolution specified by the Resolution Presets pop-up menu in the Project Settings tab
of the Setup Room.
• When rendering out DPX or Cineon image sequences, all the transformations you made
in the Geometry room’s Pan & Scan tab are rendered.
• When rendering out DPX or Cineon image sequences, all video transitions are rendered
as linear dissolves when you use the Gather Rendered Media command to consolidate
the finally rendered frames of your project in preparation for film output. This feature
is only available for projects that use DPX and Cineon image sequence media or RED
QuickTime media, and is intended only to support film out workflows. Only dissolves
are rendered; any other type of transition (such as a wipe or iris) will be rendered as a
dissolve instead.
• Effects that you need to manually create that aren’t rendered by Color include any
video transitions that aren’t dissolves, speed effects, composites, and titles. These must
be created in another application such as Shake.
Effects That Are Rendered When Projects Use 4K Native RED QuickTime Media
• When rendering projects using 4K native RED QuickTime media, the output is always
rendered at the resolution specified by the Resolution Presets pop-up menu in the
Project Settings tab of the Setup room. Additionally, all the transformations you’ve
made in the Geometry room’s Pan & Scan tab are always rendered into the final media.
This is not true of projects using 2K native RED QuickTime media.
• If you’re outputting to film and you’ve set the Render File Type pop-up menu in the
Project Settings tab of the Setup room to DPX or Cineon, then all video transitions are
rendered as linear dissolves when you use the Gather Rendered Media command to
consolidate the finally rendered frames of your project in preparation for film output.
This feature is only available for projects that use DPX and Cineon image sequence
media or RED QuickTime media, and is intended only to support film out workflows.
Only dissolves are rendered; any other type of transition (such as a wipe or iris) will be
rendered as a dissolve instead.
• If you’re sending the project back to Final Cut Pro and the Render File Type pop-up
menu in the Project Settings tab of the Setup room is set to QuickTime, effects such
as transitions that have been invisibly preserved are not rendered. Instead, when you
send the finished Color project back to Final Cut Pro, such effects reappear in the
resulting Final Cut Pro sequence. At that point, you have the option of making further
adjustments and rendering the Final Cut Pro project prior to outputting it to tape or
as a QuickTime master movie file.
Motion Settings, Keyframes, and Pan & Scan Adjustments in Roundtrips
A subset of the static motion settings from Final Cut Pro is translated into the equivalent
Pan & Scan settings in Color when you first import the project. These settings have a
visible effect on your Color project and can be further adjusted as you fine-tune the
program. However, if you’re rendering QuickTime output in preparation for sending
your project back to Final Cut Pro, these effects are not rendered by Color unless your
project uses 4K native RED QuickTime media; ordinarily, static Pan & Scan settings are
passed from Color back to Final Cut Pro for rendering there. Keyframes are handled
differently:
• Keyframed Scale, Rotation, Center, and Aspect Ratio Motion tab parameters from
Final Cut Pro do not appear and are not editable in Color, but these keyframes are
preserved and reappear when you send your project back to Final Cut Pro.
• Color Pan & Scan keyframes cannot be translated into corresponding motion effect
keyframes in Final Cut Pro. All Color keyframes are removed when you send your
project back to Final Cut Pro, with the settings at the first frame of each clip being
used for translation.
For more information, see Exchanging Geometry Settings with Final Cut Pro.
Some Media Formats Require Rendering to a Different Format
There are many codecs that Color supports for media import, such as the XDCAM, MPEG
IMX, and HDV families of codecs, that cannot be used as the export format when rendering
out of Color. Most of these are formats which, because they’re so highly compressed,
would be unsuitable for mastering. Additionally, many of these formats use “squeezed”
anamorphic frame sizes, rather than the standard full-raster SD and HD frame sizes that
programs are typically mastered to. For all of these codecs, two things happen when you
render media for output:
• Media formats that are unsupported for output will be rendered using a different codec: If
the media in your project uses a codec that’s not supported for output, then every shot
in your project will be rendered using a different codec that is supported. In these cases,
Color supports a specific group of codecs, suitable for mastering, that are either lightly
compressed or completely uncompressed. You can choose which of these codecs
to render your media with by choosing from the Resolution and Codec Settings controls
in the Project Settings tab of the Setup room.
• Media formats that are rendered using a different codec will be rendered full raster: If you’re
rendering using a different codec, all anamorphic media in your project will be resized
to the closest full-raster frame size. For example, media using the anamorphic
1280 x 1080 or 1440 x 1080 frame sizes will be rendered using the standard
1920 x 1080 frame size.
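The resizing rule can be summarized as a lookup from anamorphic frame sizes to the closest full-raster size. The table below covers only the examples given in the text; it is illustrative, not an exhaustive list of Color’s conversions.

```python
# Anamorphic ("squeezed") sizes mapped to the nearest full-raster size;
# entries are limited to the examples mentioned in the text.
FULL_RASTER = {
    (1280, 1080): (1920, 1080),
    (1440, 1080): (1920, 1080),
}

def output_frame_size(width: int, height: int) -> tuple[int, int]:
    """Return the frame size rendered media would use: anamorphic sizes
    are promoted to full raster, everything else passes through."""
    return FULL_RASTER.get((width, height), (width, height))
```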
Whenever rendering your project changes the codec, frame size, or both, you are presented
with a dialog when you send your project to Final Cut Pro that asks: “Change graded
Final Cut Pro sequence to match the QuickTime export codec?”
• If you click Yes to change the sequence settings to match the graded media rendered
by Color, then the codec used by the sequence sent to Final Cut Pro will be changed
from the one that was originally sent to Color. Also, the frame size of the sequence will
change to match the frame size of the rendered media.
• If you click No, the settings of the sequence that Color sends back to Final Cut Pro will
be identical to those of the sequence that was originally sent from Final Cut Pro to
Color, but the codec used by the clips won’t match that of the sequence, and the
rendered clips will have their scale and aspect ratio altered to fit the original frame size.
For a complete list of which codecs are supported by Color, see Compatible QuickTime
Codecs for Import.
For a list of the mastering codecs that Color supports for output, see Compatible QuickTime
Codecs for Output.
Rendering Mixed Format Sequences
If you edit together a mixed format sequence in Final Cut Pro—for example, combining
standard definition and high definition clips of differing formats—you can still send it to
Color, as long as each clip of media throughout the sequence is in a format that’s
compatible with Color.
When you render the finished project, how the final media is processed depends on the
format you’re rendering to:
• If you’re rendering QuickTime media to send back to Final Cut Pro: Each shot is individually
rendered with the same frame size, aspect ratio, and interlacing as the original media
file it’s linked to. Regardless of the project’s resolution preset, standard definition shots
are rendered as standard definition, high definition shots are rendered as high definition,
progressive frame shots are rendered progressive, and interlaced shots are rendered
interlaced. On the other hand, every shot in the project is rendered using the QuickTime
export codec that’s specified in the Project Settings tab of the Setup room, and if the
original frame size is a nonstandard high definition frame size, then it is changed to
the nearest full-raster frame size when rendered.
When you send the project back to Final Cut Pro, the Position, Scale, Aspect Ratio, and
Rotation parameters of each shot in the Pan & Scan tab of the Geometry room are
passed back to each clip’s corresponding Motion tab settings in Final Cut Pro, so that
all of the clips conform to the sequence settings as they did before. However, each
rendered media file in the project that was sent back to Final Cut Pro should have the
same frame size, aspect ratio, and interlacing as the original media files that were
originally sent to Color.
If the original frame size of the sequence was a nonstandard high definition frame size,
then you have the option of either changing the sequence frame size when you send
the project back to Final Cut Pro to match that of the full-raster media rendered by
Color, or leaving it alone. In either case, the Motion tab settings for each clip in
Final Cut Pro are automatically adjusted so that all clips fit into the returned sequence
in the same way as they did in Color.
Ultimately, it’s up to Final Cut Pro to transform and render all clips that don’t match
the current sequence settings as necessary to output the program to whichever format
you require.
• If you’re rendering 4K native RED QuickTime media, or DPX or Cineon image sequences to
be output by a film printer: In this case, all shots are rendered according to the Position,
Scale, Aspect Ratio, and Rotation settings in the Pan & Scan tab settings, with the final
frame size conforming to the currently specified resolution preset. The final result is a
series of DPX or Cineon image sequences with uniform frame sizes.
Mixing Frame Rates Is Not Recommended
Mixed format sequences are extremely convenient during the offline edit of a project
that incorporates a wide variety of source material. For example, it’s extremely common
to mix high definition and standard definition clips in documentary programs. In many
cases, you can mix formats with different frame sizes and finish your program using the
original media without problems.
However, it’s not recommended to send a sequence to Color that mixes clips with
different frame rates, particularly when mixing 23.98 fps and 29.97 fps media. The
resulting graded media rendered by Color may have incorrect timecode and in or out
points that are off by a frame.
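To see why this off-by-a-frame drift can happen, note that a 29.97 fps frame boundary almost never coincides with a 23.98 fps frame boundary, so converted edit points must be rounded to the nearest frame. The following is illustrative arithmetic only, not Color's actual timecode code:

```python
# Illustrative only: map a frame index at 29.97 fps to the nearest frame
# index at 23.98 fps. The ratio (24000/1001) / (30000/1001) is exactly 0.8.

def nearest_2398_frame(frame_2997):
    exact = frame_2997 * (24000 / 1001) / (30000 / 1001)
    return round(exact)

# Two adjacent 29.97 fps frames can round to the same 23.98 fps frame,
# so an In or Out point can end up one frame off after conversion.
print(nearest_2398_frame(97), nearest_2398_frame(98))  # → 78 78
```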
Furthermore, when outputting to tape, all sequences should consist of clips with
matching frame rates and field handling (progressive or interlaced) for the highest
quality results.
If you have one or more clips in your sequence with a frame rate or field handling
standard that don’t match those of the sequence, you can use Compressor to do a
standards conversion of the mismatched clips. For more information, see Final Cut Studio
Workflows, available at http://documentation.apple.com/en/finalcutstudio/workflows.
Rendering Projects That Use Multiclips
If you’re working on a project that was edited using the multicamera editing features in
Final Cut Pro, the multiclips in your sequence need no special preparation for use in Color.
(They can be sent to Color either collapsed or uncollapsed.) However, no matter how
many angles a multiclip may have had in Final Cut Pro, once a sequence is sent to Color,
only the active angle for each multiclip is visible for grading and rendering. The resulting
sequence of rendered media that is sent back to Final Cut Pro consists of ordinary clips.
The Render Queue Interface
You specify which shots in the program you want to render using the Render Queue list.
Whenever you add shots to this list, they’re organized by shot number. The order in which
shots appear in this list dictates the order in which they’re rendered—the topmost
unrendered shot in the list is rendered first, and then rendering continues for the next
unrendered shot on the list, and so on until the end of the list is reached.
• Render Queue list: Six columns of information appear in the Render Queue:
• Number column: Identifies that shot’s numeric position in the Timeline. All shots in
the Render Queue are listed in ascending order based on their ID number.
• Shot Name column: Shows a thumbnail and the name of the shot.
• In column: The first frame of media that will be rendered for that shot. This timecode
is equal to the Project In point minus the current Handles value specified in the Project
Settings tab of the Setup room.
• Out column: The last frame of media that will be rendered for that shot. This timecode
is equal to the Project Out point plus the current Handles value specified in the
Project Settings tab of the Setup room. If there is no extra media available on disk
for handles at the beginning or end of shots, then handles will not be added.
• Grade ID column: Shows the currently selected grade for that shot. You can queue
the same shot up to four times with different grades enabled, in order to render
media for each grade associated with that shot.
• Progress column: This is the column where a render bar appears to let you know how
long that shot is taking to render. If the shot is not currently rendering, this column
shows the render status of that shot (queued, rendering, or rendered).
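The handle behavior described for the In and Out columns reduces to simple clamping arithmetic. The function and parameter names below are hypothetical (Color exposes no scripting API for this); the sketch only mirrors the rule that handles extend each side of a shot, but never beyond the media that exists on disk:

```python
# Hypothetical sketch: compute the first and last frame rendered for one
# shot, extending the edit points by the Handles value but clamping to
# the frames that actually exist on disk.

def render_range(shot_in, shot_out, handles, media_first, media_last):
    first = max(shot_in - handles, media_first)   # no extra media, no head handle
    last = min(shot_out + handles, media_last)    # likewise for the tail
    return first, last

print(render_range(100, 200, 8, 95, 300))  # → (95, 208)
```

In this example the head handle is clamped to frame 95 because only five extra frames exist before the In point, while the full eight-frame tail handle fits.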
Render Queue Controls
The following buttons beneath the Render Queue list let you add shots to the queue,
remove them, and initiate rendering.
• Add Unrendered: Adds all currently unrendered shots to the Render Queue.
• Add Selected: Adds all currently selected shots to the Render Queue.
• Add All: Adds every shot in the Timeline to the Render Queue. Shots that have already
been rendered are also placed in the queue and will be rerendered unless they’re first
removed. Shots that are rerendered overwrite the previously rendered media.
• Remove Selected: Removes only shots that you’ve selected from the Render Queue.
• Clear Queue: Removes all shots from the Render Queue.
• Start Render: Initiates rendering for all the shots that have been placed into the Render
Queue. This button has the same function as the Render > Start Render menu command.
Important: Once you’ve initiated rendering, you can stop it by pressing either Escape
or Control-Period. When you’ve stopped rendering, whichever shot was interrupted
will need to be rerendered from its In point.
How to Render Shots in Your Project
The Render Queue is designed to let you manage the rendering of your project any way
you like. You can add every shot in the program to the Render Queue in order to render
everything at once, or you can add only the shots that were completed that day as part
of a process of rendering your project incrementally.
However you decide to render the media in your project, the process is pretty much the
same: you check your project and shot settings, add shots to the Render Queue, and then
use the Start Render command.
To check your Project Settings and User Preferences before you add shots to the Render
Queue
1 Before you add any shots to the Render Queue, always double-check the Render Directory
field in the Project Settings tab of the Setup room, to make sure that you’re using the
correct render directory. Otherwise, your media may not be rendered where you expect
it to be.
2 Next, check the following parameters in the Project Settings tab, since they affect how
your media is rendered:
• Display LUT: If you have a display LUT applied to your project, it will be rendered into
the output. If you were using the LUT to simulate an output profile (for example, film
printing), you don’t want this to happen. Choose File > Clear Display LUT to prevent
the LUT from affecting the rendered output. For more information, see Using Display
LUTs.
• Resolution Presets: If you change the resolution preset to a different frame size than
the one the project was originally set to, how that frame size affects the rendering of
your final graded media depends on whether your project uses ordinary QuickTime
media, native RED QuickTime media, or DPX/Cineon media. For more information, see
Resolution and Codec Settings.
• Render File Type: This setting determines whether you render QuickTime media
(appropriate for sending back to Final Cut Pro), or DPX or Cineon image sequences
(appropriate for printing to film). For more information, see Resolution and Codec
Settings.
• Printing Density: If you’re rendering DPX media, make sure that Printing Density is set
to the correct format. For more information, see Choosing Printing Density When
Rendering DPX Media.
• Deinterlace Renders: This setting forces Color to deinterlace all media that’s rendered.
Color does not have a sophisticated deinterlacing method, so this setting is inappropriate
for high-quality output. For more information, see Resolution and Codec Settings.
• QuickTime Export Codecs: Choose the QuickTime codec you want to use for rendering
your final output. The list of available codecs is limited to mastering-quality codecs
including Apple ProRes and Uncompressed. For more information, see Compatible
QuickTime Codecs for Output.
• Broadcast Safe: Turning Broadcast Safe on or off affects whether out-of-gamut values
are clipped when the output media is rendered. For more information, see Broadcast
Safe Settings.
3 Lastly, open the User Prefs tab and check the following settings:
• Internal Pixel Format: Make sure that the Internal Pixel Format is set to the correct bit
depth. If you graded your program with Internal Pixel Format set to 8- through 16-bit,
changing it to Floating Point may alter how certain Color FX operations work. If you
intend to work at a lower bit depth but render at Floating Point, it’s a good idea to
double-check all shots with Color FX corrections applied to them prior to rendering to
make sure that they look the way you intended.
• Render Proxy: If you’re rendering Cineon or DPX image sequences, or RED QuickTime
files, and you’re delivering full-quality files, make sure that the Render Proxy pop-up
menu is set to Full Resolution.
To render one or more shots in your program
1 Go through the Timeline and, for each of the shots you’re planning on rendering, choose
the grade you want to render.
The grade you select for each shot determines which grade is rendered when you add a
shot to the Render Queue.
2 Do one of the following to add shots to the Render Queue list:
• Click Add All, or choose Render Queue > Add All (or press Option-Shift-A) to add the
current grade for every shot in the project.
• Click Add Unrendered, or choose Render Queue > Add Unrendered to add only the
shots that haven’t yet been rendered.
• Select one or more shots, then click Add Selected, or choose Render Queue > Add
Selected (or press Option-A) to add only the selected shots.
• Turn on the beauty grade designation for specific shots to indicate which grades are
preferred or which shots you want to render, then choose Render > Add All Beauty
Grades. (Shots without beauty grade designations aren’t added to the Render Queue.)
Once you add shots to the Render Queue list, the status of each of the shots that you
add changes to Queued in the Shots browser. In the Timeline, each of the shots that
you added appears with a yellow status bar over the currently used grade for each
queued shot, to show you which of the available grades is being rendered.
Note: You can add a shot to the Render Queue with one grade enabled, then choose
another grade for that shot and add it to the Render Queue again to render both grades
for that shot.
3 Click Start Render, or choose Render Queue > Start Render (or press Command-P).
Tip: You may find that your program renders more quickly if you set the Video Output
pop-up menu in the User Prefs tab of the Setup room to Disabled.
The shots in the Render Queue start rendering. A green progress bar appears in the
Progress column of the first unrendered shot in the list, which shows how long that shot
is taking to render.
At the same time, the render bar appearing above the Timeline ruler for the shot being
rendered gradually turns green to mirror the progress of the render, while the grade bar
that’s currently being rendered turns magenta.
Once the first shot in the Render Queue has finished rendering, the next one begins, and
rendering continues from the top to the bottom of the list until the last shot is rendered.
All rendered shots in the Timeline appear with a green render bar above the Timeline
ruler and a green status bar over the grade that was rendered.
Note: To pause rendering, press Escape (whichever shot is interrupted will have to start
rendering over again from its beginning). You can click Start Render again to resume
rendering.
All rendered media is written to that project’s render directory, which is specified in the
Project Settings tab of the Setup room. The render directory is organized into numbered
subdirectories, with one subdirectory corresponding to each shot in your project’s Timeline.
The number of each subdirectory corresponds to each shot’s number in the Number
column of the Render Queue. Each of these subdirectories contains up to four rendered
sets of media corresponding to each rendered grade.
To save and export your program after rendering
1 After you’ve rendered all of the clips in your project, it’s important to save the project
immediately. The rendered status of each shot in the Timeline includes the path of each
rendered media file, which is used to relink the media when your project is sent back to
Final Cut Pro.
2 Once your Color project is safely saved, you need to send the project to the environment
in which it will be output.
• In a Final Cut Pro roundtrip: Send the project back to Final Cut Pro using the File > Send
To > Final Cut Pro command. For more information, see Sending Your Project Back to
Final Cut Pro.
• If you’re rendering for film output: The next step is to use the File > Gather Rendered
Media command to prepare the final image sequence that will be output to film. For
more information, see Gather Rendered Media.
For more information about options in the Project Settings tab or User Prefs tab in the
Setup room, see The Project Settings Tab or The User Preferences Tab.
Rendering Multiple Grades for Each Shot
Each shot in your Color project uses one of up to four possible grades. As you work, you
have the ability to freely change which grade is used by any shot, switching among
different looks as necessary during the development of the program’s aesthetic.
You also have the ability to render each of a shot’s grades individually, or together. This
way, whenever there’s a scene where the client might approve one of four different looks,
you can hedge your bets by rendering all versions.
Color keeps track of which grade is currently selected when you send that project back
to Final Cut Pro, or when you use the Gather Rendered Media command, and makes sure
that the appropriate render file is used.
Each rendered grade is numbered. For example, if you rendered two different grades in
a QuickTime-based project for shot number 1, the subdirectory for that shot would have
two files, named 1_g1.MOV and 1_g2.MOV, with the number immediately after
the g indicating which grade each file corresponds to.
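Assuming the layout just described, the path of a rendered grade can be composed mechanically. This is only a sketch of the naming convention from the text; the helper function is hypothetical, not part of Color:

```python
from pathlib import Path

def rendered_media_path(render_dir, shot_number, grade_number):
    """Path of the QuickTime render for one grade of one shot:
    <render_dir>/<shot>/<shot>_g<grade>.MOV"""
    return Path(render_dir) / str(shot_number) / f"{shot_number}_g{grade_number}.MOV"

print(rendered_media_path("/Volumes/Media/Renders", 1, 2))
# → /Volumes/Media/Renders/1/1_g2.MOV
```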
To render multiple grades for a single shot
1 Move the playhead to a shot you want to render, and choose the first grade you want to
render out for that shot.
2 Select that shot, click the Render Queue tab, then click Add Selected to add that shot to
the Render Queue.
3 Change the grade used by that shot to the next one you want to render.
4 Click Add Selected again to add that shot to the Render Queue a second time.
Every grade that’s queued for rendering appears with a yellow render bar over the grade
bar in the Timeline.
The Grade ID column in the Render Queue shows you which grades you’ve selected to
render for each shot.
Managing Rendered Shots in the Timeline
Once you’ve rendered shots in the Timeline, they stay rendered unless you make a change
to the grade. When you change the grade of a shot that’s already been rendered, its
render bar will once again turn red, showing that its current state is unrendered. Rendering
the new state of the grade for that shot overwrites the previous render file.
If you try to add a shot that’s currently shown as having been rendered to the Render
Queue (for example, you’ve inadvertently included one or more shots that have already
been rendered in a selection of shots you want to render), a dialog warns you which shots
will be rerendered, with the option to leave them out of the queue.
Clicking Yes forces Color to add them to the Render Queue, where they will be rendered
a second time.
Examining the Color Render Log
Every time you render shots in a project, information about what was rendered, when it
was rendered, and how long it took to render is written to a color.log file. This information
can be used to benchmark your system, troubleshoot rendering issues, and keep a record
of how long different projects take to render. Every time you render anything in any Color
project, information about that rendering session is appended to this one log.
Whenever you click Start Render, the date and time the render was started and the
number of clips queued for rendering are written into the log, followed by information
and statistics about each clip that is rendered. This information includes:
• Path the rendered file was written to
• Resolution of the rendered file
• QuickTime Codec or Format
• Number of Frames rendered in each rendered file
• Time to render
• Performance (in frames per second)
The date and time that rendering was completed appears after the end of each session’s
individual clip entries.
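The performance figure in the log is simply the frame count divided by the render time. A trivial sketch of that arithmetic (the log's exact field layout is Color's own and isn't modeled here):

```python
def render_performance(frames_rendered, render_seconds):
    """Frames per second achieved by a render, as reported in color.log."""
    return frames_rendered / render_seconds

# A 480-frame shot that took one minute rendered at 8 fps.
print(render_performance(480, 60.0))  # → 8.0
```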
The color.log file is stored in the /Users/username/Library/Logs directory. However, you can
view this log from within Color.
To see the Color render log
• Choose Render Queue > Show Render Log.
The render log appears in a Console window.
You have the option of clearing out the color.log file if it becomes too long.
To clear the Color render log
• With the Render Log window showing, click Clear Display.
Choosing Printing Density When Rendering DPX Media
When you’re rendering DPX image sequences in preparation for printing to film, it’s
important that you choose the appropriate Printing Density from the Project Settings tab
of the Setup room. Consult with the film printing facility you’re working with to determine
the right setting for your program.
Note: Choosing Cineon as the Render File Type limits the Printing Density to Film (95
Black - 685 White : Logarithmic), while choosing QuickTime as the Render File Type limits
it to Linear (0 Black - 1023 White).
The Printing Density pop-up menu lets you choose how to map 0 percent black and 100
percent white in each color-corrected shot to the minimum and maximum numeric ranges
that each format supports. Additionally, the option you choose determines whether or
not super-white values are preserved. There are three possible settings:
• Film (95 Black - 685 White : Logarithmic): The minimum and maximum values of 0 and
100 percent in Color’s scopes correspond to the digital values of 95 and 685 in rendered
DPX files. Super-white values above 100, if present in Color, are preserved using this
format.
• Video (64 Black - 940 White : Linear): The minimum and maximum values of 0 and 100
percent in Color’s scopes correspond to the digital values of 64 and 940 in rendered
DPX files. Super-white values above 100, if present in Color, are preserved using this
format.
• Linear (0 Black - 1023 White): The minimum and maximum values of 0 and 100 percent
in Color’s scopes correspond to the digital values of 0 and 1023 in rendered DPX files.
Super-white values, if present in Color, are clipped using this format when rendering
DPX files.
This is also the default setting for QuickTime output. When rendering QuickTime files,
super-white values above 100 are preserved if the QuickTime export codec is set to a
Y′CBCR-compatible codec, such as Apple ProRes 422 (HQ) or 10-bit Uncompressed 4:2:2.
If you’re rendering to an RGB-compatible codec, such as Apple ProRes 4444, super-white
values are clipped.
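The black and white anchor points of the three Printing Density options can be summarized numerically. The sketch below interpolates linearly between each option's black and white code values; note that this models the endpoints only (the actual Film option applies a logarithmic transfer, which is not reproduced here):

```python
# Endpoint anchors for each Printing Density option (10-bit code values).
RANGES = {
    "film":   (95, 685),    # Film (95 Black - 685 White : Logarithmic)
    "video":  (64, 940),    # Video (64 Black - 940 White : Linear)
    "linear": (0, 1023),    # Linear (0 Black - 1023 White)
}

def code_value(percent, density):
    """10-bit code for a signal level in percent; linear endpoint model only."""
    black, white = RANGES[density]
    code = black + (percent / 100.0) * (white - black)
    if density == "linear":
        code = min(code, 1023)  # super-white is clipped in this mode
    return round(code)

print(code_value(0, "film"), code_value(100, "film"))  # → 95 685
print(code_value(109, "linear"))                       # → 1023 (super-white clipped)
```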
Gather Rendered Media
The Gather Rendered Media command can only be used if the shots of a project have
been rendered as a series of DPX or Cineon image sequences. This command is used to
reorganize all of a project’s rendered image sequence media in preparation for delivery
to a film printer.
This operation organizes your rendered image sequences in three ways:
• Every rendered frame of media for your project is placed within a single directory.
• Every frame of media for your project is renamed to create a single, continuous range
of frames from the first to the last image of the rendered project.
• All video transitions in your project are rendered as linear dissolves. Only dissolves are
rendered; any other type of transition appearing in your project, such as a wipe or iris,
will be rendered as a dissolve instead.
Important: You cannot gather media in an XML-based roundtrip.
To gather rendered media
1 Choose File > Gather Rendered Media.
2 Choose one of three options for gathering the rendered media for your project:
• Copy Files: Makes duplicates of the image sequence files, but leaves the originally
rendered files in the render directory.
• Move Files: Copies the image sequence files, and then deletes the originally rendered
files from the render directory.
• Link Files: Creates aliases of the originally rendered files in the render directory. This is
useful if you want to process the frames using an application on your computer, and
you don’t want to duplicate the media unnecessarily. This is not useful if you’re intending
to transport the media to another facility, since the alias files only point to the original
media in the render directory, and contain no actual image data.
3 Click Create New Directory if you want to place the gathered media inside of a new
directory.
4 Click Gather.
Every rendered frame of every shot in your project is renamed, renumbered, and placed
in the directory you specified, ready for further processing or delivery.
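The renaming and renumbering can be pictured as follows. The file names, zero-padding, and function are hypothetical; only the idea of collapsing per-shot image sequences into one continuous frame range comes from the text above:

```python
def gather(frame_counts):
    """frame_counts: frames per shot, in Timeline order.
    Returns one continuous, renumbered sequence of file names."""
    names, frame = [], 1
    for count in frame_counts:
        for _ in range(count):
            names.append(f"program.{frame:07d}.dpx")
            frame += 1
    return names

names = gather([3, 2])  # a 3-frame shot followed by a 2-frame shot
print(names[0], names[-1])  # → program.0000001.dpx program.0000005.dpx
```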
Appendix A: Calibrating Your Monitor

When using analog devices, make sure they are calibrated for accurate brightness and
color so you can color correct your video accurately.
This appendix covers the following:
• About Color Bars (p. 405)
• Calibrating Video Monitors with Color Bars (p. 405)
About Color Bars
Color bars are an electronically generated video signal that meets very strict specifications.
Because the luma and chroma levels are standardized, you can use color bars passing
through different components of a video system to see how each device is affecting the
signal.
NTSC and PAL each have specific color bar standards, and even within NTSC and PAL
there are several standards. When you evaluate color bars on a video scope, it is important
to know which color bars standard you are measuring, or you may make improper
adjustments. “SMPTE bars” is a commonly used standard.
When Should You Use Color Bars?
Analog devices always need to be calibrated and adjusted, even if only by minute
degrees. This is because heat, age, noise, cable length, and many other factors subtly
affect the voltage of an analog electronic video signal, which affects the brightness and
color of the video image. Color bars provide a reference signal you can use to calibrate
the output levels of an analog device.
Calibrating Video Monitors with Color Bars
Editors and broadcast designers shouldn’t rely on an uncalibrated monitor when making
crucial adjustments to the color and brightness of their programs. Instead, it’s important
to use a calibrated broadcast monitor to ensure that any adjustments made to exposure
and color quality are accurate.
Monitors are calibrated using SMPTE standard color bars. Brightness and contrast are
adjusted by eye, using the color bars onscreen. Adjusting chroma and phase involves
using the “blue only” button found on professional video monitors. This calibration should
be done to all monitors in use, whether they’re in the field or in the editing room.
To calibrate your monitor
1 Connect a color bars or test pattern generator to the monitor you’re using, or output one
of the built-in color bars generators in Final Cut Pro.
Important: Avoid using still image graphics of color bars. For more information, see Y′CBCR
Rendering and Color Bars.
2 Turn on the monitor and wait approximately 30 minutes for the monitor to “warm up”
and reach a stable operating temperature.
3 Select the appropriate input on the video monitor so that the color bars are visible on
the screen.
Near the bottom-right corner of the color bars are three black bars of varying intensities.
Each one corresponds to a different brightness value, measured in IRE. (IRE originally
stood for Institute of Radio Engineers, which has since merged into the modern IEEE
organization; the measurement is a video-specific unit of voltage.) These are the PLUGE
(Picture Lineup Generation Equipment) bars, and they allow you to adjust the brightness
and contrast of a video monitor by helping you establish what absolute black should be.
4 Turn the chroma level on the monitor all the way down.
This is a temporary adjustment that allows you to make more accurate luma adjustments.
The Chroma control may also be labeled color or saturation.
5 Adjust the brightness control of your monitor to the point where you can no longer
distinguish between the two PLUGE bars on the left and the adjacent black square.
At this point, the brightest of the bars (11.5 IRE) should just barely be visible, while the
two PLUGE bars on the left (5 IRE and 7.5 IRE) appear to be the same level of black.
6 Now, turn the contrast all the way up so that this bar becomes bright, and then turn it
back down.
The point where this bar is barely visible is the correct contrast setting for your monitor.
(The example shown below is exaggerated to demonstrate.)
When monitor brightness and contrast are properly adjusted, this strip should barely be visible above black.
When adjusting the contrast, also watch the white square in the lower left. If the contrast
is too high, the white square appears to “spill” into the surrounding squares. Adjust the
contrast until the luma of the white square no longer spills into surrounding squares.
Important: Contrast should only be adjusted after brightness.
7 Once you have finished adjusting luma settings, turn up the Chroma control to the middle
(detent) position.
Note: Some knobs stop subtly at a default position. This is known as the detent position
of the knob.

If you’re adjusting a PAL monitor, then you’re finished. The next few steps are color
adjustments that only need to be made to NTSC monitors.
8 Press the “blue only” button on the front of your monitor to prepare for the adjustment
of the Chroma and Phase controls.
Note: This button is usually only available on professional monitors.
9 Make the following adjustments based on the type of video signal you’re monitoring:
• If you’re monitoring an SDI or component Y′CBCR signal, you only need to adjust the
Chroma control so that the tops and bottoms of the alternating gray bars match. This
is the only adjustment you need to make, because the Phase control has no effect with
SDI or component signals.
• If you’re monitoring a Y/C (also called S-Video) signal, it’s being run through an RGB
decoder that’s built into the monitor. In this case, adjust both the Chroma and Phase
controls. The chroma affects the balance of the outer two gray bars; the phase affects
the balance of the inner two gray bars. Adjustments made to one of these controls
affects the other, so continue to adjust both until all of the gray bars are of uniform
brightness at top and bottom.
Note: The step in the second bullet also applies to the monitoring of composite signals,
but monitoring via a composite signal is not recommended when you’re doing color
correction.
Once your monitor is correctly calibrated, all the gray bars will be evenly gray and all the
black bars evenly black.
When the phase (similar to hue) of the monitor is correctly adjusted, you should see alternating bars of gray and black, as shown.
Y′CBCR Rendering and Color Bars
Y′CBCR rendering must be supported by the codec used in a sequence in order for
Final Cut Pro to render color bars with a PLUGE (Picture Lineup Generation Equipment)
area that includes a super-black (4 IRE in NTSC, 2 IRE in PAL) signal for calibration. The
PLUGE part of the test signal cannot be rendered using an RGB-based codec.
Appendix B: Keyboard Shortcuts in Color

This appendix shows the various keyboard shortcuts that are available while working in
Color.
This appendix covers the following:
• Project Shortcuts (p. 409)
• Switching Rooms and Windows (p. 410)
• Scopes Window Shortcuts (p. 411)
• Playback and Navigation (p. 411)
• Grade Shortcuts (p. 412)
• Timeline-Specific Shortcuts (p. 413)
• Editing Shortcuts (p. 413)
• Keyframing Shortcuts (p. 414)
• Shortcuts in the Shots Browser (p. 414)
• Shortcuts in the Geometry Room (p. 414)
• Still Store Shortcuts (p. 414)
• Render Queue Shortcuts (p. 415)
Project Shortcuts

Keyboard shortcut    Function
N    New project
O    Open project
R    Revert to last saved state of the current project
S    Save project
Option-S    Save archive as; allows you to name an archive
Option-A    Open archived version of project
I    Import clip (opens the File browser in the Setup room)
Option-G    Gather Rendered Media (only for Cineon or DPX projects)
Z    Undo; press Command-Z a second time to restore the change
X    Cut
C    Copy
V    Paste
A    Select All
Shift-A    Deselect All
Shift-?    Open Color Help
Switching Rooms and Windows

Keyboard shortcut    Function
1    Open Setup room
2    Open Primary In room
3    Open Secondaries room
4    Open Color FX room
5    Open Primary Out room
6    Open Geometry room
7    Open Still Store
8    Open Render Queue
9    Open Project Settings tab in the Setup room
0    Open Shots browser in the Setup room
Shift-1    Select Color window
Shift-2    Select Scopes window
Shift-0    Switches between single display and dual display modes the next time Color is opened
Scopes Window Shortcuts

Keyboard shortcut    Function
W    Change scope to Waveform
V    Change scope to Vectorscope
H    Change scope to Histogram
C    Change scope to 3D Scope
Playback and Navigation

Keyboard shortcut    Function
Space    Switches between play and stop
J    Play backward
K    Stop
L    Play forward
Down Arrow    Move playhead to next shot
Up Arrow    Move playhead to previous shot
Left Arrow    Move playhead back one frame
Right Arrow    Move playhead forward one frame
Home    Go to beginning of Timeline
End    Go to end of Timeline
Shift-Control-M    Switch playback mode
I    Set In point in Timeline for playback
O    Set Out point in Timeline for playback
Grade Shortcuts

Keyboard shortcut    Function
Control-1    Create new grade/switch to grade 1
Control-2    Create new grade/switch to grade 2
Control-3    Create new grade/switch to grade 3
Control-4    Create new grade/switch to grade 4
Control-G    Turns grade on/off
Shift-Control-B    Set current grade as the beauty grade
Shift-Option-Control-1    Copy current grade to memory bank 1
Shift-Option-Control-2    Copy current grade to memory bank 2
Shift-Option-Control-3    Copy current grade to memory bank 3
Shift-Option-Control-4    Copy current grade to memory bank 4
Shift-Option-Control-5    Copy current grade to memory bank 5
Shift-Option-1    Paste grade from memory bank 1
Shift-Option-2    Paste grade from memory bank 2
Shift-Option-3    Paste grade from memory bank 3
Shift-Option-4    Paste grade from memory bank 4
Shift-Option-5    Paste grade from memory bank 5
Timeline-Specific Shortcuts

Keyboard shortcut    Function
-    Zoom out
=    Zoom in
Shift-Z    Zoom to fit every shot into the available width of the Timeline
F    Set Timeline ruler to frames
S    Set Timeline ruler to seconds
M    Set Timeline ruler to minutes
H    Set Timeline ruler to hours
Tab    Switch Timeline ruler between frames/seconds/minutes/hours
A    Select all shots in the Timeline
Shift-A    Deselect all shots in the Timeline
Shift-click    Select a contiguous region of clips in the Timeline
Command-click    Select a noncontiguous region of clips in the Timeline
Editing Shortcuts

Keyboard shortcut    Function
Control-S    Choose Select tool
Control-R    Choose Roll tool
Control-T    Choose Ripple tool
Control-Y    Choose Slip tool
Control-X    Choose Split tool
Control-Z    Choose Splice tool
Control-V    Create an edit at the position of the playhead
Control-B    Merge an edit at the position of the playhead
Keyframing Shortcuts

Keyboard shortcut    Function
Control-8    Change keyframe interpolation type at position of playhead
8    Change keyframe interpolation type at position of playhead
Control-9    Add keyframe at position of playhead
9    Add keyframe at position of playhead
Control-0    Delete keyframe at position of playhead
0    Delete keyframe at position of playhead
Option-Left Arrow    Move playhead to previous keyframe of current shot in current room
Option-Right Arrow    Move playhead to next keyframe of current shot in current room
Shortcuts in the Shots Browser
Keyboard shortcut Function
Assign selected shots into a group
G
Center the Shots browser
F
Shortcuts in the Geometry Room
Keyboard shortcut Function
Frame the preview image in the Geometry room
F
Still Store Shortcuts
Keyboard shortcut Function
Enable currently loaded still
control U
Save frame at the current position of the playhead to the Still Store
control I
Render Queue Shortcuts
Keyboard shortcut Function
Add selected shots to the Render Queue
option A
Add all shots in the Timeline to the Render Queue
shift option A
Start Render
P
Appendix C: Using Multi-Touch Controls in Color

The tables in this section show the various Multi-Touch controls that are available in Color.
Multi-Touch controls require a Multi-Touch capable input device.
This appendix covers the following:
• Multi-Touch Control of the Timeline (p. 417)
• Multi-Touch Control in the Shots Browser (p. 417)
• Multi-Touch Control of the Scopes (p. 418)
• Multi-Touch Control in the Geometry Room (p. 418)
• Multi-Touch Control in the Image Preview of the Scopes Window (p. 419)
Multi-Touch Control of the Timeline
The following Multi-Touch controls let you modify the Timeline’s display.
Multi-Touch Gesture Description
Pinch close Zoom out
Pinch open Zoom in
Two-finger scroll Pan/scroll the Timeline
Three-finger swipe left Select the previous shot
Three-finger swipe right Select the next shot
Three-finger swipe up Select the previous grade
Three-finger swipe down Select the next grade
Rotate right Scrub the playhead forward
Rotate left Scrub the playhead back
Multi-Touch Control in the Shots Browser
The following Multi-Touch controls let you navigate the Shots browser when it’s in icon
view.
Multi-Touch Gesture Description
Pinch close Shrink icons
Pinch open Enlarge icons
Two-finger scroll Pan around the image preview
Multi-Touch Control of the Scopes
The following Multi-Touch controls let you modify the display of the Video Scopes.
Multi-Touch Gesture Description
Pinch close (Waveform, Zoom out
Vectorscope, 3D scope)
Pinch open (Waveform, Zoom in
Vectorscope, 3D scope)
Rotate left (3D scope) Rotates the 3D scope to the left
Rotate right (3D scope) Rotates the 3D scope to the right
Multi-Touch Control in the Geometry Room
The following Multi-Touch controls let you make adjustments to each shot’s onscreen
controls (scroll, pinch, or rotate inside the onscreen control box), or to the preview display
in the Geometry room (scroll or pinch outside the onscreen control box).
Multi-Touch Gesture Description
Pinch close (inside the onscreen Shrink image
control box)
Pinch open (inside the onscreen Enlarge image
control box)
Two-finger scroll (inside the Pan/scan image
onscreen control box)
Rotate left (inside the onscreen Rotate image left
control box)
Rotate right (inside the onscreen Rotate image right
control box)
Two-finger scroll (outside the Pan/scroll preview image
onscreen control box)
Pinch close (outside the Zoom out of the image preview
onscreen control box)
Pinch open (outside the Zoom into the image preview
onscreen control box)
Multi-Touch Control in the Image Preview of the Scopes Window
The following Multi-Touch controls let you make adjustments to each shot’s Pan & Scan
settings in the Geometry room, without having that room open.
Multi-Touch Gesture Description
Pinch close Shrink image
Pinch open Enlarge image
Two-finger scroll Pan/scan image
Rotate left Rotate image left
Rotate right Rotate image right
Appendix D: Setting Up a Control Surface

Color is compatible with control surfaces from JLCooper and Tangent Devices.
A control surface lets you make simultaneous adjustments to multiple parameters while
you work. Not only is this faster, but it allows you to interactively make complex color
adjustments to different areas of the image at once. This appendix describes how to
connect and configure compatible control surfaces to your computer for use with Color.
This appendix covers the following:
• JLCooper Control Surfaces (p. 421)
• Tangent Devices CP100 Control Surface (p. 426)
• Tangent Devices CP200 Series Control Surface (p. 429)
• Customizing Control Surface Sensitivity (p. 434)
JLCooper Control Surfaces
JLCooper makes a variety of control surfaces that are compatible with both Color and
Final Cut Pro. The MCS family of control surfaces have both navigational and color
correction–specific controls in a variety of configurations. The Eclipse CS is an improved
version of the MCS-3000 and MCS-Spectrum that combines both units into a single control
surface.
[Figure: JLCooper control surface, showing the three joyballs and contrast wheels with their reset buttons (R1–R3, B1–B3), PAGE 1–8 buttons, function buttons F1–F8, memory buttons M1–M5, soft knobs W1–W7, bank buttons, timecode display, keypad, and jog/shuttle wheel]
To use compatible JLCooper control surfaces with Color, you need the following:
• Eclipse CX, MCS-3000, MCS-3400, or MCS-3800 with an MCS-Spectrum
• Your controller configured with an Ethernet board supplied in Slot #1
• Multiport hub, router, or switch
• Cat-5 Ethernet cables
The Eclipse CX has a single Ethernet connection. The Ethernet connection for the
MCS-Spectrum is bridged to the MCS-3000 using an Expander Cable. The MCS-3000 then
connects to your computer via Ethernet.
Important: JLCooper control surfaces cannot be connected to the second Ethernet port of your Mac Pro; they must be connected to your computer's primary Ethernet port (through a hub or switch if you need to share the port with an Internet connection).
For more information, see:
• Configuring the MCS-3000 and MCS-Spectrum Control Surfaces
• Controls for the MCS-3000
• Controls for the MCS-Spectrum
Configuring the MCS-3000 and MCS-Spectrum Control Surfaces
The following procedures describe how to configure and use these control surfaces with
Color.
To set up the MCS-3000 and MCS-Spectrum for use with Color
1 Turn on the MCS-3000 and wait for the unit to power up.
The MCS-3000 works similarly to any other networked computer, so you must enter
Ethernet IP settings into the device itself so that it can network with your computer.
2 Hold down the SHIFT and ASSIGN/UTILITY buttons simultaneously.
The current IP address settings should appear in the display at the top of the unit.
3 Using the numeric keypad on the MCS-3000, type in the following values:
a Enter an IP Address, then press ENTER to accept and continue.
For example, you might enter: 192.168.001.010
Note: The first three period-delimited sets of numbers in the IP address must match
the first three sets of numbers that are used on your particular network. If you’re not
sure what values to use, you can check to see what IP address is used by your computer
(look for your computer’s IP address in the Network settings of System Preferences),
and base the MCS-3000 IP address on that, making sure you change the last three
numbers so that this address isn’t used by any other device on your network.
b Enter a gateway address, then press ENTER to accept and continue.
Note: The first three period-delimited sets of numbers in the gateway address must
match the IP address you used.
c Enter a Subnet Mask number, then press ENTER to accept and continue.
For example, you might enter: 255.255.255.000
d Enter a port number, then press ENTER to accept and continue.
For example, you might enter: 49153
Note: To be safe, use one of the range of values set aside as “dynamic and/or private
ports” from 49152 through 65535.
4 Turn off both the MCS-3000 and the MCS-Spectrum.
Now that your control surface is configured, you need to set it up within Color.
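Before continuing, you can sanity-check the numbers you entered against the rules in the notes above. The following is an illustrative Python sketch, not part of Color or the MCS-3000 setup; every address, mask, and port shown is a placeholder you would replace with the values you actually entered.

```python
import ipaddress

# Illustrative values only; substitute the numbers you actually entered.
computer_ip = "192.168.1.20"   # your computer's IP (from the Network pane of System Preferences)
panel_ip    = "192.168.1.10"   # IP address entered on the MCS-3000
subnet_mask = "255.255.255.0"  # subnet mask entered on the MCS-3000
gateway_ip  = "192.168.1.1"    # gateway address entered on the MCS-3000
port        = 49153            # port number entered on the MCS-3000

network = ipaddress.ip_network(f"{computer_ip}/{subnet_mask}", strict=False)

# The panel and gateway must sit on the same subnet as the computer.
assert ipaddress.ip_address(panel_ip) in network
assert ipaddress.ip_address(gateway_ip) in network

# The panel must not reuse the computer's own address.
assert panel_ip != computer_ip

# Use a port from the "dynamic and/or private" range.
assert 49152 <= port <= 65535

print("MCS-3000 network settings look consistent")
```

If any assertion fails, revisit the corresponding step on the MCS-3000 keypad before opening Color.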
To use the MCS-3000 and MCS-Spectrum with Color
1 Turn on the MCS-Spectrum first, then turn on the MCS-3000.
2 Open Color.
If you’re opening Color for the first time, you see the Control Surface Startup dialog. If
you’ve already opened Color and have turned off the option for making this dialog appear,
you need to click the Show Control Surface Dialog button in the User Prefs tab of the
Setup room.
3 When the Control Surface Startup dialog appears:
a Choose “JLCooper - MCS3000/Spectrum” from the Control Surface pop-up menu.
b Type the IP address you entered into the MCS-3000 into the IP Address field, then press
Enter.
c Type the Port number you entered into the MCS-3000 into the Port field, then press
Enter.
4 Click Yes.
The MCS-3000 and MCS-Spectrum should now be ready for use with Color.
Controls for the MCS-3000
Many of the controls in the MCS-3000 are identified by the text displays running along
the top of each panel.
[Figure: MCS-3000 panel, showing the F1–F8 function buttons, M1–M5 buttons, W1–W7 soft knobs, bank buttons, timecode display, keypad, and jog/shuttle wheel]
The less obvious controls and functions are as follows:
• Page 1-8: Selects one of the eight main rooms in Color
• F1: Change keyframe interpolation
• F2: Add keyframe
• F3: Remove keyframe
• F4 (Secondaries room): Toggle secondary between inside and outside
• F5 (Secondaries room): Toggle vignette off and on
• F6 (Secondaries room): Open previous secondary tab
• F7 (Secondaries room): Open next secondary tab
• Rewind (<<): Jump to beginning of shot or previous shot
• Forward (>>): Jump to end of shot or next shot
• Stop: Stop playback
• Play: Start playback
• Jog wheel: Playhead control
• Key pad: Used for numerical navigation, by either timecode or shot ID
• Locate: Locate Timecode or Shot ID
• Mode: Toggle Locate between Timecode and Shot ID
• Last: Return to last location
• Enter: Cue navigation
• M1: Speed control
• M2: Inch playback
• M3: Disable grade
• Bank1: Switch/Copy/Paste Grade Bank 1
• Bank2: Switch/Copy/Paste Grade Bank 2
• Bank3: Switch/Copy/Paste Grade Bank 3
• Bank4: Switch/Copy/Paste Grade Bank 4
• Assign: Toggle Switch/Copy/Paste grade. (The LCD display indicates which state you are in.)
Using the Navigational Controls
There are two different ways to navigate in the Timeline using the keypad on the
MCS-3000.
To switch between timecode and shot number navigation
1 Press Mode Locate or Set Locate on the MCS-3000.
2 Hold down Shift (the blue button under the F-buttons), then press Mode Locate.
The indicators on the MCS-3000 will switch between 00 00 00 00 (Timecode) and 0 (Shot
ID) to let you know what mode you’re in.
To locate a position on the Timeline using timecode (in timecode mode)
1 Press Mode Locate or Set Locate on the MCS-3000.
2 Enter the Timecode you wish to locate, then press Enter.
The playhead moves to that timecode location.
To locate a position on the Timeline using shot numbers (in shot number mode)
1 Press Mode Locate or Set Locate on the MCS-3000.
2 Enter the Shot ID you wish to locate, then press Enter.
The playhead moves to the shot associated with that ID on the Timeline.
Controls for the MCS-Spectrum
Many of the controls in the MCS-3000 and MCS-Spectrum are identified by the text
displays running along the top of each panel.
[Figure: MCS-Spectrum panel, showing the three joyballs and contrast wheels with their reset buttons (R1–R3, B1–B3) and the PAGE 1–8 buttons]
The less obvious controls and functions are as follows:
• R1: Reset Shadow contrast slider
• B1: Reset Shadow color control
• Left joyball: Shadow color control adjustment
• Left wheel: Shadow contrast slider adjustment (black point)
• R2: Reset Midtone contrast slider
• B2: Reset Midtone color control
• Center joyball: Midtone color control adjustment
• Center wheel: Midtone contrast slider adjustment (gamma)
• R3: Reset Highlight contrast slider
• B3: Reset Highlight color control
• Right joyball: Highlight color control adjustment
• Right wheel: Highlight contrast slider adjustment (white point)
Tangent Devices CP100 Control Surface
The Tangent Devices CP100 is a single, large control surface that combines all available
functionality into a single device.
The following procedure describes how to configure and use this control surface with
Color.
Note: You must be logged in as an administrator to set up the Tangent Devices CP100.
To set up and use the CP100 with Color
1 Connect the TDLan port of the CP100 to the primary Ethernet port of your computer
using an Ethernet cable.
Important: The CP100 cannot be connected to the second Ethernet port of your Mac Pro; it must be connected to your computer's primary Ethernet port (through a router or switch if you need to share the port with an Internet connection).
2 Turn on the CP100 and wait for the unit to power up.
3 Open Color.
If you’re opening Color for the first time, you see the Control Surface Startup dialog. If
you’ve already opened Color and have turned off the option for making this dialog appear,
you need to click the Show Control Surface Dialog button in the User Prefs tab of the
Setup room.
4 When the Control Surface Startup dialog appears:
a Choose “Tangent Devices - CP100” from the Control Surface pop-up menu.
b When you’re prompted for your Administrator password, enter it into the field and click
OK.
The CP100 should now be ready for use with Color.
Controls in the CP100
The CP100 features the following controls:
• Do: Copy grade (Mem-Bank 1)
• Undo: Paste grade (Mem-Bank 1)
• Redo: Copy grade from previous edit on Timeline
• Cue: Cue up the navigation (modes are Timecode or Shot ID)
• Mark: Create still
• In: Set play marker In
• Out: Set play marker out
• Select: Toggle playback mode
• Mix: Toggle show still
• Grade: Toggle show grade
• Delete: Return grade to identity or base-mem
• |<: Previous event
• >|: Next event
• <: Play reverse
• []: Stop playback
• >: Play forward
• Button next to jog/shuttle: Toggle x10 speed control
• /< (while holding down Left Alt): Previous keyframe
• >/ (while holding down Left Alt): Next keyframe
• < (while holding down Left Alt): Step backward one frame
• > (while holding down Left Alt): Step forward one frame
• F1: Toggle keyframe interpolation
• F2: Create keyframe
• F3: Delete keyframe
• F4 (Primary In and Out rooms): Alternate panel encoders
• F5 (Primary In and Out rooms): Set scope resolution to 100%
• F6 (Primary In and Out rooms): Set scope resolution to 25%
• F7 (Primary In and Out rooms): Open Parade waveform
• F8 (Primary In and Out rooms): Open Histogram
In the Secondaries room, F5-F9 serve different functions.
• F5 (Secondaries room): Toggle secondary off and on
• F6 (Secondaries room): Toggle secondary between inside and outside
• F7 (Secondaries room): Toggle vignette off and on
• F8 (Secondaries room): Open previous secondary tab
• F9 (Secondaries room): Open next secondary tab
Tangent Devices CP200 Series Control Surface
The Tangent Devices CP200 is a modular series of controllers all designed to work together.
[Figure: CP200-TS panel, showing the transport and grade buttons, numeric keypad, and F1–F9 buttons]
To use the CP200 series of control surfaces with Color, you need the following:
• A CP200-BK Trackerball/Knob panel, CP200-TS Transport/Selection Panel, CP200-K Knob
Panel, and/or CP200-S Selection Panel
• Multiport hub or switch
• Cat-5 Ethernet cables
Important: The CP200 series control surfaces cannot be connected to the second Ethernet port of your Mac Pro; they must be connected to your computer's primary Ethernet port (through a hub or switch if you need to share the port with an Internet connection).
For more information, see:
• Configuring the CP200 Series Control Surfaces
• Controls in the CP200-BK (Trackerball/Knob Panel)
• Controls in the CP200-TS (Transport/Selection Panel)
• Controls in the CP200-K (Knob/Button Panel)
Configuring the CP200 Series Control Surfaces
The following procedures describe how to configure and use these control surfaces with
Color.
To set up the CP200 series controllers for use with Color
1 Connect each of the CP200 devices to the router, hub, or switch that’s connected to your
computer.
Important: The CP200 series control surfaces cannot be connected to the second Ethernet port of your Mac Pro; they must be connected to your computer's primary Ethernet port (through a hub or switch if you need to share the port with an Internet connection).
2 Before you open Color, turn on each of the CP200 devices you have, and write down the
two- to three-character ID numbers that appear on the display of each.
You use each device’s ID number to set up Color to communicate with these devices.
Note: The ID numbers that Color uses to connect to the CP200 control surfaces are not
the serial numbers that appear on the back or bottom of your CP200 panels.
3 Open Color.
If you’re opening Color for the first time, you see the Control Surface Startup dialog. If
you’ve already opened Color and have turned off the option for making this dialog appear,
you need to click the Show Control Surface Dialog button in the User Prefs tab of the
Setup room.
4 Choose “Tangent Devices - CP200” from the Control Surface pop-up menu.
Each CP200 device that Color is compatible with appears with an Enabled checkbox and two fields: one for the ID number that you wrote down previously, and one for the IP address.
5 For each CP200 device you own:
a Select its checkbox.
b Type its ID number into the corresponding field, then press Enter to continue.
c Type an IP address into the corresponding field, then press Enter to continue.
Note: The first three period-delimited sets of numbers in the IP address must match
the first three sets of numbers that are used on your particular network. If you’re not
sure what values to use, you can check to see what IP address is used by your computer,
and base the CP200 IP address on that, making sure you change the last three numbers
so that the address is unique.
6 Click Yes.
After you click Yes, Color connects with the control surfaces on the network. If this is successful, each panel's display should now go blank.
The CP200 series control surfaces are now ready for use with Color.
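The addressing rule in the note above (keep the first three numbers of your computer's IP address and vary only the last so each panel's address is unique) can be sketched as a short script. This is illustrative only; the addresses are placeholders, and `suggest_panel_addresses` is a hypothetical helper, not part of Color or the Tangent software.

```python
import ipaddress

# Placeholder values; use your computer's real IP and any addresses already taken.
computer_ip = ipaddress.ip_address("192.168.1.20")
in_use = {computer_ip}

def suggest_panel_addresses(count, start=100):
    """Suggest unused addresses on the computer's /24 subnet, one per panel."""
    base = int(computer_ip) & ~0xFF        # keep the first three numbers
    suggestions = []
    for last in range(start, 255):
        candidate = ipaddress.ip_address(base | last)
        if candidate not in in_use:        # skip anything already taken
            in_use.add(candidate)
            suggestions.append(str(candidate))
            if len(suggestions) == count:
                break
    return suggestions

print(suggest_panel_addresses(3))  # one address per CP200 panel
```

In practice you would also confirm that the suggested addresses aren't already assigned to other devices on your network (for example, by checking your router's list of connected clients).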
Controls in the CP200-BK (Trackerball/Knob Panel)
The CP200-BK has the following controls:
[Figure: CP200-BK panel, showing the three trackerballs and wheels with the F1–F9 buttons]
In the Primary In and Out rooms:
• Left (Dot) button above wheels: Reset contrast slider for that zone
• Right (Circle) button above wheels: Reset color control for that zone
• Left joyball: Shadow color control adjustment
• Left wheel: Shadow contrast slider adjustment (black point)
• Center joyball: Midtone color control adjustment
• Center wheel: Midtone contrast slider adjustment (gamma)
• Right joyball: Highlight color control adjustment
• Right wheel: Highlight contrast slider adjustment (white point)
• F1: Toggle keyframe interpolation
• F2: Add keyframe
• F3: Delete keyframe
• F4: Alternate panel encoders
In the Secondaries room:
• F1: Toggle keyframe interpolation
• F2: Add keyframe
• F3: Delete keyframe
• F4: Alternate panel encoders
• F5: Toggle secondary
• F6: Toggle secondary In/Out control
• F7: Toggle secondary vignette
• F8: Previous secondary
• F9: Next secondary
Note: In the Secondaries room, when switching to preview mode, the vignette controls
will override these controls.
In the Geometry room:
• F1: Change keyframe
• F2: Add keyframe
• F3: Delete keyframe
• F4: Alternate panel encoders
Controls in the CP200-TS (Transport/Selection Panel)
The CP200-TS has the following controls:
[Figure: CP200-TS panel, showing the transport and grade buttons, numeric keypad, and mode buttons]
• Do: Copy grade (mem-bank 1)
• Undo: Paste grade (mem-bank 1)
• Redo: Copy grade from previous edit on Timeline
• Cue: Cue up the navigation (modes are Timecode or Shot ID)
• Mark: Create still
• In: Set play marker In
• Out: Set play marker out
• Mem: Toggle show still
• Grade: Toggle show grade
• Delete: Return grade to identity or base-mem
• |<: Previous event
• >|: Next event
• <: Play reverse
• []: Stop playback
• >: Play forward
• Button next to jog/shuttle: Toggle x10 speed control
When Left Alt is held down:
• |<: Previous keyframe
• >|: Next keyframe
• <: Step backward one frame
• >: Step forward one frame
Controls in the CP200-K (Knob/Button Panel)
The CP200-K has the following controls:
• RGB channel controls
Note: When you open the Previews tab in the Secondaries room, the HSL qualifier controls override the RGB channel controls.
Customizing Control Surface Sensitivity
You can customize the sensitivity of the joyballs, knobs, and contrast wheels, as well as the angle at which the joyballs adjust color, using settings located in the User Prefs tab of the Setup room.
For more information, see Control Surface Settings.
AirPort Extreme
Setup Guide
Contents
3 Chapter 1: Getting Started
10 Chapter 2: AirPort Extreme Networks
11 Using AirPort Extreme with Your Broadband Internet Service
13 Using AirPort Extreme to Share a USB Printer
15 Using AirPort Extreme to Share a USB Hard Disk
17 Using AirPort Extreme with Your AirPort Network
19 Chapter 3: Setting Up AirPort Extreme
24 Chapter 4: Tips and Troubleshooting
29 Chapter 5: Learning More, Service, and Support
31 Appendix: AirPort Extreme Specifications

Chapter 1: Getting Started
Congratulations on purchasing AirPort Extreme. Read this
guide to get started using it.
AirPort Extreme is based on an Institute of Electrical and Electronics Engineers (IEEE)
draft 802.11n specification and provides better performance and greater range than
previous IEEE 802.11 standards. AirPort Extreme is compatible with computers using
802.11b and 802.11g, as well as computers using the 802.11a wireless standard.
With AirPort Extreme, you can:
 Create a wireless network in your home, and then connect to the Internet and share
the connection with multiple computers simultaneously. An entire family or office
can be connected to the Internet at the same time.
 Connect AirPort Extreme to your Ethernet network. Wireless-equipped Macintosh
computers or Windows XP computers can then have access to an entire network
without being connected by a cable.
 Connect a USB printer to your AirPort Extreme. All of the compatible computers on
the AirPort network, both wireless and wired, can print to it.
 Connect a USB hard disk to your AirPort Extreme. All of the compatible computers on
the AirPort network, both wireless and wired, can access the information on the hard
disk.
 Connect a USB hub to your AirPort Extreme, and then connect multiple USB devices,
such as printers or hard disks, and all of the computers on the network have access to
those devices.
About AirPort Extreme
AirPort Extreme has five ports, located on the back:
 One 10/100 Ethernet Wide Area Network (WAN) port (<) for connecting a DSL or
cable modem, or for connecting to an existing Ethernet network
 Three 10/100 Ethernet Local Area Network (LAN) ports (G) for connecting Ethernet
devices, such as printers or computers, or for connecting to an existing Ethernet
network
 One USB port (d) for connecting a compatible USB printer, hard disk, or hub
Next to the ports is a reset button, which is used for troubleshooting your
AirPort Extreme. The status light on the front of AirPort Extreme shows the current
status.
[Figure: back of AirPort Extreme, showing the status light, Internet WAN port, power port (with AC adapter), USB port, Ethernet ports, reset button, and security slot]
About the AirPort Software
AirPort Extreme works with the AirPort software included on the AirPort Extreme CD.
What You Need to Get Started
To set up AirPort Extreme using a Macintosh, you must have the following:
 A Macintosh computer with an AirPort or AirPort Extreme Card installed to set it up
wirelessly
 A Macintosh computer connected to AirPort Extreme with an Ethernet cable to set it
up using Ethernet
 Mac OS X v10.4 or later
 AirPort Utility 5.0 or later
AirPort Utility
AirPort Utility helps you set up your AirPort Extreme to create a wireless network,
connect to the Internet, and share a USB printer or hard disk. You can also connect
your AirPort Extreme to your existing AirPort Extreme or AirPort Express wireless
network to extend the range of your network using WDS. Use AirPort Utility to
quickly and easily set up your AirPort Extreme and your wireless network.
AirPort Utility is also an advanced tool for setting up and managing AirPort Extreme
and AirPort Express Base Stations. Use AirPort Utility to adjust network, routing, and
security settings and other advanced options.
AirPort status menu in the menu bar
Use the AirPort status menu to switch quickly between AirPort networks, monitor
the signal quality of the current network, create a computer-to-computer network,
and turn AirPort on and off. The status menu is available on computers using
Mac OS X.
To set up AirPort Extreme using a Windows PC, you must have the following:
 A Windows PC with 300 MHz or higher processor speed
 Windows XP Home or Professional (with Service Pack 2 installed)
 AirPort Utility v5 or later
You can use AirPort Extreme with a wireless-enabled computer that is compliant with
the IEEE 802.11a, 802.11b, or 802.11g standards, or with the IEEE 802.11n draft specification.
To set up AirPort Extreme, your computer must meet the requirements listed above.
Install the AirPort software that came on the CD and follow the instructions on the
following pages to set up your AirPort Extreme and your AirPort wireless network.
Plugging In AirPort Extreme
Before you plug in your AirPort Extreme, first connect the appropriate cables to the
ports you want to use, including:
 The Ethernet cable connected to your DSL or cable modem (if you will connect to the
Internet) to the Ethernet (WAN) port (<)
 USB cable connected to the USB port (d) and to a compatible USB printer (if you will
print to a USB printer), a USB hard disk, or USB hub
 Any Ethernet devices to the Ethernet LAN ports (G)
Once you have connected the cables for all the devices you plan to use, connect the
AC plug adapter, and plug AirPort Extreme into the wall. There is no “on” switch.
Important: Use only the AC adapter that came with your AirPort Extreme.
When you plug AirPort Extreme into the wall, the status light flashes green for one
second, and then glows amber while it starts up. Once it has started up completely, the
status light flashes amber. The status light glows solid green once it is set up and
connected to the Internet or a network.
When you connect Ethernet cables to the Ethernet LAN ports (G), the lights above the
ports glow solid.
[Figure: power port with AC adapter connected; Ethernet activity lights above the ports]
AirPort Extreme Status Light
The following table explains AirPort Extreme light sequences and what they indicate.

Light  Status/description
Off  AirPort Extreme is unplugged.
Flashing green  AirPort Extreme is starting up. The light flashes for one second.
Solid green  AirPort Extreme is on and working properly. If you choose Flash On Activity from the Status Light pop-up menu (on the Base Station pane of AirPort settings in AirPort Utility), the status light may flash green to indicate normal activity.
Flashing amber  AirPort Extreme cannot establish a connection to the network or the Internet. See “Your AirPort Extreme Status Light Flashes Amber” on page 26.
Solid amber  AirPort Extreme is completing its startup sequence.
Flashing amber and green  There may be a problem starting up. AirPort Extreme will restart and try again.

What’s Next
After you plug in AirPort Extreme, use AirPort Utility to set it up to work with your
Internet connection, USB printer or hard disk, or an existing network. AirPort Utility
is located in the Utilities folder in the Applications folder on a computer using Mac OS X,
and in Start > All Programs > AirPort on a computer using Windows XP.
See “AirPort Extreme Networks” on page 10 for examples of all the ways you can use
AirPort Extreme, and information about how to set it up.
Chapter 2: AirPort Extreme Networks
In this chapter you’ll find explanations of the different ways
you can use AirPort Extreme.
This chapter gives examples of the different kinds of networks you can set up using
AirPort Extreme. It provides diagrams and explanations of what you need to do to get
your AirPort Extreme network up and running quickly.
See Chapter 3, “Setting Up AirPort Extreme,” on page 19 to find out more about using
AirPort Utility to help set up your network.
Using AirPort Extreme with Your Broadband Internet Service
When you set up AirPort Extreme to provide network and Internet access, Macintosh
computers with AirPort and AirPort Extreme Cards, and 802.11a, 802.11b, 802.11g, and
IEEE 802.11n draft specification wireless-equipped computers can access the wireless
AirPort network to share files, play games, and use Internet applications like web
browsers and email applications.
It looks like this:
[Figure: a DSL or cable modem, connected to the Internet, plugged into the AirPort Extreme Internet WAN port]
To set it up:
1 Connect your DSL or cable modem to your AirPort Extreme Ethernet WAN port (<).
2 Open AirPort Utility (located in the Utilities folder in the Applications folder on a
computer using Mac OS X, and in Start > All Programs > AirPort on a computer using
Windows), select your base station, and then click Continue.
3 Follow the onscreen instructions to create a new network. (See “Setting Up
AirPort Extreme” on page 19.)
Computers using AirPort and computers using other wireless cards or adapters connect
to the Internet through AirPort Extreme. Computers connected to AirPort Extreme
Ethernet ports can also access the network and connect to the Internet.
Wireless computers and computers connected to the Ethernet ports can also
communicate with one another through AirPort Extreme.Chapter 2 AirPort Extreme Networks 13
Using AirPort Extreme to Share a USB Printer
When you connect a USB printer to your AirPort Extreme, all computers on the network
(wired and wireless) can print to it.
It looks like this:
[Figure: a shared printer connected to the AirPort Extreme USB port]
To set it up:
1 Connect the printer to the AirPort Extreme USB port (d) using a USB cable.
2 Open AirPort Utility (located in the Utilities folder in the Applications folder on a
computer using Mac OS X, and in Start > All Programs > AirPort on a computer using
Windows), select your base station, and then click Continue.
3 Follow the onscreen instructions to create a new network.
To print from a computer using Mac OS X v10.2.7 or later:
1 Open Printer Setup Utility (located in the Utilities folder in the Applications folder).
2 Select the printer from the list.
If the printer is not in the list, click Add and choose Bonjour from the pop-up menu,
and then select the printer from the list.
To print from a computer using Windows XP:
1 Install Bonjour for Windows from the CD that came with your AirPort Extreme.
2 Follow the onscreen instructions to connect your printer.
Using AirPort Extreme to Share a USB Hard Disk
When you connect a USB hard disk to your AirPort Extreme, all computers on the
network (wired and wireless) can access the hard disk to access, share, and store files.
It looks like this:
[Figure: a shared hard disk drive connected to the AirPort Extreme USB port]
To set it up:
1 Connect the hard disk to the AirPort Extreme USB port (d) using a USB cable.
2 Open AirPort Utility (located in the Utilities folder in the Applications folder on a
computer using Mac OS X, and in Start > All Programs > AirPort on a computer using
Windows), select your base station, and then click Continue.
3 Follow the onscreen instructions to create a new network.
Computers can access the hard disk to share or store files using Mac OS X v10.4 or later,
or Windows XP (with Service Pack 2).
Using AirPort Extreme with Your AirPort Network
The illustration below shows a wireless network utilizing all the capabilities of AirPort
Extreme.
[Illustration: an AirPort Extreme network with a DSL or cable modem connected to the Ethernet WAN port for Internet access, a shared hard disk on the USB port, and wireless computers in the family room and living room]
To set it up:
1 Connect all the devices you plan to use in your network.
2 Open AirPort Utility (located in the Utilities folder in the Applications folder on a
computer using Mac OS X, and in Start > All Programs > AirPort on a computer using
Windows), select your base station, and then click Continue.
3 Follow the onscreen instructions to set up your network. (See “Setting Up
AirPort Extreme” on page 19.)

3 Setting Up AirPort Extreme
This chapter provides information and instructions for using
AirPort Utility to set up your AirPort Extreme.
Use the diagrams in the previous chapter to help you decide where you want to use
your AirPort Extreme, and what features you want to set up on your AirPort network.
Then use the instructions in this chapter to easily configure AirPort Extreme and set up
your AirPort network.
This chapter provides an overview for using the setup assistant in AirPort Utility to set
up your network and other features of your AirPort Extreme. For more detailed wireless
networking information, and for information about the advanced features of AirPort
Utility, refer to the “Designing AirPort Extreme 802.11n Networks” document, located at
www.apple.com/support/airport.
You can do most of your network setup and configuration tasks using the setup
assistant in AirPort Utility. To set advanced options, choose Manual Setup from the Base
Station menu of AirPort Utility. See “Setting Advanced Options” on page 23.
Using AirPort Utility
To set up and configure your AirPort Extreme to use AirPort for wireless networking
and Internet access, use the setup assistant in AirPort Utility. AirPort Utility is installed
on your computer when you install the software on the AirPort Extreme CD.
On a Macintosh computer using Mac OS X v10.4 or later:
1 Open AirPort Utility, located in the Utilities folder in your Applications folder.
2 Select your base station and click Continue.
3 Follow the onscreen instructions to set up your AirPort Extreme and your wireless
network.
On a computer using Windows XP (with Service Pack 2):
1 Open AirPort Utility, located in Start > All Programs > AirPort.
2 Select your base station and click Continue.
3 Follow the onscreen instructions to set up your AirPort Extreme and your wireless
network.
The setup assistant in AirPort Utility asks you a series of questions about the type of
network you want to use and the services you want to set up. The setup assistant helps
you enter the appropriate settings for the network you are setting up.
If you are using AirPort Extreme to connect to the Internet, you need a broadband (DSL
or cable modem) account with an Internet service provider, or a connection to the
Internet using an existing Ethernet network. If you received specific information from
your ISP (such as a static IP address or a DHCP client ID), you may need to enter it in
AirPort Utility. Have this information available before you set up your AirPort Extreme.
Creating a New Wireless Network
You can use the setup assistant in AirPort Utility to create a new wireless network. The
setup assistant guides you through the steps necessary to name your network, protect
your network with a password, and set other options.
If you plan to share a USB printer or USB hard disk on your network:
1 Connect the printer or hard disk to the AirPort Extreme USB port (d).
2 Open AirPort Utility, located in the Utilities folder in the Applications folder on a Macintosh,
or in Start > All Programs > AirPort on a computer using Windows XP.
3 Follow the onscreen instructions to create a new network.
Configuring and Sharing Internet Access
If you plan to share your Internet connection with wireless-enabled computers on your
network or computers connected to the Ethernet ports, you need to set up your
AirPort Extreme as an AirPort Base Station. Once it is set up, computers access the
Internet via the AirPort network. The base station connects to the Internet and
transmits information to the computers over the AirPort network.
Before you use the AirPort Utility to set up your base station, connect your DSL or cable
modem to the AirPort Extreme Ethernet WAN port (<). If you are using an existing
Ethernet network with Internet access to connect to the Internet, you can connect the
AirPort Extreme to the Ethernet network instead.
Use the setup assistant in AirPort Utility to enter your ISP settings and configure how
AirPort Extreme shares the settings with other computers.
1 Choose the wireless network you want to change. On a Macintosh, use the AirPort
status menu in the menu bar. On a computer using Windows XP, hold the pointer over
the wireless connection icon until you see your AirPort network name (SSID), and
choose it from the list if there are multiple networks available.
The default network name for an Apple base station is AirPort Network XXXXXX, where
XXXXXX is replaced with the last six digits of the AirPort ID, also known as the Media
Access Control or MAC address. The AirPort ID is printed on the bottom of an AirPort
Extreme and on the electrical-plug side of the AirPort Express.
2 Open AirPort Utility, located in the Utilities folder in the Applications folder on a computer
using Mac OS X, or in Start > All Programs > AirPort on a computer using Windows XP.
3 Select your base station and click Continue.
4 Follow the onscreen instructions to configure and share Internet access on your
AirPort Extreme.
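The default-name rule mentioned in step 1 (the last six digits of the AirPort ID) can be sketched with a small helper. The function and the sample MAC address below are purely illustrative; they are not part of AirPort Utility or any Apple software:

```python
# Hypothetical helper: derive the default Apple base station network
# name from its AirPort ID (MAC address), per the rule described above.
def default_network_name(airport_id: str) -> str:
    # Drop separators and keep the last six hex digits of the AirPort ID.
    digits = airport_id.replace(":", "").replace("-", "").upper()
    return f"AirPort Network {digits[-6:]}"

# Example with a made-up AirPort ID:
print(default_network_name("00:1B:63:A2:4F:9C"))  # AirPort Network A24F9C
```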
Using AirPort Utility is a quick and easy way to set up your base station and network. If
you want to set additional options for your network, such as restricting access to your
network, or setting advanced DHCP options, you can choose Manual Setup from the
Base Station menu of AirPort Utility.
Setting Advanced Options
To set advanced options, you can use AirPort Utility to set up your AirPort Extreme
manually. You can configure advanced base station settings, such as advanced security
options, closed networks, DHCP lease time, access control, power controls, user
accounts, and more.
To set advanced options:
1 Choose the wireless network you want to change. On a Macintosh, use the AirPort
status menu in the menu bar. On a computer using Windows XP, hold the pointer over
the wireless connection icon until you see your AirPort network name (SSID), and
choose it from the list if there are multiple networks available.
The default network name for an Apple base station is AirPort Network XXXXXX, where
XXXXXX is replaced with the last six digits of the AirPort ID, also known as the Media
Access Control or MAC address. The AirPort ID is printed on the bottom of an AirPort
Extreme and on the electrical-plug side of the AirPort Express.
2 Open AirPort Utility, located in the Utilities folder in the Applications folder on a
Macintosh, and in Start > All Programs > AirPort on a computer using Windows XP.
3 If there is more than one base station in the list, select the base station you want to
configure. If you don’t see the base station you want to configure, click Rescan to scan
for available base stations, then select the base station you want.
4 Choose Manual Setup from the Base Station menu. If you are prompted for a password,
enter it.
For more information and detailed instructions for using the manual setup features of
AirPort Utility, see the “Designing AirPort Extreme 802.11n Networks” document, located
at www.apple.com/support/airport.
4 Tips and Troubleshooting
You can quickly solve most problems with AirPort Extreme by
following the advice in this chapter.
You Forgot Your Network or Base Station Password
You can clear the AirPort network or base station password by resetting
AirPort Extreme.
To reset the base station password:
1 Use the end of a straightened paper clip to press and hold the reset button for one (1)
second.
Important: If you hold the reset button for more than one (1) second, you may lose
your network settings.
2 Select your AirPort network.
 On a Macintosh, use the AirPort status menu in the menu bar to select the network
created by AirPort Extreme (the network name does not change).
 On a computer using Windows XP, hold the pointer over the wireless connection icon
until you see your AirPort Network Name (SSID), and choose it from the list if there
are multiple networks available.
3 Open AirPort Utility (in the Utilities folder in the Applications folder on a Macintosh,
and in Start > All Programs > AirPort on a computer using Windows XP).
4 Select your base station and then choose Manual Setup from the Base Station menu.
5 Click AirPort in the toolbar, and then click Base Station.
6 Enter a new password for the base station.
7 Click Wireless and choose an encryption method from the Wireless Security pop-up
menu to turn on encryption and activate password protection for your AirPort network.
If you turn on encryption, enter a new password for your AirPort network.
8 Click Update to restart the base station and load the new settings.
Your AirPort Extreme Isn’t Responding
Try unplugging it and plugging it back in.
If your AirPort Extreme stops responding completely, you may need to reset it to the
factory default settings.
Important: This erases all of your base station settings and resets them to the settings
that came with the AirPort Extreme.
To return AirPort Extreme to the factory settings:
m Use the end of a straightened paper clip to press and hold the reset button until the
status light flashes quickly (about 5 seconds).
AirPort Extreme resets with the following settings:
 AirPort Extreme receives its IP address using DHCP.
 The network name reverts to Apple Network XXXXXX (where XXXXXX is replaced
with the last six digits of the AirPort ID).
 The base station password returns to public.
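The two reset behaviors in this chapter (a one-second press to clear the password, and holding for about five seconds to restore factory defaults) can be summarized in a rough sketch. The function and its thresholds are illustrative only and do not describe actual firmware logic:

```python
# Illustrative summary of the reset-button behaviors described above.
# The thresholds paraphrase this chapter's instructions; they are an
# assumption for illustration, not firmware-exact values.
def reset_action(hold_seconds: float) -> str:
    if hold_seconds >= 5:
        # Held until the status light flashes quickly: factory defaults.
        return "factory default reset"
    if hold_seconds >= 1:
        # A one-second press clears the network/base station password.
        return "password reset"
    return "no reset"

print(reset_action(1))  # password reset
print(reset_action(6))  # factory default reset
```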
If your base station is still not responding, try the following:
1 Unplug AirPort Extreme.
2 Use the end of a straightened paper clip to press and hold the reset button while you
plug in AirPort Extreme.
Your AirPort Extreme Status Light Flashes Amber
The Ethernet cable may not be connected properly, AirPort Extreme may be out of
range of an AirPort network, or there may be a problem with your Internet service
provider. If you are connected to the Internet with a DSL or cable modem, the modem
may have lost its connection to the network or the Internet. Even if the modem seems
to be working properly, try disconnecting the modem from its power supply, waiting a
few seconds, and then reconnecting it. Make sure AirPort Extreme is connected directly
to the modem via Ethernet before reconnecting power to the modem.
For more information about the reason the light flashes, open AirPort Utility, select
your base station, and then choose Manual Setup from the Base Station menu. The
information about the flashing light is displayed on the Summary pane.
Your Printer Isn’t Responding
If you connected a printer to the USB port on AirPort Extreme and the computers on
the AirPort network can’t print, try doing the following:
1 Make sure the printer is plugged in and turned on.
2 Make sure the cables are securely connected to the printer and to the AirPort Extreme
USB port.
3 Make sure the printer is selected in the Printer List window on client computers. On a
Macintosh using Mac OS X v10.2.7 or later:
 Open Printer Setup Utility, located in the Utilities folder in the Applications folder.
 If the printer is not in the list, click Add.
 Choose Bonjour from the pop-up menu.
 Select the printer and click Add.
To select your printer on a computer using Windows XP:
 Open “Printers and Faxes” from the Start menu.
 Select the printer. If the printer is not in the list, click Add Printer and then follow the
onscreen instructions.
4 Turn the printer off, wait a few seconds, then turn it back on.
I Want to Update My AirPort Software
Apple periodically updates AirPort software to improve performance or add features.
It is recommended that you update your AirPort Extreme to use the latest software. To
download the latest version of AirPort software, go to
www.apple.com/support/airport.
AirPort Extreme Placement Considerations
The following recommendations can help your AirPort Extreme achieve maximum
wireless range and optimal network coverage.
 Place your AirPort Extreme in an open area where there are few obstructions, such as
large pieces of furniture or walls. Try to place it away from metallic surfaces.
 If you place your AirPort Extreme behind furniture, keep at least an inch of space
between the AirPort Extreme and the edge of the furniture.
 Avoid placing your AirPort Extreme in areas surrounded by metal surfaces on
three or more sides.
 If you place your AirPort Extreme in an entertainment center with your stereo
equipment, avoid completely surrounding AirPort Extreme with audio, video, or
power cables. Place your AirPort Extreme so that the cables are to one side. Maintain
as much space as possible between AirPort Extreme and the cables.
 Try to place your AirPort Extreme at least 25 feet from a microwave oven, 2.4 or 5
gigahertz (GHz) cordless phones, or other sources of interference.
Items That Can Cause Interference With AirPort
The farther away the interference source, the less likely it is to cause a problem. The
following items can cause interference with AirPort communication:
 Microwave ovens
 Direct Satellite Service (DSS) radio frequency leakage
 The original coaxial cable that came with certain types of satellite dishes. Contact the
device manufacturer and obtain newer cables.
 Certain electrical devices such as power lines, electrical railroad tracks, and power
stations
 Cordless telephones that operate in the 2.4 or 5 GHz range. If you have problems
with your phone or AirPort communication, change the channel your base station or
AirPort Extreme uses, or change the channel your phone uses.
 Nearby base stations using adjacent channels. For example, if base station A is set to
channel 1, base station B should be set to channel 6 or 11.
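The adjacent-channel advice reflects how 2.4 GHz Wi-Fi channels overlap. The sketch below encodes the common rule of thumb that channels fewer than 5 numbers apart overlap; this rule is drawn from general 802.11 practice, not from this guide:

```python
# 2.4 GHz Wi-Fi channels are spaced 5 MHz apart but each is roughly
# 22 MHz wide, so channels fewer than 5 numbers apart overlap.
# (Rule of thumb from general 802.11 practice, stated as an assumption.)
def channels_overlap(a: int, b: int) -> bool:
    return abs(a - b) < 5

print(channels_overlap(1, 6))   # False: safe for neighboring base stations
print(channels_overlap(1, 11))  # False
print(channels_overlap(1, 3))   # True: likely interference
```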
5 Learning More, Service, and Support
You can find more information about using AirPort Extreme on
the web and in onscreen help.
Online Resources
For the latest information on AirPort Extreme, go to www.apple.com/airport.
To register AirPort Extreme (if you didn’t do it when you installed the software on the
AirPort Extreme CD), go to www.apple.com/register.
For AirPort service and support information, a variety of forums with product-specific
information and feedback, and the latest Apple software downloads, go to
www.apple.com/support/airport.
For support outside of the United States, go to www.apple.com/support, and then
choose your country from the pop-up menu.
Onscreen Help
m To learn more about using AirPort, open AirPort Utility and choose Help > AirPort
Utility Help.
Obtaining Warranty Service
If the product appears to be damaged or does not function properly, please follow the
advice in this booklet, the onscreen help, and the online resources.
If the base station still does not function, go to www.apple.com/support for
instructions about how to obtain warranty service.
Finding the Serial Number of Your AirPort Extreme
The serial number is printed on the bottom of your AirPort Extreme.
Appendix
AirPort Extreme Specifications
AirPort Specifications
 Frequency Band: 2.4 and 5 GHz
 Radio Output Power: 20 dBm (nominal)
 Standards: 802.11 DSSS 1 and 2 Mbps standard, 802.11a, 802.11b, 802.11g
specifications, and a draft 802.11n specification
Interfaces
 1 RJ-45 10/100Base-T Ethernet WAN (<)
 3 RJ-45 10/100Base-T Ethernet LAN (G)
 Universal Serial Bus (USB d)
 AirPort Extreme wireless
Environmental Specifications
 Operating Temperature: 32° F to 95° F (0° C to 35° C)
 Storage Temperature: –13° F to 140° F (–25° C to 60° C)
 Relative Humidity (Operational): 20% to 80% relative humidity
 Relative Humidity (Storage): 10% to 90% relative humidity, noncondensing
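The Fahrenheit and Celsius pairs listed above can be cross-checked with the standard conversion formula:

```python
# Standard Fahrenheit-to-Celsius conversion, used here only to verify
# the temperature pairs listed in the specifications.
def f_to_c(f: float) -> float:
    return (f - 32) * 5 / 9

print(f_to_c(32))   # 0.0   (operating minimum)
print(f_to_c(95))   # 35.0  (operating maximum)
print(f_to_c(-13))  # -25.0 (storage minimum)
print(f_to_c(140))  # 60.0  (storage maximum)
```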
Size and Weight
 Length: 6.50 inches (165.0 mm)
 Width: 6.50 inches (165.0 mm)
 Thickness: 1.34 inches (34.0 mm)
 Weight: 1.66 pounds (753 grams)
Hardware Media Access Control (MAC) Addresses
The AirPort Extreme has two hardware addresses printed on the bottom of the case:
 AirPort ID: The address used to identify AirPort Extreme on a wireless network.
 Ethernet ID: You may need to provide this address to your ISP to connect
AirPort Extreme to the Internet.
Using AirPort Extreme
 The only way to shut off power completely to your AirPort Extreme is to disconnect it
from the power source.
 When connecting or disconnecting your AirPort Extreme, always hold the plug by its
sides. Keep fingers away from the metal part of the plug.
 Your AirPort Extreme should not be opened for any reason, even when the
AirPort Extreme is unplugged. If your AirPort Extreme needs service, see “Learning
More, Service, and Support” on page 29.
 Never force a connector into the ports. If the connector and port do not join with
reasonable ease, they probably don’t match. Make sure that the connector matches
the port and that you have positioned the connector correctly in relation to the port.
About Operating and Storage Temperatures
 When you are using your AirPort Extreme, it is normal for the case to get warm. The
AirPort Extreme case functions as a cooling surface that transfers heat from inside the
unit to the cooler air outside.
Avoid Wet Locations
 Keep AirPort Extreme away from sources of liquids, such as drinks, washbasins,
bathtubs, shower stalls, and so on.
 Protect AirPort Extreme from direct sunlight and rain or other moisture.
 Take care not to spill any food or liquid on your AirPort Extreme. If you do, unplug
AirPort Extreme before cleaning up the spill.
 Do not use AirPort Extreme outdoors. AirPort Extreme is an indoor product.
Do Not Make Repairs Yourself
About Handling
Your AirPort Extreme may be damaged by improper storage or handling. Be careful not
to drop your AirPort Extreme when transporting the device.
Warning: To reduce the chance of shock or injury, do not use your AirPort Extreme in
or near water or wet locations.
Warning: Do not attempt to open your AirPort Extreme or disassemble it. You run
the risk of electric shock and voiding the limited warranty. No user-serviceable parts
are inside.
Communications Regulation Information
Wireless Radio Use
This device is restricted to indoor use due to its
operation in the 5.15 to 5.25 GHz frequency range, to
reduce the potential for harmful interference to co-channel Mobile Satellite systems.
Cet appareil doit être utilisé à l’intérieur.
Exposure to Radio Frequency Energy
The radiated output power of this device is well below
the FCC radio frequency exposure limits. However, this
device should be operated with a minimum distance of
at least 20 cm between its antennas and a person's
body and the antennas used with this transmitter must
not be co-located or operated in conjunction with any
other antenna or transmitter subject to the conditions
of the FCC Grant.
FCC Declaration of Conformity
This device complies with part 15 of the FCC rules.
Operation is subject to the following two conditions: (1)
This device may not cause harmful interference, and (2)
this device must accept any interference received,
including interference that may cause undesired
operation. See instructions if interference to radio or
television reception is suspected.
Radio and Television Interference
This computer equipment generates, uses, and can
radiate radio-frequency energy. If it is not installed and
used properly—that is, in strict accordance with Apple’s
instructions—it may cause interference with radio and
television reception.
This equipment has been tested and found to comply
with the limits for a Class B digital device in accordance
with the specifications in Part 15 of FCC rules. These
specifications are designed to provide reasonable
protection against such interference in a residential
installation. However, there is no guarantee that
interference will not occur in a particular installation.
You can determine whether your computer system is
causing interference by turning it off. If the interference
stops, it was probably caused by the computer or one of
the peripheral devices.
If your computer system does cause interference to
radio or television reception, try to correct the
interference by using one or more of the following
measures:
 Turn the television or radio antenna until the
interference stops.
 Move the computer to one side or the other of the
television or radio.
 Move the computer farther away from the television or
radio.
 Plug the computer into an outlet that is on a different
circuit from the television or radio. (That is, make
certain the computer and the television or radio are on
circuits controlled by different circuit breakers or
fuses.)
If necessary, consult an Apple Authorized Service
Provider or Apple. See the service and support
information that came with your Apple product. Or,
consult an experienced radio/television technician for
additional suggestions.
Important: Changes or modifications to this product
not authorized by Apple Computer, Inc. could void the
EMC compliance and negate your authority to operate
the product.
This product was tested for FCC compliance under
conditions that included the use of Apple peripheral
devices and Apple shielded cables and connectors
between system components. It is important that you
use Apple peripheral devices and shielded cables and
connectors between system components to reduce the
possibility of causing interference to radios, television
sets, and other electronic devices. You can obtain Apple
peripheral devices and the proper shielded cables and
connectors through an Apple-authorized dealer. For
non-Apple peripheral devices, contact the manufacturer
or dealer for assistance.
Responsible party (contact for FCC matters only):
Apple Computer, Inc., Product Compliance,
1 Infinite Loop M/S 26-A, Cupertino, CA 95014-2084,
408-974-2000.
Industry Canada Statement
Complies with the Canadian ICES-003 Class B
specifications. This device complies with RSS 210 of
Industry Canada.
Cet appareil numérique de la classe B est conforme à la
norme NMB-003 du Canada.
VCCI Class B Statement
Europe—EU Declaration of Conformity
The equipment complies with the RF Exposure
Requirement 1999/519/EC, Council Recommendation of
12 July 1999 on the limitation of exposure of the general
public to electromagnetic fields (0 Hz to 300 GHz). This
equipment meets the following conformance standards:
EN300 328, EN301 893, EN301 489-17, EN60950.
Hereby, Apple Computer, Inc., declares that this device is
in compliance with the essential requirements and other
relevant provisions of Directive 1999/5/EC.
Disposal and Recycling Information
AirPort Extreme has an internal battery. Please dispose
of it according to your local environmental laws and
guidelines. For information about Apple's recycling
program, go to www.apple.com/environment.
California: The coin cell battery in your product
contains perchlorates. Special handling and disposal
may apply. Refer to www.dtsc.ca.gov/hazardouswaste/perchlorate.
European Union—Disposal Information:
This symbol means that according to local laws and
regulations your product should be disposed of
separately from household waste. When this product
reaches its end of life, take it to a collection point
designated by local authorities. Some collection points
accept products for free. The separate collection and
recycling of your product at the time of disposal will
help conserve natural resources and ensure that it is
recycled in a manner that protects human health and
the environment.
Deutschland: Dieses Gerät enthält Batterien. Bitte nicht
in den Hausmüll werfen. Entsorgen Sie dieses Gerätes
am Ende seines Lebenszyklus entsprechend der
maßgeblichen gesetzlichen Regelungen.
Nederlands: Gebruikte batterijen kunnen worden
ingeleverd bij de chemokar of in een speciale
batterijcontainer voor klein chemisch afval (kca) worden
gedeponeerd.
Taiwan:
© 2007 Apple Computer, Inc. All rights reserved.
Apple, the Apple logo, AirPort, AirPort Extreme, Bonjour,
iTunes, Mac, Macintosh, and Mac OS are trademarks of
Apple Computer, Inc., registered in the U.S. and other
countries. AirPort Express is a trademark of Apple
Computer, Inc.
Other product and company names mentioned herein
may be trademarks of their respective companies.
www.apple.com/airport
www.apple.com/support/airport
034-3422-A
Printed in XXXX
iPod shuffle
User Guide
Contents
Chapter 1 3 About iPod shuffle
Chapter 2 4 iPod shuffle Basics
4 iPod shuffle at a Glance
5 Using the iPod shuffle Controls
6 Connecting and Disconnecting iPod shuffle
8 Charging the Battery
Chapter 3 10 Setting Up iPod shuffle
10 About iTunes
11 Importing Music into Your iTunes Library
14 Organizing Your Music
15 Adding Music to iPod shuffle
Chapter 4 20 Listening to Music
20 Playing Music
22 Using the VoiceOver Feature
Chapter 5 26 Storing Files on iPod shuffle
26 Using iPod shuffle as an External Disk
Chapter 6 28 Tips and Troubleshooting
31 Updating and Restoring iPod shuffle Software
Chapter 7 32 Safety and Handling
32 Important Safety Information
34 Important Handling Information
Chapter 8 35 Learning More, Service, and Support
Index 38
1 About iPod shuffle
Congratulations on purchasing iPod shuffle. Read this
chapter to learn about the features of iPod shuffle, how
to use its controls, and more.
To use iPod shuffle, you put songs and other audio files on your computer and then
sync them with iPod shuffle.
Use iPod shuffle to:
 Sync songs and playlists for listening on the go
 Listen to podcasts, downloadable radio-style shows, delivered over the Internet
 Listen to audiobooks purchased from the iTunes Store or audible.com
 Store or back up files and other data, using iPod shuffle as an external disk
What’s New in iPod shuffle
 Apple Earphones with Remote to control iPod shuffle easily while you’re on the go
 Support for multiple playlists and audiobooks
 New VoiceOver feature that announces the song and artist names, a menu of your
playlists, audiobooks, and podcasts, and battery status and other messages
 Improved flexibility with syncing music and other content in iTunes
WARNING: To avoid injury, read all operating instructions in this guide and
the safety information in “Safety and Handling” on page 32 before using
iPod shuffle.
2 iPod shuffle Basics
Read this chapter to learn about the features of
iPod shuffle, how to use its controls, and more.
Your iPod shuffle package includes iPod shuffle, the Apple Earphones with Remote, and
a USB 2.0 cable to connect iPod shuffle to your computer.
iPod shuffle at a Glance
[Illustration callouts: Clip (in back), Volume up, Center button, Volume down, Earphone port, Status light, Three-way switch]
To use the Apple Earphones with Remote:
m Plug the earphones into the earphone port on iPod shuffle. Then place the earbuds
in your ears as shown. Use the buttons on the remote to control playback.
You can purchase other accessories, such as the Apple In-Ear Earphones with Remote
and Mic, and the Apple Earphones with Remote and Mic, at
www.apple.com/ipodstore. The microphone capability isn’t supported on iPod shuffle.
Using the iPod shuffle Controls
The simple three-way switch (OFF, play in order ⁄, or shuffle ¡) on iPod shuffle and
the buttons on the earphone remote make it easy to play songs, audiobooks, and
audio podcasts on iPod shuffle, as described below.
WARNING: Read all safety instructions about avoiding hearing damage on page 33
before use.
The earphone cord is adjustable.
To do this on iPod shuffle:
 Turn iPod shuffle on or off: Slide the three-way switch (green shading on the switch indicates iPod shuffle is on).
 Set the play order: Slide the three-way switch to play in order (⁄) or shuffle (¡).
 Reset iPod shuffle (if iPod shuffle isn’t responding or the status light is solid red): Disconnect iPod shuffle from the computer. Turn iPod shuffle off, wait 10 seconds, and then turn it back on again.
 Find the iPod shuffle serial number: Look under the clip on iPod shuffle. Or, in iTunes (with iPod shuffle connected to your computer), select iPod shuffle under Devices and click the Summary tab.
Connecting and Disconnecting iPod shuffle
Connect iPod shuffle to your computer to sync songs and other audio files, and to
charge the battery. Disconnect iPod shuffle when you’re done.
Important: Use only the USB 2.0 cable that came with iPod shuffle to connect it to your
computer.
Connecting iPod shuffle
To connect iPod shuffle to your computer:
m Plug one end of the included USB cable into the earphone port of iPod shuffle, and the
other end into a USB 2.0 port on your computer.
Note: Charging or syncing iPod shuffle is faster if you connect it to a high-power USB
2.0 port. On most keyboards, the USB port doesn’t provide enough power to charge at
optimal speed.
A longer USB cable is available separately at www.apple.com/ipodstore.
To do this with the earphone remote:
 Play or pause: Click the Center button.
 Change the volume: Click the Volume Up (∂) or Volume Down (D) button.
 Go to the next track: Double-click the Center button.
 Fast-forward: Double-click and hold the Center button.
 Go to the previous track: Triple-click the Center button within 6 seconds of the track starting. Triple-clicking after 6 seconds restarts the current track instead.
 Rewind: Triple-click and hold the Center button.
 Hear song titles, artist names, and playlist names: To hear the current song title and artist name, click and hold the Center button. To listen to your playlist names, keep holding, and release when you hear a tone; then click to select the playlist you want. For more information, see “Using the VoiceOver Feature” on page 22.
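The previous-track timing rule above can be modeled in a short sketch; the function is hypothetical and only restates the 6-second rule from this guide:

```python
# Hypothetical model of the remote's triple-click behavior: within
# 6 seconds of the track starting it goes to the previous track;
# after that it restarts the current track. The 6-second threshold
# comes from this guide; the function itself is illustrative.
def triple_click_result(seconds_into_track: float) -> str:
    if seconds_into_track < 6:
        return "previous track"
    return "restart current track"

print(triple_click_result(2))   # previous track
print(triple_click_result(30))  # restart current track
```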
The first time you connect iPod shuffle to your computer, iTunes helps you configure
iPod shuffle and sync it with your iTunes library. By default, iTunes automatically syncs
songs on iPod shuffle when you connect it to your computer. When iTunes is finished,
you can disconnect iPod shuffle. You can sync songs while your battery is charging.
If you connect iPod shuffle to a different computer and iPod shuffle is set to sync music
automatically, iTunes prompts you before syncing any music. If you click Yes, the songs
and other audio files already on iPod shuffle will be erased and replaced with songs
and other audio files on the computer iPod shuffle is connected to. For information
about adding music to iPod shuffle and using iPod shuffle with more than one
computer, see Chapter 4, “Listening to Music,” on page 20.
Disconnecting iPod shuffle
It’s important not to disconnect iPod shuffle from your computer while audio files are
syncing or when iPod shuffle is being used as an external disk. It’s OK to disconnect
iPod shuffle if the status light is not blinking orange, or if you see the “OK to
disconnect” message at the top of the iTunes window.
Important: If you see the “Do not disconnect” message in iTunes or if the status light
on iPod shuffle is blinking orange, you must first eject iPod shuffle before
disconnecting it. Failing to do so may damage files on iPod shuffle and require you to
restore iPod shuffle in iTunes. For information about restoring, see “Updating and
Restoring iPod shuffle Software” on page 31.
If you enable iPod shuffle for disk use (see page 26), you must always eject iPod shuffle
before disconnecting it.
To eject iPod shuffle:
m Click the Eject (C) button next to iPod shuffle in the list of devices in iTunes.
If you’re using a Mac, you can also eject iPod shuffle by dragging the iPod shuffle icon
on the desktop to the Trash.
If you’re using a Windows PC, you can also eject iPod shuffle in My Computer or by
clicking the Safely Remove Hardware icon in the Windows system tray and selecting
iPod shuffle.
To disconnect iPod shuffle:
m Detach the USB cable from iPod shuffle and from the computer.8 Chapter 2 iPod shuffle Basics
Charging the Battery
iPod shuffle has an internal rechargeable battery that is not user-replaceable. For
best results, let iPod shuffle charge for about three hours before you first use it,
so the battery is fully charged. The battery is 80-percent charged in about two hours and fully charged
in about three hours. If iPod shuffle isn’t used for a while, the battery might need to be
recharged.
You can sync music while the battery is charging. You can disconnect and use
iPod shuffle before it’s fully charged.
In iTunes, the battery icon next to your iPod shuffle name shows the battery status. The
icon displays a lightning bolt when the battery is charging and a plug when the battery
is fully charged.
You can charge the iPod shuffle battery in two ways:
 Connect iPod shuffle to your computer.
 Use the Apple USB Power Adapter, available separately.
To charge the battery using your computer:
m Connect iPod shuffle to a high-power USB 2.0 port on your computer using the
included USB cable. The computer must be turned on and not in sleep mode.
When the battery is charging, the status light on iPod shuffle is solid orange. When the
battery is fully charged, the status light turns green.
If iPod shuffle is being used as an external disk or is syncing with iTunes, the status light
blinks orange to let you know that you must eject iPod shuffle before disconnecting it.
In this case, your battery may be either still charging or fully charged. You can check
the status by viewing the battery icon next to your iPod shuffle name in the list of
devices in iTunes.
If you don’t see the status light, iPod shuffle might not be connected to a high-power
USB 2.0 port. Try another USB 2.0 port on your computer.
If you want to charge the battery when you’re away from your computer, you can
connect iPod shuffle to an Apple USB Power Adapter. To purchase iPod shuffle
accessories, go to www.apple.com/ipodstore.
To charge the battery using the Apple USB Power Adapter:
1 Connect the AC plug adapter to the power adapter (they might already be connected).
2 Plug the USB connector of the USB cable into the power adapter.
3 Connect the other end of the USB cable to iPod shuffle.
4 Plug the power adapter into a working power outlet.
Rechargeable batteries have a limited number of charge cycles. Battery life
and number of charge cycles vary by use and settings. For information, go to
www.apple.com/batteries.
Checking the Battery Status
You can check the battery status of iPod shuffle when it’s connected to your computer
or disconnected. The status light tells you approximately how much charge is in the
battery.
If iPod shuffle is on and not connected to a computer, check the battery status without
interrupting playback by quickly turning iPod shuffle off and then on again. You can
also use VoiceOver to hear battery status information.
WARNING: Make sure the power adapter is fully assembled before plugging it into a
power outlet. Read all safety instructions about using the Apple USB Power Adapter
on page 33 before use.
[Figure: Apple USB Power Adapter (your adapter may look different) and the iPod shuffle USB cable]
Status light when connected:
Solid green: Fully charged
Solid orange: Charging
Blinking orange: Do not disconnect (iTunes is syncing, or iPod shuffle is enabled for disk use); may be still charging or may be fully charged

Status light when disconnected (with VoiceOver announcement):
Solid green: Good charge (“Battery full,” “Battery 75%,” or “Battery 50%”)
Solid orange: Low charge (“Battery 25%”)
Solid red: Very low charge (“Battery low”)
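The status-light meanings above depend on whether iPod shuffle is connected; a minimal sketch (illustrative Python, not Apple code) makes the two lookup tables explicit:

```python
# Status-light meanings while iPod shuffle is connected to a computer.
CONNECTED = {
    "solid green": "fully charged",
    "solid orange": "charging",
    "blinking orange": "do not disconnect; may be charging or fully charged",
}

# Status-light meanings while iPod shuffle is disconnected.
DISCONNECTED = {
    "solid green": "good charge",
    "solid orange": "low charge",
    "solid red": "very low charge",
}

def light_meaning(color: str, connected: bool) -> str:
    """Look up what a status-light color indicates in each state."""
    table = CONNECTED if connected else DISCONNECTED
    return table.get(color, "unknown")
```

Note that solid orange means "charging" when connected but "low charge" when disconnected, which is why the two tables are kept separate.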
3 Setting Up iPod shuffle
To set up iPod shuffle, you use iTunes on your computer
to import, buy, and organize your music, audio podcasts,
and audiobooks. Then you connect iPod shuffle to your
computer and sync it to your iTunes library.
Read on to learn more about the steps in this process, including:
 Getting music from your CD collection, hard disk, or the iTunes Store (part of iTunes
and available in some countries only) into the iTunes application on your computer
 Organizing your music and other audio into playlists
 Syncing songs, audiobooks, and podcasts (free downloadable radio-style shows) in
your iTunes library with iPod shuffle
 Listening to music or other audio on the go
About iTunes
iTunes is the software application you use to sync music, audiobooks, and audio
podcasts with iPod shuffle. To download iTunes version 8.1 or later (required for
iPod shuffle), go to www.apple.com/ipod/start. After you install iTunes, it opens
automatically when you connect iPod shuffle to your computer.
This chapter explains how to use iTunes to download songs and other audio to your
computer, create personal compilations of your favorite songs (called playlists), sync
iPod shuffle, and adjust iPod shuffle settings.
iTunes also has a feature called Genius, which creates instant playlists of songs from
your iTunes library that go great together. You can create Genius playlists in iTunes and
sync them to iPod shuffle. To learn how to set up Genius in iTunes, see “Using Genius in
iTunes” on page 14.
iTunes has many other features. You can burn your own CDs that play in standard CD
players (if your computer has a recordable CD drive); listen to streaming Internet radio;
watch videos and TV shows; rate songs according to preference; and much more. For
information about using these features, open iTunes and choose Help > iTunes Help.
If you already have iTunes installed on your computer and you’ve set up your iTunes
library, you can skip to the next section, “Adding Music to iPod shuffle” on page 15.
Importing Music into Your iTunes Library
To listen to music on iPod shuffle, you first need to get that music into your iTunes
library on your computer.
There are three ways to get music and other audio into your iTunes library:
 Purchase music and audiobooks or download podcasts online from the iTunes Store.
 Import music and other audio from audio CDs.
 Add music and other audio that’s already on your computer.
Buying Songs and Downloading Podcasts Using the iTunes Store
If you have an Internet connection, you can easily purchase and download songs,
albums, and audiobooks online using the iTunes Store. You can also subscribe to and
download free radio-style audio podcasts. Video podcasts can’t be synced to
iPod shuffle.
To purchase music online using the iTunes Store, you set up an Apple account in
iTunes, find the songs you want, and then buy them. If you already have an Apple
account, or if you have an America Online (AOL) account (available in some countries
only), you can use that account to sign in to the iTunes Store and buy songs.
You don’t need an iTunes Store account to download or subscribe to podcasts.
To sign in to the iTunes Store:
m Open iTunes and then:
 If you already have an iTunes account, choose Store > Sign In, and then sign in.
 If you don’t already have an iTunes account, choose Store > Create Account and follow
the onscreen instructions to set up an Apple account or enter your existing Apple or
AOL account information.
You can browse or search the iTunes Store to find the album, song, or artist you’re
looking for. Open iTunes and click iTunes Store in the list on the left.
 To browse the iTunes Store, choose a category (for example, Music) on the left side of
the iTunes Store home page. You can choose a genre, look at new releases, click one
of the featured songs, look at Top Songs and more, or click Browse under Quick Links
in the main iTunes Store window.
 To browse for podcasts, click the Podcasts link on the left side of the iTunes Store
home page.
 To search the iTunes Store, type the name of an album, song, artist, or composer in the
search field. Press Return or choose an item from the list that appears.
 To narrow your search results, choose an item from the pop-up menu in the upper left
(the default is All Results). For example, to narrow your search to songs and albums,
choose Music from the pop-up menu.
 To search for a combination of items, click Power Search in the Search Results page.
 To return to the home page of the iTunes Store, click the Home button in the status line
at the top of the page.
To buy a song, album, or audiobook:
1 Select iTunes Store, and then find the item you want to buy.
You can double-click a song or other item to listen to a portion of it and make sure it’s
what you want. (If your network connection is slower than 128 kbps, choose iTunes >
Preferences, and in the Store pane, select “Load complete preview before playing.”)
2 Click Buy Song, Buy Album, or Buy Book.
The item is downloaded to your computer and charged to the credit card listed in your
Apple or AOL account.
To download or subscribe to a podcast:
1 Select iTunes Store.
2 Click the Podcasts link on the left side of the home page in the iTunes Store.
3 Browse for the podcast you want to download.
 To download a single podcast episode, click the Get Episode button next to the
episode.
 To subscribe to a podcast, click the Subscribe button next to the podcast graphic.
iTunes downloads the most recent episode. As new episodes become available,
they’re automatically downloaded to iTunes when you connect to the Internet.
Adding Songs Already on Your Computer to Your iTunes Library
If you have songs on your computer encoded in file formats that iTunes supports, you
can easily add the songs to iTunes.
To add songs on your computer to your iTunes library:
m Drag the folder or disk containing the audio files to your iTunes library (or choose File >
Add to Library and select the folder or disk). If iTunes supports the song file format, the
songs are automatically added to your iTunes library.
You can also drag individual song files to iTunes.
Note: Using iTunes for Windows, you can convert nonprotected WMA files to AAC or
MP3 format. This can be useful if you have a library of music encoded in WMA format.
For more information, open iTunes and choose Help > iTunes Help.
Importing Music from Your Audio CDs into iTunes
Follow these instructions to get music from your CDs into iTunes.
To import music from an audio CD into iTunes:
1 Insert a CD into your computer and open iTunes.
If you have an Internet connection, iTunes gets the names of the songs on the CD from
the Internet (if available) and lists them in the window.
If you don’t have an Internet connection, you can import your CDs, and later, when you’re
connected to the Internet, select the songs in iTunes and then choose Advanced > Get
CD Track Names. iTunes will bring in the track names for the imported CDs.
If the CD track names aren’t available online, you can enter the names of the songs
manually. See the following section, “Entering Names of Songs and Other Details.”
With song information entered, you can browse for songs in iTunes by title, artist,
album, and more.
2 Click to remove the checkmark next to any song you don’t want to import.
3 Click the Import button. The display area at the top of the iTunes page shows how long
it will take to import each song.
By default, iTunes plays songs as they’re imported. If you’re importing a lot of songs,
you might want to stop the songs from playing to improve performance.
4 To eject the CD, click the Eject (C) button.
You can’t eject a CD until the import is done.
5 Repeat these steps for any other CDs with songs you want to import.
Entering Names of Songs and Other Details
You can manually enter song titles and other information, including comments, for
songs and other items in your iTunes library.
To enter CD song names and other information manually:
1 Select the first song on the CD and choose File > Get Info.
2 Click Info.
3 Enter the song information.
4 Click Next to enter information for the next track.
5 Click OK when you finish.
Organizing Your Music
Using iTunes, you can organize songs and other items into lists, called playlists, in any
way you want. For example, you can create playlists with songs to listen to while
exercising, or playlists with songs for a particular mood.
You can also create Smart Playlists that update automatically based on rules you define.
When you add songs to iTunes that match the rules, they get added automatically to
the Smart Playlist. You can also pick a song and use the Genius feature to create a
playlist for you (see the next section for more information). You can’t create a playlist on
iPod shuffle when it’s disconnected from iTunes.
You can create as many playlists as you like, using any of the songs in your iTunes
library. Changes you make to any of your playlists in iTunes, such as adding or
removing songs, won’t change the contents of your iTunes library.
When you listen to playlists on iPod shuffle, all playlists created in iTunes behave the
same way. You can choose them by name on your iPod shuffle.
To create a playlist in iTunes:
1 Click the Add (∂) button or choose File > New Playlist.
2 Type a name for the playlist.
3 Click Music in the Library list, and then drag a song or other item to the playlist.
To select multiple songs, hold down the Shift key or the Command (x) key on a Mac,
or the Shift key or the Control key on a Windows PC, as you click each song.
To create a Smart Playlist:
m Choose File > New Smart Playlist and define the rules for your playlist.
Smart playlists created in iTunes can be synced to iPod shuffle like any other iTunes
playlist.
Using Genius in iTunes
Genius automatically creates playlists containing songs in your library that go great
together. To play Genius playlists on iPod shuffle, you first need to set up Genius in
iTunes. Genius is a free service, but you need an iTunes Store account (if you don’t have
one, you can set one up when you turn on Genius).
To set up Genius:
1 In iTunes, choose Store > Turn on Genius.Chapter 3 Setting Up iPod shuffle 15
2 Follow the onscreen instructions.
3 Connect and sync iPod shuffle.
You can now use Genius to create a Genius playlist that you can sync to iPod shuffle.
To create a Genius playlist in iTunes:
1 Click Music in the Library list or select a playlist.
2 Select a song.
3 Click the Genius button at the bottom of the iTunes window.
4 To change the maximum number of songs included in the playlist, choose a number
from the pop-up menu.
5 To save the playlist, click Save Playlist. You can change a saved playlist by adding or
removing items. You can also click Refresh to create a new playlist based on the same
original song.
Genius playlists created in iTunes can be synced to iPod shuffle like any other iTunes
playlist.
Adding Music to iPod shuffle
After your music is imported and organized in iTunes, you can easily add it to
iPod shuffle.
To set how music is added from iTunes on your computer to iPod shuffle, you connect
iPod shuffle to your computer, and then use iTunes preferences to choose iPod shuffle
settings.
You can set iTunes to add music to iPod shuffle in three ways:
 Sync songs and playlists: When you connect iPod shuffle, it’s automatically updated to
match the songs and other items in your iTunes library. You can sync all songs and
playlists or selected playlists. Any other songs on iPod shuffle are deleted. See the
following section for more information.
 Manually add music to iPod shuffle: When you connect iPod shuffle, you can drag
songs and playlists individually to iPod shuffle, and delete songs and playlists
individually from iPod shuffle. Using this option, you can add songs from more than
one computer without erasing songs from iPod shuffle. When you manage music
yourself, you must always eject iPod shuffle from iTunes before you can disconnect it.
See “Managing iPod shuffle Manually” on page 17.
 Autofill iPod shuffle: When you choose to manually manage content on iPod shuffle,
you can have iTunes automatically fill iPod shuffle with a selection of songs and other
content that you specify. See “Autofilling iPod shuffle” on page 18.
Syncing Music Automatically
By default, iPod shuffle is set to sync all songs and playlists when you connect it to your
computer. This is the simplest way to add music to iPod shuffle. You just connect
iPod shuffle to your computer, let it add songs, audiobooks, and audio podcasts
automatically, and then disconnect it and go. If you added any songs to iTunes since
the last time you connected iPod shuffle, they’re synced with iPod shuffle. If you
deleted songs from iTunes, they’re removed from iPod shuffle.
To sync music with iPod shuffle:
m Simply connect iPod shuffle to your computer. If iPod shuffle is set to sync
automatically, the update begins.
Important: The first time you connect iPod shuffle to a computer, a message asks if you
want to sync songs automatically. If you accept, all songs, audiobooks, and podcasts
are erased from iPod shuffle and replaced with songs and other items from that
computer. If you don’t accept, you can still add songs to iPod shuffle manually without
erasing any of the songs already on iPod shuffle.
While music is being synced from your computer to iPod shuffle, the iTunes status
window shows progress, and you see a sync icon next to iPod shuffle in the list of
devices. When the update is done, a message in iTunes says “iPod update is complete.”
If, during iPod shuffle setup, you didn’t choose to automatically sync music to
iPod shuffle, you can do it later. You can sync all songs and playlists, or just selected
playlists.
To set up iTunes to automatically sync music with iPod shuffle:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the list of devices, and then click the Music tab.
3 Select “Sync music.”
4 Choose “All songs and playlists,” or choose “Selected playlists” and then select the
playlists you want to sync.
You can sync audiobooks when you sync music. Audiobooks appear in the list of
selected playlists. You can choose to sync all or none of the audiobooks in your iTunes
library.
5 Click Apply.
The update begins automatically.
If “Sync only checked songs” is selected in the Summary pane, iTunes syncs only items
that are checked in your Music and other libraries.
Syncing Podcasts Automatically
The settings for adding podcasts to iPod shuffle are unrelated to the settings for
adding songs. Podcast settings don’t affect song settings, and vice versa. You can set
iTunes to automatically sync all podcasts or selected podcasts, or you can add podcasts
to iPod shuffle manually. You can’t sync video podcasts to iPod shuffle.
To set iTunes to update the podcasts on iPod shuffle automatically:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the list of devices, and then click the Podcasts tab.
3 Select “Sync ... episodes of” and choose the number of episodes you want from the
pop-up menu.
4 Select “All podcasts,” or “Selected podcasts.” If you click “Selected podcasts,” also select
the podcasts that you want to sync.
5 Click Apply.
When you set iTunes to sync podcasts automatically, iPod shuffle is updated each time
you connect it to your computer.
Managing iPod shuffle Manually
Setting iTunes to let you manage iPod shuffle manually gives you the most flexibility
for managing music and other content on iPod shuffle. You can add and remove
individual songs, playlists, podcasts, and audiobooks. You can add music and other
audio content from multiple computers to iPod shuffle without erasing items already
on iPod shuffle.
To set iTunes to let you manage audio content on iPod shuffle manually:
1 In iTunes, select iPod shuffle in the list of devices, and then click the Summary tab.
2 In the Options section, select “Manually manage music.”
3 Click Apply.
When you manage iPod shuffle manually, you must always eject iPod shuffle from
iTunes before you disconnect it.
To add a song or other item to iPod shuffle:
1 Click Music or another Library item in iTunes.
2 Drag a song or other item to iPod shuffle.
You can also drag entire playlists to sync them with iPod shuffle. You can select
multiple items and drag them all at once to iPod shuffle.
To remove a song or other item from iPod shuffle:
1 In iTunes, select iPod shuffle in the list of devices.
2 Select Music, Audiobooks, or Podcasts under iPod shuffle.
3 Select a song or other item and press the Delete or Backspace key on your keyboard.
To use iTunes to create a new playlist on iPod shuffle:
1 In iTunes, select iPod shuffle in the list of devices, and then click the Add (∂) button or
choose File > New Playlist.
2 Type a name for the playlist.
3 Click an item, such as Music, in the Library list, and then drag songs or other items to
the playlist.
To add songs to or remove songs from a playlist on iPod shuffle:
m Drag a song to a playlist on iPod shuffle to add the song. Select a song in a playlist and
press the Delete key on your keyboard to delete the song.
Keep these points in mind if you manually manage your content on iPod shuffle:
 If you make changes to any of your playlists, remember to drag the changed playlist
to iPod shuffle when it’s connected to iTunes.
 If you remove a song or other item from iPod shuffle, it isn’t deleted from your iTunes
library.
 If you set iTunes to manage music manually, you can reset it later to sync
automatically. For information, see page 16.
Autofilling iPod shuffle
If you manually manage music, you can have iTunes automatically sync a selection of
your songs onto iPod shuffle when you click the Autofill button. You can choose your
entire library or a specific playlist to get songs from, and set other Autofill options.
Using Autofill gives you more control over the content that gets added to iPod shuffle
than automatically syncing, and lets you quickly “top off” your iPod shuffle when you
manually manage the contents.
To autofill music onto iPod shuffle:
1 Connect iPod shuffle to your computer.
2 Select Music under iPod shuffle in the list of devices.
3 Choose the playlist you want to autofill from using the “Autofill from” pop-up menu.
To autofill music from your entire library, choose Music.
4 Click the Settings button to select from the following options:
Replace all items when Autofilling: iTunes replaces the songs on iPod shuffle with the
new songs you’ve chosen. If this option isn’t selected, songs you’ve already synced with
iPod shuffle remain and iTunes selects more songs to fill the available space.
Choose items randomly: iTunes shuffles the order of songs as it syncs them with
iPod shuffle. If this option isn’t selected, iTunes downloads songs in the order they
appear in your library or selected playlist.
Choose higher rated items more often: iTunes autofills iPod shuffle, giving preference to
songs that you’ve rated with a higher number of stars.
5 To reserve space for disk use, adjust the slider to set how much space to reserve for
iTunes content and how much for data.
For more information about using iPod shuffle as a hard disk, see “Using iPod shuffle as
an External Disk” on page 26.
6 Click OK in the Autofill Settings dialog, and then click Autofill in the iTunes window.
While music is being synced from iTunes to iPod shuffle, the iTunes status window
shows the progress. When the autofill is done, a message in iTunes says “iPod update is
complete.”
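One way the Autofill options described above could combine is sketched below (illustrative Python, not Apple's implementation; song sizes and star ratings are hypothetical inputs):

```python
import random

def autofill(songs, free_mb, prefer_rated=True, randomize=True):
    """songs: list of (title, size_mb, stars 0-5). Return the titles that
    fit in free_mb, walking the (possibly shuffled) list in order."""
    pool = list(songs)
    if randomize:
        if prefer_rated:
            # Weighted shuffle: a song with more stars tends to sort
            # earlier (Efraimidis-Spirakis keys, weight = stars + 1).
            pool.sort(key=lambda s: random.random() ** (1.0 / (s[2] + 1)),
                      reverse=True)
        else:
            random.shuffle(pool)
    chosen, used = [], 0.0
    for title, size_mb, _ in pool:
        if used + size_mb <= free_mb:
            chosen.append(title)
            used += size_mb
    return chosen
```

With randomize off, songs are taken in library order until the space runs out, mirroring the "Choose items randomly" option being deselected.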
Fitting More Songs onto iPod shuffle
If you’ve imported songs into iTunes at higher bit-rate formats, such as iTunes Plus,
Apple Lossless, or WAV, you can set iTunes to automatically convert songs to 128 kbps
AAC files as they’re synced with iPod shuffle. This doesn’t affect the quality or size of
the songs in iTunes.
Note: Songs in formats not supported by iPod shuffle must be converted if you want to
sync them with iPod shuffle. For more information about formats supported by
iPod shuffle, see “If you can’t sync a song or other item onto iPod shuffle” on page 29.
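The space savings are straightforward arithmetic; a back-of-the-envelope sketch (our own estimate, not figures from the manual):

```python
def song_size_mb(minutes: float, kbps: int) -> float:
    """Approximate file size: kbps x 1000 bits/s, 8 bits per byte,
    1 MB taken as 10**6 bytes."""
    return kbps * 1000 * minutes * 60 / 8 / 1_000_000

# A 4-minute track at 128 kbps AAC comes to about 3.84 MB; the same track
# in a lossless format at, say, 700 kbps (a typical figure, assumed here)
# would be roughly 21 MB, so conversion fits several times as many songs.
```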
To convert higher bit-rate songs to AAC files:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the list of devices.
3 Click the Summary tab.
4 Select “Convert higher bit rate songs to 128 kbps AAC.”
5 Click Apply.

4 Listening to Music
After you set up iPod shuffle, you can listen to songs,
audiobooks, and podcasts. Read this chapter to learn
about listening to iPod shuffle on the go.
When you disconnect iPod shuffle from your computer, you can clip on iPod shuffle
and listen to music, audiobooks, and podcasts, while controlling playback with the
earphone remote. VoiceOver lets you hear the name of the song you’re playing, choose
from a spoken menu of playlists, or get battery status.
Playing Music
After you sync iPod shuffle with music and other audio content, you can listen to it.
To listen to songs and other items on iPod shuffle:
1 Plug the earphones into iPod shuffle and place the earbuds in your ears.
2 Slide the three-way switch on iPod shuffle from OFF to play in order (⁄) or
shuffle (¡).
Playback begins. If iPod shuffle is on when you plug in earphones, playback doesn’t
start automatically. Click the Center button on the remote or slide the three-way
switch off and on again to start playback.
To preserve battery life when you aren’t using iPod shuffle, slide the three-way
switch to OFF.
When you plug in earphones, wait until the green status light turns off before
clicking buttons on the remote. Refer to the table that follows for information about
controlling playback with the earphone remote.
WARNING: Read all safety instructions about avoiding hearing damage on page 33
before use.
The status light on iPod shuffle blinks in response when you click the buttons on the
earphone remote.
Setting iPod shuffle to Play Songs in Order or Shuffle Songs
You can set iPod shuffle to shuffle songs or play them in the order in which they’re
organized in iTunes. You hear a tone when you slide the three-way switch.
To set iPod shuffle to play songs in order:
m Slide the three-way switch to play in order (⁄).
After the last song plays, iPod shuffle starts playing the first song again.
Important: When you listen to audiobooks or podcasts, slide the three-way switch to
play in order so chapters or episodes play in the recorded order.
To set iPod shuffle to shuffle:
m Slide the three-way switch to shuffle (¡).
To reshuffle songs, slide the three-way switch from shuffle (¡) to play in order (⁄)
and back to shuffle again.
Play: Click the Center button once. (Status light: blinks green once.)
Pause: Click the Center button once. (Status light: blinks green for 30 seconds.)
Change the volume: Click the Volume Up (∂) or Volume Down (D) button to increase or decrease the volume. You hear a tone when you change the volume while iPod shuffle is paused. (Status light: blinks green for each volume increment; blinks orange three times when the upper or lower volume limit is reached.)
Go to the next track (or audiobook chapter): Double-click the Center button. (Status light: blinks green once.)
Go to the previous track (or audiobook chapter): Triple-click the Center button within 6 seconds of the track starting. To restart the current track, triple-click after 6 seconds. (Status light: blinks green once.)
Fast-forward: Double-click and hold the Center button. (Status light: blinks green once.)
Rewind: Triple-click and hold the Center button. (Status light: blinks green once.)
Hear song title and artist names: Click and hold the Center button. (Status light: blinks green once.)
Hear playlist menu: Click and hold the Center button until you hear a tone, and then release to hear the playlist menu. When you hear the name of the playlist you want, click to select it. You can click ∂ or D to move quickly through the playlist menu. (Status light: blinks green once.)
Exit the playlist menu: Click and hold the Center button. (Status light: blinks green once.)
Using the VoiceOver Feature
iPod shuffle can provide more control over your playback options by speaking your
song titles and artist names, and announcing a menu of playlists for you to choose
from. VoiceOver also tells you battery status and other messages. VoiceOver is available
in selected languages.
To hear these announcements, install the VoiceOver Kit and enable the VoiceOver
feature in iTunes. You can enable VoiceOver when you first set up iPod shuffle, or you
can do it later.
You set VoiceOver options on the Summary tab in iTunes. The following sections
describe how to turn on and customize this feature.
To enable VoiceOver when you set up iPod shuffle:
1 Connect iPod shuffle to your computer.
2 Follow the onscreen instructions in iTunes. Enable VoiceOver is selected by default.
3 Click Continue, and then follow the onscreen instructions to download and install the
VoiceOver Kit.
4 In the Summary tab, under Voice Feedback, choose the language you want from the
Language pop-up menu.
This sets the language for your spoken system messages and playlist names, as well as
many of the song titles and artist names.
Note: To pick a different language for specific songs, select them in iTunes, choose
File > Get Info, choose a VoiceOver language from the pop-up menu on the Options
tab, and then click OK.
5 Click Apply.
When setup is complete, VoiceOver is enabled on iPod shuffle.
To enable VoiceOver at a later time:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the list of devices, and click the Summary tab.
3 Under Voice Feedback, select Enable VoiceOver.
4 Click Apply.
5 Follow the onscreen instructions to download and install the VoiceOver Kit.
6 Choose the language you want from the pop-up menu under Voice Feedback.
7 Click Apply.
When syncing is finished, VoiceOver is enabled.
To disable VoiceOver:
1 In iTunes, select iPod shuffle in the list of devices, and click the Summary tab.
2 Under Voice Feedback, click to deselect Enable VoiceOver.
3 Click Apply.
When syncing is finished, VoiceOver is disabled. You’ll still hear some system
announcements in English on iPod shuffle, such as battery status, error messages, and a
generic numbered playlist menu. You won’t hear song titles and artist names.
Hearing Song Announcements
The VoiceOver feature can speak the current song title and artist name while you’re
listening to iPod shuffle. If you don’t want to hear song titles and artist names, you can
disable VoiceOver in iTunes (see “Using the VoiceOver Feature” on page 22).
To hear the current song announcement:
m Click and hold the Center button on the remote.
You hear the current song title and artist name. If you’re listening to an audiobook, you
hear the book title.
You can use VoiceOver to navigate to another song when you’re listening to song
announcements.
To navigate using song announcements:
 If iPod shuffle is playing, click and hold the Center button to hear the current song
announcement; double-click to hear the next announcement while the next song
plays; triple-click to hear the previous announcement while the previous song plays.
 If iPod shuffle is paused, click and hold the Center button to hear the current song
announcement; double-click to hear the next announcement; triple-click to hear the
previous announcement. Press the Center button to play the announced song.
Using the Playlist Menu
When VoiceOver is enabled, you can choose from a spoken menu to listen to any
playlist you’ve synced from iTunes to iPod shuffle. If audiobooks and audio podcasts are
synced to iPod shuffle, their titles are also read as part of the playlist menu. If VoiceOver
is disabled in iTunes, you hear an abbreviated menu of playlists in numbered order, but
not by name (for example, “Playlist 1, Playlist 2,” and so on).
The playlist menu announces items in this order:
 The current playlist (if applicable)
 “All Songs” (default playlist of all the songs on iPod shuffle)
 Any remaining playlists in order
 “Podcasts” (if you choose this, you go to the first podcast in your list; you can
navigate from there to other podcasts)
 Audiobooks (each audiobook title is a separate playlist announcement)
To choose an item from the playlist menu:
1 Click and hold the Center button on the remote.
2 Continue holding after you hear the current song announcement, until you hear a tone.
3 Release the Center button at the tone. You hear the names of your playlists.
When you’re listening to the playlist menu, you can click the Volume Up (+) or Volume
Down (–) button to move forward or backward in the playlist menu.
4 When you hear the name of the playlist you want, click the Center button to select it.
You hear a tone, and then the first item in your playlist plays.
To restart a playlist, follow these steps to select the playlist you want.
To exit from the playlist menu:
m Click and hold the Center button on the remote.
Setting Songs to Play at the Same Volume Level
The loudness of songs and other audio may vary depending on how the audio was
recorded or encoded. You can set iTunes to automatically adjust the volume of songs
so they play at the same relative volume level, and you can set iPod shuffle to use
those same iTunes volume settings.
To set iTunes to play songs at the same volume level:
1 In iTunes, choose iTunes > Preferences if you’re using a Mac, or choose Edit >
Preferences if you’re using a Windows PC.
2 Click Playback and select Sound Check.
To set iPod shuffle to use the iTunes volume settings:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the list of devices.
3 Click the Summary tab.
4 Select Enable Sound Check.
5 Click Apply.
If you haven’t turned on Sound Check in iTunes, setting it on iPod shuffle has no effect.
Setting a Volume Limit
You can set a limit for the volume on iPod shuffle. You can also set a password in iTunes
to prevent anyone else from changing this setting.
If you’ve set a volume limit on iPod shuffle, the status light blinks orange three times if
you try to increase the volume beyond the limit.
To set a volume limit for iPod shuffle:
1 Set iPod shuffle to the desired maximum volume.
2 Connect iPod shuffle to your computer.
3 In iTunes, select iPod shuffle in the list of devices, and then click the Summary tab.
4 Select “Limit maximum volume.”
5 Drag the slider to the desired maximum volume.
The initial slider setting shows the volume iPod shuffle was set to when you selected
the “Limit maximum volume” checkbox.
6 To require a password to change this setting, click the lock and then enter and verify a
password.
If you set a password, you must enter it before you can change or remove the volume
limit.
Note: The volume level may vary if you use different earphones or headphones.
To remove the volume limit:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the list of devices.
3 Click the Summary tab.
4 Deselect “Limit maximum volume.”
Enter the password, if required.
If you forget the password, you can restore iPod shuffle. See “Updating and Restoring
iPod shuffle Software” on page 31.
5 Storing Files on iPod shuffle
Use iPod shuffle to carry your data as well as your music.
Read this chapter to find out how to use iPod shuffle as an external disk.
Using iPod shuffle as an External Disk
You can use iPod shuffle as an external disk to store data files.
To sync iPod shuffle with music and other audio that you want to listen to, you must
use iTunes. You can’t play audio files that you’ve copied to iPod shuffle using the
Macintosh Finder or Windows Explorer.
To enable iPod shuffle as an external disk:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the list of devices.
3 Click the Summary tab.
4 In the Options section, select “Enable disk use.”
You may need to scroll down to see the disk settings.
5 Click Apply.
When you set iTunes to autofill iPod shuffle, you can reserve space for disk use. See
“Autofilling iPod shuffle” on page 18.
When you use iPod shuffle as an external disk, the iPod shuffle disk icon appears on the
desktop on a Mac, or as the next available drive letter in Windows Explorer on a
Windows PC.
When iPod shuffle is enabled as a hard disk and you connect it to your computer, the
status light blinks orange continuously. Be sure to eject iPod shuffle in iTunes before
you disconnect it from your computer.
Transferring Files Between Computers
When you enable disk use on iPod shuffle, you can transfer files from one computer to
another. iPod shuffle is formatted as a FAT-32 volume, which is supported by both Macs
and PCs. This allows you to use iPod shuffle to transfer files between computers with
different operating systems.
To transfer files between computers:
1 After enabling disk use on iPod shuffle, connect it to the computer you want to get the
files from.
Important: If iPod shuffle is set to sync automatically, when you connect iPod shuffle to
a different computer or user account, a message asks if you want to erase iPod shuffle
and sync with the new iTunes library. Click Cancel if you don’t want to erase what’s on
iPod shuffle.
2 Using the computer’s file system (the Finder on a Mac, Windows Explorer on a PC), drag
the files to your iPod shuffle.
3 Disconnect iPod shuffle, and then connect it to the other computer.
Again, click Cancel if you don’t want to erase what’s on iPod shuffle.
4 Drag the files from iPod shuffle to a location on the other computer.
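Because the volume is plain FAT-32, the drag-and-drop steps above are ordinary file copies. A minimal sketch in Python (the mount point /Volumes/IPOD is hypothetical, and a temporary directory stands in for the device here so the sketch runs without one attached):

```python
import shutil
import tempfile
from pathlib import Path

# Stand-in for the mounted iPod shuffle volume. On a real Mac it might
# appear at /Volumes/IPOD (hypothetical name); a temp directory is used
# here so the sketch runs without a device attached.
ipod = Path(tempfile.mkdtemp(prefix="IPOD_"))

# A file to carry from the first computer.
src = Path(tempfile.mkdtemp()) / "notes.txt"
src.write_text("project notes")

# Step 2: drag (copy) the file onto the iPod shuffle volume.
shutil.copy(src, ipod / src.name)

# Step 4: on the other computer, copy it back off the volume.
dest = Path(tempfile.mkdtemp()) / src.name
shutil.copy(ipod / src.name, dest)

print(dest.read_text())  # prints "project notes"
```

The same round trip works from Windows Explorer or any shell, since no iTunes-specific format is involved for data files.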
Preventing iTunes from Opening Automatically
You can keep iTunes from opening automatically when you connect iPod shuffle to
your computer.
To prevent iTunes from opening automatically:
1 Connect iPod shuffle to your computer.
2 In iTunes, select iPod shuffle in the list of devices.
3 Click the Summary tab.
4 In the Options section, deselect “Open iTunes when this iPod is connected.”
5 Click Apply.
6 Tips and Troubleshooting
Most problems with iPod shuffle can be solved quickly by
following the advice in this chapter.
If the status light glows red persistently or you hear the error message “Please use
iTunes to restore”
Connect iPod shuffle to your computer and restore it in iTunes. See “Updating and
Restoring iPod shuffle Software” on page 31.
If iPod shuffle won’t turn on or respond
 Connect iPod shuffle to a high-power USB 2.0 port on your computer. Your
iPod shuffle battery may need to be recharged.
 Turn iPod shuffle off, wait 10 seconds, and then turn it on again.
 You may need to restore iPod shuffle software. See “Updating and Restoring
iPod shuffle Software” on page 31.
The 5 Rs: Reset, Retry, Restart, Reinstall, Restore
Remember these five basic suggestions if you have a problem with iPod shuffle. Try
these steps one at a time until the problem is resolved. If one of the following doesn’t
help, read on for solutions to specific problems.
 Reset iPod shuffle by turning it off, waiting 10 seconds, and then turning it back on
again.
 Retry with a different USB 2.0 port if you cannot see iPod shuffle in iTunes.
 Restart your computer, and make sure you have the latest software updates
installed.
 Reinstall iTunes software from the latest version on the web.
 Restore iPod shuffle. See “Updating and Restoring iPod shuffle Software” on
page 31.
If iPod shuffle isn’t playing music
 iPod shuffle might not have any music on it. If you hear the message “Please use
iTunes to sync music,” connect iPod shuffle to your computer to sync music to it.
 Slide the three-way switch off and then on again.
 Make sure the earphone or headphone connector is pushed in all the way.
 Make sure the volume is adjusted properly. A volume limit might be set. See “Setting
a Volume Limit” on page 25.
 iPod shuffle might be paused. Try clicking the Center button on the earphone
remote.
If you connect iPod shuffle to your computer and nothing happens
 Connect iPod shuffle to a high-power USB 2.0 port on your computer. Your
iPod shuffle battery may need to be recharged.
 Make sure you’ve installed the latest iTunes software from
www.apple.com/ipod/start.
 Try connecting the USB cable to a different USB 2.0 port on your computer. Make
sure the USB cable is firmly connected to iPod shuffle and to the computer. Make sure
the USB connector is oriented correctly. It can be inserted only one way.
 iPod shuffle might need to be reset. Turn iPod shuffle off, wait 10 seconds, and then
turn it back on again.
 If iPod shuffle doesn’t appear in iTunes or the Finder, the battery may be completely
discharged. Let iPod shuffle charge for several minutes to see if it comes back to life.
 Make sure you have the required computer and software. See “If you want to double-check the system requirements” on page 30.
 Try restarting your computer.
 You might need to restore iPod software. See “Updating and Restoring iPod shuffle
Software” on page 31.
 iPod shuffle may need to be repaired. You can arrange for service on the iPod shuffle
Service & Support website at www.apple.com/support/ipodshuffle/service.
If you can’t sync a song or other item onto iPod shuffle
The song might have been encoded in a format that iPod shuffle doesn’t support. The
following audio file formats are supported by iPod shuffle. These include formats for
audiobooks and podcasts:
 AAC (M4A, M4B, M4P) (up to 320 kbps)
 Apple Lossless (a high-quality compressed format)
 MP3 (up to 320 kbps)
 MP3 Variable Bit Rate (VBR)
 WAV
 AA (audible.com spoken word, formats 2, 3, and 4)
 AIFF
A song encoded using Apple Lossless format has near full CD-quality sound, but takes
up only about half as much space as a song encoded using AIFF or WAV format. The
same song encoded in AAC or MP3 format takes up even less space. When you import
music from a CD using iTunes, it’s converted to AAC format by default.
You can have iPod shuffle automatically convert files encoded at higher bit rates to 128
kbps AAC files as they’re synced with iPod shuffle. See “Fitting More Songs onto
iPod shuffle” on page 19.
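The space savings from the 128 kbps conversion are easy to estimate, since file size scales roughly linearly with bit rate. A quick sketch (the figures are estimates that ignore container overhead and metadata):

```python
def audio_size_mb(bitrate_kbps: float, minutes: float) -> float:
    """Approximate audio file size in megabytes: bits per second
    divided by 8, times duration in seconds."""
    return bitrate_kbps * 1000 / 8 * minutes * 60 / 1_000_000

# A 4-minute song at 256 kbps versus the 128 kbps AAC conversion.
print(audio_size_mb(256, 4))  # 7.68 (MB)
print(audio_size_mb(128, 4))  # 3.84 (MB) -- half the space
```

Halving the bit rate halves the size, which is why the automatic conversion roughly doubles how many songs fit.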
Using iTunes for Windows, you can convert unprotected WMA files to AAC or MP3
format. This can be useful if you have a collection of music encoded in WMA format.
iPod shuffle doesn’t support WMA, MPEG Layer 1, MPEG Layer 2 audio files, or
audible.com format 1.
If you have a song in iTunes that isn’t supported by iPod shuffle, you can convert it to a
format iPod shuffle supports. For more information, see iTunes Help.
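If you want to pre-screen a folder of files against the supported-format list above, a simple extension check catches the obvious cases. This is only a heuristic sketch (support really depends on the encoding, not the file name, and the helper name here is ours, not an Apple API):

```python
from pathlib import Path

# Extensions matching the supported formats listed above: AAC (.m4a,
# .m4b, .m4p), MP3, WAV, Audible (.aa), and AIFF. Apple Lossless files
# also use the .m4a extension.
SUPPORTED = {".m4a", ".m4b", ".m4p", ".mp3", ".wav", ".aa", ".aif", ".aiff"}

def likely_supported(filename: str) -> bool:
    """Rough check by file extension only."""
    return Path(filename).suffix.lower() in SUPPORTED

print(likely_supported("song.mp3"))   # True
print(likely_supported("track.wma"))  # False: convert to AAC or MP3 first
```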
If your podcasts or audiobooks don’t play correctly
 Make sure the three-way switch on iPod shuffle is set to play in order. If a
playlist includes mixed tracks (songs, podcasts, and audiobooks, for example),
audiobooks and podcasts won’t play if iPod shuffle is set to shuffle.
 If the chapters of an audiobook were added to iPod shuffle out of order, connect
iPod shuffle to your computer and rearrange the tracks using iTunes.
If you want to double-check the system requirements
To use iPod shuffle, you must have:
 One of the following computer configurations:
 A Macintosh with a USB 2.0 port
 A Windows PC with a USB 2.0 port or a USB 2.0 card installed
 One of the following operating systems: Mac OS X v10.4.11 or later, Windows Vista, or
Windows XP Home or Professional with Service Pack 3 or later
 Internet access (a broadband connection is recommended)
 iTunes 8.1 or later (iTunes can be downloaded from www.apple.com/ipod/start)
If your Windows PC doesn’t have a high-power USB 2.0 port, you can purchase and install
a USB 2.0 card.
If you want to use iPod shuffle with a Mac and a Windows PC
If your iPod shuffle is set to manually manage music, you can add content to it from
more than one iTunes library, regardless of the operating system on the computer. If
your iPod shuffle is set to sync automatically, when you connect iPod shuffle to a
different computer or user account, a message asks if you want to erase iPod shuffle
and sync with the new iTunes library. Click Cancel if you want to keep the contents of
iPod shuffle as is.
You can use iPod shuffle as an external disk with both Macintosh computers and PCs,
allowing you to transfer files from one operating system to the other. See Chapter 5,
“Storing Files on iPod shuffle,” on page 26.
Updating and Restoring iPod shuffle Software
You can use iTunes to update or restore the iPod shuffle software. It’s recommended
that you update iPod shuffle to use the latest software. You can also restore the
software, which returns iPod shuffle to its original state.
 If you choose to update, the software is updated but your settings, songs, and other
data aren’t affected.
 If you choose to restore, all data is erased from iPod shuffle, including songs and any
other data. All iPod shuffle settings are restored to their original state.
To update or restore iPod shuffle:
1 Make sure you have an Internet connection and have installed the latest version of
iTunes from www.apple.com/ipod/start.
2 Connect iPod shuffle to your computer.
3 In iTunes, select iPod shuffle in the list of devices, and click the Summary tab.
The Version section tells you whether iPod shuffle is up to date or needs a newer
version of the software.
4 Do one of the following:
 To install the latest version of the software, click Update.
 To restore iPod shuffle to its original settings, click Restore. This erases all data from
iPod shuffle. Follow the onscreen instructions to complete the restore process.
7 Safety and Handling
This chapter contains important safety and handling
information for iPod shuffle.
Keep this user guide for your iPod shuffle handy for future reference.
Important Safety Information
Handling iPod shuffle Do not drop, disassemble, open, crush, bend, deform, puncture,
shred, microwave, incinerate, paint, or insert foreign objects into iPod shuffle.
Avoiding water and wet locations Do not use iPod shuffle in rain, or near washbasins
or other wet locations. Take care not to spill any food or liquid on iPod shuffle. In case
iPod shuffle gets wet, unplug all cables, turn off iPod shuffle (slide the three-way switch
to OFF) before cleaning, and allow it to dry thoroughly before turning it on again. Do
not attempt to dry iPod shuffle with an external heat source, such as a microwave oven
or hair dryer.
Repairing iPod shuffle Never attempt to repair or modify iPod shuffle yourself.
iPod shuffle doesn’t contain any user-serviceable parts. If iPod shuffle has been
submerged in water, punctured, or subjected to a severe fall, do not use it until you
take it to an Apple Authorized Service Provider. For service information, choose iPod
Help from the Help menu in iTunes or go to www.apple.com/support/ipod/service. The
rechargeable battery in iPod shuffle should be replaced only by an Apple Authorized
Service Provider. For more information about batteries, go to www.apple.com/batteries.
Read all safety information below and operating instructions before using
iPod shuffle to avoid injury.
WARNING: Failure to follow these safety instructions could result in fire, electric shock,
or other injury or damage.
Using the Apple USB Power Adapter (available separately) If you use the Apple USB
Power Adapter (sold separately at www.apple.com/ipodstore) to charge iPod shuffle,
make sure that the power adapter is fully assembled before you plug it into a power
outlet. Then insert the Apple USB Power Adapter firmly into the power outlet. Do not
connect or disconnect the Apple USB Power Adapter with wet hands. Do not use any
power adapter other than the Apple USB Power Adapter to charge your iPod shuffle.
The Apple USB Power Adapter may become warm during normal use. Always allow
adequate ventilation around the Apple USB Power Adapter and use care when
handling.
Unplug the Apple USB Power Adapter if any of the following conditions exist:
 The power cord or plug has become frayed or damaged.
 The adapter is exposed to rain, liquids, or excessive moisture.
 The adapter case has become damaged.
 You suspect the adapter needs service or repair.
 You want to clean the adapter.
Avoiding hearing damage Permanent hearing loss may occur if earbuds or
headphones are used at high volume. Set the volume to a safe level. You can adapt
over time to a higher volume of sound that may sound normal but can be damaging to
your hearing. If you experience ringing in your ears or muffled speech, stop listening
and have your hearing checked. The louder the volume, the less time is required before
your hearing could be affected. Hearing experts suggest that to protect your hearing:
 Limit the amount of time you use earbuds or headphones at high volume.
 Avoid turning up the volume to block out noisy surroundings.
 Turn the volume down if you can’t hear people speaking near you.
For information about how to set a volume limit on iPod shuffle, see “Setting a Volume
Limit” on page 25.
Driving safely Use of iPod shuffle alone, or with earphones (even if used in only one
ear) while operating a vehicle is not recommended and is illegal in some areas. Be
careful and attentive while driving. Stop using iPod shuffle if you find it disruptive or
distracting while operating any type of vehicle, or performing any other activity that
requires your full attention.
Important Handling Information
Carrying iPod shuffle iPod shuffle contains sensitive components. Do not bend, drop,
or crush iPod shuffle.
Using connectors and ports Never force a connector into a port. Check for
obstructions on the port. If the connector and port don’t join with reasonable ease,
they probably don’t match. Make sure that the connector matches the port and that
you have positioned the connector correctly in relation to the port.
Keeping iPod shuffle within acceptable temperatures Operate iPod shuffle in a place
where the temperature is always between 32° and 95° F (0° and 35° C). iPod shuffle
play time might temporarily shorten in low-temperature conditions.
Store iPod shuffle in a place where the temperature is always between –4° and 113° F
(–20° and 45° C). Don’t leave iPod shuffle in your car, because temperatures in parked
cars can exceed this range.
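The two ranges above are the same limits quoted in different units; a one-line conversion confirms the pairing:

```python
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# Operating range: 0 to 35 C corresponds to 32 to 95 F.
print(c_to_f(0), c_to_f(35))    # 32.0 95.0
# Storage range: -20 to 45 C corresponds to -4 to 113 F.
print(c_to_f(-20), c_to_f(45))  # -4.0 113.0
```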
When you’re using iPod shuffle or charging the battery, it’s normal for iPod shuffle to
get warm. The exterior of iPod shuffle functions as a cooling surface that transfers heat
from inside the unit to the cooler air outside.
Keeping the outside of iPod shuffle clean To clean iPod shuffle, unplug all cables, turn
it off (slide the three-way switch to OFF), and use a soft, slightly damp, lint-free cloth.
Avoid getting moisture in openings. Don’t use window cleaners, household cleaners,
aerosol sprays, solvents, alcohol, ammonia, or abrasives to clean iPod shuffle.
Disposing of iPod shuffle properly For information about the proper disposal of
iPod shuffle, including other important regulatory compliance information, see
“Regulatory Compliance Information” on page 36.
NOTICE: Failure to follow these handling instructions could result in damage to
iPod shuffle or other property.
8 Learning More, Service,
and Support
You can find more information about using iPod shuffle
in onscreen help and on the web.
Here is where to get iPod-related software and service information:
 For service and support, discussions, tutorials, and Apple software downloads, go to:
www.apple.com/support/ipodshuffle
 For help using iTunes, open iTunes and choose Help > iTunes Help. For an online
iTunes tutorial (available in some areas only), go to: www.apple.com/itunes/tutorials
 For the latest information about iPod shuffle, go to: www.apple.com/ipodshuffle
 To register iPod shuffle, install iTunes on your computer and connect iPod shuffle.
 To find the iPod shuffle serial number, look under the clip on iPod shuffle. Or, in
iTunes (with iPod shuffle connected to your computer), select iPod shuffle in the list of
devices, and click the Summary tab.
 To obtain warranty service, first follow the advice in this booklet, the onscreen help,
and online resources, and then go to: www.apple.com/support/ipodshuffle/service
Regulatory Compliance Information
FCC Compliance Statement
This device complies with part 15 of the FCC rules.
Operation is subject to the following two conditions:
(1) This device may not cause harmful interference,
and (2) this device must accept any interference
received, including interference that may cause
undesired operation. See instructions if interference
to radio or television reception is suspected.
Radio and Television Interference
This computer equipment generates, uses, and can
radiate radio-frequency energy. If it is not installed
and used properly—that is, in strict accordance with
Apple’s instructions—it may cause interference with
radio and television reception.
This equipment has been tested and found to
comply with the limits for a Class B digital device in
accordance with the specifications in Part 15 of FCC
rules. These specifications are designed to provide
reasonable protection against such interference in a
residential installation. However, there is no
guarantee that interference will not occur in a
particular installation.
You can determine whether your computer system is
causing interference by turning it off. If the
interference stops, it was probably caused by the
computer or one of the peripheral devices.
If your computer system does cause interference to
radio or television reception, try to correct the
interference by using one or more of the following
measures:
 Turn the television or radio antenna until the
interference stops.
 Move the computer to one side or the other of the
television or radio.
 Move the computer farther away from the
television or radio.
 Plug the computer into an outlet that is on a
different circuit from the television or radio. (That
is, make certain the computer and the television or
radio are on circuits controlled by different circuit
breakers or fuses.)
If necessary, consult an Apple-authorized service
provider or Apple. See the service and support
information that came with your Apple product. Or,
consult an experienced radio/television technician
for additional suggestions.
Important: Changes or modifications to this product
not authorized by Apple Inc. could void the EMC
compliance and negate your authority to operate
the product.
This product was tested for EMC compliance under
conditions that included the use of Apple peripheral
devices and Apple shielded cables and connectors
between system components.
It is important that you use Apple peripheral devices
and shielded cables and connectors between system
components to reduce the possibility of causing
interference to radios, television sets, and other
electronic devices. You can obtain Apple peripheral
devices and the proper shielded cables and
connectors through an Apple Authorized Reseller.
For non-Apple peripheral devices, contact the
manufacturer or dealer for assistance.
Responsible party (contact for FCC matters only):
Apple Inc. Corporate Compliance
1 Infinite Loop, MS 26-A
Cupertino, CA 95014-2084
Industry Canada Statement
This Class B device meets all requirements of the
Canadian interference-causing equipment
regulations.
Cet appareil numérique de la classe B respecte
toutes les exigences du Règlement sur le matériel
brouilleur du Canada.
VCCI Class B Statement
Korea Class B Statement
Russia
European Community
Complies with European Directives 2006/95/EEC and
89/336/EEC.
Disposal and Recycling Information
This symbol indicates that your product must be
disposed of properly according to local laws and
regulations. When your product reaches its end of
life, contact Apple or your local authorities to learn
about recycling options.
For information about Apple’s recycling program,
go to: www.apple.com/environment/recycling
Battery Replacement
The rechargeable battery in iPod shuffle should be
replaced only by an authorized service provider. For
battery replacement services, go to:
www.apple.com/batteries/replacements.html
Battery Disposal Information
Your iPod shuffle contains a battery. Dispose of your
iPod shuffle according to your local environmental
laws and guidelines.
Deutschland: Dieses Gerät enthält Batterien. Bitte
nicht in den Hausmüll werfen. Entsorgen Sie dieses
Gerätes am Ende seines Lebenszyklus entsprechend
der maßgeblichen gesetzlichen Regelungen.
China:
Nederlands: Gebruikte batterijen kunnen worden
ingeleverd bij de chemokar of in een speciale
batterijcontainer voor klein chemisch afval (kca)
worden gedeponeerd.
Taiwan:
European Union—Disposal Information:
This symbol means that according to local laws and
regulations your product should be disposed of
separately from household waste. When this product
reaches its end of life, take it to a collection point
designated by local authorities. Some collection
points accept products for free. The separate
collection and recycling of your product at the time
of disposal will help conserve natural resources and
ensure that it is recycled in a manner that protects
human health and the environment.
Apple and the Environment
At Apple, we recognize our responsibility to
minimize the environmental impacts of our
operations and products.
For more information, go to:
www.apple.com/environment
© 2009 Apple Inc. All rights reserved. Apple, the Apple logo, iPod,
iTunes, Mac, Macintosh, and Mac OS are trademarks of Apple Inc.,
registered in the U.S. and other countries. Finder and Shuffle are
trademarks of Apple Inc. Apple Store and iTunes Store are service
marks of Apple Inc., registered in the U.S. and other countries. Other
company and product names mentioned herein may be trademarks of
their respective companies.
Mention of third-party products is for informational purposes only and
constitutes neither an endorsement nor a recommendation. Apple
assumes no responsibility with regard to the performance or use of
these products. All understandings, agreements, or warranties, if any,
take place directly between the vendors and the prospective users.
Every effort has been made to ensure that the information in this
manual is accurate. Apple is not responsible for printing or clerical
errors.
019-1531/2009-04
Index
A
AAC, converting songs to 19
albums, purchasing 12
announcements
playlist menu order 24
song title and artist name 23
song titles and artist names 23
Apple Earphones with Remote 5
Apple USB Power Adapter 33
artist names, announcing 23
audiobooks
hearing name 23
listening to 21
purchasing 12
syncing 17
audio file formats 29
autofilling iPod shuffle 18
B
battery
charge status 8
charging 8, 28
checking status in iTunes 8
rechargeable 9
replacement information 32
status 9
status lights when connected 9
status lights when disconnected 9
bit-rate formats 19
browsing iTunes Store 12
C
CDs, importing into iTunes 13
charging the battery
about 8, 28
using the Apple USB Power Adapter 8
using your computer 8
choosing playlists 23
compressing songs 19
computer
charging the battery 8
connecting iPod shuffle 6
problems connecting iPod shuffle 29
requirements 30
connecting iPod shuffle
about 6
charging the battery 8
controls
status light response 21
using 5
converting songs to AAC files 19
converting unprotected WMA files 30
D
data files, storing on iPod shuffle 26
deleting songs 18
disconnecting iPod shuffle
about 6
during music update 7
eject first 7
instructions 7
disk, using iPod shuffle as 26
downloading podcasts 12
E
earphone remote 5, 21, 23, 24
Eject button in iTunes 7
ejecting iPod shuffle before disconnecting 7
enabling VoiceOver feature 22
entering song information manually 13
exiting the playlist menu 24
external disk, using iPod shuffle as 26
F
fast-forwarding 6
features of iPod shuffle 3
fitting more songs onto iPod shuffle 19
formats, audio file 29
G
Genius
button in iTunes 15
creating a playlist in iTunes 15
getting help 35
H
headphones. See earphones
hearing damage, avoiding 33
help, getting 35
higher bit rate songs 19
high-power USB port 6, 8, 28, 29, 30
I
importing CDs into iTunes 13
iTunes
ejecting iPod shuffle 7
getting help 35
importing CDs 13
iTunes Store 12
setting not to open automatically 27
version required 30
iTunes Library, adding songs 13
iTunes Store
browsing 12
downloading podcasts 12
purchasing audiobooks 12
purchasing songs and albums 12
searching 12
signing in 11
L
library, adding songs 13
listening to an audiobook 21
M
Mac OS X version 30
manually managing music 17
maximum volume limit, setting 25
music
iPod shuffle not playing 29
purchasing 12
tutorial 35
See also songs; syncing music
N
names
spoken song titles 23
navigating by song title 23
next track 6
O
operating system requirements 30
overview of iPod shuffle features 3
P
pausing a song 6
playing
previous song 6
songs 6
songs in order 5
play in order 5, 21
playlist menu
choosing an item 24
exiting 24
order of announcements 24
playlists
choosing from spoken menu 23
Genius 15
hearing spoken menu of 23
restarting 24
See also playlist menu
podcasts
browsing for 12
downloading 12
hearing name 23
listening to 21
syncing 17
ports
earphone 4, 5
high-power USB 6, 8, 28, 29, 30
troubleshooting iPod shuffle connection 29
USB 2.0 28, 29, 30
power adapter, USB 8
Power Search in iTunes Store 12
power switch 4
preventing iTunes from opening automatically 27
previous track 6
problems. See troubleshooting
purchasing songs, albums, audiobooks 12
R
random play 5
rechargeable batteries 9
registering iPod shuffle 35
relative volume, playing songs at 24
remote, earphone. See earphone remote
removing songs 18
requirements
computer 30
iTunes version 30
operating system 30
resetting iPod shuffle 5, 28
reshuffling songs 21
restart current track 6, 21
restarting a playlist 24
restoring iPod software 31
rewinding 6
S
Safely Remove Hardware icon 7
safety considerations 32
searching iTunes Store 12
serial number, locating 5, 35
service and support 35
setting play order of songs 5
settings
autofill 19
manually manage music 17
playing songs at relative volume 24
shuffle songs 21
speech options 22
volume limit 25
shuffling songs 5, 21
skipping to next track 6
sleep mode and charging the battery 8
software, updating and restoring 31
songs
autofilling 18
deleting 18
entering information manually 13
fast-forwarding 6
hearing title of currently playing song 23
playing and pausing 6
playing at relative volume 24
playing in order 5
playing next or previous 6
purchasing 12
removing 18
reshuffling 21
rewinding 6
shuffling 5, 21
skipping to the next 6
syncing manually 17
song titles
announcing 23
navigating by 23
Sound Check, enabling 24
speech options 22
status light
battery 8, 9
location 4
response to controls 21
storing, data files on iPod shuffle 26
subscribing to podcasts 12
supported audio file formats 29
supported operating systems 30
switch, three-way 4
syncing audiobooks 17
syncing music
disconnecting iPod shuffle 7
overview 15
tutorial 35
syncing podcasts 17
syncing songs manually 17
system requirements 30
T
three-way switch 4
tracks. See songs
troubleshooting
connecting iPod shuffle to computer 29
connecting to USB port 29
cross-platform use 31
iPod shuffle not playing music 29
iPod shuffle not responding 28
resetting iPod shuffle 5, 28
safety considerations 32
updating and restoring software 31
turning iPod shuffle on or off 5
tutorial 35
U
unresponsive iPod shuffle 28
unsupported audio file formats 30
updating and restoring software 31
USB 2.0 port 28, 29
USB Power Adapter 8
V
VoiceOver
battery status 9
disabling 23
enabling 22
song announcements 23
using 22
volume
changing 6
enabling Sound Check 24
setting limit 25
W
warranty service 35
Windows
supported versions 30
troubleshooting 31
WMA files, converting 30
iPod classic
User Guide

Contents
Chapter 1 4 iPod classic Basics
5 iPod classic at a Glance
5 Using iPod classic Controls
7 Disabling iPod classic Controls
8 Using iPod classic Menus
10 Connecting and Disconnecting iPod classic
13 About the iPod classic Battery
Chapter 2 16 Setting Up iPod classic
16 About iTunes
17 Setting Up Your iTunes Library
18 Adding More Information to Your iTunes Library
19 Organizing Your Music
19 Importing Video to iTunes
21 Adding Music, Videos, and Other Content to iPod classic
21 Connecting iPod classic to a Computer for the First Time
22 Syncing Music Automatically
23 Adding Videos to iPod classic
25 Adding Podcasts to iPod classic
25 Adding iTunes U Content to iPod classic
26 Adding Audiobooks to iPod classic
26 Adding Other Content to iPod classic
26 Managing iPod classic Manually
Chapter 3 28 Listening to Music
28 Playing Music and Other Audio
31 Using Genius on iPod classic
38 Playing Podcasts
38 Playing iTunes U Content
39 Listening to Audiobooks
39 Listening to FM Radio
Chapter 4 40 Watching Videos
40 Watching Videos on iPod classic
41 Watching Videos on a TV Connected to iPod classic
Chapter 5 43 Adding and Viewing Photos
43 Importing Photos
44 Adding Photos from Your Computer to iPod classic
45 Viewing Photos
47 Adding Photos from iPod classic to a Computer
Chapter 6 49 More Settings, Extra Features, and Accessories
49 Using iPod classic as an External Disk
50 Using Extra Settings
54 Syncing Contacts, Calendars, and To-Do Lists
56 Storing and Reading Notes
57 Recording Voice Memos
58 Learning About iPod classic Accessories
Chapter 7 59 Tips and Troubleshooting
59 General Suggestions
65 Updating and Restoring iPod Software
Chapter 8 66 Safety and Cleaning
66 Important Safety Information
68 Important Handling Information
Chapter 9 70 Learning More, Service, and Support
Index 74
1 iPod classic Basics
Read this chapter to learn about the features of iPod classic,
how to use its controls, and more.
To use iPod classic, you put music, videos, photos, and other files on your computer
and then add them to iPod classic.
iPod classic is a music player and much more. Use iPod classic to:
 Sync songs, videos, and digital photos for listening and viewing on the go
 Listen to podcasts, downloadable audio and video shows delivered over the Internet
 View video on iPod classic, or on a TV using an optional cable
 View photos as a slideshow with music on iPod classic, or on a TV using an optional
cable
 Listen to audiobooks purchased from the iTunes Store or audible.com
 Store or back up files and other data, using iPod classic as an external disk
 Sync contact, calendar, and to-do list information from your computer
 Play games, store text notes, set an alarm, and more
iPod classic at a Glance
Get to know the controls on iPod classic:
Using iPod classic Controls
The controls on iPod classic are easy to find and use. Press any button to turn on
iPod classic.
The first time you turn on iPod classic, the language menu appears. Use the Click Wheel
to scroll to your language, and then press the Center button to choose it. The main
menu appears in your language.
Use the Click Wheel and Center button to navigate through onscreen menus,
play songs, change settings, and get information.
Hold switch
Menu
Previous/Rewind
Play/Pause
Dock connector
Headphones port
Click Wheel
Next/Fast-forward
Center button
Move your thumb lightly around the Click Wheel to select a menu item. To choose the
item, press the Center button.
To go back to the previous menu, press Menu on the Click Wheel.
Here’s what else you can do with iPod classic controls.
Turn on iPod classic: Press any button.
Turn off iPod classic: Press and hold Play/Pause (⏯).
Turn on the backlight: Press any button or use the Click Wheel.
Disable the iPod classic controls (so nothing happens if you press them accidentally): Slide the Hold switch to HOLD (an orange bar appears).
Reset iPod classic (if it isn’t responding): Slide the Hold switch to HOLD and back again. Press Menu and the Center button at the same time for about 6 seconds, until the Apple logo appears.
Choose a menu item: Use the Click Wheel to scroll to the item, and then press the Center button to choose it.
Go back to the previous menu: Press Menu.
Go directly to the main menu: Press and hold Menu.
Access additional options: Press and hold the Center button until a menu appears.
Browse for a song: From the main menu, choose Music.
Browse for a video: From the main menu, choose Videos.
Play a song or video: Select the song or video and press the Center button or Play/Pause (⏯). iPod classic must be ejected from your computer to play songs or videos.
Pause a song or video: Press Play/Pause (⏯) or unplug your headphones.
Change the volume: From the Now Playing screen, use the Click Wheel.
Play all the songs in a playlist or album: Select the playlist or album and press Play/Pause (⏯).
Play all songs in random order: From the main menu, choose Shuffle Songs. You can also shuffle songs from the Now Playing screen.
Skip to any point in a song or video: From the Now Playing screen, press the Center button to show the scrubber bar (the playhead on the bar shows the current location), and then scroll to any point in the song or video.
Skip to the next song or chapter in an audiobook or podcast: Press Next/Fast-forward (⏭).
Start a song or video over: Press Previous/Rewind (⏮).
Fast-forward or rewind a song or video: Press and hold Next/Fast-forward (⏭) or Previous/Rewind (⏮).
Add a song to the On-The-Go playlist: Play or select a song, and then press and hold the Center button until a menu appears. Select “Add to On-the-Go,” and then press the Center button.
Play the previous song or chapter in an audiobook or podcast: Press Previous/Rewind (⏮) twice.
Create a Genius playlist: Play or select a song, and then press and hold the Center button until a menu appears. Select Start Genius, and then press the Center button (Start Genius appears in the Now Playing screen only if there’s Genius data for the selected song).
Save a Genius playlist: Create a Genius playlist, select Save Playlist, and then press the Center button.
Play a saved Genius playlist: From the Playlist menu, select a Genius playlist, and then press Play/Pause (⏯).
Play a Genius Mix: From the Music menu, choose Genius Mixes. Select a mix and then press Play/Pause (⏯).
Find the iPod classic serial number: From the main menu, choose Settings > About and press the Center button until you get to the serial number, or look on the back of iPod classic.
Disabling iPod classic Controls
If you don’t want to turn iPod classic on or activate controls accidentally, you can
disable them with the Hold switch.
To disable iPod classic controls:
m Slide the Hold switch to HOLD (an orange bar appears).
If you disable the controls while using iPod classic, the song, playlist, podcast, or video
that’s playing continues to play. To stop or pause, slide the Hold switch to enable the
controls again.
Using iPod classic Menus
When you turn on iPod classic, you see the main menu. Choose menu items to perform
functions or go to other menus. Icons along the top of the screen show iPod classic
status.
Menu title: Displays the title of the current menu.
Lock icon: The Lock icon appears when the Hold switch is set to HOLD. This indicates that the iPod classic controls are disabled.
Play icon: The Play (▶) icon appears when a song, video, or other item is playing. The Pause (⏸) icon appears when the item is paused.
Battery icon: The Battery icon shows the approximate remaining battery charge.
Menu items: Use the Click Wheel to scroll through menu items. Press the Center button to choose an item. An arrow next to a menu item indicates that choosing it leads to another menu or screen.
Adding or Removing Items in the Main Menu
You might want to add often-used items to the iPod classic main menu. For example,
you can add a Songs item to the main menu, so you don’t have to choose Music before
you choose Songs.
To add or remove items in the main menu:
1 Choose Settings > Main Menu.
2 Choose each item you want to appear in the main menu. A checkmark indicates which
items have been added.
Setting the Language
iPod classic can use different languages.
To set the language:
m Choose Settings > Language, and then choose a language.
Setting the Backlight Timer
You can set the backlight to illuminate the screen for a certain amount of time when
you press a button or use the Click Wheel. The default is 10 seconds.
To set the backlight timer:
m Choose Settings > Backlight, and then choose the time you want. Choose “Always On”
to prevent the backlight from turning off (choosing this option decreases battery
performance).
Setting the Screen Brightness
You can set the brightness of the iPod classic screen.
To set the screen brightness:
m Choose Settings > Brightness, and then use the Click Wheel to adjust the brightness.
You can also adjust the brightness during a slideshow or video. Press the Center button
until the brightness slider appears, and then use the Click Wheel to adjust the
brightness.
Note: Your brightness setting may affect your battery performance.
Turning off the Click Wheel Sound
When you scroll through menu items, you can hear a clicking sound through the
earphones or headphones and through the iPod classic internal speaker. If you like, you
can turn off the Click Wheel sound.
To turn off the Click Wheel sound:
m Choose Settings and set Clicker to Off.
To turn the Click Wheel sound on again, set Clicker to On.
Scrolling Quickly Through Long Lists
You can scroll quickly through a long list of songs, videos, or other items by moving
your thumb quickly on the Click Wheel.
Note: Not all languages are supported.
To scroll quickly:
1 Move your thumb quickly on the Click Wheel to display a letter of the alphabet on
the screen.
2 Use the Click Wheel to navigate through the alphabet until you find the first letter of
the item you’re looking for.
Items beginning with a symbol or number appear after the letter Z.
3 Lift your thumb momentarily to return to normal scrolling.
4 Use the Click Wheel to navigate to the item you want.
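In list views, titles that begin with a symbol or number group after the letter Z, as noted above. A minimal Python sketch of that sort order (the function name and sample titles are made up for illustration, not taken from iPod software):

```python
def index_sort_key(title: str):
    """Letters A-Z sort first; titles starting with a digit or
    symbol group after Z, matching the behavior described above."""
    first = title[:1].upper()
    return (0, first) if first.isalpha() else (1, first)

items = ["99 Problems", "Abbey Road", "#1 Hit", "Zooropa"]
print(sorted(items, key=index_sort_key))
# ['Abbey Road', 'Zooropa', '#1 Hit', '99 Problems']
```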
Getting Information About iPod classic
You can get information about your iPod classic, such as the amount of space available,
the number of songs, videos, photos, and other items, and the serial number, model,
and software version.
To get information about iPod classic:
m Choose Settings > About, and press the Center button to cycle through the screens
of information.
Resetting All Settings
You can reset all the items on the Settings menu to their default settings.
To reset all settings:
m Choose Settings > Reset Settings, and then choose Reset.
Connecting and Disconnecting iPod classic
You connect iPod classic to your computer to add music, videos, photos, and files,
and to charge the battery. Disconnect iPod classic when you’re done.
Connecting iPod classic
To connect iPod classic to your computer:
m Plug the included iPod Dock Connector to USB Cable into a high-powered USB 2.0 port
on your computer, and then connect the other end to iPod classic.
If you have an iPod Dock, you can connect the cable to a USB 2.0 port on your
computer, connect the other end to the dock, and then put iPod classic in the dock.
Note: The USB port on most keyboards doesn’t provide enough power to charge
iPod classic. Connect iPod classic to a USB 2.0 port on your computer, unless your
keyboard has a high-powered USB 2.0 port.
By default, iTunes syncs songs on iPod classic automatically when you connect it to
your computer. When iTunes is finished, you can disconnect iPod classic. You can sync
songs while your battery is charging.
If you connect iPod classic to a different computer and it’s set to sync music
automatically, iTunes prompts you before syncing any music. If you click Yes, the songs
and other audio files already on iPod classic will be erased and replaced with songs and
other audio files on the computer iPod classic is connected to. For information about
adding music to iPod classic and using iPod classic with more than one computer, see
Chapter 3, “Listening to Music,” on page 28.
Disconnecting iPod classic
It’s important not to disconnect iPod classic while it’s syncing. You can easily see if it’s
OK to disconnect iPod classic by looking at the iPod classic screen. Don’t disconnect
iPod classic if you see the “Connected” or “Synchronizing” messages, or you could
damage files on iPod classic.
If you see one of these messages, you must eject iPod classic before disconnecting it:
If you see the main menu or a large battery icon, you can disconnect iPod classic.
If you set iPod classic to manage songs manually or enable iPod classic for disk use,
you must always eject iPod classic before disconnecting it. See “Managing iPod classic
Manually” on page 26 and “Using iPod classic as an External Disk” on page 49.
If you accidentally disconnect iPod classic without ejecting it, reconnect iPod classic to
your computer and sync again.
To eject iPod classic:
m In iTunes, click the Eject (⏏) button next to iPod classic in the list of devices.
You can safely disconnect iPod classic while either of these messages is displayed:
If you’re using a Mac, you can also eject iPod classic by dragging the iPod classic icon on
the desktop to the Trash.
If you’re using a Windows PC, you can also eject iPod classic in My Computer or by
clicking the Safely Remove Hardware icon in the Windows system tray and selecting
iPod classic.
To disconnect iPod classic:
m Disconnect the cable from iPod classic. If iPod classic is in the dock, simply remove it.
About the iPod classic Battery
iPod classic has an internal, non-user-replaceable battery. For best results, the first time
you use iPod classic, let it charge for about four hours or until the battery icon in the
status area of the display shows that the battery is fully charged. If iPod classic isn’t
used for a while, the battery might need to be charged.
The iPod classic battery is 80-percent charged in about two hours and fully charged in
about four hours. If you charge iPod classic while adding files, playing music, watching
videos, or viewing a slideshow, charging might take longer.
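The charge times above imply two phases: a fast phase to about 80 percent in the first two hours, then a slower trickle to full at about four hours. A rough piecewise estimate, purely illustrative (the function and rates are assumptions derived from those two figures):

```python
def estimated_charge(hours: float) -> float:
    """Rough battery level (percent) after `hours` on power,
    assuming the figures above: ~80% at 2 hours, ~100% at 4 hours."""
    if hours <= 0:
        return 0.0
    if hours <= 2:
        return hours * 40.0                       # fast phase: ~40% per hour
    return min(100.0, 80.0 + (hours - 2) * 10.0)  # trickle phase: ~10% per hour

print(estimated_charge(1.0))  # 40.0
print(estimated_charge(3.0))  # 90.0
```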
Charging the iPod classic Battery
You can charge the iPod classic battery in two ways:
 Connect iPod classic to your computer.
 Use the Apple USB Power Adapter, available separately.
To charge the battery using your computer:
m Connect iPod classic to a USB 2.0 port on your computer. The computer must be
turned on and not in sleep mode (some Mac models can charge iPod classic while
in sleep mode).
If the battery icon on the iPod classic screen shows the Charging screen, the battery is
charging. If it shows the Charged screen, the battery is fully charged.
If you don’t see the charging screen, iPod classic might not be connected to
a high-power USB port. Try another USB port on your computer.
Important: If a “Charging, Please Wait” or “Connect to Power” message appears on
the iPod classic screen, the battery needs to be charged before iPod classic can
communicate with your computer. See “If iPod classic displays a “Connect to Power”
message” on page 61.
If you want to charge iPod classic when you’re away from your computer, you can
purchase the Apple USB Power Adapter.
To charge the battery using the Apple USB Power Adapter:
1 Connect the iPod Dock Connector to USB 2.0 Cable to the power adapter, and plug the
other end of the cable into iPod classic.
2 Plug the power adapter into a working electrical outlet.
WARNING: Make sure the power adapter is fully assembled before plugging it into an
electrical outlet.
[Figure: the Apple USB Power Adapter (your adapter may look different) connected to the iPod USB cable]
Understanding Battery States
When iPod classic isn’t connected to a power source, a battery icon in the top-right
corner of the iPod classic screen shows approximately how much charge is left:
Battery less than 20% charged
Battery about halfway charged
Battery fully charged
When iPod classic is connected to a power source, the battery icon changes to show
that the battery is charging (lightning bolt) or fully charged (plug).
You can disconnect and use iPod classic before it’s fully charged.
Note: Rechargeable batteries have a limited number of charge cycles and might
eventually need to be replaced. Battery life and number of charge cycles vary by use
and settings. For more information, go to www.apple.com/batteries.
2 Setting Up iPod classic
You use iTunes on your computer to set up iPod classic to
play your music, video, and other media content.
You use iPod classic by importing songs, audiobooks, movies, TV shows, music videos,
and podcasts into your computer and then syncing them with iPod classic. Read on to
learn more about the steps in this process, including:
 Getting music from your CD collection, hard disk, or the iTunes Store (part of iTunes
and available in some countries only) into the iTunes application on your computer
 Organizing your music and other audio into playlists, if you want
 Syncing playlists, songs, audiobooks, videos, and podcasts with iPod classic
About iTunes
iTunes is the free software application you use to set up, organize, and manage your
content on iPod classic. iTunes can sync music, audiobooks, podcasts, and more with
iPod classic. If you don’t already have iTunes installed on your computer, you can
download it at www.apple.com/downloads. iPod classic requires iTunes 9 or later.
You can use iTunes to import music from CDs and the Internet, buy songs and other
audio and video from the iTunes Store, create personal compilations of your favorite
songs (called playlists), and sync your playlists with iPod classic.
iTunes also has a feature called Genius, which creates playlists and mixes of songs from
your iTunes library that go great together. You can sync Genius playlists that you create
in iTunes to iPod classic, and you can create Genius playlists and listen to Genius Mixes
on iPod classic. To use Genius, you need an iTunes Store account.
iTunes has many other features. You can burn your own CDs that play in standard CD
players (if your computer has a recordable CD drive); listen to streaming Internet radio;
watch videos and TV shows; rate songs according to preference; and much more.
For information about using these iTunes features, open iTunes and choose
Help > iTunes Help.
If you already have iTunes 9 or later installed on your computer and you’ve set up your
iTunes library, you can skip ahead to “Adding Music, Videos, and Other Content to
iPod classic” on page 21.
Setting Up Your iTunes Library
To listen to music on iPod classic, you first need to get that music into iTunes on
your computer.
There are three ways of getting music and other audio into iTunes:
 Purchase music, audiobooks, and videos, or download podcasts online from the
iTunes Store.
 Import music and other audio from audio CDs.
 Add music and other audio that’s already on your computer to your iTunes library.
Purchase Songs and Download Podcasts Using the iTunes Store
If you have an Internet connection, you can easily purchase and download songs,
albums, and audiobooks online using the iTunes Store. You can also subscribe to and
download podcasts, and you can download free educational content from iTunes U.
To purchase music online using the iTunes Store, you set up a free iTunes account in
iTunes, find the songs you want, and then buy them. If you already have an iTunes
account, you can use that account to sign in to the iTunes Store and buy songs.
You don’t need an iTunes Store account to download or subscribe to podcasts.
To enter the iTunes Store, open iTunes and click iTunes Store (under Store) on the left
side of the iTunes window.
Add Songs Already on Your Computer to Your iTunes Library
If you have songs on your computer encoded in file formats that iTunes supports, you
can easily add the songs to iTunes. To learn how to get songs from your computer into
iTunes, open iTunes and choose Help > iTunes Help.
Using iTunes for Windows, you can convert nonprotected WMA files to AAC or MP3
format. This can be useful if you have a library of music encoded in WMA format.
For more information, open iTunes and choose Help > iTunes Help.
Import Music From Your Audio CDs Into iTunes
iTunes can import music and other audio from your audio CDs. If you have an
Internet connection, iTunes gets the names of the songs on the CD from the Internet
(if available) and lists them in the window. When you add the songs to iPod classic,
the song information is included. To learn how to import music from your CDs into
iTunes, open iTunes and choose Help > iTunes Help.
Adding More Information to Your iTunes Library
After you import your music into iTunes, you can add more song and album
information to your iTunes library. Most of this additional information appears on
iPod classic when you add the songs.
Enter Song Names and Other Information
If you don’t have an Internet connection, if song information isn’t available for music
you import, or if you want to include additional information (such as composer names),
you can enter the information manually. To learn how to enter song information, open
iTunes and choose Help > iTunes Help.
Add Lyrics
You can enter song lyrics in plain text format into iTunes so that you can view the song
lyrics on iPod classic while the song is playing. To learn how to enter lyrics, open iTunes
and choose Help > iTunes Help.
For more information, see “Viewing Lyrics on iPod classic” on page 30.
Add Album Artwork
Music you purchase from the iTunes Store includes album artwork, which iPod classic
can display. You can add album artwork automatically for music you’ve imported from
CDs, if the CDs are available from the iTunes Store. You can add album art manually if
you have the album art on your computer. To learn more about adding album artwork,
open iTunes and choose Help > iTunes Help.
For more information, see “Viewing Album Artwork on iPod classic” on page 30.
Organizing Your Music
In iTunes, you can organize songs and other items into lists, called playlists, in any way
you want. For example, you can create playlists with songs to listen to while exercising,
or playlists with songs for a particular mood.
You can create Smart Playlists that update automatically based on rules you define.
When you add songs to iTunes that match the rules, they automatically get added to
the Smart Playlist.
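A Smart Playlist can be thought of as a saved query over the library: the rules are re-evaluated as tracks are added, so matching songs join the playlist automatically. A minimal sketch (the field names and sample tracks are invented for illustration; real iTunes rules also support matching "any" rule rather than "all"):

```python
# Hypothetical track records; field names are assumptions for illustration.
tracks = [
    {"title": "Song A", "genre": "Rock", "rating": 5},
    {"title": "Song B", "genre": "Jazz", "rating": 4},
    {"title": "Song C", "genre": "Rock", "rating": 2},
]

# Each rule is a predicate evaluated against every track in the library.
rules = [
    lambda t: t["genre"] == "Rock",
    lambda t: t["rating"] >= 4,
]

def smart_playlist(library, rules):
    """Return tracks matching ALL rules, like a 'match all' Smart Playlist."""
    return [t for t in library if all(rule(t) for rule in rules)]

print([t["title"] for t in smart_playlist(tracks, rules)])  # ['Song A']
```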
You can turn on Genius in iTunes and create playlists of songs that go great together.
Genius can also organize your music library automatically by sorting and grouping
songs into collections called Genius Mixes.
You can create as many playlists as you like, using any of the songs in your iTunes
library. Adding a song to a playlist or later removing it doesn’t remove it from your
library.
To learn how to set up playlists in iTunes, open iTunes and choose Help > iTunes Help.
Note: To create playlists on iPod classic when iPod classic isn’t connected to your
computer, see “Creating On-The-Go Playlists on iPod classic” on page 33.
Turning On Genius in iTunes
Genius finds songs in your library that go great together and uses them to create
Genius playlists and Genius Mixes.
A Genius playlist is based on a song that you select. iTunes then compiles a Genius
playlist of songs that go great with the one you selected.
Genius Mixes are preselected compilations of songs that go great together. They’re
created for you by iTunes, using songs from your library. Each Genius Mix is designed
to provide a different listening experience each time you play it. iTunes creates up to
12 Genius Mixes, based on the variety of music in your iTunes library.
To create Genius playlists and Genius Mixes on iPod classic, you first need to turn on
Genius in iTunes. For information, open iTunes and choose Help > iTunes Help.
Genius playlists and Genius Mixes created in iTunes can be synced to iPod classic like
any iTunes playlist. You can’t add Genius Mixes to iPod classic manually. See “Syncing
Genius Playlists and Genius Mixes to iPod classic” on page 23.
Genius is a free service, but you need an iTunes Store account to use it. If you don’t
have an account, you can set one up when you turn on Genius.
Importing Video to iTunes
There are several ways to import video into iTunes, described below.
Purchase or Rent Videos and Download Video Podcasts from the
iTunes Store
To purchase videos—movies, TV shows, and music videos—or rent movies online from
the iTunes Store (part of iTunes and available in some countries only), you sign in to
your iTunes Store account, find the videos you want, and then buy or rent them.
A rented movie expires 30 days after you rent it or 24 hours after you begin playing it,
whichever comes first. Expired rentals are deleted automatically. These terms apply to
U.S. rentals; rental terms vary among countries.
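The U.S. expiry rule above is a simple minimum of two deadlines. A sketch of that logic (illustrative only; enforcement happens inside iTunes, and terms differ by country):

```python
from datetime import datetime, timedelta

def rental_expiry(rented_at, first_played_at=None):
    """Expiry per the U.S. terms above: 30 days after renting, or
    24 hours after playback begins, whichever comes first."""
    deadline = rented_at + timedelta(days=30)
    if first_played_at is not None:
        deadline = min(deadline, first_played_at + timedelta(hours=24))
    return deadline

rented = datetime(2009, 1, 1)
print(rental_expiry(rented))                        # 2009-01-31 00:00:00
print(rental_expiry(rented, datetime(2009, 1, 2)))  # 2009-01-03 00:00:00
```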
To enter the iTunes Store, open iTunes and click iTunes Store (under Store) on the left
side of the iTunes window.
You can view a movie trailer or TV show preview by clicking the button next to it.
Purchased videos appear when you select Movies or TV shows (under Library) or
Purchased (under Store) in the list on the left side of the iTunes window. Rented videos
appear when you select Rented Movies (under Library).
Some items have other options, such as TV shows that let you buy a Season Pass for
all episodes.
Video podcasts appear along with other podcasts in the iTunes Store. You can
subscribe to them and download them just as you would other podcasts. You don’t
need an iTunes Store account to download podcasts. See “Purchase Songs and
Download Podcasts Using the iTunes Store” on page 17.
Create Versions of Your Own Videos to Work with iPod classic
You can view other video files on iPod classic, such as videos you create in iMovie on
a Mac or videos you download from the Internet. Import the video into iTunes, convert
it for use with iPod classic, if necessary, and then add it to iPod classic.
iTunes supports many of the video formats that QuickTime supports. For more
information, see “If you can’t add a song or other item to iPod classic” on page 62.
Some videos may be ready for use with iPod classic after you import them to iTunes.
If you try to add a video to iPod classic (see “Syncing Videos Automatically” on
page 24), and a message says the video can’t play on iPod classic, then you must
convert the video for use with iPod classic. Depending on the length and content of a
video, converting it for use with iPod classic can take several minutes to several hours.
When you create a video for use with iPod classic, the original video also remains in
your iTunes library.
For more information about converting video for iPod classic, open iTunes and choose
Help > iTunes Help, or go to www.info.apple.com/kbnum/n302758.
Adding Music, Videos, and Other Content to iPod classic
After your music is imported and organized in iTunes, you can easily add it to
iPod classic.
To manage how songs, videos, photos, and other content are added to iPod classic
from your computer, you connect iPod classic to your computer, and then use iTunes
preferences to choose iPod classic settings.
Connecting iPod classic to a Computer for the First Time
The first time you connect iPod classic to your computer after installing iTunes, iTunes
opens automatically, and the iPod classic Setup Assistant appears.
To use the iPod classic Setup Assistant:
1 Enter a name for iPod classic. This is the name that will appear in the device list on the
left side of the iTunes window.
2 Select your settings. Automatic syncing is selected by default.
For more information on automatic and manual syncing, see the next section.
3 Click Done.
You can change the device name and settings any time you connect iPod classic to
your computer.
After you click Done, the Summary pane appears. If you selected automatic syncing,
iPod classic begins syncing.
Adding Content Automatically or Manually
There are two ways to add content to iPod classic:
 Automatic syncing: When you connect iPod classic to your computer, iPod classic is
automatically updated to match the items in your iTunes library. You can sync all your
songs, playlists, videos, and podcasts, or, if your entire iTunes library doesn’t fit on
iPod classic, you can sync only selected items. You can sync iPod classic automatically
with only one computer at a time.
 Manually managing iPod classic: When you connect iPod classic, you can drag songs
and playlists individually to iPod classic, and delete songs and playlists individually
from iPod classic. Using this option, you can add songs from more than one
computer without erasing songs from iPod classic. When you manage music yourself,
you must always eject iPod classic from iTunes before you can disconnect it. To skip
to the section on managing your content manually, see “Managing iPod classic
Manually” on page 26.
Syncing Music Automatically
By default, iPod classic is set to sync all songs and playlists when you connect it to your
computer. This is the simplest way to add music to iPod classic. You just connect
iPod classic to your computer, let it add songs, audiobooks, videos, and other items
automatically, and then disconnect it and go. If you added any songs to iTunes since
the last time you connected iPod classic, they are synced with iPod classic. If you
deleted songs from iTunes, they are removed from iPod classic.
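The automatic-sync behavior described here is a one-way mirror: after syncing, the device holds exactly what the library holds. A simplified sketch using bare song identifiers (real syncing also handles playlists, videos, metadata, and checked items):

```python
def mirror_sync(library: set, device: set):
    """One-way mirror sync: return (to_add, to_remove) so that applying
    both makes the device match the library exactly."""
    to_add = library - device      # songs added to iTunes since the last sync
    to_remove = device - library   # songs deleted from iTunes
    return to_add, to_remove

library = {"song1", "song2", "song3"}
device = {"song2", "song4"}
to_add, to_remove = mirror_sync(library, device)
print(sorted(to_add))     # ['song1', 'song3']
print(sorted(to_remove))  # ['song4']
```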
To sync music with iPod classic:
m Connect iPod classic to your computer. If iPod classic is set to sync automatically, the
update begins.
Important: If you connect iPod classic to a computer that it’s not synced with, a
message asks if you want to sync songs automatically. If you accept, all songs,
audiobooks, and videos are erased from iPod classic and replaced with the songs and
other items from that computer.
While music is being synced from your computer to iPod classic, the iTunes status
window shows progress, and you see a sync icon next to the iPod classic icon in the
device list.
When the update is done, you see the “iPod sync is complete” message in iTunes. A bar
at the bottom of the iTunes window displays how much disk space is used by different
types of content.
If there isn’t enough space on iPod classic for all your music, you can set iTunes to sync
only selected songs and playlists. Only the songs and playlists you specify are synced
with iPod classic.
Syncing Music From Selected Playlists, Artists, and Genres to
iPod classic
You can set iTunes to sync selected playlists, artists, and genres to iPod classic if the
music in your iTunes library doesn’t all fit on iPod classic. Only the music in the playlists,
artists, and genres you select is synced to iPod classic.
To set iTunes to sync music from selected playlists, artists, and genres to iPod classic:
1 In iTunes, select iPod classic in the device list and click the Music tab.
2 Select “Sync music” and then choose “Selected playlists, artists, and genres.”
3 Select the playlists, artists, or genres you want.
4 To include music videos, select “Include music videos.”
5 To set iTunes to automatically fill any remaining space on iPod classic, select
“Automatically fill free space with songs.”
6 Click Apply.
Note: If “Sync only checked songs and videos” is selected in the Summary pane,
iTunes syncs only items that are checked.
Syncing Genius Playlists and Genius Mixes to iPod classic
You can set iTunes to sync Genius playlists and Genius Mixes to iPod classic.
Genius playlists can be added to iPod classic manually. Genius Mixes can only be
synced automatically, so you can’t add Genius Mixes to iPod classic if you manage your
content manually.
If you select any Genius Mixes to sync, iTunes may select and sync additional songs
from your library that you didn’t select.
To set iTunes to sync Genius playlists and selected Genius Mixes to iPod classic:
1 In iTunes, select iPod classic in the device list and click the Music tab.
2 Select “Sync music,” and then choose “Selected playlists, artists, and genres.”
3 Under Playlists, select the Genius playlists and Genius Mixes you want.
4 Click Apply.
If you choose to sync your entire music library, iTunes syncs all your Genius playlists
and Genius Mixes.
If “Sync only checked songs and videos” is selected in the Summary pane, iTunes syncs
only items that are checked.
Adding Videos to iPod classic
You add movies and TV shows to iPod classic much the same way you add songs. You
can set iTunes to sync all movies and TV shows to iPod classic automatically when you
connect iPod classic, or you can set iTunes to sync only selected playlists. Alternatively,
you can manage movies and TV shows manually. Using this option, you can add videos
from more than one computer without erasing videos already on iPod classic.
Note: Music videos are managed with songs, under the Music tab in iTunes.
See “Adding Music, Videos, and Other Content to iPod classic” on page 21.
Important: You can view a rented movie on only one device at a time. So, for example,
if you rent a movie from the iTunes Store and add it to iPod classic, you can only view it
on iPod classic. If you transfer the movie back to iTunes, you can only watch it there
and not on iPod classic. Be aware of the rental expiration date.
Syncing Videos Automatically
By default, iPod classic is set to sync all videos when you connect it to your computer.
This is the simplest way to add videos to iPod classic. You just connect iPod classic to
your computer, let it add videos and other items automatically, and then disconnect it
and go. If you added any videos to iTunes since the last time you connected
iPod classic, they’re added to iPod classic. If you deleted videos from iTunes, they’re
removed from iPod classic.
If there isn’t enough space on iPod classic for all your videos, you can set iTunes to sync
only the videos you specify. You can sync selected videos or selected playlists that
contain videos.
The settings for syncing movies and TV shows are unrelated. Movie settings won’t
affect TV show settings, and vice versa.
To set iTunes to sync movies to iPod classic:
1 In iTunes, select iPod classic in the device list and click the Movies tab.
2 Select “Sync movies.”
3 Select the movies or playlists you want.
All, recent, or unwatched movies: Select “Automatically include … movies” and choose
the options you want from the pop-up menu.
Selected movies or playlists: Select the movies or playlists you want.
4 Click Apply.
If “Sync only checked songs and videos” is selected in the Summary pane, iTunes syncs
only items that are checked.
To set iTunes to sync TV shows to iPod classic:
1 In iTunes, select iPod classic in the device list and click the TV Shows tab.
2 Select “Sync TV Shows.”
All, recent, or unwatched episodes: Select “Automatically include … episodes of …” and
choose the options you want from the pop-up menus.
Episodes on selected playlists: Select the playlists you want.
3 Click Apply.
If “Sync only checked songs and videos” is selected in the Summary pane, iTunes syncs
only items that are checked.
Adding Podcasts to iPod classic
The settings for adding podcasts to iPod classic are unrelated to the settings for adding
songs and videos. Podcast settings don’t affect song or video settings, and vice versa.
You can set iTunes to automatically sync all or selected podcasts, or you can add
podcasts to iPod classic manually.
To set iTunes to update the podcasts on iPod classic automatically:
1 In iTunes, select iPod classic in the device list and click the Podcasts tab.
2 In the Podcasts pane, select “Sync Podcasts.”
3 Select the podcasts, episodes, and playlists you want, and set your sync options.
4 Click Apply.
When you set iTunes to sync iPod classic podcasts automatically, iPod classic is updated
each time you connect it to your computer.
If “Sync only checked songs and videos” is selected in the Summary pane, iTunes syncs
only items that are checked.
Adding Video Podcasts to iPod classic
You add video podcasts to iPod classic the same way you add other podcasts (see
“Adding Podcasts to iPod classic” on page 25). If a podcast has video, the video plays
when you choose it from the Podcasts menu.
Adding iTunes U Content to iPod classic
The settings for adding iTunes U content to iPod classic are unrelated to the settings for
adding other content. iTunes U settings don’t affect other settings, and vice versa. You
can set iTunes to automatically sync all or selected iTunes U content, or you can add
iTunes U content to iPod classic manually.
To set iTunes to update the iTunes U content on iPod classic automatically:
1 In iTunes, select iPod classic in the device list and click the iTunes U tab.
2 In the iTunes U pane, select “Sync iTunes U.”
3 Select the collections, items, and playlists you want, and set your sync options.
4 Click Apply.
When you set iTunes to sync iTunes U content automatically, iPod classic is updated
each time you connect it to your computer.
If “Sync only checked songs and videos” is selected in the Summary pane, iTunes syncs
only items that are checked in your iTunes U and other libraries.
Adding Audiobooks to iPod classic
You can purchase and download audiobooks from the iTunes Store or audible.com, or
import audiobooks from CDs, and listen to them on iPod classic.
Use iTunes to add audiobooks to iPod classic. If you sync iPod classic automatically, all
audiobooks in your iTunes library are included in a playlist named Audiobooks, which
you can sync to iPod classic. If you manage your content on iPod classic manually, you
can add audiobooks one at a time.
To sync audiobooks to iPod classic:
1 In iTunes, select iPod classic in the device list and click the Music tab.
2 Select Sync Music, and then do one of the following:
• Select “Entire music library.”
• Select “Selected playlists, artists, and genres,” and then select Audiobooks (under Playlists).
3 Click Apply.
The update begins automatically.
Adding Other Content to iPod classic
You can also use iTunes to sync photos, games, contacts, and more to iPod classic. You
can set iTunes to sync your content automatically, or you can manage your content on
iPod classic manually.
For more information about adding other types of content to iPod classic, see:
• “Adding Photos from Your Computer to iPod classic” on page 44
• “To sync games automatically to iPod classic:” on page 53
• “Syncing Contacts, Calendars, and To-Do Lists” on page 54
Managing iPod classic Manually
If you manage iPod classic manually, you can add and remove individual songs
(including music videos) and videos (including movies and TV shows). You can also
add music and videos from multiple computers to iPod classic without erasing items
already on iPod classic.
You can’t add Genius Mixes to iPod classic manually, but you can add Genius playlists
manually.
Setting iPod classic to manually manage music and video turns off the automatic sync
options in the Music, Movies, TV Shows, Podcasts, iTunes U, Photos, Contacts, and
Games panes. You can’t manually manage some and automatically sync others at the
same time.
If you set iTunes to manage content manually, you can reset it later to sync
automatically.
To set iTunes to let you manage content on iPod classic manually:
1 In iTunes, select iPod classic in the device list and click the Summary tab.
2 In the Options section, select “Manually manage music and videos.”
3 Click Apply.
When you manage content on iPod classic manually, you must always eject iPod classic
from iTunes before you disconnect it.
When you connect a manually managed iPod classic to a computer, it appears in the
device list on the left side of the iTunes window.
To add a song, video, or other item to iPod classic:
1 In iTunes, click Music or another item in the Library list on the left.
2 Drag a song or other item to the iPod classic icon in the device list.
To remove a song, video, or other item from iPod classic:
1 In iTunes, select iPod classic in the device list.
2 Select a song or other item on iPod classic and press the Delete or Backspace key on
your keyboard.
If you manually remove a song or other item from iPod classic, it isn’t deleted from your
iTunes library.
To create a new playlist on iPod classic:
1 In iTunes, select iPod classic in the device list, and then click the Add (+) button or
choose File > New Playlist.
2 Type a name for the playlist.
3 Click an item, such as Music, in the Library list, and then drag songs or other items to
the playlist.
To add items to or remove items from a playlist on iPod classic:
m Drag an item to a playlist on iPod classic to add the item. Select an item in a playlist
and press the Delete key on your keyboard to delete the item.
To reset iTunes to sync music, video, and podcasts automatically:
1 In iTunes, select iPod classic in the device list and click the Summary tab.
2 Deselect “Manually manage music and videos.”
3 Click Apply.
The update begins automatically.
3 Listening to Music
Read this chapter to learn about listening to iPod classic on
the go.
After you set up iPod classic, you can listen to songs, podcasts, audiobooks, and more.
Playing Music and Other Audio
When a song is playing, the Now Playing screen appears. The following table describes
the elements on the Now Playing screen of iPod classic.
Shuffle icon: Appears if iPod classic is set to shuffle songs or albums.
Repeat icon: Appears if iPod classic is set to repeat all songs. The Repeat Once icon appears if iPod classic is set to repeat one song.
Play icon: Appears when a song is playing. The Pause icon appears when a song is paused.
Battery icon: Shows the approximate remaining battery charge.
Album art: Shows the album art, if it’s available.
Song information: Displays the song title, artist, and album title. If you rate the song, rating stars are displayed. Also displays the number of the song that’s playing within the current sequence.
Progress bar: Shows the elapsed and remaining times for the song that’s playing.
Press the Center button to cycle through additional items in the Now Playing screen, described in the table below.
Use the Click Wheel and Center button to browse for a song or music video.
When you play music videos from the Music menu, you only hear the music. When you
play them from the Videos menu, you also see the video.
To browse for and play a song:
m Choose Music, browse for a song or music video, and then press Play/Pause.
To change the playback volume:
m When you see the progress bar, use the Click Wheel to change the volume.
If you don’t see the progress bar, press the Center button until it appears.
To listen to a different part of a song:
1 Press the Center button until you see the scrubber bar.
2 Use the Click Wheel to move the playhead along the scrubber bar.
To create a Genius playlist from the current song:
1 Press the Center button until you see the Genius slider.
2 Use the Click Wheel to move the slider to Start.
The Genius slider doesn’t appear if Genius information isn’t available for the current
song.
To set shuffle options from the Now Playing screen:
1 Press the Center button until you see the shuffle slider.
2 Use the Click Wheel to move the slider to Songs or Albums.
• Choose Songs to play all songs on iPod classic at random.
• Choose Albums to play all songs in the current album in order. iPod classic then randomly selects another album and plays through it in order.
Scrubber bar: Lets you quickly navigate to a different part of the track.
Genius slider: Creates a Genius playlist based on the current song. The slider doesn’t appear if Genius information isn’t available for the current song.
Shuffle slider: Lets you shuffle songs or albums directly from the Now Playing screen.
Song rating: Lets you rate the song.
Lyrics: Displays the lyrics of the current song. Lyrics don’t appear if you didn’t enter them in iTunes.
To just listen to a music video:
m Choose Music and browse for a music video.
When you play the video, you hear it but don’t see it. When you play a playlist that
includes video podcasts, you hear the podcasts but don’t see them.
To return to the previous menu:
m From any screen, press Menu.
Rating Songs
You can assign a rating to a song (from 1 to 5 stars) to indicate how much you like it.
You can use a song rating to help you create Smart Playlists automatically in iTunes.
To rate a song:
1 Start playing the song.
2 From the Now Playing screen, press the Center button until the five Rating bullets appear.
3 Use the Click Wheel to assign a rating (represented by stars).
The ratings you assign to songs on iPod classic are transferred to iTunes when you sync.
Note: You cannot assign ratings to video podcasts.
Viewing Lyrics on iPod classic
If you enter lyrics for a song in iTunes (see “Add Lyrics” on page 18) and then add the
song to iPod classic, you can view the lyrics on iPod classic. Lyrics don’t appear unless
you enter them.
To view lyrics on iPod classic while a song is playing:
m On the Now Playing screen, press the Center button until you see the lyrics. You can
scroll through the lyrics as the song plays.
Viewing Album Artwork on iPod classic
iTunes displays album artwork on iPod classic, if the artwork is available. Artwork
appears on iPod classic in the album list, when you play a song from the album, and in
Cover Flow (see the next section for more information about Cover Flow).
To see album artwork on iPod classic:
m Play a song that has album artwork and view it in the Now Playing screen.
For more information about album artwork, open iTunes and choose Help > iTunes Help.
Browsing Music Using Cover Flow
You can browse your music collection using Cover Flow, a visual way to flip through
your library. Cover Flow displays your albums alphabetically by artist name. You see the
album artwork, title, and artist name.
To use Cover Flow:
1 From the Music menu, choose Cover Flow.
2 Use the Click Wheel (or press Next/Fast-forward or Previous/Rewind) to move through
your album art.
3 Select an album and press the Center button.
4 Use the Click Wheel to select a song, and then press the Center button to play it.
Accessing Additional Commands
Some additional iPod classic commands can be accessed directly from the Now Playing
screen and some menus.
To access additional commands:
m Press and hold the Center button until a menu appears, select a command, and then
press the Center button again.
If a menu doesn’t appear, no additional commands are available.
Using Genius on iPod classic
Even when iPod classic isn’t connected to your computer, Genius can automatically
create instant playlists of songs that go great together. You can also play Genius Mixes,
which are preselected compilations of songs that go great together. You can create
Genius playlists in iTunes and add them to iPod classic, and you can sync Genius Mixes
to iPod classic.
To use Genius, you need to set up Genius in the iTunes Store, and then sync iPod classic
to iTunes (see “Turning On Genius in iTunes” on page 19).
To create a Genius playlist on iPod classic:
1 Select a song, and then press and hold the Center button until a menu appears.
You can select a song from a menu or playlist, or you can start from the Now Playing
screen.
2 Choose Start Genius.
Start Genius doesn’t appear in the menu of additional commands if any of the
following apply:
• You haven’t set up Genius in iTunes and then synced iPod classic with iTunes.
• Genius doesn’t recognize the song you selected.
• Genius recognizes the song, but there aren’t at least ten similar songs in your library.
3 Press the Center button. The new playlist appears.
4 To keep the playlist, choose Save Playlist.
The playlist is saved with the song title and artist of the song you used to make the
playlist.
5 To change the playlist to a new one based on the same song, choose Refresh. If you
refresh a saved playlist, the new playlist replaces the previous one. You can’t recover the
previous playlist.
You can also start Genius from the Now Playing screen by pressing the Center button
until you see the Genius slider, and then using the Click Wheel to move the slider to the
right. The Genius slider won’t appear if Genius information isn’t available for the current
song.
Genius playlists saved on iPod classic are synced back to iTunes when you connect
iPod classic to your computer.
To play a Genius playlist:
m Choose Music > Playlists, and then choose a Genius playlist.
Playing Genius Mixes
Genius Mixes are created for you by iTunes and contain songs from your library that go
great together. Genius Mixes provide a different listening experience each time you
play one. iTunes creates up to 12 Genius Mixes, depending on the variety of music in
your iTunes library.
To learn how to sync Genius Mixes to iPod classic, see “Syncing Genius Playlists and
Genius Mixes to iPod classic” on page 23.
To play a Genius Mix:
1 Choose Music > Genius Mixes.
2 Use the Click Wheel (or press Next/Fast-forward or Previous/Rewind) to browse the
Genius Mixes. The dots at the bottom of the screen indicate how many Genius Mixes
are synced to iPod classic.
3 To start playing a Genius Mix, press the Center button or Play/Pause when you see
its screen.
The Speaker icon appears when the selected Genius Mix is playing.
Creating On-The-Go Playlists on iPod classic
You can create On-The-Go playlists on iPod classic when iPod classic isn’t connected to
your computer.
To create an On-The-Go playlist:
1 Select a song, and then press and hold the Center button until a menu appears.
2 Choose “Add to On-The-Go,” and press the Center button.
3 To add more songs, repeat steps 1 and 2.
4 Choose Music > Playlists > On-The-Go to view and play your list of songs.
You can also add a group of songs. For example, to add an album, highlight the album
title, press and hold the Center button until a menu appears, and then choose “Add to
On-The-Go.”
To play songs in the On-The-Go playlist:
m Choose Music > Playlists > On-The-Go, and then choose a song.
To remove a song from the On-The-Go playlist:
1 Select a song in the playlist, and hold down the Center button until a menu appears.
2 Choose “Remove from On-The-Go,” and then press the Center button.
To clear the entire On-The-Go playlist:
m Choose Music > Playlists > On-The-Go > Clear Playlist and then click Clear.
To save the On-The-Go playlist on iPod classic:
m Choose Music > Playlists > On-The-Go > Save Playlist.
The first playlist is saved as “New Playlist 1” in the Playlists menu. The On-The-Go
playlist is cleared. You can save as many playlists as you like. After you save a playlist,
you can no longer remove songs from it.
To copy the On-The-Go playlists to your computer:
m If iPod classic is set to update songs automatically (see “Syncing Music Automatically”
on page 22), and you make an On-The-Go playlist, the playlist is automatically copied
to iTunes when you connect iPod classic. You see the new On-The-Go playlist in the list
of playlists in iTunes. You can rename, edit, or delete the new playlist, just as you would
any playlist.
Browsing Songs by Artist or Album
When you’re listening to a song, you can browse more songs by the same artist or all
the songs from the current album.
To browse songs by artist:
1 From the Now Playing screen, press and hold the Center button until a menu appears.
2 Choose Browse Artist, and then press the Center button.
You see the other songs by that artist that are on iPod classic. You can select another
song or return to the Now Playing screen.
To browse songs by album:
1 From the Now Playing screen, press and hold the Center button until a menu appears.
2 Choose Browse Album, and then press the Center button.
You see the other songs from the current album that are on iPod classic. You can select
another song or return to the Now Playing screen.
Setting iPod classic to Shuffle Songs
You can set iPod classic to play songs, albums, or your entire library in random order.
To set iPod classic to shuffle and play all your songs:
m Choose Shuffle Songs from the iPod classic main menu.
iPod classic begins playing songs from your entire music library in random order,
skipping audiobooks and podcasts.
To set iPod classic to always shuffle songs or albums:
1 Choose Settings from the iPod classic main menu.
2 Set Shuffle to either Songs or Albums.
When you set iPod classic to shuffle songs, iPod classic shuffles songs within whatever
list (for example, album or playlist) you choose to play.
When you set iPod classic to shuffle albums, it plays all the songs on an album in order,
and then randomly selects another album and plays through it in order.
You can also set iPod classic to shuffle songs directly from the Now Playing screen.
To set iPod classic to shuffle songs from the Now Playing screen:
1 From the Now Playing screen, press the Center button until the shuffle slider appears.
2 Use the Click Wheel to set iPod classic to shuffle songs or albums.
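The shuffle-albums behavior described above (play one album straight through, then hop to a randomly chosen other album and play it in order) can be sketched as a small model. This is illustrative only: the album names are made up, and it is not Apple's implementation.

```python
import random

# Toy model of the "shuffle albums" behavior (illustrative, not
# Apple's code): play one album's tracks in order, then pick a
# different album at random and play it in order, and so on.

albums = {  # hypothetical library
    "Album A": ["A1", "A2"],
    "Album B": ["B1", "B2", "B3"],
}

def shuffle_albums(albums, rounds, rng=random):
    playback = []
    title = rng.choice(sorted(albums))
    for _ in range(rounds):
        playback.extend(albums[title])  # whole album, track order kept
        # never repeat the album that just finished
        title = rng.choice([t for t in albums if t != title])
    return playback

order = shuffle_albums(albums, rounds=2, rng=random.Random(1))
# Tracks within each album always stay in order; only the album
# sequence is random.
```

Note that only the choice of next album is random; the track order inside each album is never disturbed, which is exactly what distinguishes Albums shuffle from Songs shuffle.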
Setting iPod classic to Repeat Songs
You can set iPod classic to repeat a song over and over, or repeat songs within the list
you choose to play.
To set iPod classic to repeat songs:
m Choose Settings from the iPod classic main menu.
• To repeat all songs in the list, set Repeat to All.
• To repeat one song over and over, set Repeat to One.
Searching Music
You can search iPod classic for songs, playlists, album titles, artist names, audio
podcasts, and audiobooks. The search feature doesn’t search videos, notes, calendar
items, contacts, or lyrics.
Note: Not all languages are supported.
To search for music:
1 From the Music menu, choose Search.
2 Enter a search string by using the Click Wheel to navigate the alphabet and pressing
the Center button to enter each character.
iPod classic starts searching as soon as you enter the first character, displaying the
results on the search screen. For example, if you enter “b,” iPod classic displays all
music items containing the letter “b.” If you enter “ab,” iPod classic displays all items
containing that sequence of letters.
To enter a space, press the Next/Fast-forward button.
To delete the previous character, press the Previous/Rewind button.
3 Press Menu to display the results list, which you can navigate by using the Click Wheel.
Items appear in the results list with icons identifying their type: song, video, artist,
album, audiobook, podcast, or iTunes U.
To return to Search (if Search is highlighted in the menu), press the Center button.
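The matching rule described above (each keystroke narrows the results to items containing the typed sequence) can be illustrated with a simple substring filter. The titles here are made up, and this sketch is not Apple's implementation; the real device also matches artists, albums, podcasts, and audiobooks.

```python
# Illustrative model of the incremental search rule
# (hypothetical titles; not Apple's implementation).
library = ["Abbey Road", "Back in Black", "Bad", "Thriller"]

def search(query):
    """Return every item containing `query`, ignoring case."""
    q = query.lower()
    return [item for item in library if q in item.lower()]

print(search("b"))   # items containing "b": Abbey Road, Back in Black, Bad
print(search("ab"))  # narrowed to items containing "ab": Abbey Road
```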
Customizing the Music Menu
You can add items to or remove them from the Music menu, just as you do with the
main menu. For example, you can add a Compilations item to the Music menu, so you
can easily choose compilations that are put together from various sources.
To add or remove items in the Music menu:
1 Choose Settings > Music Menu.
2 Select each item you want to appear in the Music menu. A checkmark indicates which
items have been added. To revert to the original Music menu settings, choose Reset Menu.
Setting the Maximum Volume Limit
You can choose to set a limit for the maximum volume on iPod classic and assign
a combination to prevent the setting from being changed.
To set the maximum volume limit for iPod classic:
1 Choose Settings > Volume Limit.
The volume control shows the current volume.
2 Use the Click Wheel to select the maximum volume limit.
You can press Play to hear the currently selected song play while you select the
maximum volume limit.
3 Press the Center button to set the maximum volume limit.
A triangle on the volume bar indicates the maximum volume limit.
4 Press the Menu button to accept the maximum volume limit without requiring
a combination to change it. Or, on the Enter Combination screen, set a combination
to require that the combination be entered to change the maximum volume limit.
5 To enter a combination:
• Use the Click Wheel to select a number for the first position. Press the Center button to confirm your choice and move to the next position.
• Use the same method to set the remaining numbers of the combination. You can use the Next/Fast-forward button to move to the next position and the Previous/Rewind button to move to the previous position. Press the Center button in the final position to confirm the entire combination.
If you set a combination, you must enter it before you can change or remove the
maximum volume limit.
The volume of songs and other audio may vary depending on how the audio was
recorded or encoded. See “Setting Songs to Play at the Same Volume Level” on page 37
for information about how to set a relative volume level in iTunes and on iPod classic.
Volume level may also vary if you use different earphones or headphones. With the
exception of the iPod Radio Remote, accessories that connect through the iPod Dock
Connector don’t support volume limits.
To change the maximum volume limit:
1 Choose Settings > Volume Limit.
2 If you set a combination, enter it by using the Click Wheel to select the numbers and
pressing the Center button to confirm them.
3 Use the Click Wheel to change the maximum volume limit.
4 Press the Play/Pause button to accept the change.
To remove the maximum volume limit:
1 If you’re currently listening to iPod classic, press Pause.
2 Choose Settings > Volume Limit.
3 If you set a combination, enter it by using the Click Wheel to select the numbers and
pressing the Center button to confirm them.
4 Use the Click Wheel to move the volume limit to the maximum level on the volume bar.
This removes any restriction on volume.
5 Press the Play/Pause button to accept the change.
If you forget the combination, you can restore iPod classic. See “Updating and
Restoring iPod Software” on page 65 for more information.
Setting Songs to Play at the Same Volume Level
iTunes can automatically adjust the volume of songs, so they play at the same relative
volume level. You can set iPod classic to use the iTunes volume settings.
To set iTunes to play songs at the same sound level:
1 In iTunes, choose iTunes > Preferences if you’re using a Mac, or choose
Edit > Preferences if you’re using a Windows PC.
2 Click Playback and select Sound Check, and then click OK.
To set iPod classic to use the iTunes volume settings:
m Choose Settings and set Sound Check to On.
If you haven’t activated Sound Check in iTunes, setting it on iPod classic has no effect.
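The idea behind Sound Check (a stored per-song adjustment so every song plays at the same relative level) can be sketched roughly as follows. The reference level and numbers are invented for illustration; this is not Apple's actual algorithm or target value.

```python
# Rough sketch of the Sound Check idea (illustrative only; the
# reference level below is an assumption, not Apple's actual value).
REFERENCE_DB = -16.0  # hypothetical target playback loudness

def playback_gain_db(measured_loudness_db):
    """Gain (dB) to apply so a song plays at the reference level."""
    return REFERENCE_DB - measured_loudness_db

loud_gain = playback_gain_db(-10.0)   # loud song: turned down (-6.0 dB)
quiet_gain = playback_gain_db(-20.0)  # quiet song: turned up (+4.0 dB)
```

The point is that the adjustment is relative: loud recordings get negative gain, quiet ones positive gain, so everything lands near the same level without re-encoding the files.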
Using the Equalizer
You can use equalizer presets to change the sound on iPod classic to suit a particular
music genre or style. For example, to make rock music sound better, set the equalizer
to Rock.
To use the equalizer to change the sound on iPod classic:
m Choose Settings > EQ and choose an equalizer preset.
If you assigned an equalizer preset to a song in iTunes and the iPod classic equalizer is
set to Off, the song plays using the iTunes setting. See iTunes Help for more information.
Playing Podcasts
Podcasts are free, downloadable shows available at the iTunes Store. Podcasts are
organized by shows, episodes within shows, and chapters within episodes. If you stop
playing a podcast and return to it later, the podcast begins playing where you left off.
To play a podcast:
1 From the main menu, choose Podcasts, and then choose a show.
Shows appear in reverse chronological order so you can play the most recent one first.
You see a blue dot next to shows and episodes you haven’t played yet.
2 Choose an episode to play it.
For audio podcasts, the Now Playing screen displays the show, episode, and date
information, along with elapsed and remaining time. Press the Center button to bring
up the scrubber, star ratings, and other information about the podcast. For video
podcasts, you control the podcast as you do other videos.
If the podcast has chapters, you can press Next/Fast-forward or Previous/Rewind to skip
to the next chapter or to the beginning of the current chapter in the podcast.
If a podcast includes artwork, you also see a picture. Podcast artwork can change
during an episode.
For more information about podcasts, open iTunes and choose Help > iTunes Help.
Then search for “podcasts.”
Playing iTunes U Content
iTunes U is a part of the iTunes Store featuring free lectures, language lessons,
audiobooks, and more, which you can download and enjoy on iPod classic. iTunes U
content is organized by collections, items within collections, authors, and providers.
If you stop listening to iTunes U content and return to it later, the collection or item
begins playing where you left off.
To play iTunes U content:
1 From the main menu, choose iTunes U, and then choose a collection.
Items within a collection appear in reverse chronological order so you can listen to the
most recent one first. You see a blue dot next to collections and items you haven’t
played yet.
2 Choose an item to play it.
For more information about iTunes U, open iTunes and choose Help > iTunes Help.
Then search for “iTunes U.”
Listening to Audiobooks
To listen to audiobooks on iPod classic, choose Audiobooks from the Music menu.
Choose an audiobook and then press Play/Pause.
If you stop listening to an audiobook on iPod classic and return to it later, the
audiobook begins playing where you left off. iPod classic skips audiobooks when
it’s set to shuffle.
If the audiobook you’re listening to has chapters, you can press Next/Fast-forward or
Previous/Rewind to skip to the next chapter or the beginning of the current chapter in
the audiobook. You can also choose the audiobook from the Audiobooks menu and
then choose a chapter, or choose Resume to begin playing where you left off.
You can play audiobooks at speeds faster or slower than normal. Setting the playing
speed affects only audiobooks purchased from the iTunes Store or from audible.com.
To set audiobook play speed:
m Choose Settings > Audiobooks and choose a speed.
Listening to FM Radio
You can listen to radio using the optional iPod Radio Remote accessory for iPod classic.
iPod Radio Remote attaches to iPod classic using the Dock connector. When you’re
using iPod Radio Remote, you see a Radio menu item on the iPod classic main menu.
For more information, see the iPod Radio Remote documentation.
4 Watching Videos
You can use iPod classic to watch movies, TV shows, video
podcasts, and more. Read this chapter to learn about
watching videos on iPod classic and on your TV.
You can view and listen to videos on iPod classic. If you have a compatible AV cable
(available separately at www.apple.com/ipodstore or your local Apple store), you can
watch videos from iPod classic on your TV.
Watching Videos on iPod classic
Videos you add to iPod classic appear in the Videos menus. Music videos also appear in
Music menus.
To watch a video on iPod classic:
1 Choose Videos and browse for a video.
2 Select a video and then press Play/Pause.
When you play the video, you see and hear it.
Watching Video Podcasts
To watch a video podcast:
m From the main menu, choose Podcasts and then choose a video podcast.
For more information, see “Playing Podcasts” on page 38.
Watching Video Downloaded from iTunes U
To watch an iTunes U video:
m From the main menu, choose iTunes U and then choose a video.
For more information, see “Playing iTunes U Content” on page 38.
Watching Videos on a TV Connected to iPod classic
If you have an AV cable from Apple, you can watch videos on a TV connected to
iPod classic. First you set iPod classic to display videos on a TV, then connect
iPod classic to your TV, and then play a video.
Use the Apple Component AV Cable, the Apple Composite AV Cable, or the Apple AV
Connection Kit. Other similar RCA-type cables might not work. You can purchase the
cables at www.apple.com/ipodstore or your local Apple store.
To set iPod classic to display videos on a TV:
m Choose Videos > Settings, and then set TV Out to Ask or On.
If you set TV Out to Ask, iPod classic gives you the option of displaying videos on TV or
on iPod classic every time you play a video. If you set TV Out to On, iPod classic displays
videos only on TV. If you try to play a video when iPod classic isn’t connected to a TV,
iPod classic displays a message instructing you to connect to one.
You can also set video to display full screen or widescreen, and set video to display on
PAL or NTSC devices.
To set TV settings:
m Choose Videos > Settings, and then follow the instructions below.
To set                            Do this
Video to display on a TV          Set TV Out to Ask or On.
Video to display on PAL or        Set TV Signal to NTSC or PAL. NTSC and PAL refer
NTSC TVs                          to TV broadcast standards. Your TV might use
                                  either of these, depending on the region where
                                  it was purchased. If you aren’t sure which your
                                  TV uses, check the documentation that came with
                                  your TV.
The format of your external TV    Set TV Screen to Widescreen for 16:9 format or
                                  Standard for 4:3 format.
Video to fit to your screen       Set “Fit to Screen” to On. If you set “Fit to
                                  Screen” to Off, widescreen videos display in
                                  letterbox format on iPod classic or a standard
                                  (4:3) TV screen.
Alternate audio to play           Set Alternate Audio to On.
Captions to display               Set Captions to On.
Subtitles to display              Set Subtitles to On.
To use the Apple Component AV Cable to connect iPod classic to your TV:
1 Plug the green, blue, and red video connectors into the component video input
(Y, Pb, and Pr) ports on your TV.
You can also use the Apple Composite AV Cable. If you do, plug the yellow video
connector into the video input port on your TV. Your TV must have RCA video and
audio ports.
2 Plug the white and red audio connectors into the left and right analog audio input
ports, respectively, on your TV.
3 Plug the iPod Dock Connector into iPod classic or the Universal Dock.
4 Plug the USB connector into your USB Power Adapter or your computer to keep
iPod classic charged.
5 Turn on iPod classic and your TV or receiver to start playing.
Make sure you set TV Out on iPod classic to On.
Note: The ports on your TV or receiver may differ from the ports in the illustration.
To view a video on your TV:
1 Connect iPod classic to your TV (see above).
2 Turn on your TV and set it to display from the input ports connected to iPod classic.
See the documentation that came with your TV for more information.
3 On iPod classic, choose Videos and browse for a video.
[Illustration: iPod classic connected to a television. The USB connector plugs into the USB Power Adapter, the iPod Dock Connector plugs into iPod classic, the video connectors plug into the TV’s video in (Y, Pb, Pr) ports, and the white and red connectors plug into the left and right audio ports.]
5 Adding and Viewing Photos
Read this chapter to learn about importing and viewing
photos.
You can import digital photos to your computer and add them to iPod classic.
You can view your photos on iPod classic or as a slideshow on your TV.
Importing Photos
If your computer is a Mac, you can import photos from a digital camera to your
computer using iPhoto.
You can import other digital images into iPhoto, such as images you download from
the web. For more information about importing, organizing, and editing your photos,
open iPhoto and choose Help > iPhoto Help.
iPhoto is available for purchase as part of the iLife suite of applications at
www.apple.com/ilife or your local Apple Store. iPhoto might already be installed on
your Mac, in the Applications folder.
To import photos to a Windows PC, follow the instructions that came with your digital
camera or photo application.
Adding Photos from Your Computer to iPod classic
If you have a Mac and iPhoto 7.1.5 or later, you can sync iPhoto albums automatically
(for Mac OS X v10.4.11, iPhoto 6.0.6 or later is required). If you have a PC or Mac, you can
add photos to iPod classic from a folder on your hard disk.
Adding photos to iPod classic the first time might take some time, depending on how
many photos are in your photo library.
To sync photos from a Mac to iPod classic using iPhoto:
1 In iTunes, select iPod classic in the device list and click the Photos tab.
2 Select “Sync photos from: …” and then choose iPhoto from the pop-up menu.
3 Select your sync options.
 If you want to add all your photos, select “All photos, albums, events, and faces.”
 If you want to add selected photos, select “Selected albums, events, and faces, and
automatically include …” and choose an option from the pop-up menu. Then select
the albums, events, and faces you want to add. (Faces is supported only by iPhoto 8.1
or later.)
 If you want to add videos from iPhoto, select “Include videos.”
4 Click Apply.
To add photos from a folder on your hard disk to iPod classic:
1 Drag the images to a folder on your computer.
If you want images to appear in separate photo albums on iPod classic, create folders
within the main image folder, and drag images to the new folders.
2 In iTunes, select iPod classic in the device list and click the Photos tab.
3 Select “Sync photos from: …”
4 Choose “Choose Folder” from the pop-up menu and select the image folder.
5 Click Apply.
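The folder layout described above can be sketched in code. This is a hypothetical example: the “iPod Photos” base folder and the album names are placeholders, not names iTunes requires.

```python
# Hypothetical sketch of the album folder layout described above.
# The base folder and album names are placeholder examples.
import os

base = os.path.join("Pictures", "iPod Photos")
albums = ["Vacation", "Family", "Pets"]

# Each subfolder inside the main image folder becomes a separate
# photo album on iPod classic after syncing.
for album in albums:
    os.makedirs(os.path.join(base, album), exist_ok=True)
```

You would then drag images into the subfolders and choose the base folder in the Photos tab in iTunes.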
Adding Full-Resolution Image Files to iPod classic
When you add photos to iPod classic, iTunes optimizes the photos for viewing.
Full-resolution image files aren’t transferred by default. Adding full-resolution image
files is useful, for example if you want to move them from one computer to another,
but isn’t necessary for viewing the images at full quality on iPod classic.
To add full-resolution image files to iPod classic:
1 In iTunes, select iPod classic in the device list and click the Photos tab.
2 Select “Include full-resolution photos.”
3 Click Apply.
iTunes copies the full-resolution versions of the photos to the Photos folder on
iPod classic.
To delete photos from iPod classic:
1 In iTunes, select iPod classic in the source list and click the Photos tab.
2 Select “Sync photos from: …”
 On a Mac, choose iPhoto from the pop-up menu.
 On a Windows PC, choose Photoshop Album or Photoshop Elements from the pop-up
menu.
3 Choose “Selected albums” and deselect the albums you no longer want on iPod classic.
4 Click Apply.
Viewing Photos
You can view photos on iPod classic manually or as a slideshow. If you have an optional
AV cable from Apple (for example, the Apple Component AV Cable), you can connect
iPod classic to a TV and view photos as a slideshow with music.
Viewing Photos on iPod classic
To view photos on iPod classic:
1 On iPod classic, choose Photos > All Photos. Or choose Photos and a photo album
to see only the photos in the album. Thumbnail views of the photos might take
a moment to appear.
2 Select the photo you want and press the Center button to view a full-screen version.
From any photo-viewing screen, use the Click Wheel to scroll through photos. Press the
Next/Fast-forward or Previous/Rewind button to skip to the next or previous screen of
photos. Press and hold the Next/Fast-forward or Previous/Rewind button to skip to the
last or first photo in the library or album.
Viewing Slideshows
You can view a slideshow, with music and transitions if you choose, on iPod classic.
If you have an optional AV cable from Apple, you can view the slideshow on a TV.
To set slideshow settings:
m Choose Photos > Settings, and then follow these instructions:
To set                            Do this
How long each slide is shown      Choose Time Per Slide and pick a time.
The music that plays during       Choose Music and choose a playlist. If you’re
slideshows                        using iPhoto, you can choose From iPhoto to
                                  copy the iPhoto music setting. Only the songs
                                  that you’ve added to iPod classic play.
Slides to repeat                  Set Repeat to On.
Slides to display in random       Set Shuffle Photos to On.
order
Slides to display with            Choose Transitions and choose a transition type.
transitions
Slideshows to display on          Set TV Out to Ask or Off.
iPod classic
Slideshows to display on TV       Set TV Out to Ask or On.
                                  If you set TV Out to Ask, iPod classic gives you
                                  the option of showing slideshows on TV or on
                                  iPod classic every time you start a slideshow.
Slides to show on PAL             Set TV Signal to PAL or NTSC.
or NTSC TVs                       PAL and NTSC refer to TV broadcast standards.
                                  Your TV might use either of these, depending on
                                  the region where it was purchased. If you aren’t
                                  sure which your TV uses, check the documentation
                                  that came with your TV.
To view a slideshow on iPod classic:
m Select any photo, album, or roll, and press the Play/Pause button. Or select any
full-screen photo and press the Center button. To pause, press the Play/Pause button.
To skip to the next or previous photo, press the Next/Fast-forward or Previous/Rewind
button.
When you view a slideshow, you can use the Click Wheel to control the music volume
and adjust the brightness. You can’t use the Click Wheel to scroll through photos
during a slideshow.
If you view a slideshow of an album that includes videos, the slideshow pauses when
it reaches a video. If music is playing, it continues to play. If you play the video, the
music pauses while the video is playing, and then resumes. To play the video, press
Play/Pause. To resume the slideshow, press Next/Fast-forward.
To adjust the brightness during a slideshow:
1 Press the Center button until the brightness indicator appears.
2 Use the Click Wheel to adjust the brightness.
To connect iPod classic to your TV:
1 Connect the optional Apple Component or Composite AV cable to iPod classic.
Use the Apple Component AV Cable, Apple Composite AV Cable, or Apple AV
Connection Kit. Other similar RCA-type cables won’t work. You can purchase the
cables at www.apple.com/ipodstore or your local Apple store.
2 Connect the video and audio connectors to the ports on your TV (see the illustration
on page 42).
Make sure you set TV Out on iPod classic to Ask or On.
Your TV must have RCA video and audio ports. The ports on your TV or receiver may
differ from the ports in the illustration.
To view a slideshow on a TV:
1 Connect iPod classic to a TV (see above).
2 Turn on your TV and set it to display from the input ports connected to iPod classic.
See the documentation that came with your TV for more information.
3 Use iPod classic to play and control the slideshow.
Adding Photos from iPod classic to a Computer
If you add full-resolution photos from your computer to iPod classic using the previous
steps, they’re stored in a Photos folder on iPod classic. You can connect iPod classic to a
computer and put these photos onto the computer. iPod classic must be enabled for
disk use (see “Using iPod classic as an External Disk” on page 49).
To add photos from iPod classic to a computer:
1 Connect iPod classic to the computer.
2 Drag image files from the Photos folder or DCIM folder on iPod classic to the desktop or
to a photo editing application on the computer.
Note: You can also use a photo editing application, such as iPhoto, to add photos
stored in the Photos folder. See the documentation that came with the application
for more information.
To delete photos from the Photos folder on iPod classic:
1 Connect iPod classic to the computer.
2 Navigate to the Photos folder on iPod classic and delete the photos you no longer
want.
6 More Settings, Extra Features, and
Accessories
iPod classic can do a lot more than play songs. And you can
do a lot more with it than listen to music.
Read this chapter to find out more about the extra features of iPod classic, including
how to use it as an external disk, alarm, or sleep timer; show the time of day in other
parts of the world; display notes; and sync contacts, calendars, and to-do lists. Learn
about how to use iPod classic as a stopwatch and to lock the screen, and about the
accessories available for iPod classic.
Using iPod classic as an External Disk
You can use iPod classic as an external disk to store data files.
Note: To add music and other audio or video files to iPod classic, you must use
iTunes. For example, you won’t see songs you add using iTunes in the Mac Finder
or in Windows Explorer. Likewise, if you copy music files to iPod classic in the
Mac Finder or Windows Explorer, you won’t be able to play them on iPod classic.
To enable iPod classic as an external disk:
1 In iTunes, select iPod classic in the device list and click the Summary tab.
2 In the Options section, select “Enable disk use.”
3 Click Apply.
When you use iPod classic as an external disk, the iPod classic disk icon appears on
the desktop on Mac, or as the next available drive letter in Windows Explorer on a
Windows PC.
Note: Clicking Summary and selecting “Manually manage music and videos” in the
Options section also enables iPod classic to be used as an external disk. Drag files to
and from iPod classic to copy them.
If you use iPod classic primarily as a disk, you might want to keep iTunes from opening
automatically when you connect iPod classic to your computer.
To prevent iTunes from opening automatically when you connect iPod classic
to your computer:
1 In iTunes, select iPod classic in the device list and click the Summary tab.
2 In the Options section, deselect “Open iTunes when this iPod is connected.”
3 Click Apply.
Using Extra Settings
You can set the date and time, clocks in different time zones, and alarm and sleep
features on iPod classic. You can use iPod classic as a stopwatch or to play games, and
you can lock the iPod classic screen.
Setting and Viewing the Date and Time
The date and time are set automatically from your computer’s clock when you connect
iPod classic, but you can change the settings.
To set date and time options:
1 Choose Settings > Date & Time.
2 Choose one or more of the following options:
To                                  Do this
Set the date                        Choose Date. Use the Click Wheel to change the
                                    selected value. Press the Center button to
                                    move to the next value.
Set the time                        Choose Time. Use the Click Wheel to change the
                                    selected value. Press the Center button to
                                    move to the next value.
Specify the time zone               Choose Time Zone and use the Click Wheel to
                                    select a city in another time zone.
Display the time in 24-hour format  Choose 24 Hour Clock and press the Center
                                    button to turn the 24-hour format on or off.
Display the time in the title bar   Choose Time in Title and press the Center
                                    button to turn the option on or off.
Adding Clocks for Other Time Zones
To add clocks for other time zones:
1 Choose Extras > Clocks.
2 On the Clocks screen, click the Center button and choose Add.
3 Choose a region and then choose a city.
The clocks you add appear in a list. The last clock you added appears last.
To delete a clock:
1 Choose Extras > Clocks.
2 Choose the clock.
3 Choose Delete.
Setting the Alarm
You can set an alarm for any clock on iPod classic.
To use iPod classic as an alarm clock:
1 Choose Extras > Alarms.
2 Choose Create Alarm and set one or more of the following options:
To                    Do this
Turn the alarm on     Choose Alarm and choose On.
Set the date          Choose Date. Use the Click Wheel to change the selected
                      value. Press the Center button to move to the next value.
Set the time          Choose Time. Use the Click Wheel to change the selected
                      value. Press the Center button to move to the next value.
Set a repeat option   Choose Repeat and choose an option (for example,
                      “weekdays”).
Choose a sound        Choose Tones or a playlist. If you choose Tones, select
                      Beep to hear the alarm through the internal speaker. If you
                      choose a playlist, you’ll need to connect iPod classic to
                      speakers or headphones to hear the alarm.
Name the alarm        Choose Label and choose an option (for example, “Wake up”).
To delete an alarm:
1 Choose Extras > Alarms.
2 Choose the alarm and then choose Delete.
Setting the Sleep Timer
You can set iPod classic to turn off automatically after playing music or other
content for a specific period of time.
To set the sleep timer:
1 Choose Extras > Alarms.
2 Choose Sleep Timer and choose how long you want iPod classic to play.
Using the Stopwatch
You can use the stopwatch as you exercise to track your overall time and, if you’re
running on a track, your lap times. You can play music while you use the stopwatch.
To use the stopwatch:
1 Choose Extras > Stopwatch.
2 Press the Play/Pause button to start the timer.
3 Press the Center button to record lap times. Up to three lap times show beneath the
overall time.
4 Press the Play/Pause button to stop the overall timer, or choose Resume to start the
timer again.
5 Choose New Timer to start a new stopwatch session.
Note: After you start the stopwatch, the stopwatch continues to run as long as you
display the Timer screen. If you start the stopwatch and then go to another menu, and
iPod classic isn’t playing music or a video, the stopwatch timer stops and iPod classic
turns off automatically after a few minutes.
To review or delete a logged stopwatch session:
1 Choose Extras > Stopwatch.
The current log and a list of saved sessions appear.
2 Choose a log to view session information.
iPod classic stores stopwatch sessions with dates, times, and lap statistics. You see the
date and time the session started; the total time of the session; the shortest, longest,
and average lap times; and the last several lap times.
3 Press the Center button and choose Delete Log to delete the chosen log, or Clear Logs
to delete all current logs.
Playing Games
iPod classic comes with three games: iQuiz, Klondike, and Vortex.
To play a game:
m Choose Extras > Games and choose a game.
You can purchase additional games from the iTunes Store (in some countries) to play
on iPod classic. After purchasing games in iTunes, you can add them to iPod classic by
syncing them automatically or by managing them manually.
To buy a game:
1 In iTunes, select iTunes Store in the list on the left side of the iTunes window.
2 Choose iPod Games from the iTunes Store list.
3 Select the game you want and click Buy Game.
To sync games automatically to iPod classic:
1 In iTunes, select iPod classic in the device list and click the Games tab.
2 Select “Sync games.”
3 Click “All games” or “Selected games.” If you click “Selected games,” also select the
games you want to sync.
4 Click Apply.
Locking the iPod classic Screen
You can set a combination to prevent iPod classic from being used by someone
without your permission. When you lock an iPod classic that isn’t connected to a
computer, you must enter a combination to unlock and use it.
Note: This is different from the Hold button in that the Hold button prevents
iPod classic buttons from being pressed accidentally. The combination prevents
another person from using iPod classic.
To set a combination for iPod classic:
1 Choose Extras > Screen Lock.
2 On the New Combination screen, enter a combination:
 Use the Click Wheel to select a number for the first position. Press the Center button
to confirm your choice and move to the next position.
 Use the same method to set the remaining numbers of the combination. You can use
the Next/Fast-forward button to move to the next position and the Previous/Rewind
button to move to the previous position. Press the Center button in the final
position.
3 On the Confirm Combination screen, enter the combination to confirm it, or press
Menu to exit without locking the screen.
When you finish, you return to the Screen Lock screen, where you can lock the screen
or reset the combination. Press the Menu button to exit without locking the screen.
To lock the iPod classic screen:
m Choose Extras > Screen Lock > Lock.
If you just finished setting your combination, Lock will already be selected on the
screen. Just press the Center button to lock iPod classic.
When the screen is locked, you see a picture of a lock.
Note: You might want to add the Screen Lock menu item to the main menu so that
you can quickly lock the iPod classic screen. See “Adding or Removing Items in the
Main Menu” on page 8.
When you see the lock on the screen, you can unlock the iPod classic screen in
two ways:
 Press the Center button to enter the combination on iPod classic. Use the Click Wheel
to select the numbers and press the Center button to confirm them. If you enter the
wrong combination, the lock remains. Try again.
 Connect iPod classic to the primary computer you use it with, and iPod classic
automatically unlocks.
Note: If you try these methods and you still can’t unlock iPod classic, you can restore
iPod classic. See “Updating and Restoring iPod Software” on page 65.
To change a combination you’ve already set:
1 Choose Extras > Screen Lock > Reset.
2 On the Enter Combination screen, enter the current combination.
3 On the New Combination screen, enter and confirm a new combination.
Note: If you can’t remember the current combination, the only way to clear it and enter
a new one is to restore the iPod classic software. See “Updating and Restoring iPod
Software” on page 65.
Syncing Contacts, Calendars, and To-Do Lists
iPod classic can store contacts, calendar events, and to-do lists for viewing on the go.
If you’re using Mac OS X v10.4 or later, you can use iTunes to sync the contact and
calendar information on iPod classic with Address Book and iCal. If you’re using any
version of Mac OS X earlier than 10.4, you can use iSync to sync your information.
Syncing information using iSync requires iSync 1.1 or later, and iCal 1.0.1 or later.
If you’re using Windows XP, and you use Windows Address Book or Microsoft Outlook
2003 or later to store your contact information, you can use iTunes to sync the address
book information on iPod classic. If you use Microsoft Outlook 2003 or later to keep a
calendar, you can also sync calendar information.
To sync contacts or calendar information using Mac OS X v10.4 or later:
1 Connect iPod classic to your computer.
2 In iTunes, select iPod classic in the device list and click the Contacts tab.
3 Do one of the following:
 To sync contacts, in the Contacts section, select “Sync Address Book contacts,” and
select an option:
 To sync all contacts automatically, select “All contacts.”
 To sync selected groups of contacts automatically, select “Selected groups” and
select the groups you want to sync.
 To copy contacts’ photos to iPod classic, when available, select “Include contacts’
photos.”
When you click Apply, iTunes updates iPod classic with the Address Book contact
information you specified.
 To sync calendars, in the Calendars section, select “Sync iCal calendars,” and choose
an option:
 To sync all calendars automatically, choose “All calendars.”
 To sync selected calendars automatically, choose “Selected calendars” and select
the calendars you want to sync.
When you click Apply, iTunes updates iPod classic with the calendar information
you specified.
To sync contacts and calendars with a Mac and iSync using a version of Mac OS X
earlier than v10.4:
1 Connect iPod classic to your computer.
2 Open iSync and choose Devices > Add Device. You need to do this step only the first
time you use iSync with iPod classic.
3 Select iPod classic and click Sync Now. iSync puts information from iCal and Mac
Address Book onto iPod classic.
The next time you want to sync iPod classic, you can simply open iSync and click Sync
Now. You can also choose to have iPod classic sync automatically when you connect it.
Note: iSync syncs information from your computer with iPod classic. You can’t use iSync
to sync information from iPod classic to your computer.
To sync contacts or calendars using Windows Address Book or Microsoft Outlook for
Windows:
1 Connect iPod classic to your computer.
2 In iTunes, select iPod classic in the device list and click the Contacts tab.
3 Do one of the following:
 To sync contacts, in the Contacts section, select “Sync contacts from” and choose
Windows Address Book or Microsoft Outlook from the pop-up menu. Then select
which contact information you want to sync.
 To sync calendars from Microsoft Outlook, in the Calendars section, select “Sync
calendars from Microsoft Outlook.”
4 Click Apply.
You can also add contact and calendar information to iPod classic manually. iPod classic
must be enabled as an external disk (see “Using iPod classic as an External Disk” on
page 49).
To add contact information manually:
1 Connect iPod classic and open your favorite email or contacts application. You can add
contacts using Palm Desktop, Microsoft Outlook, Microsoft Entourage, and Eudora,
among others.
2 Drag contacts from the application’s address book to the Contacts folder on
iPod classic.
In some cases, you might need to export contacts and then drag the exported file or
files to the Contacts folder. See the documentation for your email or contacts
application.
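Contact files dragged to the Contacts folder are ordinary vCard files. As a hypothetical sketch, you could also write one by hand; the name and phone number below are placeholder examples.

```python
# Hypothetical sketch: a minimal vCard file of the kind an email or
# contacts application exports. All personal details are placeholders.
contact = "\r\n".join([
    "BEGIN:VCARD",
    "VERSION:3.0",
    "N:Appleseed;Jane;;;",
    "FN:Jane Appleseed",
    "TEL;TYPE=CELL:+1-555-0100",
    "END:VCARD",
]) + "\r\n"

# Save the file, then drag it to the Contacts folder on iPod classic.
with open("Jane Appleseed.vcf", "w", newline="") as f:
    f.write(contact)
```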
To add appointments and other calendar events manually:
1 Export calendar events from any calendar application that uses the standard iCal
format (filenames end in .ics) or vCal format (filenames end in .vcs).
2 Drag the files to the Calendars folder on iPod classic.
Note: To add to-do lists to iPod classic manually, save them in a calendar file with a .ics
or .vcs extension.
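The calendar files described above use the standard iCalendar format. As a hypothetical sketch, a minimal .ics file with one event could look like this; the dates, UID, and summary are placeholder examples.

```python
# Hypothetical sketch: a minimal iCalendar (.ics) file with a single
# event. The UID, dates, and summary are placeholder examples.
event = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//Example//Manual Sketch//EN",
    "BEGIN:VEVENT",
    "UID:20090101-dentist@example.com",
    "DTSTART:20090101T100000",
    "DTEND:20090101T110000",
    "SUMMARY:Dentist appointment",
    "END:VEVENT",
    "END:VCALENDAR",
]) + "\r\n"

# Save the file, then drag it to the Calendars folder on iPod classic.
with open("appointment.ics", "w", newline="") as f:
    f.write(event)
```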
To view contacts on iPod classic:
m Choose Extras > Contacts.
To sort contacts by first or last name:
m Choose Settings > Sort By, and press the Center button to choose First or Last.
To view calendar events:
m Choose Extras > Calendars.
To view to-do lists:
m Choose Extras > Calendars > To Do’s.
Storing and Reading Notes
You can store and read text notes on iPod classic if it’s enabled as an external disk
(see page 49).
To store notes on iPod classic:
1 In any word-processing application, save a document as a text (.txt) file.
2 Place the file in the Notes folder on iPod classic.
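The steps above amount to writing a plain-text file; as a hypothetical sketch, the filename and contents below are placeholder examples.

```python
# Hypothetical sketch: creating a plain-text note; the filename and
# contents are placeholder examples.
note_text = "Packing list:\n- Passport\n- Charger\n- Headphones\n"

# Save as a .txt file, then place it in the Notes folder on iPod classic.
with open("Packing list.txt", "w") as f:
    f.write(note_text)
```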
To view notes:
m Choose Extras > Notes.
Recording Voice Memos
You can record voice memos using the optional Apple Earphones with Remote and Mic
or an optional iPod classic–compatible microphone (available for purchase at
www.apple.com/ipodstore or your local Apple store). You can store voice memos on
iPod classic and sync them with your computer. You can set voice memo quality to
Low or High.
Note: Voice memos can’t be longer than two hours. If you record for more than two
hours, iPod classic automatically starts a new voice memo to continue your recording.
To record a voice memo:
1 Connect the Apple Earphones with Remote and Mic to iPod classic, or connect a
microphone to the Dock connector port on iPod classic.
The Voice Memos item appears in the main menu.
2 To begin recording, choose Voice Memos > Start Recording.
3 Speak while wearing the Apple Earphones with Remote and Mic, or hold the
microphone a few inches from your mouth and speak. To pause recording, press the
Play/Pause button.
Choose Resume to continue recording.
4 When you finish, choose Stop and Save. Your saved recording is listed by date and time.
To play a recording:
m Choose Extras > Voice Memos and select the recording.
Note: You won’t see a Voice Memos menu item if you’ve never connected a
microphone or the Apple Earphones with Remote and Mic to iPod classic.
To sync voice memos with your computer:
Voice memos are saved in a Voice Memos folder on iPod classic in the WAV file
format. If you enable iPod classic for disk use, you can drag voice memos from the
folder to copy them.
If iPod classic is set to sync songs automatically (see “Syncing Music Automatically” on
page 22) and you record voice memos, the voice memos are automatically synced
to a playlist in iTunes (and removed from iPod classic) when you connect iPod classic.
You see the new Voice Memos playlist in the list of playlists on the left side of the
iTunes window.
Learning About iPod classic Accessories
iPod classic comes with some accessories, and many other accessories are available. To
purchase iPod classic accessories, go to www.apple.com/ipodstore.
Available accessories include:
 iPod Radio Remote
 Apple Universal Dock
 Apple Component AV Cable
 Apple Composite AV Cable
 Apple USB Power Adapter
 Apple Earphones with Remote and Mic
 Apple In-Ear Headphones with Remote and Mic
 iPod Socks
To use the earphones included with iPod classic:
m Plug the earphones into the Headphones port. Then place the earbuds in your ears.
WARNING: Permanent hearing loss may occur if earbuds or headphones are used
at high volume. You can adapt over time to a higher volume of sound that may sound
normal but can be damaging to your hearing. If you experience ringing in your
ears or muffled speech, stop listening and have your hearing checked. The louder the
volume, the less time is required before your hearing could be affected. Hearing
experts suggest that to protect your hearing:
 Limit the amount of time you use earbuds or headphones at high volume.
 Avoid turning up the volume to block out noisy surroundings.
 Turn the volume down if you can’t hear people speaking near you.
For information about setting a maximum volume limit on iPod classic, see “Setting
the Maximum Volume Limit” on page 36.
The earphone cord is adjustable.
7 Tips and Troubleshooting
Most problems with iPod classic can be solved quickly by
following the advice in this chapter.
The 5 Rs: Reset, Retry, Restart, Reinstall, Restore
Remember these five basic suggestions if you have a problem with iPod classic. Try
these steps one at a time until your issue is resolved. If one of the following doesn’t
help, read on for solutions to specific problems.
 Reset iPod classic. See “General Suggestions,” below.
 Retry with a different USB port if you cannot see iPod classic in iTunes.
 Restart your computer, and make sure you have the latest software updates
installed.
 Reinstall iTunes software from the latest version on the web.
 Restore iPod classic. See “Updating and Restoring iPod Software” on page 65.
General Suggestions
Most problems with iPod classic can be solved by resetting it. First, make sure
iPod classic is charged.
To reset iPod classic:
1 Toggle the Hold switch on and off (slide it to HOLD and then back again).
2 Press and hold the Menu and Center buttons for at least 6 seconds, until the
Apple logo appears.
If iPod classic won’t turn on or respond
 Make sure the Hold switch isn’t set to HOLD.
 The iPod classic battery might need to be recharged. Connect iPod classic to your
computer or to an Apple USB Power Adapter and let the battery recharge. Look for
the lightning bolt icon on the iPod classic screen to verify that iPod classic is
receiving a charge.
To charge the battery, connect iPod classic to a USB 2.0 port on your computer.
Connecting iPod classic to a USB port on your keyboard won’t charge the battery,
unless your keyboard has a high-powered USB 2.0 port.
 Try the 5 Rs, one by one, until iPod classic responds.
If you want to disconnect iPod classic, but you see the message “Connected” or
“Sync in Progress”
 If iPod classic is syncing music, wait for it to complete.
 Select iPod classic in the device list and click the Eject button.
 If iPod classic disappears from the device list, but you still see the “Connected” or
“Sync in Progress” message on the iPod classic screen, disconnect iPod classic.
 If iPod classic doesn’t disappear from the device list, drag the iPod classic icon from
the desktop to the Trash (if you’re using a Mac) or, if you’re using a Windows PC, eject
the device in My Computer or click the Safely Remove Hardware icon in the system
tray and select iPod classic. If you still see the “Connected” or “Sync in Progress”
message, restart your computer and eject iPod classic again.
If iPod classic isn’t playing music
 Make sure the Hold switch isn’t set to HOLD.
 Make sure the headphone connector is pushed in all the way.
 Make sure the volume is set properly. A maximum volume limit might have been set.
You can change or remove it by using Settings > Volume Limit. See “Setting the
Maximum Volume Limit” on page 36.
 iPod classic might be paused. Try pressing the Play/Pause button.
 Make sure you’re using iTunes 9.0 or later (go to www.apple.com/ipod/start). Songs
purchased from the iTunes Store using earlier versions of iTunes won’t play on
iPod classic until you upgrade iTunes.
 If you're using the iPod Universal Dock, make sure iPod classic is seated firmly in
the Dock and make sure all cables are connected properly.
If you connect iPod classic to your computer and nothing happens
 Make sure you have installed the latest iTunes software from
www.apple.com/ipod/start.
 Try connecting to a different USB port on your computer.
Note: A USB 2.0 port is recommended to connect iPod classic. USB 1.1 is significantly
slower than USB 2.0. If you have a Windows PC that doesn’t have a USB 2.0 port, in
some cases you can purchase and install a USB 2.0 card. For more information, go to
www.apple.com/ipod.
 iPod classic might need to be reset (see page 59).
 If you’re connecting iPod classic to a portable or laptop computer using the iPod
Dock Connector to USB 2.0 Cable, connect the computer to a power outlet before
connecting iPod classic.
 Make sure you have the required computer and software. See "If you want to double-check the system requirements" on page 64.
 Check the cable connections. Unplug the cable at both ends and make sure no foreign
objects are in the USB ports. Then plug the cable back in securely. Make sure the
connectors on the cables are oriented correctly. They can be inserted only one way.
 Try restarting your computer.
 If none of the previous suggestions solves your problems, you might need to restore
iPod classic software. See “Updating and Restoring iPod Software” on page 65.
If iPod classic displays a “Connect to Power” message
This message may appear if iPod classic is exceptionally low on power and the battery
needs to be charged before iPod classic can communicate with your computer. To
charge the battery, connect iPod classic to a USB 2.0 port on your computer.
Leave iPod classic connected to your computer until the message disappears and
iPod classic appears in iTunes or the Finder. Depending on how depleted the battery is,
you may need to charge iPod classic for up to 30 minutes before it will start up.
To charge iPod classic more quickly, use the optional Apple USB Power Adapter.
Note: Connecting iPod classic to a USB port on your keyboard won't charge the
battery, unless your keyboard has a high-powered USB 2.0 port.
If iPod classic displays a “Use iTunes to restore” message
 Make sure you have the latest version of iTunes on your computer (download it from
www.apple.com/ipod/start).
 Connect iPod classic to your computer. When iTunes opens, follow the onscreen
prompts to restore iPod classic.
 If restoring iPod classic doesn’t solve the problem, iPod classic may need to be
repaired. You can arrange for service at the iPod Service & Support website:
www.apple.com/support/ipod
If songs or data sync more slowly over USB 2.0
 If you sync a large number of songs or amount of data using USB 2.0 and the
iPod classic battery is low, iPod classic syncs the information at a reduced speed in
order to conserve battery power.
 If you want to sync at higher speeds, you can stop syncing and keep the iPod classic
connected so that it can recharge, or connect it to the optional iPod USB 2.0 Power
Adapter. Let iPod classic charge for about an hour, and then resume syncing your
music or data.
If you can’t add a song or other item to iPod classic
The song may have been encoded in a format that iPod classic doesn’t support. The
following audio file formats are supported by iPod classic. These include formats for
audiobooks and podcasting:
 AAC (M4A, M4B, M4P, up to 320 Kbps)
 Apple Lossless (a high-quality compressed format)
 HE-AAC
 MP3 (up to 320 Kbps)
 MP3 Variable Bit Rate (VBR)
 WAV
 AA (audible.com spoken word, formats 2, 3, and 4)
 AIFF
A song encoded using Apple Lossless format has full CD-quality sound, but takes up
only about half as much space as a song encoded using AIFF or WAV format. The same
song encoded in AAC or MP3 format takes up even less space. When you import music
from a CD using iTunes, it’s converted to AAC format by default.
Using iTunes for Windows, you can convert unprotected WMA files to AAC or MP3
format. This can be useful if you have a library of music encoded in WMA format.
iPod classic doesn’t support WMA, MPEG Layer 1, MPEG Layer 2 audio files, or
audible.com format 1.
If you have a song in iTunes that isn't supported by iPod classic, you can convert it to
a format iPod classic supports. For more information, see iTunes Help.
If iPod classic displays a “Connect to iTunes to activate Genius” message
You haven’t activated Genius in iTunes, or you haven’t synced iPod classic since you
activated Genius in iTunes. For more information, see page 19 or iTunes Help.
If iPod classic displays a “Genius is not available for the selected song” message
Genius is activated but doesn’t recognize the song you selected to start a Genius
playlist. New songs are added to the iTunes Store Genius database regularly, so try
again later.
If you accidentally set iPod classic to use a language you don’t understand
You can reset the language.
1 Press and hold Menu until the main menu appears.
2 Choose the sixth menu item (Settings).
3 Choose the last menu item (Reset Settings).
4 Choose the left item (Reset) and select a language.
Other iPod classic settings, such as song repeat, are also reset.
Note: If you added or removed items from the iPod classic main menu (see "Adding or
Removing Items in the Main Menu" on page 8), the Settings menu item may be
in a different place. If you can’t find the Reset Settings menu item, you can restore
iPod classic to its original state and choose a language you understand. See “Updating
and Restoring iPod Software” on page 65.
If you can’t see videos or photos on your TV
 You must use RCA-type cables made specifically for iPod classic, such as the
Apple Component or Apple Composite AV cables, to connect iPod classic to
your TV. Other similar RCA-type cables won’t work.
 Make sure your TV is set to display images from the correct input source (see the
documentation that came with your TV for more information).
 Make sure all cables are connected correctly (see “Watching Videos on a TV
Connected to iPod classic” on page 41).
 Make sure the yellow end of the Apple Composite AV Cable is connected to the
video port on your TV.
 If you’re trying to watch a video, go to Videos > Settings and set TV Out to On, and
then try again. If you’re trying to view a slideshow, go to Photos > Slideshow Settings
and set TV Out to On, and then try again.
 If that doesn’t work, go to Videos > Settings (for video) or Photos > Settings (for a
slideshow) and set TV Signal to PAL or NTSC, depending on which type of TV you
have. Try both settings.
If you want to double-check the system requirements
To use iPod classic, you must have:
 One of the following computer configurations:
 A Mac with a USB 2.0 port
 A Windows PC with a USB 2.0 port or a USB 2.0 card installed
 One of the following operating systems:
 Mac OS X v10.4.11 or later
 Windows Vista
 Windows XP (Home or Professional) with Service Pack 3 or later
 iTunes 9.0 or later (iTunes can be downloaded from www.apple.com/ipod/start)
If your Windows PC doesn’t have a USB 2.0 port, you can purchase and install a USB 2.0
card. For more information about cables and compatible USB cards, go to
www.apple.com/ipod.
On the Mac, iPhoto 4.0.3 or later is recommended for adding photos and albums to
iPod classic. This software is optional. iPhoto might already be installed on your Mac.
Check the Applications folder. If you have iPhoto 4, you can update it by choosing
the Apple menu > Software Update.
On a Windows PC, iPod classic can sync photo collections automatically from Adobe
Photoshop Album 2.0 or later, and Adobe Photoshop Elements 3.0 or later, available at
www.adobe.com. This software is optional.
On both Mac and Windows PC, iPod classic can sync digital photos from folders on
your computer’s hard disk.
If you want to use iPod classic with a Mac and a Windows PC
If you’re using iPod classic with a Mac and you want to use it with a Windows PC, you
must restore the iPod software for use with the PC (see “Updating and Restoring iPod
Software” on page 65). Restoring the iPod software erases all data from iPod classic,
including all songs.
You cannot switch from using iPod classic with a Mac to using it with a Windows PC
without erasing all data on iPod classic.
If you lock the iPod classic screen and can’t unlock it
Normally, if you can connect iPod classic to the computer it’s authorized to work with,
iPod classic automatically unlocks. If the computer authorized to work with iPod classic
is unavailable, you can connect iPod classic to another computer and use iTunes to
restore iPod software. See the next section for more information.
If you want to change the screen lock combination and you can’t remember the current
combination, you must restore the iPod software and then set a new combination.
Updating and Restoring iPod Software
You can use iTunes to update or restore iPod software. It’s recommended that you
update iPod classic to use the latest software. You can also restore the software, which
puts iPod classic back to its original state.
 If you choose to update, the software is updated, but your settings and songs aren’t
affected.
 If you choose to restore, all data is erased from iPod classic, including songs, videos,
files, contacts, photos, calendar information, and any other data. All iPod classic
settings are restored to their original state.
To update or restore iPod classic:
1 Make sure you have an Internet connection and have installed the latest version of
iTunes from www.apple.com/ipod/start.
2 Connect iPod classic to your computer.
3 In iTunes, select iPod classic in the device list and click the Summary tab.
The Version section tells you whether iPod classic is up to date or needs a newer
version of the software.
4 Click Update to install the latest version of the software.
5 If necessary, click Restore to restore iPod classic to its original settings (this erases all
data from iPod classic). Follow the onscreen instructions to complete the restore
process.

8 Safety and Cleaning
Read the following important safety and handling
information for Apple iPods.
Keep the iPod classic User Guide handy for future reference.
Important Safety Information
Proper handling Do not bend, drop, crush, puncture, incinerate, or open iPod classic.
Water and wet locations Do not use iPod classic in rain, or near washbasins or other
wet locations. Take care not to spill any food or liquid into iPod classic. If
iPod classic gets wet, unplug all cables, turn iPod classic off, and slide the Hold switch
to HOLD before cleaning, and allow it to dry thoroughly before turning it on again.
iPod classic repairs Never attempt to repair iPod classic yourself. If iPod classic has
been submerged in water, punctured, or subjected to a severe fall, do not use it
until you take it to an Apple Authorized Service Provider. iPod classic does not
contain any user-serviceable parts. For service information, choose iPod Help from
the Help menu in iTunes or go to www.apple.com/support/ipod. The rechargeable
battery in iPod classic should be replaced only by an Apple Authorized Service Provider.
For more information about battery replacement service, go to
www.apple.com/support/ipod/service/battery.
To avoid injury, read all safety information below and operating instructions
before using iPod classic.
WARNING: Failure to follow these safety instructions could result in fire, electric shock,
or other injury or damage.
Apple USB Power Adapter (available separately) If you use the Apple USB Power
Adapter (sold separately at www.apple.com/ipodstore) to charge iPod classic, make
sure that the power adapter is fully assembled before you plug it into a power outlet.
Then insert the Apple USB Power Adapter firmly into the power outlet. Do not connect
or disconnect the Apple USB Power Adapter with wet hands. Do not use any power
adapter other than an Apple iPod power adapter to charge iPod classic.
The iPod USB Power Adapter may become warm during normal use. Always allow
adequate ventilation around the iPod USB Power Adapter and use care when handling.
Unplug the iPod USB Power Adapter if any of the following conditions exist:
 The power cord or plug has become frayed or damaged.
 The adapter is exposed to rain, liquids, or excessive moisture.
 The adapter case has become damaged.
 You suspect the adapter needs service or repair.
 You want to clean the adapter.
Hearing damage Permanent hearing loss may occur if earbuds or headphones are
used at high volume. Set the volume to a safe level. You can adapt over time to a
higher volume of sound that may sound normal but can be damaging to your hearing.
If you experience ringing in your ears or muffled speech, stop listening and have your
hearing checked. The louder the volume, the less time is required before your hearing
could be affected. Hearing experts suggest that to protect your hearing:
 Limit the amount of time you use earbuds or headphones at high volume.
 Avoid turning up the volume to block out noisy surroundings.
 Turn the volume down if you can’t hear people speaking near you.
For information about how to set a maximum volume limit on iPod classic, see “Setting
the Maximum Volume Limit” on page 36.
Headphones safety Use of earphones while operating a vehicle is not recommended
and is illegal in some areas. Check and obey the applicable laws and regulations on the
use of earphones while operating a vehicle. Be careful and attentive while driving. Stop
listening to your audio device if you find it disruptive or distracting while operating any
type of vehicle or performing another activity that requires your full attention.
Seizures, blackouts, and eye strain A small percentage of people may be susceptible
to blackouts or seizures (even if they have never had one before) when exposed to
flashing lights or light patterns such as when playing games or watching video. If you
have experienced seizures or blackouts or have a family history of such occurrences,
you should consult a physician before playing games (if available) or watching video on
iPod classic. Discontinue use and consult a physician if you experience: convulsion, eye
or muscle twitching, loss of awareness, involuntary movements, or disorientation. To
reduce the risk of blackout, seizures, and eyestrain, avoid prolonged use, hold
iPod classic some distance from your eyes, use iPod classic in a well-lit room, and take
frequent breaks.
Repetitive motion When you perform repetitive activities such as playing games on
iPod classic, you may experience occasional discomfort in your hands, arms, shoulders,
neck, or other parts of your body. Take frequent breaks and, if you have discomfort
during or after such use, stop use and see a physician.
Important Handling Information
Carrying iPod classic iPod classic contains sensitive components, including, in some
cases, a hard drive. Do not bend, drop, or crush iPod classic. If you are concerned about
scratching iPod classic, you can use one of the many cases sold separately.
Using connectors and ports Never force a connector into a port. Check for
obstructions on the port. If the connector and port don’t join with reasonable ease,
they probably don’t match. Make sure that the connector matches the port and that
you have positioned the connector correctly in relation to the port.
Keeping iPod classic within acceptable temperatures Operate iPod classic in a place
where the temperature is always between 0° and 35° C (32° to 95° F). iPod classic play
time might temporarily shorten in low-temperature conditions.
Store iPod classic in a place where the temperature is always between -20° and 45° C
(-4° to 113° F). Don't leave iPod classic in your car, because temperatures in parked cars
can exceed this range.
When you’re using iPod classic or charging the battery, it is normal for iPod classic to
get warm. The exterior of iPod classic functions as a cooling surface that transfers heat
from inside the unit to the cooler air outside.
NOTICE: Failure to follow these handling instructions could result in damage to
iPod classic or other property.
Keeping the outside of iPod classic clean To clean iPod classic, unplug all cables,
turn iPod classic off, and slide the Hold switch to HOLD. Then use a soft, slightly
damp, lint-free cloth. Avoid getting moisture in openings. Don’t use window cleaners,
household cleaners, aerosol sprays, solvents, alcohol, ammonia, or abrasives to clean
iPod classic.
Disposing of iPod classic properly For information about the proper disposal of
iPod classic, including other important regulatory compliance information, see
"Regulatory Compliance Information" on page 71.

9 Learning More, Service, and Support
You can find more information about using iPod classic in
onscreen help and on the web.
The following list describes where to get more iPod-related software and service
information.
Service and support, discussions, tutorials, and Apple software downloads: go to
www.apple.com/support/ipod
Using iTunes: open iTunes and choose Help > iTunes Help. For an online iTunes
tutorial (available in some areas only), go to www.apple.com/support/itunes
Using iPhoto (on Mac OS X): open iPhoto and choose Help > iPhoto Help.
Using iSync (on Mac OS X): open iSync and choose Help > iSync Help.
Using iCal (on Mac OS X): open iCal and choose Help > iCal Help.
The latest information about iPod classic: go to www.apple.com/ipodclassic
Registering iPod classic: install iTunes on your computer and connect iPod classic.
Finding the iPod classic serial number: look at the back of iPod classic, or choose
Settings > About and press the Center button. In iTunes (with iPod classic connected
to your computer), select iPod classic in the device list and click the Settings tab.
Obtaining warranty service: first follow the advice in this booklet, the onscreen help,
and online resources; then go to www.apple.com/support/ipod/service
Regulatory Compliance Information
FCC Compliance Statement
This device complies with part 15 of the FCC rules.
Operation is subject to the following two conditions:
(1) This device may not cause harmful interference,
and (2) this device must accept any interference
received, including interference that may cause
undesired operation. See instructions if interference
to radio or TV reception is suspected.
Radio and TV Interference
This computer equipment generates, uses, and can
radiate radio-frequency energy. If it is not installed
and used properly—that is, in strict accordance with
Apple’s instructions—it may cause interference with
radio and TV reception.
This equipment has been tested and found to
comply with the limits for a Class B digital device in
accordance with the specifications in Part 15 of FCC
rules. These specifications are designed to provide
reasonable protection against such interference
in a residential installation. However, there is
no guarantee that interference will not occur in
a particular installation.
You can determine whether your computer system is
causing interference by turning it off. If the
interference stops, it was probably caused by the
computer or one of the peripheral devices.
If your computer system does cause interference to
radio or TV reception, try to correct the interference
by using one or more of the following measures:
 Turn the TV or radio antenna until the interference
stops.
 Move the computer to one side or the other of the
TV or radio.
 Move the computer farther away from the TV or
radio.
 Plug the computer into an outlet that is on a
different circuit from the TV or radio. (That is, make
certain the computer and the TV or radio are on
circuits controlled by different circuit breakers or
fuses.)
If necessary, consult an Apple Authorized Service
Provider or Apple. See the service and support
information that came with your Apple product. Or,
consult an experienced radio/TV technician for
additional suggestions.
Important: Changes or modifications to this product
not authorized by Apple Inc. could void the EMC
compliance and negate your authority to operate
the product.
This product was tested for EMC compliance under
conditions that included the use of Apple peripheral
devices and Apple shielded cables and connectors
between system components.
It is important that you use Apple peripheral devices
and shielded cables and connectors between system
components to reduce the possibility of causing
interference to radios, TV sets, and other electronic
devices. You can obtain Apple peripheral devices and
the proper shielded cables and connectors through
an Apple Authorized Reseller. For non-Apple
peripheral devices, contact the manufacturer or
dealer for assistance.
Responsible party (contact for FCC matters only):
Apple Inc. Corporate Compliance
1 Infinite Loop, MS 26-A
Cupertino, CA 95014
Industry Canada Statement
This Class B device meets all requirements of the
Canadian interference-causing equipment
regulations.
Cet appareil numérique de la classe B respecte
toutes les exigences du Règlement sur le matériel
brouilleur du Canada.
VCCI Class B Statement
Korea Class B Statement
Russia
European Community
Disposal and Recycling Information
Your iPod must be disposed of properly according to
local laws and regulations. Because this product
contains a battery, the product must be disposed of
separately from household waste. When your iPod
reaches its end of life, contact Apple or your local
authorities to learn about recycling options.
For information about Apple’s recycling program,
go to: www.apple.com/environment/recycling
Deutschland: Dieses Gerät enthält Batterien. Bitte
nicht in den Hausmüll werfen. Entsorgen Sie dieses
Gerät am Ende seines Lebenszyklus entsprechend
der maßgeblichen gesetzlichen Regelungen.
Nederlands: Gebruikte batterijen kunnen worden
ingeleverd bij de chemokar of in een speciale
batterijcontainer voor klein chemisch afval (kca)
worden gedeponeerd.
China:
Taiwan:
European Union—Disposal Information:
This symbol means that according to local laws and
regulations your product should be disposed of
separately from household waste. When this product
reaches its end of life, take it to a collection point
designated by local authorities. Some collection
points accept products for free. The separate
collection and recycling of your product at the time
of disposal will help conserve natural resources and
ensure that it is recycled in a manner that protects
human health and the environment.
Union Européenne—informations sur l’élimination
Le symbole ci-dessus signifie que vous devez vous
débarrasser de votre produit sans le mélanger avec
les ordures ménagères, selon les normes et la
législation de votre pays. Lorsque ce produit n’est
plus utilisable, portez-le dans un centre de
traitement des déchets agréé par les autorités
locales. Certains centres acceptent les produits
gratuitement. Le traitement et le recyclage séparé de
votre produit lors de son élimination aideront à
préserver les ressources naturelles et à protéger
l’environnement et la santé des êtres humains.
Europäische Union—Informationen zur Entsorgung
Das Symbol oben bedeutet, dass dieses Produkt
entsprechend den geltenden gesetzlichen
Vorschriften und getrennt vom Hausmüll entsorgt
werden muss. Geben Sie dieses Produkt zur
Entsorgung bei einer offiziellen Sammelstelle ab. Bei
einigen Sammelstellen können Produkte zur
Entsorgung unentgeltlich abgegeben werden. Durch
das separate Sammeln und Recycling werden die
natürlichen Ressourcen geschont und es ist
sichergestellt, dass beim Recycling des Produkts alle
Bestimmungen zum Schutz von Gesundheit und
Umwelt beachtet werden. 73
Unione Europea—informazioni per l’eliminazione
Questo simbolo significa che, in base alle leggi e alle
norme locali, il prodotto dovrebbe essere eliminato
separatamente dai rifiuti casalinghi. Quando il
prodotto diventa inutilizzabile, portarlo nel punto di
raccolta stabilito dalle autorità locali. Alcuni punti di
raccolta accettano i prodotti gratuitamente. La
raccolta separata e il riciclaggio del prodotto al
momento dell’eliminazione aiutano a conservare le
risorse naturali e assicurano che venga riciclato in
maniera tale da salvaguardare la salute umana e
l’ambiente.
Europeiska unionen—uttjänta produkter
Symbolen ovan betyder att produkten enligt lokala
lagar och bestämmelser inte får kastas tillsammans
med hushållsavfallet. När produkten har tjänat ut
måste den tas till en återvinningsstation som utsetts
av lokala myndigheter. Vissa återvinningsstationer
tar kostnadsfritt hand om uttjänta produkter. Genom
att låta den uttjänta produkten tas om hand för
återvinning hjälper du till att spara naturresurser och
skydda hälsa och miljö.
Battery Replacement and Disposal for
iPod classic
The rechargeable battery in iPod classic should be
replaced only by an Apple Authorized Service
Provider. For battery replacement services, go to:
www.apple.com/support/ipod/service/battery
When iPod classic reaches its end of life, contact
local authorities to learn about disposal and
recycling options, or simply drop it off at your local
Apple retail store or return it to Apple. The battery
will be removed and recycled in an environmentally
friendly manner. For more information, go to:
www.apple.com/environment/recycling
Apple and the Environment
At Apple, we recognize our responsibility to
minimize the environmental impacts of our
operations and products.
For more information, go to:
www.apple.com/environment
© 2009 Apple Inc. All rights reserved. Apple, the Apple logo, iCal, iLife,
iPhoto, iPod, iPod classic, iPod Socks, iTunes, Mac, Macintosh, and
Mac OS are trademarks of Apple Inc., registered in the U.S. and other
countries. Finder and Shuffle are trademarks of Apple Inc. iTunes Store
is a service mark of Apple Inc., registered in the U.S. and other
countries. Other company and product names mentioned herein may
be trademarks of their respective companies.
Mention of third-party products is for informational purposes only
and constitutes neither an endorsement nor a recommendation.
Apple assumes no responsibility with regard to the performance or
use of these products. All understandings, agreements, or warranties,
if any, take place directly between the vendors and the prospective
users. Every effort has been made to ensure that the information in
this manual is accurate. Apple is not responsible for printing or
clerical errors.
019-1734/2009-12

Index
A
accessories for iPod 58
adding album artwork 18
adding menu items 8, 35
adding music
automatically 22
disconnecting iPod 11
from more than one computer 22, 23
manually 26
On-The-Go playlists 33
tutorial 70
adding photos
about 43
all or selected photos 45
from iPod to computer 47
full-resolution image 45
address book, syncing 54
Adobe Photoshop Album 64
Adobe Photoshop Elements 64
alarms
deleting 51
setting 51
album artwork
adding 18
viewing 30
albums
browsing by 34
alternate audio 41
artist, browsing by 33
audio, alternate 41
audiobooks
setting play speed 39
AV cables 41, 47
B
backlight
setting timer 9
turning on 6, 9
battery
charge states when disconnected 15
charging 13
rechargeable 15
replacing 15
very low 14, 61
viewing charge status 14
brightness setting 9
browsing
by album 34
by artist 33
quickly 9
songs 6
videos 6
with Cover Flow 31
buttons
Center 5
disabling with Hold switch 6
Eject 12
C
calendar events, syncing 54
Center button, using 5
Charging, Please Wait message 14, 61
charging the battery
about 13
using the iPod USB Power Adapter 14
using your computer 13
when battery very low 14, 61
cleaning iPod 69
Click Wheel
turning off the Click Wheel sound 9
using 5
clocks
adding for other time zones 51
settings 50
closed captions 41
compilations 35
component AV cable 41, 47
composite AV cable 41, 47
computer
charging the battery 13
connecting iPod 10
getting photos from iPod 47
problems connecting iPod 61
requirements 64
connecting iPod
about 10
charging the battery 13
to a TV 41, 47
Connect to Power message 14
contacts
sorting 56
syncing 54
controls
disabling with Hold switch 7
using 5
converting unprotected WMA files 62
Cover Flow 31
customizing the Music menu 35
D
data files, storing on iPod 49
date and time
setting 50
viewing 50
determining battery charge 15
diamond icon on scrubber bar 6
digital photos. See photos
disconnecting iPod
about 10
during music update 11
ejecting first 12
instructions 13
troubleshooting 60
disk, using iPod as 49
displaying time in title bar 50
downloading
See also adding; syncing
E
Eject button 12
ejecting before disconnecting 11, 12
external disk, using iPod as 49
F
fast-forwarding a song or video 6
features of iPod 4
file formats, supported 62
finding your iPod serial number 70
fit video to screen 41
FM radio 39
full-resolution images 45
G
games 52
Genius
about 16
creating a playlist 7, 29
playing a playlist 7, 32
saving a playlist 7
setting up in iTunes 19
syncing to iPod classic 23
troubleshooting 63
using on iPod 31
Genius Mixes
playing 7, 32
syncing to iPod classic 23
getting help 70
getting information about your iPod 10
H
handling information 66
hearing loss warning 58
help, getting 70
Hold switch 6, 7
I
iCal, getting help 70
images. See photos
importing contacts, calendars, and to-do lists. See
syncing
iPhoto
getting help 43, 70
recommended version 64
iPod Dock 10
iPod Dock Connector 10
iPod Updater application 65
iPod USB power adapter 13
iSync, getting help 70
iTunes
ejecting iPod 12
getting help 70
setting not to open automatically 49
Sound Check 37
iTunes Library, adding songs 17
iTunes U 25, 38, 40
L
language
resetting 63
specifying 9
letterbox format 41
library, adding songs 17
lightning bolt on battery icon 14
locating your iPod serial number 70
locking iPod screen 53
lyrics
adding 18
viewing on iPod 30
M
Mac OS X operating system 64
main menu
adding or removing items 8
opening 5
settings 8, 35
main menu, returning to 6
managing iPod manually 26
manually adding 26
maximum volume limit, setting 36
memos, recording 57
menu items
adding or removing 8, 35
choosing 6
returning to main menu 6
returning to previous menu 6
modifying playlists 27
movies
syncing selected 24
See also videos
music
iPod not playing 60
rating 30
setting for slideshows 46
syncing 22
tutorial 70
See also adding music; songs
Music menu, customizing 35
music videos
syncing 23
N
navigating quickly 9
notes, storing and reading 56
Now Playing screen
moving to any point in a song or video 6
scrubber bar 6
shuffling songs or albums 29
NTSC TV 41, 46
O
On-The-Go playlists 33
operating system requirements 64
overview of iPod features 4
P
PAL TV 41, 46
pausing
a song 6
a video 6
phone numbers, syncing 54
photos
adding and viewing 43
deleting 45, 48
full-resolution 45
syncing 44, 45
viewing on iPod 45
playing
games 52
Genius Mixes 7, 32
songs 6
videos 6
playlists
adding songs 7, 27
Genius 7, 32
making on iPod 33
modifying 27
On-The-Go 33
plug on battery icon 14
podcasts
listening 38
updating 25
ports
RCA video and audio 41, 47
USB 64
power adapter safety 67
previous menu, returning to 6
problems. See troubleshooting
Q
quick navigation 9
R
radio accessory 39
random play 6
rating songs 30
RCA video and audio ports 41, 47
rechargeable batteries 15
recording voice memos 57
registering iPod 70
relative volume, playing songs at 37
removing menu items 8, 35
repairing iPod 66
replacing battery 15
replaying a song or video 6
requirements
computer 64
operating system 64
reset all settings 10
resetting iPod 6, 59
resetting the language 63
restore message 62
restoring iPod software 65
rewinding a song or video 6
S
Safely Remove Hardware icon 12
safety considerations
setting up iPod 66
safety information 66
saving On-The-Go playlists 33
screen lock 53
scrolling quickly 9
scrubber bar 6
searching
iPod 35
Select button. See Center button
serial number, locating 70
service and support 70
sets of songs. See playlists
setting combination for iPod 53
settings
about your iPod 10
alarm 51
audiobook play speed 39
backlight timer 9
brightness 9
Click Wheel sound 9
date and time 50
language 9
main menu 8, 35
PAL or NTSC TV 41, 46
playing songs at relative volume 37
repeating songs 34
reset all 10
shuffle songs 34
sleep timer 51
slideshow 46
TV 41
volume limit 36
shuffling songs on iPod 6, 34
sleep mode and charging the battery 13
sleep timer, setting 51
slideshows 46
software
getting help 70
iPhoto 64
iPod Updater 65
supported versions 64
updating 65
songs
adding to On-The-Go playlists 7
browsing 6
entering names 18
fast-forwarding 6
pausing 6
playing 6
playing at relative volume 37
rating 30
repeating 34
replaying 6
rewinding 6
shuffling 6, 29, 34
skipping ahead 6
viewing lyrics 18
See also music
sorting contacts 56
Sound Check 37
standard TV 41
stopwatch 52
storing
data files on iPod 49
notes on iPod 56
subtitles 41
supported operating systems 64
suppressing iTunes from opening 49
syncing
address book 54
automatically 22
music 21
music videos 23
photos 44, 45
selected movies 24
to-do lists 54
videos 24
voice memos 57
T
time, displaying in title bar 50
timer, setting for backlight 9
time zones, clocks for 51
title bar, displaying time 50
to-do lists, syncing 54
transitions for slides 46
troubleshooting
connecting iPod to computer 61
cross-platform use 64
disconnecting iPod 60
Genius 63
iPod not playing music 60
iPod won’t respond 59
resetting iPod 59
restore message 62
safety considerations 66
setting incorrect language 63
slow syncing of music or data 62
software update and restore 65
TV slideshows 63
unlocking iPod screen 64
turning iPod on and off 6
tutorial 70
TV
connecting to iPod 41, 47
PAL or NTSC 41, 46
settings 41
viewing slideshows 42, 47
TV shows
See also videos
U
unlocking iPod screen 54, 64
unresponsive iPod 59
unsupported audio file formats 62
updating and restoring software 65
USB 2.0 port
requirements 64
slow syncing of music or data 62
USB port on keyboard 11, 60
Use iTunes to restore message in display 62
V
video captions 41
video podcasts
downloading 20
viewing on a TV 41
videos
adding to iPod 23
browsing 6
fast-forwarding 6
pausing 6
playing 6
purchasing 20
replaying 6
rewinding 6
skipping ahead 6
syncing 24
viewing on a TV 41
viewing on iPod 40
viewing album artwork 30
viewing lyrics 30
viewing music videos 40
viewing photos 45
viewing slideshows
on a TV 42, 47
on iPod 46
settings 46
troubleshooting 63
voice memos
recording 57
syncing with your computer 57
volume
changing 6
setting maximum limit 36
W
warranty service 70
widescreen TV 41
Windows
supported operating systems 64
troubleshooting 64
WMA files, converting 62
iPod nano
User Guide

Contents
Chapter 1 4 iPod nano Basics
4 iPod nano at a Glance
5 Using iPod nano Controls
9 Using iPod nano Menus
12 About the iPod nano Internal Speaker
13 Connecting and Disconnecting iPod nano
16 About the iPod nano Battery
Chapter 2 19 Setting Up iPod nano
20 Setting Up Your iTunes Library
20 Importing Music to iTunes
21 Adding More Details to Your iTunes Library
21 Organizing Your Music
22 Importing Video to iTunes
24 Adding Music, Videos, and Other Content to iPod nano
24 Connecting iPod nano to a Computer for the First Time
25 Syncing Music Automatically
27 Syncing Videos Automatically
28 Adding Podcasts to iPod nano
29 Adding iTunes U Content to iPod nano
29 Adding Audiobooks to iPod nano
30 Adding Other Content to iPod nano
30 Managing iPod nano Manually
32 Setting Up VoiceOver
Chapter 3 33 Listening to Music
33 Playing Music and Other Audio
39 Using Genius on iPod nano
47 Playing Podcasts
48 Playing iTunes U Content
48 Listening to Audiobooks
Chapter 4 49 Watching Videos
49 Watching Videos on iPod nano
50 Watching Videos on a TV Connected to iPod nano
Chapter 5 52 Using the Video Camera
53 Recording Video
54 Playing Recorded Videos
55 Deleting Recorded Videos
55 Importing Recorded Videos to Your Computer
Chapter 6 58 Listening to FM Radio
60 Tuning the FM radio
61 Pausing Live Radio
64 Tagging Songs to Sync to iTunes
65 Using the Radio Menu
Chapter 7 67 Photo Features
69 Viewing Photos
71 Adding Photos from iPod nano to a Computer
Chapter 8 72 More Settings, Extra Features, and Accessories
72 Using iPod nano as a Pedometer
74 Recording Voice Memos
77 Using Extra Settings
81 Syncing Contacts, Calendars, and To-Do Lists
83 Mono Audio
83 Using Spoken Menus for Accessibility
84 Using iPod nano as an External Disk
84 Storing and Reading Notes
85 Learning About iPod nano Accessories
Chapter 9 86 Tips and Troubleshooting
86 General Suggestions
92 Updating and Restoring iPod Software
Chapter 10 93 Safety and Cleaning
93 Important Safety Information
96 Important Handling Information
Chapter 11 97 Learning More, Service, and Support
Index 100
1 iPod nano Basics
Read this chapter to learn about the features of iPod nano,
how to use its controls, and more.
iPod nano at a Glance
Get to know the controls on iPod nano:
Headphones port
Menu
Previous/Rewind
Play/Pause
Hold switch
30-pin connector
Click Wheel
Next/Fast-forward
Center button
Lens
Microphone
What’s New in iPod nano
 Larger, 2.2-inch display
 Polished aluminum finish
 A built-in video camera that lets you record video with special effects
 An FM radio that lets you pause live radio and tag songs for purchase from the
iTunes Store (radio tagging may not be available in some countries)
 Internal speaker and microphone
 A pedometer that records your workout history
Using iPod nano Controls
The controls on iPod nano are easy to find and use. Press any button to turn on
iPod nano.
The first time you turn on iPod nano, the language menu appears. Use the Click Wheel
to scroll to your language, and then press the Center button to choose it. The main
menu appears in your language.
Use the Click Wheel and Center button to navigate through onscreen menus, play
songs, change settings, and get information.
Move your thumb lightly around the Click Wheel to select a menu item. To choose the
item, press the Center button.
To go back to the previous menu, press Menu.
Here’s what else you can do with iPod nano controls.

Turn on iPod nano: Press any button.
Turn off iPod nano: Press and hold Play/Pause (⏯).
Turn on the backlight: Press any button or use the Click Wheel.
Disable iPod nano controls (so nothing happens if you press them accidentally): Slide the Hold switch to HOLD (an orange bar appears).
Reset iPod nano (if it isn’t responding): Slide the Hold switch to HOLD and back again. Press Menu and the Center button at the same time for about 6 seconds, until the Apple logo appears.
Choose a menu item: Use the Click Wheel to scroll to the item and press the Center button to choose.
Go back to the previous menu: Press Menu.
Go directly to the main menu: Press and hold Menu.
Access additional options: Press and hold the Center button until a menu appears.
Browse for a song: From the main menu, choose Music.
Browse for a video: From the main menu, choose Videos.
Play a song or video: Select the song or video and press the Center button or Play/Pause (⏯). iPod nano must be ejected from your computer to play songs and videos.
Pause a song or video: Press Play/Pause (⏯) or unplug your headphones.
Change the volume: From the Now Playing screen, use the Click Wheel.
Play all the songs in a playlist or album: Select the playlist or album and press Play/Pause (⏯).
Play all songs in random order: From the main menu, choose Shuffle Songs.
Skip to a random song: Shake iPod nano.
Enable or disable Shake for shuffling songs: Choose Settings > Playback, choose Shake, and then select Shuffle or Off.
Skip to any point in a song or video: From the Now Playing screen, press the Center button to show the scrubber bar (the playhead on the bar shows the current location), and then scroll to any point in the song or video.
Skip to the next song or chapter in an audiobook or podcast: Press Next/Fast-forward (⏭).
Start a song or video over: Press Previous/Rewind (⏮).
Fast-forward or rewind a song, video, or paused radio: Press and hold Next/Fast-forward (⏭) or Previous/Rewind (⏮).
Add a song to the On-The-Go playlist: Play or select a song, and then press and hold the Center button until a menu appears. Select “Add to On-The-Go,” and then press the Center button.
Play the previous song or chapter in an audiobook or podcast: Press Previous/Rewind (⏮) twice.
Create a Genius playlist: Play or select a song, and then press and hold the Center button until a menu appears. Select Start Genius, and then press the Center button (Start Genius appears in the Now Playing screen only if there’s Genius data for the selected song).
Save a Genius playlist: Create a Genius playlist, select Save Playlist, and then press the Center button.
Play a saved Genius playlist: From the Playlist menu, select a Genius playlist, and then press Play/Pause (⏯).
Play a Genius Mix: From the Music menu, choose Genius Mixes. Select a mix, and then press Play/Pause (⏯).
Record video: Choose Video Camera from the main menu. Press the Center button to start or stop recording.
Record video with special effects: Before recording video, press and hold the Center button to display effects, then use the Click Wheel to browse and press the Center button to select. Press the Center button again to start recording.
Play back recorded video: Press the Center button to stop recording, then press Menu to enter the Camera Roll screen. Choose a video and press the Center button to play.
Watch recorded video: From the Videos menu, choose Camera Videos, then select a video and press Play/Pause (⏯).
Listen to FM radio: Choose Radio from the main menu.
Tune to an FM station: Use the Click Wheel to browse the radio dial.
Seek FM stations: When the radio dial is visible, press Next/Fast-forward (⏭) or Previous/Rewind (⏮) to skip to the next or previous station. Seeking isn’t available if you have saved any stations as favorites.
Scan FM stations: When the radio dial is visible, press and hold Next/Fast-forward (⏭). Press the Center button to stop scanning.
Save an FM station as a favorite: Press and hold the Center button until a menu appears, and then choose Add to Favorites.
Pause and resume live radio: From any screen, press Play/Pause (⏯) while listening to the radio. Press Play/Pause (⏯) again to resume playing. Changing the radio station clears paused radio.
Switch between the radio dial and the Live Pause screen: Press the Center button.
Tag a song on the radio: Press and hold the Center button to tag songs marked with a tag symbol. Sync with iTunes to preview and purchase tagged songs.
Use the pedometer: From the Extras menu, choose Fitness, and then choose Pedometer. Press the Center button to start or stop a session.
Record a voice memo: From the Extras menu, choose Voice Memos. Press Play/Pause (⏯) to start or stop recording. Press the Center button to add a chapter marker.
Find the iPod nano serial number: From the main menu, choose Settings > About and press the Center button until you see the serial number, or look on the back of iPod nano.

Disabling iPod nano Controls
If you don’t want to turn iPod nano on or activate controls accidentally, you can disable
them with the Hold switch. The Hold switch disables all Click Wheel controls, and also
disables functions that are activated by movement, such as shaking to shuffle and
rotating to enter or exit Cover Flow.
To disable iPod nano controls:
▪ Slide the Hold switch to HOLD (an orange bar appears).
If you disable the controls while using iPod nano, the song, playlist, podcast, or video
that’s playing continues to play, and if the pedometer is turned on, it continues
counting steps. To stop or pause, slide the Hold switch to enable the controls again.
Using iPod nano Menus
When you turn on iPod nano, you see the main menu. Choose menu items to perform
functions or go to other menus. Icons along the top of the screen show iPod nano
status.
Menu title: Displays the title of the current menu. The menu title doesn’t appear when the Lock icon appears.
Pedometer icon: Appears when the pedometer is on.
Play icon: The Play (▶) icon appears when a song, video, or other item is playing. The Pause (⏸) icon appears when the item is paused.
Battery icon: The Battery icon shows the approximate remaining battery charge.
Lock icon: The Lock icon appears when the Hold switch is set to HOLD. This indicates that the iPod nano controls are disabled. When the Lock icon appears, it replaces the menu title.
Menu items: Use the Click Wheel to scroll through menu items. Press the Center button to choose an item. An arrow next to a menu item indicates that it leads to another menu or screen.
Preview panel: Displays album art, photos, and other information relating to the selected menu item.
Adding or Removing Items on the Main Menu
You might want to add often-used items to the iPod nano main menu. For example,
you can add a Songs item to the main menu, so you don’t have to choose Music before
you choose Songs.
To add or remove items on the main menu:
1 Choose Settings > General > Main Menu.
2 Select each item you want to appear in the main menu. A checkmark indicates which
items have been added.
Turning Off the Preview Panel
The preview panel at the bottom of the main menu displays album art, photo
thumbnails, available storage, and other information. You can turn it off to allow more
space for menu items.
To turn the preview panel on or off:
▪ Choose Settings > General > Main Menu > Preview Panel, and then press the Center
button to select On or Off.
The preview panel displays art for a category only if iPod nano contains at least five
items with art in the category.
Setting the Font Size in Menus
iPod nano can display text in two different sizes, standard and large.
To set the font size:
▪ Choose Settings > General > Font Size, and then press the Center button to select
Standard or Large.
Setting the Language
iPod nano can use different languages.
To set the language:
▪ Choose Settings > Language, and then choose a language.
Setting the Backlight Timer
You can set the backlight to illuminate the screen for a certain amount of time when
you press a button or use the Click Wheel. The default is 10 seconds.
To set the backlight timer:
▪ Choose Settings > General > Backlight, and then choose the time you want. Choose
“Always On” to prevent the backlight from turning off (choosing this option decreases
battery performance).
Setting the Screen Brightness
You can adjust the brightness of the iPod nano screen.
To set the screen brightness:
▪ Choose Settings > General > Brightness, and then use the Click Wheel to adjust
the brightness.
You can also set the brightness during a slideshow or video. Press the Center button
until the brightness slider appears, and then use the Click Wheel to adjust the
brightness.
Turning Off the Click Wheel Sound
When you scroll through menu items, you can hear a clicking sound through the
earphones or headphones and through the iPod nano internal speaker. If you like,
you can turn off the Click Wheel sound.
To turn off the Click Wheel sound:
▪ Choose Settings > General and set Clicker to Off.
To turn the Click Wheel sound on again, set Clicker to On.
Scrolling Quickly Through Long Lists
You can scroll quickly through a long list by moving your thumb quickly on the
Click Wheel.
Note: Not all languages are supported.
To scroll quickly:
1 Move your thumb quickly on the Click Wheel to display a letter of the alphabet on
the screen.
2 Use the Click Wheel to navigate through the alphabet until you find the first letter of
the item you’re looking for.
Items beginning with a symbol or number appear after the letter Z.
3 Lift your thumb momentarily to return to normal scrolling.
4 Use the Click Wheel to navigate to the item you want.
Getting Information About iPod nano
You can get details about your iPod nano, such as the amount of space available,
the number of songs, videos, photos, and other items, and the serial number, model,
and software version.
To get information about iPod nano:
▪ Choose Settings > About, and press the Center button to cycle through the screens
of information.
Resetting All Settings
You can reset all the items on the Settings menu to their default setting.
To reset all settings:
▪ Choose Settings > Reset Settings, and then choose Reset.
About the iPod nano Internal Speaker
With the iPod nano internal speaker, you can listen to any audio on iPod nano without
earphones or headphones — except for the built-in FM radio, which uses the earphone
or headphone cord as an antenna.
Connecting earphones or headphones to iPod nano disables the internal speaker. Any
audio that’s playing continues to play, but only through the earphones or headphones.
If you disconnect the earphones or headphones while audio is playing, the audio
pauses. To resume listening through the internal speaker, press Play/Pause (⏯). If you
disconnect the earphones or headphones while the radio is playing or paused, the
radio stops and any paused radio is cleared.
If you choose Video Camera or Voice Memos while audio is playing, the audio turns off.
Connecting and Disconnecting iPod nano
You connect iPod nano to your computer to add music, videos, photos, and files to
iPod nano, to import recorded videos and voice memos to your computer, and to
charge the battery. Disconnect iPod nano when you’re done.
Important: The battery doesn’t charge when your computer is in sleep.
Connecting iPod nano
To connect iPod nano to your computer:
▪ Plug the included Dock Connector to USB Cable into a high-powered USB 2.0 port on
your computer, and then connect the other end to iPod nano.
If you have an iPod dock, you can connect the cable to a USB 2.0 port on your
computer, connect the other end to the dock, and then put iPod nano in the dock.
Note: The USB port on most keyboards doesn’t provide enough power to charge
iPod nano. Connect iPod nano to a USB 2.0 port on your computer.
By default, iTunes syncs songs on iPod nano automatically when you connect it to your
computer. When iTunes is finished, you can disconnect iPod nano. You can sync songs
while your battery is charging.
If you connect iPod nano to a different computer and it’s set to sync music
automatically, iTunes prompts you before syncing any music. If you click Yes, the songs
and other audio files already on iPod nano will be erased and replaced with songs and
other audio files on the computer iPod nano is connected to. For information about
adding music to iPod nano and using iPod nano with more than one computer, see
Chapter 2, “Setting Up iPod nano,” on page 19.
Disconnecting iPod nano
It’s important not to disconnect iPod nano while it’s syncing. You can see if it’s OK to
disconnect iPod nano by looking at the iPod nano screen. Don’t disconnect iPod nano
if you see the “Connected” or “Synchronizing” message, or you could damage files on
iPod nano.
If you see one of these messages, you must eject iPod nano before disconnecting it:
If you see the main menu or a large battery icon, you can disconnect iPod nano.
If you set iPod nano to manage songs manually (see “Managing iPod nano Manually”
on page 30) or enable iPod nano for disk use (see “Using iPod nano as an External Disk”
on page 84), you must always eject iPod nano before disconnecting it.
If you accidentally disconnect iPod nano without ejecting it, reconnect iPod nano to
your computer and sync again.
To eject iPod nano:
▪ In iTunes, click the Eject (⏏) button next to iPod nano in the device list on the left side
of the iTunes window.
You can safely disconnect iPod nano while either of these messages is displayed:
If you’re using a Mac, you can also eject iPod nano by dragging the iPod nano icon on
the desktop to the Trash.
If you’re using a Windows PC, you can also eject iPod nano in My Computer or by clicking
the Safely Remove Hardware icon in the Windows system tray and selecting iPod nano.
To disconnect iPod nano:
1 Unplug your earphones or headphones, if they’re attached.
2 Disconnect the cable from iPod nano. If iPod nano is in the dock, simply remove it.
About the iPod nano Battery
iPod nano has an internal, non–user-replaceable battery. For best results, the first time
you use iPod nano, let it charge for about three hours or until the battery icon in the
status area of the display shows that the battery is fully charged. If iPod nano isn’t used
for a while, the battery might need to be charged.
Note: iPod nano continues to use battery power after it’s been turned off.
The iPod nano battery is 80-percent charged in about one and a half hours, and fully
charged in about three hours. If you charge iPod nano while adding files, playing
music, watching videos, or viewing a slideshow, it might take longer.
Charging the iPod nano Battery
You can charge the iPod nano battery in two ways:
 Connect iPod nano to your computer.
 Use the Apple USB Power Adapter, available separately.
To charge the battery using your computer:
▪ Connect iPod nano to a USB 2.0 port on your computer. The computer must be turned
on and not in sleep.
If the battery icon on the iPod nano screen shows the Charging screen, the battery is
charging. If it shows the Charged screen, the battery is fully charged.
If you don’t see the Charging screen, iPod nano might not be connected to a
high-power USB port. Try another USB port on your computer.
Important: If a “Charging, Please Wait” or “Connect to Power” message appears on the
iPod nano screen, the battery needs to be charged before iPod nano can communicate
with your computer. See “If iPod nano displays a ‘Connect to Power’ message” on
page 88.
If you want to charge iPod nano when you’re away from your computer, you can
purchase the Apple USB Power Adapter.
To charge the battery using the Apple USB Power Adapter:
1 Connect the AC plug adapter to the power adapter (they might already be connected).
2 Connect the Dock Connector to USB Cable to the power adapter, and plug the other
end of the cable into iPod nano.
3 Plug the power adapter into a working power outlet.
Understanding Battery States
When iPod nano isn’t connected to a power source, a battery icon in the top-right
corner of the iPod nano screen shows approximately how much charge is left.
[Figure: Apple USB Power Adapter (your adapter may look different) and iPod USB cable]
[Figure: battery icons showing less than 20% charged, about halfway charged, and fully charged]
When iPod nano is connected to a power source, the battery icon changes to show
that the battery is charging or fully charged.
You can disconnect and use iPod nano before it’s fully charged.
Note: Rechargeable batteries have a limited number of charge cycles and might
eventually need to be replaced. Battery life and number of charge cycles vary by use
and settings. For information, go to www.apple.com/batteries.
Improving Battery Performance with Energy Saver
Energy Saver can extend the time between battery charges by turning off the
iPod nano screen when you aren’t using the controls.
Energy Saver is turned on by default.
To turn Energy Saver on or off:
▪ Choose Settings > Playback > Energy Saver, and then select On or Off.
If you turn Energy Saver off, iPod nano displays the following information after the
backlight turns off:
Turning off Energy Saver increases the rate of battery consumption.
[Figure: the battery icon shows a lightning bolt while charging and a plug when fully charged]
2 Setting Up iPod nano
You use iTunes on your computer to set up iPod nano to play
your music, video, and other media content. No setup is
needed to record video or listen to FM radio.
Using iTunes
iTunes is the free software application you use to set up, organize, and manage your
content on iPod nano. iTunes can sync music, audiobooks, podcasts, and more with
iPod nano. If you don’t already have iTunes installed on your computer, you can
download it at www.apple.com/downloads. iPod nano requires iTunes 9 or later.
iTunes is available for Mac and Windows.
You can use iTunes to import music from CDs and the Internet, buy songs and other
audio and video from the iTunes Store, create personal compilations of your favorite
songs (called playlists), sync them to iPod nano, and adjust iPod nano settings.
iTunes also has a feature called Genius that creates playlists and mixes of songs from
your iTunes library that go great together. You can sync Genius playlists that you create
in iTunes to iPod nano, and you can create Genius playlists and listen to Genius Mixes
on iPod nano. To use Genius, you need an iTunes Store account.
iTunes has many other features. You can burn your own CDs that play in standard CD
players (if your computer has a recordable CD drive); listen to streaming Internet radio;
watch videos and TV shows; rate songs according to preference; and much more.
For information about using these iTunes features, open iTunes and choose
Help > iTunes Help.
If you already have iTunes 9 or later installed on your computer and you’ve set up your
iTunes library, you can skip ahead to “Adding Music, Videos, and Other Content to
iPod nano” on page 24.
If you want to get started recording video or listening to FM radio, you can set up
iPod nano at a later time. To learn how to record video, see “Using the Video Camera”
on page 52. For information about the FM radio, see “Listening to FM Radio” on
page 58.
Setting Up Your iTunes Library
To listen to music and watch videos on iPod nano, you first need to get that music and
video into iTunes on your computer.
Importing Music to iTunes
There are three ways of getting music and other audio into iTunes.
Purchase Songs and Download Podcasts Using the iTunes Store
If you have an Internet connection, you can easily purchase and download songs,
albums, and audiobooks online using the iTunes Store. You can also subscribe to and
download podcasts, and you can download free educational content from iTunes U.
To purchase music online using the iTunes Store, you set up a free iTunes account in
iTunes, find the songs you want, and then buy them. If you already have an iTunes
account (also called an Apple ID), you can use that account to sign in to the iTunes
Store and buy songs.
You don’t need an iTunes Store account to download or subscribe to podcasts.
To enter the iTunes Store, open iTunes and click iTunes Store below Store on the left
side of the iTunes window.
Add Songs Already on Your Computer to Your iTunes Library
If you have songs on your computer encoded in file formats that iTunes supports, you
can easily add the songs to iTunes. To learn how to get songs from your computer into
iTunes, open iTunes and choose Help > iTunes Help.
Using iTunes for Windows, you can convert nonprotected WMA files to AAC or MP3
format. This can be useful if you have a library of music encoded in WMA format.
For more information, open iTunes and choose Help > iTunes Help.
Import Music From Your Audio CDs Into iTunes
iTunes can import music and other audio from your audio CDs. If you have an Internet
connection, iTunes gets the names of the songs on the CD from the Internet (if
available) and lists them in the iTunes window. When you add the songs to iPod nano,
the song information is included. To learn how to import music from your CDs into
iTunes, open iTunes and choose Help > iTunes Help.
Adding More Details to Your iTunes Library
Once you import your music into iTunes, you can add more details to your iTunes
library. Most of these additional details appear on iPod nano when you add songs.
Enter Song Names and Other Information
If you don’t have an Internet connection, if song information isn’t available for music
you import, or if you want to include additional information (such as composer names),
you can enter the information manually. To learn how to enter song information, open
iTunes and choose Help > iTunes Help.
Add Lyrics
You can enter song lyrics in plain text format into iTunes so that you can view the song
lyrics on iPod nano while the song is playing. To learn how to enter lyrics, open iTunes
and choose Help > iTunes Help.
For more information, see “Viewing Lyrics on iPod nano” on page 37.
Add Album Artwork
Music you purchase from the iTunes Store includes album artwork, which iPod nano
can display. You can add album artwork automatically for music you’ve imported from
CDs, if the CDs are available from the iTunes Store. You can add album artwork
manually if you have the album art on your computer. To learn more about adding
album artwork, open iTunes and choose Help > iTunes Help.
For more information, see “Viewing Album Artwork on iPod nano” on page 37.
Organizing Your Music
In iTunes, you can organize songs and other items into lists, called playlists, in any way
you want. For example, you can create playlists with songs to listen to while exercising,
or playlists with songs for a particular mood.
You can create Smart Playlists that update automatically based on rules you define.
When you add songs to iTunes that match the rules, they automatically get added to
the Smart Playlist.
You can turn on Genius in iTunes and create playlists of songs that go great together.
Genius can also organize your music library automatically by sorting and grouping it
into collections called Genius Mixes.
You can create as many playlists as you like, using any of the songs in your iTunes
library. Adding a song to a playlist or later removing it doesn’t remove it from
your library.
To learn how to set up playlists in iTunes, open iTunes and choose Help > iTunes Help.
Note: To create playlists on iPod nano when iPod nano isn’t connected to your
computer, see “Creating On-The-Go Playlists on iPod nano” on page 41.
Turning On Genius in iTunes
Genius finds songs in your library that go great together in order to create Genius
playlists and Genius Mixes.
A Genius playlist starts with a song that you select. To create the Genius playlist, iTunes
then compiles a collection of songs that go great with the one you selected.
Genius Mixes are pre-selected compilations of songs that go great together, and are
created for you by iTunes using songs from your library. Genius Mixes are designed to
provide a different listening experience each time you play one. iTunes creates up to
12 Genius Mixes, depending on the variety of music in your iTunes library.
To use Genius on iPod nano to create Genius playlists and Genius Mixes, you first need
to turn on Genius in iTunes. To learn about turning on and using Genius in iTunes, open
iTunes and choose Help > iTunes Help.
Genius playlists and Genius Mixes created in iTunes can be synced to iPod nano like
any iTunes playlist. Genius Mixes can’t be added to iPod nano manually. See “Syncing
Genius Playlists and Genius Mixes to iPod nano” on page 26.
Genius is a free service, but an iTunes Store account is required to use it (if you don’t
have an account, you can set one up when you turn on Genius).
Importing Video to iTunes
There are several ways to import video into iTunes, described below.
Purchase or Rent Videos and Download Video Podcasts from the
iTunes Store
To purchase videos—movies, TV shows, and music videos—or rent movies online from
the iTunes Store (part of iTunes and available in some countries only), you sign in to
your iTunes Store account, find the videos you want, and then buy or rent them.
A rented movie expires 30 days after you rent it or 24 hours after you begin playing it
(48 hours outside the U.S.), whichever comes first. Expired rentals are deleted
automatically. These terms apply to U.S. rentals. Rental terms vary among countries.
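The "whichever comes first" arithmetic for rentals works out as follows; this Python sketch uses illustrative names and assumes only the 30-day and 24/48-hour limits described above:

```python
from datetime import datetime, timedelta

def rental_expiry(rented_at, first_played_at=None, outside_us=False):
    """Expire 30 days after rental, or 24 hours (48 outside the U.S.)
    after playback starts, whichever comes first."""
    expiry = rented_at + timedelta(days=30)
    if first_played_at is not None:
        window = timedelta(hours=48 if outside_us else 24)
        expiry = min(expiry, first_played_at + window)
    return expiry

rented = datetime(2009, 9, 1, 12, 0)
played = datetime(2009, 9, 10, 20, 0)
# Playback started, so the 24-hour window wins over the 30-day limit.
expires = rental_expiry(rented, played)
```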
To enter the iTunes Store, open iTunes and click iTunes Store below Store on the left
side of the iTunes window.
You can view movie trailers or TV show previews by clicking the Preview button next
to them.
Purchased videos appear when you select Movies or TV Shows (under Library) or
Purchased (under Store) in the source list. Rented videos appear when you select
Rented Movies (under Library).
Some items have other options, such as TV shows that let you buy a Season Pass for
all episodes.
Video podcasts appear along with other podcasts in the iTunes Store. You can
subscribe to them and download them just as you would other podcasts. You don’t
need an iTunes Store account to download podcasts. See “Purchase Songs and
Download Podcasts Using the iTunes Store” on page 20.
Create Versions of Your Own Videos to Work with iPod nano
You can view other video files on iPod nano, such as videos you create in iMovie on a
Mac or videos you download from the Internet. Import the video into iTunes, convert it
for use with iPod nano, if necessary, and then add it to iPod nano.
iTunes supports many of the video formats that QuickTime supports. For more
information, see “If you can’t add a song or other item to iPod nano” on page 89.
Some videos may be ready for use with iPod nano after you import them to iTunes.
If you try to add a video to iPod nano (see “Syncing Videos Automatically” on page 27),
and a message says the video can’t play on iPod nano, then you must convert the video
for use with iPod nano.
Depending on the length and content of a video, converting it for use with iPod nano
can take several minutes to several hours.
When you create a version of a video for use with iPod nano, the original video also
remains in your iTunes library.
For more about converting video for iPod nano, open iTunes and choose Help >
iTunes Help, or go to www.info.apple.com/kbnum/n302758.
Adding Music, Videos, and Other Content to iPod nano
After your music and video are imported and organized in iTunes, you can easily add
them to iPod nano.
To manage how music, videos, photos, and other content are added to iPod nano from
your computer, you connect iPod nano to your computer, and then use iTunes to
choose iPod nano settings.
Connecting iPod nano to a Computer for the First Time
The first time you connect iPod nano to your computer (after installing iTunes),
iTunes opens automatically and the iPod nano Setup Assistant appears:
To use the iPod nano Setup Assistant:
1 Enter a name for iPod nano. This is the name that will appear in the device list on the
left side of the iTunes window.
2 Select your settings. Automatic syncing and VoiceOver are selected by default.
If you don’t want to enable VoiceOver at this time, deselect Enable VoiceOver. If you
change your mind, you can enable VoiceOver any time you connect iPod nano to your
computer. See “Setting Up VoiceOver” on page 32.
3 Click Done.
If you selected to enable VoiceOver during setup, follow any onscreen instructions for
downloading and installing the VoiceOver Kit. For more information, see “Using
VoiceOver in iPod nano” on page 44. To learn how to set up VoiceOver in iPod nano,
see “Setting Up VoiceOver” on page 32.
You can change the device name and settings any time you connect iPod nano to
your computer.
After you click Done, the Summary pane appears. If you selected automatic syncing,
iPod nano begins syncing.
Adding Content Automatically or Manually
There are two ways to add content to iPod nano:
 Automatic syncing: When you connect iPod nano to your computer, iPod nano is
automatically updated to match the items in your iTunes library. You can sync all your
songs, playlists, videos, and podcasts, or, if your entire iTunes library doesn’t fit on
iPod nano, you can sync only selected items. You can sync iPod nano automatically
with only one computer at a time.
 Manually managing iPod nano: When you connect iPod nano, you can drag items
individually to iPod nano and delete them individually from iPod nano. You can add
songs from more than one computer without erasing songs from iPod nano. When
you manage music yourself, you must always eject iPod nano from iTunes before you
disconnect it.
Syncing Music Automatically
By default, iPod nano is set to sync all songs and playlists when you connect it to your
computer. This is the simplest way to add music to iPod nano. You just connect
iPod nano to your computer, let it add songs, audiobooks, videos, and other items
automatically, and then disconnect it and go. If you added any songs to iTunes since
the last time you connected iPod nano, they are synced with iPod nano. If you deleted
songs from iTunes, they are removed from iPod nano.
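Conceptually, automatic syncing reconciles the device against the library: new items are copied over, deleted items are removed. A toy Python sketch, with sets of song titles standing in for real media files:

```python
def auto_sync(library, device):
    """Return (to_add, to_remove): the changes that make the device
    match the current iTunes library."""
    return library - device, device - library

library = {"Song A", "Song B", "Song C"}   # current iTunes library
device = {"Song A", "Song D"}              # Song D was deleted from iTunes
to_add, to_remove = auto_sync(library, device)
```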
To sync music with iPod nano:
m Connect iPod nano to your computer. If iPod nano is set to sync automatically,
the update begins.
Important: If you connect iPod nano to a computer that it’s not synced with, a
message asks if you want to sync songs automatically. If you accept, all songs,
audiobooks, and videos are erased from iPod nano and replaced with songs and
other items from that computer.
While music is being synced from your computer to iPod nano, the iTunes status
window shows progress, and you see a sync icon next to the iPod nano icon in the
device list.
When the update is done, a message in iTunes says “iPod sync is complete.” A bar at the
bottom of the iTunes window displays how much disk space is used by different types
of content.
If there isn’t enough space on iPod nano for all your music, you can set iTunes to sync
only selected songs and playlists. Only the songs and playlists you specify are synced
with iPod nano.
Syncing Music From Selected Playlists, Artists, and Genres to
iPod nano
You can set iTunes to sync selected playlists, artists, and genres to iPod nano if the
music in your iTunes library doesn’t all fit on iPod nano. Only the music from the
playlists, artists, and genres you select is synced to iPod nano.
To set iTunes to sync music from selected playlists, artists, and genres to iPod nano:
1 In iTunes, select iPod nano in the device list and click the Music tab.
2 Select “Sync music,” and then choose “Selected playlists, artists, and genres.”
3 Select the playlists you want.
4 To include music videos, select “Include music videos.”
5 To set iTunes to automatically fill any remaining space on iPod nano, select
“Automatically fill free space with songs.”
6 Click Apply.
If “Sync only checked songs and videos” is selected in the Summary pane, iTunes syncs
only items that are checked.
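The "Automatically fill free space with songs" option behaves like a pass that keeps adding songs until no more fit. A hypothetical Python sketch; the sizes and the order in which songs are considered are illustrative, since iTunes's actual selection isn't documented here:

```python
def fill_free_space(candidates, free_mb):
    """Add (title, size_mb) pairs one by one, skipping songs that no
    longer fit, until the candidates are exhausted."""
    added = []
    for title, size in candidates:
        if size <= free_mb:
            added.append(title)
            free_mb -= size
    return added

songs = [("Song A", 4.0), ("Song B", 6.0), ("Song C", 3.0)]
filled = fill_free_space(songs, 8.0)  # Song B (6.0 MB) no longer fits
```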
Syncing Genius Playlists and Genius Mixes to iPod nano
You can set iTunes to sync Genius playlists and Genius Mixes to iPod nano.
Genius Mixes can only be synced automatically. You can’t add Genius Mixes to
iPod nano if you manage your content manually. Genius playlists can be added
manually to iPod nano.
If you select any Genius Mixes to sync, iTunes may select and sync additional songs
from your library that you didn’t select.
To set iTunes to sync Genius playlists and selected Genius Mixes to iPod nano:
1 In iTunes, select iPod nano in the device list and click the Music tab.
2 Select “Sync music,” and then choose “Selected playlists, artists, and genres.”
3 Under Playlists, select the Genius playlists and Genius Mixes you want.
4 Click Apply.
If you choose to sync your entire music library, iTunes syncs all your Genius playlists
and Genius Mixes.
If “Sync only checked songs and videos” is selected in the Summary pane, iTunes syncs
only items that are checked.
Adding Videos to iPod nano
You add movies and TV shows to iPod nano much the same way you add songs. You
can set iTunes to sync all movies and TV shows to iPod nano automatically when you
connect iPod nano, or you can set iTunes to sync only selected playlists. Alternatively,
you can manage movies and TV shows manually. Using this option, you can add videos
from more than one computer without erasing videos already on iPod nano.
Note: Music videos are managed with songs, under the Music tab in iTunes. See
“Adding Music, Videos, and Other Content to iPod nano” on page 24.
Important: You can view a rented movie on only one device at a time. For example,
if you rent a movie from the iTunes Store and add it to iPod nano, you can only view it
on iPod nano. If you transfer the movie back to iTunes, you can only view it there and
not on iPod nano. All standard time limits apply to rented movies added to iPod nano.
Syncing Videos Automatically
By default, iPod nano is set to sync all movies and TV shows when you connect it to
your computer. This is the simplest way to add videos to iPod nano. You just connect
iPod nano to your computer, let it add videos and other items automatically, and then
disconnect it and go. If you added any videos to iTunes since the last time you
connected iPod nano, they’re added to iPod nano. If you deleted videos from iTunes,
they’re removed from iPod nano.
If there isn’t enough space on iPod nano for all your videos, you can set iTunes to sync
only the videos you specify. You can sync selected videos or selected playlists that
contain videos.
The settings for syncing movies and TV shows are unrelated. Movie settings don’t affect
TV show settings, and vice versa.
To set iTunes to sync movies to iPod nano:
1 In iTunes, select iPod nano in the device list and click the Movies tab.
2 Select “Sync movies.”
3 Select the movies or playlists you want.
All, recent, or unwatched movies: Select “Automatically include … movies” and choose
the option you want from the pop-up menu.
Selected movies or playlists: Select the movies or playlists you want.
4 Click Apply.
If “Sync only checked songs and videos” is selected in the Summary pane, iTunes syncs
only movies that are checked.
To set iTunes to sync TV shows to iPod nano:
1 In iTunes, select iPod nano in the device list and click the TV Shows tab.
2 Select “Sync TV Shows.”
3 Select the shows, episodes, and playlists you want.
All, most recent, or unwatched episodes: Select “Automatically include … episodes of … ”
and choose the options you want from the pop-up menus.
Episodes on selected playlists: Select the playlists you want.
4 Click Apply.
If “Sync only checked songs and videos” is selected in the Summary pane, iTunes syncs
only TV shows that are checked.
Adding Podcasts to iPod nano
The settings for adding podcasts to iPod nano are unrelated to the settings for adding
songs and videos. Podcast settings don’t affect song or video settings, and vice versa.
You can set iTunes to automatically sync all or selected podcasts, or you can add
podcasts to iPod nano manually.
To set iTunes to update the podcasts on iPod nano automatically:
1 In iTunes, select iPod nano in the device list and click the Podcasts tab.
2 In the Podcasts pane, select “Sync Podcasts.”
3 Select the podcasts, episodes, and playlists you want, and set your sync options.
4 Click Apply.
When you set iTunes to sync iPod nano podcasts automatically, iPod nano is updated
each time you connect it to your computer.
If you select “Sync only checked songs and videos” in the Summary pane, iTunes syncs
only items that are checked in your Podcasts and other libraries.
Adding Video Podcasts to iPod nano
You add video podcasts to iPod nano the same way you add other podcasts
(see “Adding Podcasts to iPod nano” on page 28). If a podcast has video, the video
plays when you choose it from the Podcasts menu.
Adding iTunes U Content to iPod nano
The settings for adding iTunes U content to iPod nano are unrelated to the settings
for adding other content. iTunes U settings don’t affect other settings, and vice versa.
You can set iTunes to automatically sync all or selected iTunes U content, or you can
add iTunes U content to iPod nano manually.
To set iTunes to update the iTunes U content on iPod nano automatically:
1 In iTunes, select iPod nano in the device list and click the iTunes U tab.
2 In the iTunes U pane, select “Sync iTunes U.”
3 Select the collections, items, and playlists you want, and set your sync options.
4 Click Apply.
When you set iTunes to sync iTunes U content automatically, iPod nano is updated each
time you connect it to your computer.
If you select “Sync only checked songs and videos” in the Summary pane, iTunes syncs
only items that are checked in your iTunes U and other libraries.
Adding Audiobooks to iPod nano
You can purchase and download audiobooks from the iTunes Store or audible.com,
or import audiobooks from CDs, and listen to them on iPod nano.
Use iTunes to add audiobooks to iPod nano. If you sync iPod nano automatically, all
audiobooks in your iTunes library are included in a playlist named Audiobooks, which
you can sync to iPod nano. If you manage your content on iPod nano manually, you
can add audiobooks one at a time.
To sync audiobooks to iPod nano:
1 In iTunes, select iPod nano in the device list and click the Music tab.
2 Select Sync Music, and then do one of the following:
 Select “Entire music library.”
 Select “Selected playlists, artists, and genres,” and then select Audiobooks (below
Playlists).
3 Click Apply.
The update begins automatically.
Adding Other Content to iPod nano
You can also use iTunes to sync photos, games, contacts, and more to iPod nano.
You can set iTunes to sync your content automatically, or you can manage your content
on iPod nano manually.
For more information about adding other types of content to iPod nano, see:
 “Adding Photos from Your Computer to iPod nano” on page 67
 “To sync games automatically to iPod nano:” on page 76
 “Syncing Contacts, Calendars, and To-Do Lists” on page 81
 “Mono Audio” on page 83
Managing iPod nano Manually
If you manage iPod nano manually, you can add and remove individual songs
(including music videos) and videos (including movies and TV shows). You can also
add music and video from multiple computers to iPod nano without erasing items
already on iPod nano.
You can’t manually add Genius Mixes to iPod nano, but you can manually add Genius
playlists.
Setting iPod nano to manually manage music and video turns off the automatic sync
options in the Music, Movies, TV Shows, Podcasts, iTunes U, Photos, Contacts, and
Games panes. You can’t manually manage some and automatically sync others at the
same time.
If you set iTunes to manage content manually, you can reset it later to sync automatically.
To set iTunes to manage content on iPod nano manually:
1 In iTunes, select iPod nano in the device list and click the Summary tab.
2 In the Options section, select “Manually manage music and video.”
3 Click Apply.
When you manually manage content on iPod nano, you must always eject iPod nano
from iTunes before you disconnect it.
When you connect a manually managed iPod nano to a computer, it appears in the
device list on the left side of the iTunes window.
To add a song, video, or other item to iPod nano:
1 In iTunes, click Music or another item below Library on the left side of the iTunes window.
2 Drag a song or other item to iPod nano in the device list.
To remove a song, video, or other item from iPod nano:
1 In iTunes, select iPod nano in the device list.
2 Select a song or other item on iPod nano, and then press the Delete or Backspace key
on your keyboard.
If you remove a song or other item from iPod nano, it isn’t deleted from your iTunes
library.
To create a new playlist on iPod nano:
1 In iTunes, select iPod nano in the device list, and then click the Add (+) button or
choose File > New Playlist.
2 Type a name for the playlist.
3 Click an item, such as Music, in the Library list, and then drag songs or other items to
the playlist.
To add items to or remove items from a playlist on iPod nano:
m Drag an item to a playlist on iPod nano to add the item. Select an item in a playlist and
press the Delete key on your keyboard to delete the item.
To reset iTunes to sync music, video, and podcasts automatically:
1 In iTunes, select iPod nano in the device list and click the Summary tab.
2 Deselect “Manually manage music and videos.”
3 Select the Music, Movies, TV Shows, and Podcasts tabs, and select your sync options.
4 Click Apply.
The update begins automatically.
Setting Up VoiceOver
VoiceOver announces the title and artist of the song you’re listening to, on demand.
If you have the Apple Earphones with Remote and Mic or the In-Ear Headphones with
Remote and Mic, you can also use VoiceOver to navigate playlists.
Note: VoiceOver isn’t available in all languages.
You set VoiceOver options in the Summary pane in iTunes. When you first set up
iPod nano, VoiceOver is enabled by default. Follow any onscreen instructions to
download and install the VoiceOver Kit.
If you don’t want VoiceOver enabled when you set up iPod nano, deselect Enable
VoiceOver in the Setup Assistant. If you change your mind, you can enable VoiceOver at
a later time.
To enable VoiceOver at a later time:
1 Connect iPod nano to your computer.
2 In iTunes, select iPod nano in the device list and click the Summary tab.
3 Under Voice Feedback, select Enable VoiceOver.
4 Click Apply.
5 Follow the onscreen instructions to download and install the VoiceOver Kit.
6 Click Apply.
When syncing is finished, VoiceOver is enabled.
If your computer has a system voice that you want to use instead of the built-in voice
that comes with VoiceOver, select “Use system voice instead of built-in voice” under
Voice Feedback in the Summary pane.
You can disable VoiceOver any time you connect iPod nano to your computer.
To disable VoiceOver:
1 In iTunes, select iPod nano in the device list and click the Summary tab.
2 Under Voice Feedback, deselect Enable VoiceOver.
3 Click Apply.
When syncing is finished, VoiceOver is disabled.
3 Listening to Music
Read this chapter to learn about listening on the go.
After you set up iPod nano, you can listen to songs, podcasts, audiobooks, and more.
Playing Music and Other Audio
When a song is playing, the Now Playing screen appears. The following table describes
the elements on the Now Playing screen.
Screen item Function
Shuffle icon: Appears if iPod nano is set to shuffle songs or albums.
Repeat icon: Appears if iPod nano is set to repeat all songs. The Repeat Once icon appears if iPod nano is set to repeat one song.
Play icon: Appears when a song is playing. The Pause icon appears when a song is paused.
Battery icon: Shows the approximate remaining battery charge.
Song information: Displays the song title, artist, and album title.
Album art: Shows the album art, if it’s available.
Progress bar: Shows the elapsed and remaining times for the song that’s playing.
[Figure: the Now Playing screen, with callouts for the shuffle, repeat, play, and battery icons, the song information, the album art, and the progress bar. Click the Center button to see the scrubber bar, Genius or shuffle slider, song rating, and lyrics.]
Press the Center button to click through these additional items in the Now Playing screen:
Use the Click Wheel and Center button to browse for a song or music video.
When you play music videos from the Music menu, you only hear the music. When you
play them from the Videos menu, you also see the video.
To browse for and play a song:
m Choose Music, browse for a song or music video, and then press Play/Pause.
To change the playback volume:
m When you see the progress bar, use the Click Wheel to change the volume.
If you don’t see the progress bar, press the Center button until it appears.
To listen to a different part of a song:
1 Press the Center button until you see the scrubber bar.
2 Use the Click Wheel to move the playhead along the scrubber bar.
Screen item Function
Scrubber bar: Lets you quickly navigate to a different part of the track.
Genius slider: Creates a Genius playlist based on the current song. The slider doesn’t appear if Genius information isn’t available for the current song.
Shuffle slider: Lets you shuffle songs or albums directly from the Now Playing screen.
Song rating: Lets you rate the song.
Lyrics: Displays the lyrics of the song that’s playing. Lyrics don’t appear if you didn’t enter them in iTunes.
To create a Genius playlist from the current song:
1 Press the Center button until you see the Genius slider.
2 Use the Click Wheel to move the slider to Start.
The Genius slider doesn’t appear if Genius information isn’t available for the
current song.
To shuffle songs from the Now Playing screen:
1 Press the Center button until you see the shuffle slider.
2 Use the Click Wheel to move the slider to Songs or Albums.
 Choose Songs to play all songs on iPod nano at random.
 Choose Albums to play all songs in the current album in order. iPod nano then
randomly selects another album and plays through it in order.
To just listen to a music video:
m Choose Music and browse for a music video.
When you play the video, you hear it but don’t see it. When you play a playlist that
includes video podcasts, you hear the podcasts but don’t see them.
To return to the previous menu:
m From any screen, press Menu.
Rating Songs
You can assign a rating to a song (from 1 to 5 stars) to indicate how much you like it.
You can use song ratings to help you create Smart Playlists automatically in iTunes.
To rate a song:
1 Start playing the song.
2 From the Now Playing screen, press the Center button until the five rating bullets
appear.
3 Use the Click Wheel to assign a rating.
The ratings you assign to songs on iPod nano are transferred to iTunes when you sync.
Note: You can’t assign ratings to video podcasts.
Viewing Lyrics on iPod nano
If you enter lyrics for a song in iTunes (see “Add Lyrics” on page 21) and then add the
song to iPod nano, you can view the lyrics on iPod nano. Lyrics don’t appear unless you
enter them.
To view lyrics on iPod nano while a song is playing:
m On the Now Playing screen, press the Center button until you see the lyrics. You can
scroll through the lyrics as the song plays.
Viewing Album Artwork on iPod nano
iTunes displays album artwork on iPod nano, if the artwork is available. Artwork
appears on iPod nano in Cover Flow, in the album list, and when you play a song from
the album.
To see album artwork on iPod nano:
m Hold iPod nano horizontally to view Cover Flow, or play a song that has album artwork
and view it in the Now Playing screen.
For more information about album artwork, open iTunes and choose Help > iTunes Help.
Browsing Music Using Cover Flow
You can browse your music collection using Cover Flow, a visual way to flip through
your library. Cover Flow displays your albums alphabetically by artist name.
You can activate Cover Flow from the main menu, any music menu, or the Now Playing
screen.
To use Cover Flow:
1 Rotate iPod nano 90 degrees to the left or right. Cover Flow appears.
2 Use the Click Wheel to move through your album art.
3 Select an album and press the Center button.
4 Use the Click Wheel to select a song, and then press the Center button to play it.
You can also browse quickly through your albums in Cover Flow by moving your
thumb quickly on the Click Wheel.
Note: Not all languages are supported.
To browse quickly in Cover Flow:
1 Move your thumb quickly on the Click Wheel to display a letter of the alphabet on
the screen.
2 Use the Click Wheel to navigate through the alphabet until you find the first letter of
the artist you’re looking for.
Albums by artists beginning with a symbol or number appear after the letter Z.
3 Lift your thumb momentarily to return to normal browsing.
4 Select an album and press the Center button.
5 Use the Click Wheel to select a song, and then press the Center button to play it.
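The ordering that puts symbol- and number-named artists after the letter Z can be expressed as a two-part sort key. A Python sketch of one such ordering (not Apple’s actual collation):

```python
def coverflow_key(artist):
    """Sort alphabetic names A-Z first; names starting with a digit
    or symbol sort after the letter Z."""
    first = artist[0].upper()
    return (0 if first.isalpha() else 1, artist.upper())

artists = ["2Pac", "Beatles", "!!!", "Zappa"]
ordered = sorted(artists, key=coverflow_key)
```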
To turn Cover Flow on or off:
1 From the main menu, choose Settings > General > Rotate.
2 Press the Center button to select Cover Flow or Off.
Accessing Additional Commands
Some additional iPod nano commands can be accessed directly from the Now Playing
screen and some menus.
To access additional commands:
m Press and hold the Center button until a menu appears, select a command, and then
press the Center button again.
If a menu doesn’t appear, no additional commands are available.
Using Genius on iPod nano
Even when iPod nano isn’t connected to your computer, Genius can automatically
create instant playlists of songs that go great together. You can also play Genius Mixes,
which are pre-selected compilations of songs that go great together. To use Genius,
you need to set up Genius in the iTunes Store, and then sync iPod nano to iTunes.
You can also create Genius playlists in iTunes and add them to iPod nano, and you can
sync Genius Mixes to iPod nano.
To learn how to set up Genius in iTunes, open iTunes and choose Help > iTunes Help.
Genius is a free service, but you need an iTunes Store account to use it.
To make a Genius playlist on iPod nano:
1 Select a song, and then press and hold the Center button until a menu appears.
You can select a song from a menu or playlist, or you can start from the Now Playing
screen.
2 Choose Start Genius.
Start Genius doesn’t appear in the menu of additional commands if any of the
following apply:
 You haven’t set up Genius in iTunes and then synced iPod nano with iTunes.
 Genius doesn’t recognize the song you selected.
 Genius recognizes the song, but there aren’t at least ten similar songs in your library.
3 Press the Center button. The new playlist appears.
4 To keep the playlist, choose Save Playlist.
The playlist is saved with the song title and artist of the song you used to make
the playlist.
5 To change the playlist to a new one based on the same song, choose Refresh. If you
refresh a saved playlist, the new playlist replaces the previous one. You can’t recover
the previous playlist.
You can also start Genius from the Now Playing screen by pressing the Center button
until you see the Genius slider, and then using the Click Wheel to move the slider to
the right. The Genius slider won’t appear if Genius information isn’t available for the
current song.
Genius playlists saved on iPod nano are synced back to iTunes when you connect
iPod nano to your computer.
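The three conditions above amount to a simple eligibility check. A Python sketch using the ten-similar-songs threshold from the list; the function and parameter names are illustrative:

```python
def can_start_genius(genius_synced, song_recognized, similar_songs):
    """Start Genius appears only if Genius was set up in iTunes and
    synced, the song is recognized, and at least ten similar songs
    exist in the library."""
    return genius_synced and song_recognized and similar_songs >= 10

available = can_start_genius(genius_synced=True,
                             song_recognized=True,
                             similar_songs=12)
```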
To play a Genius playlist:
m Choose Music > Playlists and choose the playlist.
Playing Genius Mixes
Genius Mixes are created for you by iTunes and contain songs from your library that go
great together. Genius Mixes are designed to provide a different listening experience
each time you play one. iTunes creates up to 12 Genius Mixes, depending on the variety
of music in your iTunes library.
To find out how to sync Genius Mixes to iPod nano, see “Syncing Genius Playlists and
Genius Mixes to iPod nano” on page 26.
To play a Genius Mix:
1 Choose Music > Genius Mixes.
2 Use Next/Fast-forward or Previous/Rewind to browse the Genius Mixes.
The dots at the bottom of the screen indicate how many Genius Mixes are synced
to iPod nano.
3 To start playing a Genius Mix, press the Center button or Play/Pause when you see
its screen.
The Speaker icon appears when the selected Genius Mix is playing.
Creating On-The-Go Playlists on iPod nano
You can create On-The-Go Playlists on iPod nano when iPod nano isn’t connected to
your computer.
To create an On-The-Go playlist:
1 Select a song, and then press and hold the Center button until a menu appears.
2 Choose “Add to On-The-Go.”
3 To add more songs, repeat steps 1 and 2.
4 Choose Music > Playlists > On-The-Go to browse and play your list of songs.
You can also add a group of songs. For example, to add an album, highlight the album
title, press and hold the Center button until a menu appears, and then choose “Add to
On-The-Go.”
To play songs in the On-The-Go playlist:
m Choose Music > Playlists > On-The-Go, and then choose a song.
To remove a song from the On-The-Go playlist:
1 Select a song in the playlist and hold down the Center button until a menu appears.
2 Choose “Remove from On-The-Go,” and then press the Center button.
To clear the entire On-The-Go playlist:
m Choose Music > Playlists > On-The-Go > Clear Playlist, and then click Clear.
To save the On-The-Go playlist on iPod nano:
m Choose Music > Playlists > On-The-Go > Save Playlist.
The first playlist is saved as “New Playlist 1” in the Playlists menu. The On-The-Go
playlist is cleared and ready to reuse. You can save as many playlists as you like.
After you save a playlist, you can no longer remove songs from it.
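The save-and-clear behavior, with names numbered “New Playlist 1”, “New Playlist 2”, and so on, can be sketched in Python; the data structures are illustrative:

```python
def save_on_the_go(playlists, on_the_go):
    """Save the On-The-Go list under the next 'New Playlist N' name,
    then clear it so it's ready to reuse."""
    count = sum(1 for name in playlists if name.startswith("New Playlist"))
    name = "New Playlist %d" % (count + 1)
    playlists[name] = list(on_the_go)
    on_the_go.clear()
    return name

playlists = {}
on_the_go = ["Song A", "Song B"]
saved = save_on_the_go(playlists, on_the_go)  # "New Playlist 1"
```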
To copy On-The-Go playlists from iPod nano to your computer:
m If iPod nano is set to sync songs automatically (see “Syncing Music Automatically” on
page 25) and you create an On-The-Go playlist, the playlist is automatically synced to
iTunes when you connect iPod nano. The new On-The-Go playlist appears in the list of
playlists in iTunes. You can rename, edit, or delete the new playlist, just as you would
any playlist.
Browsing Songs by Album or Artist
When you’re listening to a song, you can browse more songs by the same artist or all
the songs in the current album.
To browse songs by album:
1 From the Now Playing screen, press and hold the Center button until a menu appears.
2 Choose Browse Album, and then press the Center button.
You see all the songs from the current album that are on iPod nano. You can select a
different song or return to the Now Playing screen.
To browse songs by artist:
1 From the Now Playing screen, press and hold the Center button until a menu appears.
2 Choose Browse Artist, and then press the Center button.
You see all the songs by that artist that are on iPod nano. You can select a different
song or return to the Now Playing screen.
Setting iPod nano to Shuffle Songs
You can set iPod nano to play songs, albums, or your entire library in random order.
To set iPod nano to shuffle and play all your songs:
m Choose Shuffle Songs from the iPod nano main menu.
iPod nano begins playing songs from your entire music library in random order,
skipping audiobooks and podcasts.
To set iPod nano to always shuffle songs or albums:
1 Choose Settings from the iPod nano main menu.
2 Set Shuffle to either Songs or Albums.
When you set iPod nano to shuffle songs, iPod nano shuffles songs within whatever list
(for example, album or playlist) you choose to play.
When you set iPod nano to shuffle albums, it plays all the songs on an album in order,
and then randomly selects another album and plays through it in order.
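The two modes behave differently in a way that is easy to miss. A minimal sketch of the distinction (illustrative only; the playlist data and function names are invented, not Apple's implementation):

```python
import random

# Illustrative library: (album, title) pairs.
playlist = [
    ("Album A", "Song 1"), ("Album A", "Song 2"),
    ("Album B", "Song 3"), ("Album B", "Song 4"),
]

def shuffle_songs(tracks):
    """Shuffle Songs: every track, in fully random order."""
    order = list(tracks)
    random.shuffle(order)
    return order

def shuffle_albums(tracks):
    """Shuffle Albums: albums in random order, but each album's
    tracks play in their original sequence."""
    albums = list(dict.fromkeys(album for album, _ in tracks))
    random.shuffle(albums)
    return [t for a in albums for t in tracks if t[0] == a]
```

Both modes eventually play every track; only the grouping differs.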
You can also set iPod nano to shuffle songs directly from the Now Playing screen.
To set iPod nano to shuffle songs from the Now Playing screen:
1 From the Now Playing screen, press the Center button until the shuffle slider appears.
2 Use the Click Wheel to set iPod nano to shuffle songs or albums.
You can skip ahead to a random song by shaking iPod nano.
To shuffle songs while a song is playing or paused:
m Shake iPod nano from side to side. A new song starts to play.
Shaking to shuffle doesn’t change your shuffle settings.
To disable shaking:
m Choose Settings > Playback > Shake and select Off.
To turn shaking on again, choose Settings > Playback > Shake, and then select Shuffle.
Shaking is also disabled when the Hold switch is in the HOLD position, when the
iPod nano built-in radio is playing, or when the display is off. If iPod nano is off, you
can’t turn it on by shaking it.
Setting iPod nano to Repeat Songs
You can set iPod nano to repeat a song over and over, or repeat songs within the list
you choose to play.
To set iPod nano to repeat songs:
m Choose Settings from the iPod nano main menu.
• To repeat all songs in the list, set Repeat to All.
• To repeat one song over and over, set Repeat to One.
Using VoiceOver in iPod nano
With VoiceOver, iPod nano can announce the title and artist of the song you’re listening
to. VoiceOver is available in selected languages.
To use VoiceOver, install the VoiceOver Kit and enable the VoiceOver feature in iTunes.
For more information, see “Setting Up VoiceOver” on page 32.
To hear the current song announcement:
m From the Now Playing screen, press the Center button.
You hear the current song title and artist name. If you’re listening to an audiobook,
you hear the book title and author’s name.
If you have the Apple Earphones with Remote and Mic or the In-Ear Headphones with
Remote and Mic (available at store.apple.com or your local Apple Store), you can also
use VoiceOver to navigate through playlists. For more information, see the
documentation for those accessories.
Searching Music
You can search iPod nano for songs, playlists, album titles, artist names, audio podcasts,
and audiobooks. The search feature doesn’t search videos, notes, calendar items,
contacts, or lyrics.
Note: Not all languages are supported.
To search for music:
1 From the Music menu, choose Search.
2 Enter a search string by using the Click Wheel to navigate the alphabet and pressing
the Center button to enter each character.
iPod nano starts searching as soon as you enter the first character, displaying the
results on the search screen. For example, if you enter “b,” iPod nano displays all music
items containing the letter “b.” If you enter “ab,” iPod nano displays all items containing
that sequence of letters.
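The narrowing behavior can be modeled as a simple case-insensitive substring filter (an illustrative sketch, not the actual iPod software; the sample library below is invented):

```python
def search(items, query):
    """Return the items containing the query as a substring,
    case-insensitively. An empty query matches everything, so the
    result list narrows with each character you add."""
    q = query.lower()
    return [item for item in items if q in item.lower()]

# Invented sample library for illustration.
library = ["Abbey Road", "Back in Black", "Blue", "Kind of Blue"]
```

For example, `search(library, "b")` matches every title here, while `search(library, "ab")` narrows the results to just "Abbey Road".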
To enter a space character, press Next/Fast-forward.
To delete the previous character, press Previous/Rewind.
3 Press Menu to display the results list, which you can navigate by using the Click Wheel.
Items appear in the results list with icons identifying their type: song, video, artist,
album, audiobook, or podcast.
To return to Search (if Search is highlighted in the menu), press the Center button.
Customizing the Music Menu
You can add items to or remove them from the Music menu, just as you do with the
main menu. For example, you can add a Compilations item to the Music menu, so you
can easily choose compilations that are put together from various sources.
To add or remove items in the Music menu:
1 Choose Settings > General > Music Menu.
2 Select each item you want to appear in the Music menu. A checkmark indicates
which items have been added. To revert to the original Music menu settings,
choose Reset Menu.
Setting the Maximum Volume Limit
You can set a limit for the maximum volume on iPod nano and assign a combination
to prevent the setting from being changed.
To set the maximum volume limit for iPod nano:
1 Choose Settings > Playback > Volume Limit.
The volume control shows the current volume.
2 Use the Click Wheel to select the maximum volume limit.
3 Press the Center button to set the maximum volume limit.
4 If you don’t want to require a combination to change the maximum volume,
choose Done.
To require a combination to change the maximum volume:
1 After setting the maximum volume, choose Lock.
2 In the screen that appears, enter a combination.
To enter a combination:
• Use the Click Wheel to select a number for the first position. Press the Center button to confirm your choice and move to the next position.
• Use the same method to set the remaining numbers of the combination. You can use Next/Fast-forward to move to the next position and Previous/Rewind to move to the previous position. Press the Center button in the final position to confirm the entire combination.
The volume of songs and other audio may vary depending on how the audio was
recorded or encoded. See “Setting Songs to Play at the Same Volume Level” on page 46
for information about how to set a relative volume level in iTunes and on iPod nano.
The volume level may also vary if you use different earphones or headphones.
Accessories that connect using the Dock Connector don’t support volume limits.
If you set a combination, you must enter it before you can change or remove the
maximum volume limit.
To change the maximum volume limit:
1 Choose Settings > Playback > Volume Limit.
2 If you set a combination, enter it by using the Click Wheel to select the numbers and
pressing the Center button to confirm them.
A triangle on the volume bar indicates the current volume limit.
3 Use the Click Wheel to change the maximum volume limit.
4 Press Play/Pause to accept the change.
To remove the maximum volume limit:
1 If you’re currently listening to iPod nano, press Play/Pause.
2 Choose Settings > Playback > Volume Limit.
3 If you set a combination, enter it by using the Click Wheel to select the numbers and
pressing the Center button to confirm each number.
4 Use the Click Wheel to move the volume limit to the maximum level on the volume bar.
This removes any restriction on volume.
5 Press Play/Pause to accept the change.
If you forget the combination, you can restore iPod nano. See “Updating and Restoring
iPod Software” on page 92.
Setting Songs to Play at the Same Volume Level
iTunes can automatically adjust the volume of songs, so they play at the same relative
volume level. You can set iPod nano to use the iTunes volume settings.
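Conceptually, a feature like Sound Check stores a per-track playback gain relative to a common loudness target. The sketch below illustrates the arithmetic only; the -16 dB target and the analysis iTunes actually performs are assumptions for illustration, not documented behavior:

```python
def playback_gain_db(track_loudness_db, target_db=-16.0):
    """Gain (in dB) that brings a track to the target loudness:
    quieter tracks get positive gain, louder tracks negative gain."""
    return target_db - track_loudness_db
```

A track measured at -12 dB would be turned down by 4 dB, and one measured at -20 dB turned up by 4 dB, so both play at the same relative level.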
To set iTunes to play songs at the same sound level:
1 In iTunes, choose iTunes > Preferences if you’re using a Mac, or choose Edit >
Preferences if you’re using a Windows PC.
2 Click Playback and select Sound Check, and then click OK.
To set iPod nano to use the iTunes volume settings:
m Choose Settings > Playback and set Sound Check to On.
If you haven’t activated Sound Check in iTunes, setting it on iPod nano has no effect.
Using the Equalizer
You can use equalizer presets to change the sound on iPod nano to suit a particular
music genre or style. For example, to make rock music sound better, set the equalizer
to Rock.
To use the equalizer to change the sound on iPod nano:
m Choose Settings > Playback > EQ, and then choose an equalizer preset.
If you assigned an equalizer preset to a song in iTunes and the iPod nano equalizer is
set to Off, the song plays using the iTunes setting. See iTunes Help for more
information.
Crossfading Between Songs
You can set iPod nano to fade out at the end of each song and fade in at the beginning
of the song following it.
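Conceptually, a crossfade overlaps the end of one song with the start of the next, using complementary fade curves. A minimal linear sketch (illustrative only, not the device's actual algorithm):

```python
def crossfade(tail, head):
    """Mix two equal-length lists of samples: `tail` fades out
    linearly while `head` fades in, so the two gains always sum to 1."""
    n = len(tail)
    return [tail[i] * (1 - i / n) + head[i] * (i / n) for i in range(n)]
```

Because the gains sum to 1 at every sample, the overall level stays steady through the transition.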
To turn on crossfading:
m Choose Settings > Playback > Audio Crossfade and select On.
Note: Songs that are grouped for gapless playback play without gaps even when
crossfading is on.
Playing Podcasts
Podcasts are free, downloadable shows available at the iTunes Store. Podcasts are
organized by shows, episodes within shows, and chapters within episodes. If you stop
playing a podcast and return to it later, the podcast begins playing where you left off.
To play a podcast:
1 From the main menu, choose Podcasts, and then choose a show.
Shows appear in reverse chronological order so you can play the most recent one first.
You see a blue dot next to shows and episodes you haven’t played yet.
2 Choose an episode to play it.
The Now Playing screen displays the show, episode, and date information, along with
elapsed and remaining time. Press the Center button to see more information about
the podcast.
If the podcast has chapters, you can press Next/Fast-forward or Previous/Rewind to skip to the next chapter or the beginning of the current chapter in the podcast.
If a podcast includes artwork, you also see a picture. Podcast artwork can change
during an episode.
For more information about podcasts, open iTunes and choose Help > iTunes Help.
Then search for “podcasts.”
Playing iTunes U Content
iTunes U is a part of the iTunes Store featuring free lectures, language lessons,
audiobooks, and more, which you can download and enjoy on iPod nano. iTunes U
content is organized by collections, items within collections, authors, and providers.
If you stop listening to iTunes U content and return to it later, the collection or item
begins playing where you left off.
To play iTunes U content:
1 From the main menu, choose iTunes U, and then choose a collection.
Items within a collection appear in reverse chronological order so you can listen to the
most recent one first. You see a blue dot next to collections and items you haven’t
watched or listened to yet.
2 Choose an item to play it.
For more information about iTunes U, open iTunes and choose Help > iTunes Help.
Then search for “iTunes U.”
Listening to Audiobooks
To listen to audiobooks on iPod nano, choose Audiobooks from the Music menu.
Choose an audiobook, and then press Play/Pause.
If you stop listening to an audiobook on iPod nano and return to it later, the audiobook
begins playing where you left off. iPod nano skips audiobooks when set to shuffle.
If the audiobook you’re listening to has chapters, you can press Next/Fast-forward or Previous/Rewind to skip to the next chapter or the beginning of the current
chapter. You can also choose the audiobook from the Audiobooks menu, and then
choose a chapter, or choose Resume to begin playing where you left off.
You can play audiobooks at speeds faster or slower than normal. Setting the play speed
affects only audiobooks purchased from the iTunes Store or audible.com.
To set audiobook play speed:
m Choose Settings > Playback > Audiobooks and choose a speed, or press and hold the Center button from the Now Playing window, and then choose a speed.
4 Watching Videos
You can use iPod nano to watch TV shows, movies, video
podcasts, and more. Read this chapter to learn about
watching videos on iPod nano and on your TV.
You can view and listen to videos on iPod nano. If you have a compatible AV cable
(available separately at www.apple.com/ipodstore), you can watch videos from
iPod nano on your TV.
Watching Videos on iPod nano
Videos you add to iPod nano appear in the Videos menus. Music videos also appear in
Music menus. Videos recorded with the iPod nano built-in video camera appear in the
Videos menu, under Camera Videos.
To watch a video on iPod nano:
1 Choose Videos and browse for a video. To browse for a video recorded with the
iPod nano built-in video camera, choose Camera Videos.
2 Select a video, and then press Play/Pause.
To watch the video, hold iPod nano horizontally. You can rotate iPod nano to either the
left or right.
To watch videos recorded in portrait (vertical) format with the built-in video camera,
hold iPod nano vertically.
Watching Video Podcasts
To watch a video podcast:
m From the main menu, choose Podcasts and then choose a video podcast.
For more information, see “Playing Podcasts” on page 47.
Watching Videos Downloaded from iTunes U
To watch an iTunes U video:
m From the main menu, choose iTunes U and then choose a video.
For more information, see “Playing iTunes U Content” on page 48.
Watching Videos on a TV Connected to iPod nano
If you have an AV cable from Apple, you can watch videos on a TV connected to your
iPod nano. First you set iPod nano to display videos on a TV, then connect iPod nano to
your TV, and then play a video.
Use the Apple Component AV Cable, the Apple Composite AV Cable, or the Apple AV
Connection Kit. Other similar RCA-type cables might not work. You can purchase the
cables at www.apple.com/ipodstore or your local Apple Store.
To set iPod nano to display videos on a TV:
m Choose Videos > Settings, and then set TV Out to Ask or On.
If you set TV Out to Ask, iPod nano gives you the option of displaying videos on TV or
on iPod nano every time you play a video. If you set TV Out to On, iPod nano displays
videos only on TV. If you try to play a video when iPod nano isn’t connected to a TV,
iPod nano displays a message instructing you to connect to one.
You can also set video to display full screen or widescreen, and set video to display on
PAL or NTSC devices.
To set TV settings:
m Choose Videos > Settings, and then follow the instructions below.
Video to display on a TV: Set TV Out to Ask or On.
Video to display on a PAL or NTSC TV: Set TV Signal to PAL or NTSC. PAL and NTSC refer to TV broadcast standards. Your TV might use either of these, depending on the region where it was purchased. If you aren’t sure which your TV uses, check the documentation that came with your TV.
The format of your external TV: Set TV Screen to Widescreen for 16:9 format or Standard for 4:3 format.
Video to fit to your screen: Set “Fit to Screen” to On. If you set “Fit to Screen” to Off, widescreen videos display in letterbox format on iPod nano or a standard (4:3) TV screen.
Alternate audio to play: Set Alternate Audio to On.
Captions to display: Set Captions to On.
Subtitles to display: Set Subtitles to On.
To use the Apple Component AV Cable to connect iPod nano to your TV:
1 Plug the green, blue, and red video connectors into the component video (Y, Pb, and Pr) input ports on your TV.
If you use the Apple Composite AV Cable, plug the yellow video connector into the video input port on your TV. Your TV must have RCA video and audio ports.
2 Plug the white and red audio connectors into the left and right analog audio input ports on your TV.
3 Plug the 30-pin connector into your iPod nano or Universal Dock.
4 Plug the USB connector into your Apple USB Power Adapter or your computer to keep your iPod nano charged.
5 Turn on iPod nano and your TV or receiver to start playing. Make sure you set TV Out on iPod nano to On or Ask.
The ports on your TV or receiver may differ from the ports in the illustration.
To watch a video on your TV:
1 Connect iPod nano to your TV (see above).
2 Turn on your TV and set it to display from the input ports connected to iPod nano. For more information, see the documentation that came with your TV.
3 On iPod nano, choose Videos and browse for a video.
[Illustration: connecting iPod nano to a television with the Apple Component AV Cable. The 30-pin connector attaches to iPod, the USB connector to the USB Power Adapter, and the video (Y, Pb, Pr), left audio (white), and right audio (red) plugs attach to the television.]
5 Using the Video Camera
With the built-in iPod nano video camera, you can record
high-quality video with sound wherever you go. You can even
record video with special effects. You can watch your
recorded videos on iPod nano, and you can transfer them to
your computer to edit and share.
To use iPod nano as a video camera, choose Video Camera from the main menu.
The display screen becomes a viewfinder.
You can record video in landscape or portrait mode. In either mode, your current
recording time appears in the upper right corner of the display.
The lens and microphone are on the back of iPod nano, so you can use the display to
see the video you’re recording. Be careful not to block the lens or microphone.
Recording Video
To record video:
1 Choose Video Camera from the main menu.
2 When you’re ready to begin recording, press the Center button. Press the Center button
again to stop recording.
When video is recording, a blinking red light appears in the upper right corner of the
display, next to the recording time.
Recording time depends on the available disk space and battery level.
A recorded video can be up to 2 GB in size. Once a recorded video takes up 2 GB of
disk space, recording stops. To resume recording, press the Center button.
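As a rough way to see what the 2 GB cap means in practice, you can estimate recording time from an assumed average bitrate. The 2.5 Mbit/s figure below is purely a made-up example for the arithmetic, not a published specification:

```python
def max_recording_minutes(cap_bytes=2 * 1024**3, bitrate_bps=2_500_000):
    """Minutes of video that fit under a file-size cap at a given
    average bitrate (bits per second). The bitrate is an assumption."""
    seconds = cap_bytes * 8 / bitrate_bps
    return seconds / 60
```

At that assumed bitrate, a single file would reach the 2 GB cap after a little under two hours.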
Recording Video with Special Effects
You can record video with a variety of special effects on iPod nano.
Note: Video effects can only be selected before recording. iPod nano can’t add effects
to or remove effects from recorded videos. You can’t change video effects settings
while recording.
To record video with special effects:
1 Choose Video Camera from the main menu.
2 Press and hold the Center button to display the video effects palette.
3 Use the Click Wheel to browse the effects, and press the Center button to select one.
The viewfinder screen appears with the selected effect.
4 Press the Center button again to start recording with video effects.
5 Press the Center button to stop recording.
If you exit the Video Camera screen to play back your video, video effects are turned
off. To resume recording with a video effect, repeat steps 2 through 4.
Playing Recorded Videos
iPod nano saves your recorded videos to the Camera Roll. To go to the Camera Roll
screen, press Menu from the Video Camera viewfinder screen.
iPod nano lets you access your recorded videos from the Camera Roll screen, so you
can watch what you just recorded without leaving the Video Camera application.
Your recorded videos can also be played from the Videos menu.
To play back a video you just recorded:
1 Press the Center button to stop recording.
2 Press Menu to enter the Camera Roll screen.
3 Choose the recording, and then press the Center button to play.
You can also access a complete list of recorded videos on iPod nano from the Videos
menu.
To play a recorded video from the Videos menu:
1 Choose Videos in the main menu.
2 Choose Camera Videos to display a list of recorded videos.
3 Use the Click Wheel to scroll to the video you want to play, and then press Play/Pause to start and stop playback. Playback stops automatically at the end of the video.
Deleting Recorded Videos
Removing unwanted videos clears disk space for new videos. Recorded videos can be
deleted one at a time, or all at once.
To delete a recorded video:
1 Go to Videos > Camera Videos and select a video from the list, or select a video from
the Camera Roll screen.
2 Press and hold the Center button until a menu appears.
3 Choose Delete or Delete All.
Importing Recorded Videos to Your Computer
You can import your recorded videos to your computer. If you have a Mac with iPhoto,
you can easily share your recorded videos and add background music to them.
iPod nano saves recorded videos as VGA-resolution (640 x 480) H.264 video with AAC audio at 30 fps.
To import your recorded videos to your computer, iPod nano must be enabled for
disk use.
To enable iPod nano for disk use:
1 Connect iPod nano to your computer.
2 In iTunes, click iPod nano in the device list and click the Summary tab.
3 Select “Enable disk use.”
In addition to appearing in iTunes, iPod nano also appears on your computer as an
external disk, with the same name you gave it during initial setup. On a Mac, iPod nano
appears in the Finder and on the Desktop. On a PC, iPod nano appears in Windows
Explorer and My Computer.
Your recorded videos are stored in the DCIM folder on iPod nano, and can be copied to
your computer when iPod nano is connected to it. See the documentation that came
with your computer for more information about copying files.
After you copy your recorded videos to your computer, you can watch them on a Mac
using QuickTime Player. You can watch them on a PC using QuickTime or Windows
Media Player.
To clear disk space on iPod nano after you’ve copied your recorded videos to your
computer, delete them from the DCIM folder.
Importing Recorded Videos to a Mac with iPhoto Installed
If your computer is a Mac with iPhoto 6.0.6 or later installed, you can use iPhoto to
import your recorded videos from iPod nano to your Mac and post them on MobileMe.
You can also add music by editing your recorded videos in QuickTime Player. To use
iPhoto to import your recorded videos, iPod nano must be enabled for disk use.
To import videos to your Mac using iPhoto:
1 Connect iPod nano to your computer.
2 Open iPhoto if it doesn’t open automatically.
3 Click iPod nano in the iPhoto device list.
4 Select the videos to import, and then click Selected or Import All.
5 Select Delete Photos or Keep Photos.
Your recorded videos appear in your iPhoto library in Events and Photos, and in the list
of recent imports.
To share recorded videos using iPhoto:
1 Follow the instructions to import your recorded videos to iPhoto.
2 In iPhoto, select a recorded video.
3 Click MobileMe at the bottom of the iPhoto window.
4 Follow the onscreen instructions.
You need a MobileMe account in order to share your recorded videos using MobileMe,
and you need to set up iPhoto to publish to your account. For more information about
online sharing, open iPhoto and choose Help > iPhoto Help.
Sharing Recorded Videos from a Mac or PC
After you import your recorded videos to your computer, you can post them on Facebook or YouTube using a Mac or PC.
To post recorded videos on Facebook:
1 Go to www.facebook.com and log in if necessary.
2 Click the video icon to the left of the Share button at the top of your Facebook
homepage, and then click “Upload a Video.”
3 Follow the onscreen instructions to select and upload your video.
To post recorded videos on YouTube:
1 Go to www.youtube.com and log in if necessary.
2 Click the Upload button at the top right of your YouTube homepage.
3 Follow the onscreen instructions to select and upload your video.
If you have a Mac with iPhoto 8.1 or later and Mac OS X v10.6.1 or later, you can also
export your recorded videos directly to YouTube.
To post recorded videos on YouTube using iPhoto 8.1 or later and Mac OS X v10.6.1
or later:
1 In iPhoto, double-click the video you want to post. The video opens in QuickTime
Player.
2 In QuickTime Player, choose Share > YouTube.
3 Enter your YouTube name and password, and then click Sign In.
4 Enter a description and tags. If you want to restrict access to your video, select “Make
this movie personal.”
5 Click Next, and then click Share.
When the export is complete, click the link that appears to go to your video page on
YouTube.
Accounts are required to upload videos to Facebook or YouTube. For more information,
visit the websites.
Adding Music to Your Recorded Videos
You can use QuickTime Player to add music to your recorded videos. Select a recorded
video in iPhoto, and then click Edit at the bottom of the iPhoto window. The recorded
video opens in QuickTime Player, where you can add a music track to your recorded
video.
To learn how to add music to your recorded videos with QuickTime Player, choose
Help > QuickTime Player Help, and see the instructions for extracting, adding, and
moving tracks.
To add music to your recorded videos with a Windows PC, see the documentation that
came with your computer or photo application.
6 Listening to FM Radio
iPod nano has a built-in FM radio that displays station and
song information, lets you pause live radio, and tags songs
that you can preview and purchase in iTunes.
To listen to FM radio, connect earphones or headphones to iPod nano, and then
choose Radio from the main menu.
iPod nano uses the earphone or headphone cord as the radio antenna. You must
connect earphones or headphones to iPod nano in order to receive a radio signal.
The radio doesn’t play through the iPod nano speaker.
After you choose Radio from the main menu, the radio screen appears.
When the radio dial is visible, you can use the Click Wheel or press Next/Fast-forward or Previous/Rewind to tune to a station.
Important: Radio frequencies shown in this chapter are for illustration purposes only
and are not available in all areas.
When you tune to a station that supports RDS (Radio Data System), song, artist, and
station information appear in the display. After you tune to a station, the progress bar
replaces the radio dial. The progress bar fills up as you continue to listen to the station.
RDS data: Displays the current station, song, and artist.
Radio dial: Tunes the FM radio.
Favorite station markers: Indicate that the current station is in the Favorites list.
Radio signal icon: Appears when the radio is on and receiving a signal.
Station frequency: Displays the number of the station that the radio is tuned to.
Tag icon: Appears if the current song supports iTunes Tagging.
Progress bar: Indicates the length of the radio buffer.
[Illustration: the radio screen, with callouts for the radio signal icon, radio dial, RDS data, station frequency, favorite station markers, tag icon, and progress bar.]
Tuning the FM Radio
You can tune the FM radio by browsing stations, seeking or scanning available stations,
or saving your favorite stations and tuning to them directly.
To browse radio stations:
1 Choose Radio from the main menu. If you don’t see the radio dial, press the Center
button until it appears.
2 Use the Click Wheel to browse the radio dial.
To seek available stations:
1 Choose Radio from the main menu. If you don’t see the radio dial, press the Center
button until it appears.
2 Press Next/Fast-forward or Previous/Rewind to seek the next or previous available station. Repeat to continue seeking.
The station seeking function isn’t available if any favorite stations are set. If favorites are set, pressing Next/Fast-forward or Previous/Rewind tunes the radio to favorite stations.
To scan available stations:
1 Choose Radio from the main menu. If you don’t see the radio dial, press the Center
button until it appears.
2 Press and hold Next/Fast-forward or Previous/Rewind to scan available stations.
You hear a five-second preview of each station before advancing to the next one.
3 To stop scanning and listen to the current station, press the Center button.
To save your favorite stations:
1 Tune to a station you want to save.
2 Press and hold the Center button until a menu appears.
3 Choose “Add to Favorites” and then press the Center button.
To tune to a favorite station:
1 Choose Radio from the main menu. If you don’t see the radio dial, press the Center
button until it appears.
2 Press Next/Fast-forward or Previous/Rewind to tune to the next or previous favorite station. Repeat to continue tuning.
Pausing Live Radio
You can pause a radio broadcast, and resume playing it from the same point up to
15 minutes later.
To pause live radio:
m While the radio is playing, press Play/Pause from any screen.
The Pause icon appears, and the time at which you paused is displayed above the progress bar.
As Live Pause continues, a yellow triangle indicates the point where the radio was
paused. The progress bar continues to fill up, indicating the time that’s passed since
you paused.
When you press Play/Pause again, the program resumes from the same point.
You can also navigate forward or back along the progress bar. To fast-forward or rewind, press and hold Next/Fast-forward or Previous/Rewind, or use the Click Wheel. To skip forward or back in one-minute intervals, press Next/Fast-forward or Previous/Rewind.
You can navigate through paused radio only when the progress bar appears, not the
radio dial.
To switch between the progress bar and radio dial:
m Press the Center button.
The progress bar is completely filled when Live Pause reaches the 15-minute limit.
As long as your paused radio isn’t cleared, you can navigate through the 15 most recent
minutes of the station you’re listening to. Anything older than 15 minutes is cleared to
make room for the continuing broadcast.
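The 15-minute window behaves like a rolling buffer: new audio keeps arriving, and anything older than the window is dropped. A minimal sketch (illustrative only; the real buffer stores audio, and the one-chunk-per-second model is a simplification):

```python
from collections import deque

class LivePauseBuffer:
    """Keep only the most recent `window_seconds` of a broadcast,
    modeled as one chunk of audio per second."""
    def __init__(self, window_seconds=15 * 60):
        self.chunks = deque(maxlen=window_seconds)

    def feed(self, chunk):
        # When the deque is full, appending silently drops the oldest chunk.
        self.chunks.append(chunk)

    def available_seconds(self):
        return len(self.chunks)
```

Once the buffer fills, you can rewind at most `window_seconds` back, exactly like the 15-minute limit described above.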
If you pause without resuming for 15 minutes, iPod nano goes to sleep and clears your
paused radio.
Paused radio is cleared if any of the following occurs:
• You change stations. If you try to change stations while Live Pause is active, a warning appears and gives you the option to cancel.
• You turn off iPod nano.
• You exit Radio to play other media content, use the video camera, or record a voice memo.
• The battery is very low on power and needs to be charged.
• You pause the radio for 15 minutes without resuming play.
You can disable Live Pause from the Radio menu, to conserve battery life.
To disable Live Pause:
1 From the Radio screen, press Menu.
2 Choose Live Pause, and then press the Center button to select Off. To enable Live Pause
again, select On.
Tagging Songs to Sync to iTunes
If you’re tuned to a radio station that supports iTunes Tagging, you can save a list of
songs that you can preview and purchase later at the iTunes Store. Songs that can be
tagged are marked with a tag icon next to the song title.
To tag a song:
1 Press and hold the Center button until a menu appears.
2 Choose Tag, and then press the Center button.
Your tagged songs appear in the Radio menu under Tagged Songs. The next time you
sync iPod nano to iTunes, your tagged songs are synced and removed from iPod nano.
They appear in iTunes, where you can preview and purchase them from the iTunes Store.
Note: This feature may not be available for all radio stations.
To preview and purchase tagged songs in iTunes:
1 Click Tagged below Store on the left side of the iTunes window.
2 Click the View button for the song you want.
3 To preview the song, double-click it or click the preview button. To buy the song,
click the Buy button.
Using the Radio Menu
To go to the Radio menu, press Menu from the radio screen.
The Radio menu contains the following items.
Play Radio: Turns the radio on, and returns iPod nano to the radio screen.
Stop Radio: Turns the radio off, and clears paused radio (appears only if the radio is on).
Favorites: Displays a list of the stations you’ve saved as favorites. Choose a station and press the Center button to play.
Tagged Songs: Displays a list of songs you’ve tagged for preview and purchase since you last synced with iTunes.
Recent Songs: Displays a list of recently played songs.
Radio Regions: Lets you set the radio for the region you’re in.
Live Pause: Enables or disables Live Pause.
About Radio Regions
iPod nano can be used in many countries to receive FM radio signals. iPod nano comes
with five preset signal ranges, identified by region: The Americas, Asia, Australia,
Europe, and Japan.
To select a radio region:
m Choose Radio Regions from the Settings menu, and then choose your region.
The Radio Regions menu also appears in the Radio menu.
Region settings are determined by international radio standards, not actual geographic
regions. If you live in a country not listed in the Radio Regions menu, choose a region
that best matches the radio frequency specifications in your country.
The following table specifies the radio frequency range of each region in the Radio
Regions menu, along with the increments between stations (indicated by the ± sign).
Important: iPod nano is intended for the reception of public broadcasts only. Listening
to transmissions that are not intended for the public is illegal in some countries and
violators may be prosecuted. Check and obey the laws and regulations in the areas
where you plan to use iPod nano.
Americas: 87.5–107.9 MHz / ± 200 kHz
Asia: 87.5–108.0 MHz / ± 100 kHz
Australia: 87.5–107.9 MHz / ± 200 kHz
Europe: 87.5–108.0 MHz / ± 100 kHz
Japan: 76.0–90.0 MHz / ± 100 kHz

7 Photo Features
Read this chapter to learn about importing and viewing
photos.
You can import digital photos to your computer and add them to iPod nano. You can
view your photos on iPod nano or as a slideshow on your TV.
Importing Photos
If your computer is a Mac, you can import photos from a digital camera to your
computer using iPhoto. You can import other digital images into iPhoto, such as
images you download from the web. For more information about importing,
organizing, and editing your photos, open iPhoto and choose Help > iPhoto Help.
iPhoto is available for purchase as part of the iLife suite of applications at
www.apple.com/ilife or your local Apple Store. iPhoto might already be installed
on your Mac, in the Applications folder.
To import photos to a Windows PC, follow the instructions that came with your digital
camera or photo application.
Adding Photos from Your Computer to iPod nano
If you have a Mac and iPhoto 7.1.5 or later, you can sync iPhoto albums automatically
(for Mac OS X v10.4.11, iPhoto 6.0.6 or later is required). If you have a PC or Mac,
you can add photos to iPod nano from a folder on your hard disk.
Adding photos to iPod nano the first time might take some time, depending on how
many photos are in your photo library.
To sync photos from a Mac to iPod nano using iPhoto:
1 In iTunes, select iPod nano in the device list and click the Photos tab.
2 Select “Sync photos from: …” and then choose iPhoto from the pop-up menu.
3 Select your sync options:
• If you want to add all your photos, select “All photos, albums, events, and faces.”
• If you want to add selected photos, select “Selected albums, events, and faces, and
automatically include … “ and choose an option from the pop-up menu. Then select
the albums, events, and faces you want to add (Faces is supported only by iPhoto 8.1
or later).
• If you want to add videos from iPhoto, select “Include videos.”
4 Click Apply.
Each time you connect iPod nano to your computer, photos are synced automatically.
To add photos from a folder on your hard disk to iPod nano:
1 Drag the images to a folder on your computer.
If you want images to appear in separate photo albums on iPod nano, create folders
within the main image folder and drag images to the new folders.
2 In iTunes, select iPod nano in the device list and click the Photos tab.
3 Select “Sync photos from …”
4 Choose “Choose Folder …” from the pop-up menu and select the image folder.
5 Click Apply.
Adding Full-Resolution Image Files to iPod nano
When you add photos to iPod nano, iTunes optimizes the photos for viewing.
Full-resolution image files aren’t transferred by default. Adding full-resolution image
files is useful, for example if you want to move them from one computer to another,
but isn’t necessary for viewing the images at full quality on iPod nano.
To add full-resolution image files to iPod nano:
1 In iTunes, select iPod nano in the device list and click the Photos tab.
2 Select “Include full-resolution photos.”
3 Click Apply.
iTunes copies the full-resolution versions of the photos to the Photos folder on
iPod nano.
To delete photos from iPod nano:
1 In iTunes, select iPod nano in the device list and click the Photos tab.
2 Select “Sync photos from: …”
• On a Mac, choose iPhoto from the pop-up menu.
• On a Windows PC, choose Photoshop Album or Photoshop Elements from the pop-up
menu.
3 Choose “Selected albums” and deselect the albums you no longer want on iPod nano.
4 Click Apply.
Viewing Photos
You can view photos on iPod nano manually or as a slideshow. If you have an optional
AV cable from Apple (for example, the Apple Component AV Cable), you can connect
iPod nano to your TV and view photos as a slideshow with music.
Viewing Photos on iPod nano
To view photos on iPod nano:
1 On iPod nano, choose Photos > All Photos. Or choose Photos and a photo album
to see only the photos in the album. Thumbnail views of the photos might take a
moment to appear.
2 Select the photo you want and press the Center button.
3 To view photos, hold iPod nano vertically for portrait format, or horizontally for
landscape format.
From any photo-viewing screen, use the Click Wheel to scroll through photos (if you’re
viewing a slideshow, the Click Wheel controls music volume only). Press Next/Fast-forward
or Previous/Rewind to skip to the next or previous screen of photos.
Press and hold Next/Fast-forward or Previous/Rewind to skip to the last or first
photo in the library or album.
Viewing Slideshows
You can view a slideshow, with music and transitions if you choose, on iPod nano.
If you have an optional AV cable from Apple, you can view the slideshow on your TV.
To set slideshow settings:
m Choose Photos > Settings, and then follow these instructions:
To view a slideshow on iPod nano:
m Select any photo, album, or roll, and press Play/Pause. Or select any full-screen
photo and press the Center button. To pause, press Play/Pause. To skip to the next
or previous photo, press Next/Fast-forward or Previous/Rewind.
When you view a slideshow, you can use the Click Wheel to control the music volume
and adjust the brightness. You can’t use the Click Wheel to scroll through photos
during a slideshow.
If you view a slideshow of an album that includes videos, the slideshow pauses when it
reaches a video. If music is playing, it continues to play. If you play the video, the music
pauses while the video is playing, and then resumes. To play the video, press Play/Pause.
To resume the slideshow, press Next/Fast-forward.
To adjust the brightness during a slideshow:
1 Press the Center button until the brightness indicator appears.
2 Use the Click Wheel to adjust the brightness.
To set how long each slide is shown, choose Time Per Slide and pick a time.
To set the music that plays during slideshows, choose Music and choose a playlist or Now Playing. If you’re using iPhoto, you can choose From iPhoto to copy the iPhoto music setting. Only the songs that you’ve added to iPod nano play.
To repeat slides, set Repeat to On.
To display slides in random order, set Shuffle Photos to On.
To display slides with transitions, choose Transitions and choose a transition type. Random includes all transition types except Ken Burns.
To display slideshows on iPod nano, set TV Out to Ask or Off.
To display slideshows on TV, set TV Out to Ask or On. If you set TV Out to Ask, iPod nano gives you the option of showing slideshows on TV or on iPod nano every time you start a slideshow.
To show slides on a PAL or NTSC TV, set TV Signal to PAL or NTSC. PAL and NTSC refer to TV broadcast standards. Your TV might use either of these, depending on the region where it was purchased. If you aren’t sure which your TV uses, check the documentation that came with your TV.
To connect iPod nano to your TV:
1 Connect the optional Apple Component or Composite AV cable to iPod nano.
Use the Apple Component AV Cable, Apple Composite AV Cable, or Apple AV
Connection Kit. Other similar RCA-type cables might not work. You can purchase the
cables at www.apple.com/ipodstore.
2 Connect the audio connectors to the ports on your TV.
Make sure you set TV Out on iPod nano to Ask or On.
Your TV must have RCA video and audio ports. The ports on your TV or receiver may
differ from the ports in the illustration.
To view a slideshow on your TV:
1 Connect iPod nano to your TV (see page 51).
2 Turn on your TV and set it to display from the input ports connected to iPod nano.
See the documentation that came with your TV for more information.
3 Use iPod nano to play and control the slideshow.
Adding Photos from iPod nano to a Computer
If you add full-resolution photos from your computer to iPod nano using the previous
steps, they’re stored in a Photos folder on iPod nano. You can connect iPod nano to a
computer and put these photos on the computer. iPod nano must be enabled for disk
use (see “Using iPod nano as an External Disk” on page 84).
To add photos from iPod nano to a computer:
1 Connect iPod nano to the computer.
2 Drag image files from the Photos folder or DCIM folder on iPod nano to the desktop
or to a photo editing application on the computer.
You can also use a photo editing application, such as iPhoto, to add photos stored in
the Photos folder. See the documentation that came with the application for more
information.
To delete photos from the Photos folder on iPod nano:
1 Connect iPod nano to the computer.
2 Navigate to the Photos folder on iPod nano and delete the photos you no longer want.

8 More Settings, Extra Features,
and Accessories
iPod nano can do a lot more than play songs. And you can do
a lot more with it than listen to music.
Read this chapter to find out more about the extra features of iPod nano, including
how to use it as a pedometer; record voice memos; use it as an external disk, alarm, or
sleep timer; play games; show the time of day in other parts of the world; display notes;
and sync contacts, calendars, and to-do lists. Learn about how to use iPod nano as a
stopwatch and to lock the screen, and about the accessories available for iPod nano.
Using iPod nano as a Pedometer
You can use iPod nano as a pedometer to count your steps and record your workouts.
For more accurate results, keep iPod nano in your pocket or in the iPod nano Armband
while using the pedometer.
To use iPod nano as a pedometer:
1 From the Extras menu, choose Fitness and then choose Pedometer.
2 If you’re using the pedometer for the first time, enter your weight using the Click
Wheel, and then press the Center button to begin a session.
3 At the end of the session, press the Center button to stop.
To customize the pedometer settings:
1 From the Extras menu, choose Fitness and then choose Settings.
2 Choose from the following options:
To view your workout history:
1 From the Extras menu, choose Fitness and then choose History.
2 Select a date from the calendar. Use the Click Wheel to select a day. Press Next/Fast-forward or Previous/Rewind to navigate through the months.
3 Press the Center button to display your workout history for the selected date. If you
had multiple workout sessions on the selected date, choose a session.
iPod nano displays your step goal, workout duration, start and end times, calories
burned, and totals for the week and month.
To view a bar graph of one of your workout sessions, choose a session, and then rotate
iPod nano to landscape mode.
The Pedometer menu item appears in the main menu when the pedometer is on, so
you can stop your session quickly. The preview panel below the main menu displays
your step count when you scroll to the Pedometer menu item.
To set iPod nano to count your steps throughout the day, choose Pedometer in the
Settings menu and select Always On. The Pedometer records your daily totals, so you
can track your history without turning the pedometer off at the end of each day. The
Pedometer menu item appears continuously in the main menu.
To start sessions quickly, you can also add the Pedometer menu item to the main menu
manually. See “Adding or Removing Items on the Main Menu” on page 10.
With the Nike + iPod Sport Kit (available separately), iPod nano can also monitor and
record your speed, distance, time elapsed, and calories burned, and track your cardio
workouts on Nike + iPod-compatible gym equipment.
To choose a pedometer mode, select Pedometer, and press the Center button to switch between Manual and Always On.
To set a workout goal, choose Daily Step Goal, and then choose a goal from the list, or choose Custom and then use the Click Wheel to set a goal.
To set your weight, choose Weight, use the Click Wheel to set your weight, and then press the Center button to enter.
To set the pedometer orientation, choose Screen Orientation, and then choose Vertical, Left, or Right.
When you sync iPod nano with iTunes, you can upload your pedometer and other
workout information to the Nike+ website, where you can track your history, compete
with your friends, and more. You’ll need a Nike+ account, which you can set up when
you sync.
To upload your workout information to Nike+:
1 Connect iPod nano to your computer and open iTunes (if it doesn’t open
automatically). If you’re syncing workout information for the first time, a message
appears:
2 Click Send, and then follow the onscreen instructions to set up your Nike+ account.
Once you set up your account, a new tab appears in the iTunes window:
3 Click the Nike + iPod tab, and select “Automatically send workout data to Nike+” if it
isn’t selected already.
4 Click Apply.
To view and share your information at Nike+, click “Visit Nike+” in the Nike + iPod pane
when iPod nano is connected to your computer, or go to www.nike.com and then log
in to your account.
Recording Voice Memos
You can record voice memos using the built-in microphone in iPod nano or an optional
iPod nano–compatible microphone (available for purchase at www.apple.com/
ipodstore). You can set chapter marks while you record, store voice memos on
iPod nano and sync them with your computer, and add labels to voice memos.
Voice memos can be up to two hours long. If you record for more than two hours,
iPod nano automatically starts a new voice memo to continue your recording.
To record a voice memo:
1 From the Extras menu, choose Voice Memos. The Record screen appears.
2 Press Play/Pause or the Center button to begin recording. Be careful not to block
the microphone, which is on the back of iPod nano.
3 To pause recording, press Play/Pause.
Choose Resume to continue recording or press Play/Pause again.
4 When you finish, press Menu and then choose “Stop and Save.” Your saved recording is
listed by date and time.
To set chapter marks:
m While recording, press the Center button whenever you want to set a chapter mark.
During playback, you can go directly to the next chapter by pressing the Next/Fast-forward button. Press Previous/Rewind once to go to the start of the current
chapter, and twice to go to the start of the previous chapter.
To label a recording:
1 From the Extras menu, choose Voice Memos and then press Menu.
2 Choose Voice Memos, and then choose a recording.
3 Choose Label, and then choose a label for the recording.
You can choose Podcast, Interview, Lecture, Idea, Meeting, or Memo. To remove a label
from a recording, choose None.
To play a recording:
1 From the Extras menu, choose Voice Memos and then press Menu.
2 Choose Voice Memos, and then choose a recording.
3 Choose Play and then press the Center button.
To delete a recording:
1 From the Extras menu, choose Voice Memos and then press Menu.
2 Choose Voice Memos, and then choose a recording.
3 Choose Delete and then press the Center button.
To sync voice memos with your computer:
Voice memos are saved in a Recordings folder on iPod in the WAV file format. If you
enable iPod nano for disk use, you can drag voice memos from the folder to copy them
to your computer.
If iPod nano is set to sync songs automatically (see “Syncing Music Automatically” on
page 25) voice memos on iPod nano are automatically synced to a playlist in iTunes
called Voice Memos (and removed from iPod nano) when you connect iPod nano. The
Voice Memos playlist appears below Playlists on the left side of the iTunes window.
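If iPod nano is enabled for disk use, the manual copy can also be scripted. Here is a minimal sketch in Python; the mount point /Volumes/IPOD and the destination folder are assumptions (on a Mac the iPod appears under /Volumes with whatever name you gave it), so adjust them to match your setup:

```python
import shutil
from pathlib import Path

# Assumed mount point; adjust IPOD_ROOT to your iPod's volume name.
IPOD_ROOT = Path("/Volumes/IPOD")
DEST = Path.home() / "VoiceMemos"

def copy_voice_memos(ipod_root: Path = IPOD_ROOT, dest: Path = DEST) -> list[Path]:
    """Copy every WAV recording from the iPod's Recordings folder."""
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for wav in sorted((ipod_root / "Recordings").glob("*.wav")):
        target = dest / wav.name
        shutil.copy2(wav, target)  # copy2 preserves the file's timestamp
        copied.append(target)
    return copied

if __name__ == "__main__" and IPOD_ROOT.exists():
    for path in copy_voice_memos():
        print("Copied", path.name)
```

Because recordings are named by date and time, sorting the filenames keeps them in chronological order.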
Playing Games
iPod nano comes with three games: Klondike, Maze, and Vortex.
To play a game:
m Choose Extras > Games and choose a game.
When you play a game created for previous versions of iPod nano, you’re first shown
how iPod nano controls work in the game you’re about to play.
You can purchase additional games from the iTunes Store (in some countries) to play
on iPod nano. After purchasing games in iTunes, you can add them to iPod nano by
syncing them automatically or by managing them manually.
Many games can be played in portrait or landscape mode.
To buy a game:
1 In iTunes, select iTunes Store under Store on the left side of the iTunes window.
2 Choose iPod Games in the iTunes Store.
3 Select the game you want and click Buy Game.
To sync games automatically to iPod nano:
1 In iTunes, select iPod nano in the device list and click the Games tab.
2 Select “Sync games.”
3 Click “All games” or “Selected games.” If you click “Selected games,” also select the
games you want to sync.
4 Click Apply.
Using Extra Settings
You can set the date and time, clocks in different time zones, and alarm and sleep
features on iPod nano. You can use iPod nano as a stopwatch or to play games, and you
can lock the iPod nano screen.
Setting and Viewing the Date and Time
The date and time are set automatically from your computer’s clock when you connect
iPod nano, but you can change the settings.
To set date and time options:
1 Choose Settings > Date & Time.
2 Choose one or more of the following options:
Adding Clocks for Other Time Zones
To add clocks for other time zones:
1 Choose Extras > Clocks.
2 On the Clocks screen, click the Center button and choose Add.
3 Choose a region and then choose a city.
The clocks you add appear in a list. The last clock you added appears last.
To delete a clock:
1 Choose Extras > Clocks.
2 Choose the clock.
3 Press the Center button.
4 Choose Delete.
To set the date, choose Date. Use the Click Wheel to change the selected value. Press the Center button to move to the next value.
To set the time, choose Time. Use the Click Wheel to change the selected value. Press the Center button to move to the next value.
To specify the time zone, choose Time Zone and use the Click Wheel to select a city in another time zone.
To display the time in 24-hour format, choose 24 Hour Clock and press the Center button to turn the 24-hour format on or off.
To display the time in the title bar, choose Time in Title and press the Center button to turn the option on or off.
Setting Alarms
You can set an alarm for any clock on iPod nano.
To use iPod nano as an alarm clock:
1 Choose Extras > Alarms.
2 Choose Create Alarm and set one or more of the following options:
If you sync calendar events with alarms to iPod nano, the events appear in the Alarms
menu.
To delete an alarm:
1 Choose Extras > Alarms.
2 Choose the alarm and then choose Delete.
Setting the Sleep Timer
You can set iPod nano to turn off automatically after playing music or other content for
a specific period of time.
To set the sleep timer:
1 Choose Extras > Alarms.
2 Choose Sleep Timer and choose how long you want iPod nano to play.
To turn the alarm on, choose Alarm and choose On.
To set the date, choose Date. Use the Click Wheel to change the selected value. Press the Center button to move to the next value.
To set the time, choose Time. Use the Click Wheel to change the selected value. Press the Center button to move to the next value.
To set a repeat option, choose Repeat and choose an option (for example, “weekdays”).
To choose a sound, choose Alerts or a playlist. If you choose Alerts, select Beep to hear the alarm through the internal speaker. If you choose a playlist, connect iPod nano to speakers, earphones, or headphones to hear the alarm.
To name the alarm, choose Label and choose an option (for example, “Wake up”).
Using the Stopwatch
You can use the stopwatch as you exercise to track your overall time and, if you’re
running on a track, your lap times. You can play music while you use the stopwatch.
To use the stopwatch:
1 Choose Extras > Stopwatch.
2 Press Play/Pause to start the timer.
3 Press the Center button to record lap times. The two most recent lap times appear
above the overall time. All lap times are recorded in the log.
4 Press Play/Pause to stop the overall timer. To start the timer again, press Play/Pause.
To start a new stopwatch session, press Menu and then choose New Timer.
To review or delete a logged stopwatch session:
1 Choose Extras > Stopwatch.
The current log and a list of saved sessions appear.
2 Choose a log to view session information.
iPod nano stores stopwatch sessions with dates, times, and lap statistics. You see the
date and time the session started; the total time of the session; the shortest, longest,
and average lap times; and the last several lap times.
3 Press the Center button and choose Delete Log to delete the chosen log, or Clear Logs
to delete all current logs.
Locking the iPod nano Screen
You can set a combination to prevent iPod nano from being used by someone without
your permission. If you lock iPod nano while it isn’t connected to a computer, you must
then enter a combination to unlock and use it.
This combination is different from the Hold button, which just prevents iPod nano
buttons from being pressed accidentally. The combination prevents another person
from using iPod nano.
To set a combination for iPod nano:
1 Choose Extras > Screen Lock.
2 On the New Combination screen, enter a combination:
• Use the Click Wheel to select a number for the first position. Press the Center button
to confirm your choice and move to the next position.
• Use the same method to set the remaining numbers of the combination. Press Next/
Fast-forward to move to the next position, or Previous/Rewind to move to
the previous position. Press the Center button in the final position.
3 On the Confirm Combination screen, enter the combination to confirm it, or press
Menu to exit without locking the screen.
When you finish, you return to the Screen Lock screen, where you can lock the screen
or reset the combination. Press Menu to exit without locking the screen.
To lock the iPod nano screen:
m Choose Extras > Screen Lock > Lock.
If you just finished setting your combination, Lock will already be selected on the
screen. Just press the Center button to lock iPod.
When the screen is locked, you see a picture of a lock.
You might want to add the Screen Lock menu item to the main menu so that you can
quickly lock the iPod nano screen. See “Adding or Removing Items on the Main Menu”
on page 10.
When you see the lock on the screen, you can unlock the iPod nano screen in two
ways:
• Press the Center button to enter the combination on iPod nano. Use the Click Wheel
to select the numbers and press the Center button to confirm them. If you enter the
wrong combination, the lock remains. Try again.
• Connect iPod nano to the primary computer you use it with, and iPod nano
automatically unlocks.
If you try these methods and you still can’t unlock iPod nano, you can restore
iPod nano. See “Updating and Restoring iPod Software” on page 92.
To change a combination you’ve already set:
1 Choose Extras > Screen Lock > Reset Combination.
2 On the Enter Combination screen, enter the current combination.
3 On the New Combination screen, enter and confirm a new combination.
If you can’t remember the current combination, the only way to clear it and enter a
new one is to restore the iPod nano software. See “Updating and Restoring iPod
Software” on page 92.
Syncing Contacts, Calendars, and To-Do Lists
iPod nano can store contacts, calendar events, and to-do lists for viewing on the go.
You can use iTunes to sync the contact and calendar information on iPod nano with
Address Book and iCal.
If you’re using Windows XP, and you use Windows Address Book or Microsoft Outlook
2003 or later to store your contact information, you can use iTunes to sync the address
book information on iPod nano. If you use Microsoft Outlook 2003 or later to keep a
calendar, you can also sync calendar information.
To sync contacts or calendar information using Mac OS X:
1 Connect iPod nano to your computer.
2 In iTunes, select iPod nano in the device list and click the Contacts tab.
3 Do one of the following:
• To sync contacts, in the Contacts section, select “Sync Address Book contacts,” and
select an option:
• To sync all contacts automatically, select “All contacts.”
• To sync selected groups of contacts automatically, select “Selected groups” and
select the groups you want to sync.
• To copy contacts’ photos to iPod nano, when available, select “Include contacts’
photos.”
When you click Apply, iTunes updates iPod nano with the Address Book contact
information you specified.
• To sync calendars, in the Calendars section, select “Sync iCal calendars,” and choose
an option:
• To sync all calendars automatically, choose “All calendars.”
• To sync selected calendars automatically, choose “Selected calendars” and select
the calendars you want to sync.
When you click Apply, iTunes updates iPod nano with the calendar information you
specified.
To sync contacts or calendars using Windows Address Book or Microsoft Outlook for
Windows:
1 Connect iPod nano to your computer.
2 In iTunes, select iPod nano in the device list and click the Contacts tab.
3 Do one of the following:
• To sync contacts, in the Contacts section, select “Sync contacts from” and choose
Windows Address Book or Microsoft Outlook from the pop-up menu. Then select
which contact information you want to sync.
• To sync calendars from Microsoft Outlook, in the Calendars section, select “Sync
calendars from Microsoft Outlook.”
4 Click Apply.
You can also add contact and calendar information to iPod nano manually. iPod nano
must be enabled as an external disk (see “Using iPod nano as an External Disk” on
page 84).
To add contact information manually:
1 Connect iPod nano and open your favorite email or contacts application. You can add
contacts using Palm Desktop, Microsoft Outlook, Microsoft Entourage, and Eudora,
among others.
2 Drag contacts from the application’s address book to the Contacts folder on iPod nano.
In some cases, you might need to export contacts and then drag the exported file or
files to the Contacts folder. See the documentation for your email or contacts
application.
To add appointments and other calendar events manually:
1 Export calendar events from any calendar application that uses the standard iCal
format (filenames end in .ics) or vCal format (filenames end in .vcs).
2 Drag the files to the Calendars folder on iPod nano.
To add to-do lists to iPod nano manually, save them in a calendar file with an .ics or
.vcs extension.
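The iCalendar format is itself plain text, so a simple event file can be generated without a calendar application. Here is a minimal sketch in Python; the event details and the /Volumes/IPOD mount point are hypothetical, and only the core VEVENT properties are shown (a to-do item would use a VTODO block in the same kind of file):

```python
from pathlib import Path

def make_event_ics(uid: str, summary: str, start: str, end: str) -> str:
    """Build a minimal iCalendar (.ics) file containing one VEVENT.
    start/end are UTC timestamps in iCalendar form, e.g. 20100104T170000Z."""
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//manual sketch//EN",
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTART:{start}",
        f"DTEND:{end}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ]) + "\r\n"

# Hypothetical event; adjust the volume name to match your iPod:
# ics = make_event_ics("meeting-1@example.com", "Team meeting",
#                      "20100104T170000Z", "20100104T180000Z")
# Path("/Volumes/IPOD/Calendars/meeting.ics").write_text(ics)
```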
To view contacts on iPod nano:
m Choose Extras > Contacts.
To sort contacts by first or last name:
m Choose Settings > General > Sort Contacts, and then select First or Last.
To view calendar events:
m Choose Extras > Calendars > All Calendars, and then choose a calendar.
To view to-do lists:
m Choose Extras > Calendars > To Do’s.
Mono Audio
Mono Audio combines the sound of the left and right channels into a monaural signal
that’s played through both sides. This enables users with a hearing impairment in one
ear to hear both channels with the other ear.
To turn Mono Audio on or off:
m Choose Settings > Playback > Mono Audio, and then select On or Off.
Using Spoken Menus for Accessibility
iPod nano features optional spoken menus, enabling visually impaired users to browse
through their iPod nano content more easily.
iTunes generates spoken menus using voices that are included in your computer’s
operating system or that you may have purchased from third parties. Not all voices
from computer operating systems or third parties are compatible with spoken menus,
and not all languages are supported.
To use spoken menus, VoiceOver must be enabled on iPod nano. For more information,
see “Setting Up VoiceOver” on page 32.
You must enable spoken menus in iTunes before you can activate them on iPod nano.
To enable spoken menus in iTunes:
1 Connect iPod nano to your computer.
2 In iTunes, select iPod nano in the device list and click the Summary tab.
3 Select “Enable spoken menus.”
In Mac OS X, if you have VoiceOver turned on in Universal Access preferences, this
option is selected by default.
4 Click Apply.
After iPod nano syncs with iTunes, spoken menus are enabled and activated on your
iPod nano. iPod nano takes longer to sync if spoken menus are being enabled.
To deactivate spoken menus on iPod nano:
m Choose Settings > General > Spoken Menus and then choose Off.
To turn spoken menus on again, choose Settings > General > Spoken Menus, and then
choose On.
If VoiceOver is enabled, turning off spoken menus doesn’t disable VoiceOver.
Note: The Spoken Menus option appears in the Settings menu on iPod nano only if
spoken menus have been enabled in iTunes.
Using iPod nano as an External Disk
You can use iPod nano as an external disk to store data files.
You won’t see songs you add using iTunes in the Mac Finder or in Windows Explorer.
And if you copy music files to iPod nano in the Mac Finder or Windows Explorer, you
won’t be able to play them on iPod nano.
Important: To import photos and recorded videos from iPod nano to your computer,
external disk use must be enabled.
To enable iPod nano as an external disk:
1 In iTunes, select iPod nano in the device list and click the Summary tab.
2 In the Options section, select “Enable disk use.”
3 Click Apply.
When you use iPod nano as an external disk, the iPod nano disk icon appears on
the desktop on a Mac, or as the next available drive letter in Windows Explorer on a
Windows PC. Drag files to and from iPod nano to copy them.
You can also click Summary and select “Manually manage music and videos” in the
Options section to use iPod nano as an external disk.
If you use iPod nano primarily as an external disk, you might want to keep iTunes from
opening automatically when you connect iPod nano to your computer.
To prevent iTunes from opening automatically when you connect iPod nano to your
computer:
1 In iTunes, select iPod nano in the device list and click the Summary tab.
2 In the Options section, deselect “Open iTunes when this iPod is connected.”
3 Click Apply.
Storing and Reading Notes
You can store and read text notes on iPod nano, if it’s enabled as an external disk
(see “Using iPod nano as an External Disk” on page 84).
To store text notes on iPod nano:
1 Save a document in any word-processing application as a text (.txt) file.
2 Place the file in the Notes folder on iPod nano.
To view notes:
m Choose Extras > Notes.
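The two steps above (save a .txt file, then place it in the Notes folder) can also be scripted. This is an illustrative sketch only; the mount path is a placeholder that depends on your iPod's volume name:

```python
from pathlib import Path

def write_note(ipod_root, title, text):
    """Save `text` as a plain-text note in the iPod's Notes folder.

    `ipod_root` is the path where the iPod is mounted with disk use
    enabled (for example /Volumes/IPOD on a Mac); adjust it to match
    your own volume name.
    """
    notes_dir = Path(ipod_root) / "Notes"
    notes_dir.mkdir(parents=True, exist_ok=True)
    note_path = notes_dir / f"{title}.txt"
    note_path.write_text(text, encoding="utf-8")
    return note_path

# Example (with the iPod mounted at /Volumes/IPOD):
# write_note("/Volumes/IPOD", "shopping", "milk, eggs, coffee")
```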
Learning About iPod nano Accessories
iPod nano comes with some accessories, and many other accessories are available.
To purchase iPod nano accessories, go to www.apple.com/ipodstore.
Available accessories include:
 iPod nano Armband
 Apple Earphones with Remote and Mic
 Apple In-Ear Headphones with Remote and Mic
 Apple USB Power Adapter
 Apple Component AV Cable
 Apple Composite AV Cable
 Apple Universal Dock
 Nike + iPod Sport Kit
 iPod Socks
To use the earphones included with iPod nano:
m Plug the earphones into the Headphones port. Then place the earbuds in your ears as
shown.
WARNING: Permanent hearing loss may occur if earbuds or headphones are used at
high volume. You can adapt over time to a higher volume of sound that may sound
normal but can be damaging to your hearing. If you experience ringing in your ears or
muffled speech, stop listening and have your hearing checked. The louder the volume,
the less time is required before your hearing could be affected. Hearing experts
suggest that to protect your hearing:
 Limit the amount of time you use earbuds or headphones at high volume.
 Avoid turning up the volume to block out noisy surroundings.
 Turn the volume down if you can’t hear people speaking near you.
For information about setting a maximum volume limit on iPod, see “Setting the
Maximum Volume Limit” on page 45.
The earphone cord is adjustable.
9 Tips and Troubleshooting
Most problems with iPod nano can be solved quickly by
following the advice in this chapter.
General Suggestions
Most problems with iPod nano can be solved by resetting it. First, make sure iPod nano
is charged.
To reset iPod nano:
1 Toggle the Hold switch on and off (slide it to HOLD and then back again).
2 Press and hold Menu and the Center button for at least 6 seconds, until the
Apple logo appears.
If iPod nano won’t turn on or respond
 Make sure the Hold switch isn’t set to HOLD.
 The iPod nano battery might need to be recharged. Connect iPod nano to your
computer or to an Apple USB Power Adapter and let the battery recharge. Look for
the lightning bolt icon on the iPod nano screen to verify that iPod nano is receiving
a charge.
To charge the battery, connect iPod nano to a USB 2.0 port on your computer.
 Try the 5 Rs, one by one, until iPod nano responds.
The 5 Rs: Reset, Retry, Restart, Reinstall, Restore
Remember these five basic suggestions if you have a problem with iPod nano.
Try these steps one at a time until your issue is resolved. If one of the following
doesn’t help, read on for solutions to specific problems.
 Reset iPod nano. See “General Suggestions,” above.
 Retry with a different USB port if you cannot see iPod nano in iTunes.
 Restart your computer, and make sure you have the latest software updates installed.
 Reinstall iTunes software from the latest version on the web.
 Restore iPod nano. See “Updating and Restoring iPod Software” on page 92.
If you want to disconnect iPod nano, but you see the message “Connected” or “Sync
in Progress”
 If iPod nano is syncing music, wait for it to complete.
 Select iPod nano in the iTunes device list and click the Eject (⏏) button.
 If iPod nano disappears from the device list in iTunes, but you still see the
“Connected” or “Sync in Progress” message on the iPod nano screen, disconnect
iPod nano.
 If iPod nano doesn’t disappear from the device list in iTunes, drag the iPod nano icon
from the desktop to the Trash if you’re using a Mac. If you’re using a Windows PC,
eject the device in My Computer or click the Safely Remove Hardware icon in the
system tray and select iPod nano. If you still see the “Connected” or “Sync in Progress”
message, restart your computer and eject iPod nano again.
If iPod nano isn’t playing music
 Make sure the Hold switch isn’t set to HOLD.
 Make sure the headphone connector is pushed in all the way.
 Make sure the volume is adjusted properly. A maximum volume limit might have
been set. You can change or remove it by using Settings > Volume Limit. See “Setting
the Maximum Volume Limit” on page 45.
 iPod nano might be paused. Try pressing Play/Pause.
 Make sure you’re using iTunes 9.0 or later (available at www.apple.com/downloads).
Songs purchased from the iTunes Store using earlier versions of iTunes won’t play on
iPod nano until you upgrade iTunes.
 If you’re using the Apple Universal Dock, make sure the iPod nano is seated firmly in
the dock and make sure all cables are connected properly.
If the internal speaker continues to play audio after you connect earphones or
headphones to iPod nano
 Disconnect and then reconnect the earphones or headphones.
If the internal speaker doesn’t start playing audio after you disconnect earphones or
headphones from iPod nano
 Any audio that’s playing pauses automatically when you disconnect earphones or
headphones from iPod nano. Press Play/Pause to resume.
 The FM radio doesn’t play through the internal speaker, because iPod nano uses the
earphone or headphone cord as the radio antenna.
If you connect iPod nano to your computer and nothing happens
 Make sure you’ve installed the latest iTunes software from www.apple.com/downloads.
 Try connecting to a different USB port on your computer.
Note: A USB 2.0 port is recommended to connect iPod nano. USB 1.1 is significantly
slower than USB 2.0. If you have a Windows PC that doesn’t have a USB 2.0 port, in
some cases you can purchase and install a USB 2.0 card. For more information, go to
www.apple.com/ipod.
 iPod nano might need to be reset (see page 86).
 If you’re connecting iPod nano to a portable computer using the Apple Dock
Connector to USB 2.0 Cable, connect the computer to a power outlet before
connecting iPod nano.
 Make sure you have the required computer and software. See “If you want to double-check the system requirements” on page 91.
 Check the cable connections. Unplug the cable at both ends and make sure
no foreign objects are in the USB ports. Then plug the cable back in securely.
Make sure the connectors on the cables are oriented correctly. They can be inserted
only one way.
 Try restarting your computer.
 If none of the previous suggestions solves your problems, you might need to restore
iPod nano software. See “Updating and Restoring iPod Software” on page 92.
If iPod nano displays a “Connect to Power” message
This message may appear if iPod nano is exceptionally low on power and the battery
needs to be charged before iPod nano can communicate with your computer. To
charge the battery, connect iPod nano to a USB 2.0 port on your computer.
Leave iPod nano connected to your computer until the message disappears and
iPod nano appears in iTunes or the Finder. Depending on how depleted the battery is,
you may need to charge iPod nano for up to 30 minutes before it will start up.
To charge iPod nano more quickly, use the optional Apple USB Power Adapter.
If iPod nano displays a “Use iTunes to restore” message
 Make sure you have the latest version of iTunes on your computer (download it
from www.apple.com/downloads).
 Connect iPod nano to your computer. When iTunes opens, follow the onscreen
prompts to restore iPod nano.
 If restoring iPod nano doesn’t solve the problem, iPod nano may need to be
repaired. You can arrange for service at the iPod Service & Support website:
www.apple.com/support/ipod
If songs or data sync more slowly over USB 2.0
 If you sync a large number of songs or amount of data using USB 2.0 and the
iPod nano battery is low, iPod nano syncs the information at a reduced speed in
order to conserve battery power.
 If you want to sync at higher speeds, you can stop syncing and keep the iPod nano
connected so that it can recharge, or connect it to the optional iPod USB 2.0 Power
Adapter. Let iPod nano charge for about an hour, and then resume syncing your
music or data.
If you can’t add a song or other item to iPod nano
The song may have been encoded in a format that iPod nano doesn’t support.
The following audio file formats are supported by iPod nano. These include formats
for audiobooks and podcasting:
 AAC (M4A, M4B, M4P, up to 320 Kbps)
 Apple Lossless (a high-quality compressed format)
 MP3 (up to 320 Kbps)
 MP3 Variable Bit Rate (VBR)
 WAV
 AA (audible.com spoken word, formats 2, 3, and 4)
 AIFF
A song encoded using Apple Lossless format has full CD-quality sound, but takes up
only about half as much space as a song encoded using AIFF or WAV format. The same
song encoded in AAC or MP3 format takes up even less space. When you import music
from a CD using iTunes, it’s converted to AAC format by default.
Using iTunes for Windows, you can convert nonprotected WMA files to AAC or MP3
format. This can be useful if you have a library of music encoded in WMA format.
iPod nano doesn’t support WMA, MPEG Layer 1, MPEG Layer 2 audio files, or
audible.com format 1.
If you have a song in iTunes that isn’t supported by iPod nano, you can convert it to a
supported format. For information, see iTunes Help.
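As a rough pre-check before adding files, the supported-format list above can be turned into a lookup by file extension. This sketch only inspects the file name, not the actual encoding, and the extension-to-format mapping is an assumption (for example, .aa for audible.com files; Apple Lossless and AAC both commonly use .m4a):

```python
from pathlib import Path

# Extensions corresponding to the audio formats listed above
# (assumed mapping; a file's real encoding may differ from its name).
SUPPORTED_EXTENSIONS = {
    ".m4a", ".m4b", ".m4p",   # AAC and Apple Lossless containers
    ".mp3",                   # MP3, constant or variable bit rate
    ".wav",                   # WAV
    ".aa",                    # audible.com spoken word (formats 2, 3, 4)
    ".aif", ".aiff",          # AIFF
}

def looks_supported(filename):
    """Return True if the file extension matches a supported format."""
    return Path(filename).suffix.lower() in SUPPORTED_EXTENSIONS

# WMA is not supported, so looks_supported("song.wma") is False.
```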
If iPod nano displays a “Connect to iTunes to activate Genius” message
You haven’t turned on Genius in iTunes, or you haven’t synced iPod nano since you
turned on Genius in iTunes. See “Turning On Genius in iTunes” on page 22.
If iPod nano displays a “Genius is not available for the selected song” message
Genius is on but is unable to make a Genius playlist using the selected song. New
songs are added to the iTunes Store Genius database all the time, so try again soon.
If iPod nano can’t receive a radio signal
You haven’t connected earphones or headphones. Make sure the connection is
complete, and try moving around the earphone or headphone cord.
If you accidentally set iPod nano to use a language you don’t understand
You can reset the language:
1 Press and hold Menu until the main menu appears.
2 Use the Click Wheel to find the menu item that causes the iPod nano device name to
appear in the preview panel, and then choose it. That’s the Settings menu.
3 Choose the last menu item (Reset Settings).
4 Choose the first item (Reset) and select a language.
Other iPod nano settings, such as song repeat, are also reset. Your synced content is
not deleted or modified.
If you can’t find the Reset Settings menu item, you can restore iPod nano to its original
state and choose a language. See “Updating and Restoring iPod Software” on page 92.
If you can’t see videos or photos on your TV
 Use RCA-type cables made specifically for iPod nano, such as the Apple Component
or Apple Composite AV cables, to connect iPod nano to your TV. Other similar RCA-type cables won’t work.
 Make sure your TV is set to display images from the correct input source (see the
documentation that came with your TV).
 Make sure all cables are connected correctly (see “Watching Videos on a TV
Connected to iPod nano” on page 50).
 Make sure the yellow end of the Apple Composite AV Cable is connected to the
video port on your TV.
 If you’re trying to view a video, choose Videos > Settings and set TV Out to On, and
then try again. If you’re trying to view a slideshow, choose Photos > Slideshow
Settings and set TV Out to On, and then try again.
 If that doesn’t work, choose Videos > Settings (for video) or Photos > Settings (for a
slideshow) and set TV Signal to PAL or NTSC, depending on which type of TV you
have. Try both settings.
If you want to double-check the system requirements
To use iPod nano, you must have:
 One of the following computer configurations:
 A Mac with a USB 2.0 port
 A Windows PC with a USB 2.0 port or a USB 2.0 card installed
 One of the following operating systems:
 Mac OS X v10.4.11 or later
 Windows Vista
 Windows XP Home or Professional with Service Pack 3 or later
 iTunes 9 or later (iTunes can be downloaded from www.apple.com/downloads)
If your Windows PC doesn’t have a USB 2.0 port, you can purchase and install a USB 2.0
card. For more information about cables and compatible USB cards, go to
www.apple.com/ipod.
On a Mac, iPhoto 7.1.5 or later is recommended for adding photos and albums to
iPod nano. iPhoto 8.1 or later is required to use all iPod nano photo features. This
software is optional. iPhoto might already be installed on your Mac. Check the
Applications folder.
On both Mac and Windows PC, iPod nano can sync digital photos from folders on your
computer’s hard disk.
If you want to use iPod nano with a Mac and a Windows PC
If you’re using iPod nano with a Mac and you want to use it with a Windows PC, you
must restore the iPod software for use with the PC (see “Updating and Restoring iPod
Software” on page 92). Restoring the iPod software erases all data from iPod nano,
including all songs.
You cannot switch from using iPod nano with a Mac to using it with a Windows PC
without erasing all data on iPod nano.
If you lock the iPod nano screen and can’t unlock it
Normally, if you can connect iPod nano to the computer it’s authorized to work with,
iPod nano automatically unlocks. If the computer authorized to work with iPod nano is
unavailable, you can connect iPod nano to another computer and use iTunes to restore
iPod software. See the next section for more information.
If you want to change the screen lock combination and you can’t remember the
current combination, you must restore the iPod software and then set a new
combination.
Updating and Restoring iPod Software
You can use iTunes to update or restore iPod software. It’s recommended that you
update iPod nano to use the latest software. You can also restore the software,
which puts iPod nano back to its original state.
 If you choose to update, the software is updated, but your settings and songs aren’t
affected.
 If you choose to restore, all data is erased from iPod nano, including songs, videos,
files, contacts, photos, calendar information, and any other data. All iPod nano
settings are restored to their original state.
To update or restore iPod nano:
1 Make sure you have an Internet connection and have installed the latest version of
iTunes from www.apple.com/downloads.
2 Connect iPod nano to your computer.
3 In iTunes, select iPod nano in the device list and click the Summary tab.
The Version section tells you whether iPod nano is up to date or needs a newer version
of the software.
4 Click Update to install the latest version of the software.
5 If necessary, click Restore to restore iPod nano to its original settings (this erases all data
from iPod nano). Follow the onscreen instructions to complete the restore process.
10 Safety and Cleaning
Read the following important safety and handling
information before using iPod nano to avoid injury.
Keep this safety information and the iPod nano User Guide handy for future reference.
For downloadable versions of the iPod nano User Guide and the latest safety
information, visit support.apple.com/manuals/ipod.
Important Safety Information
Handling iPod nano Do not drop, disassemble, open, crush, bend, deform, puncture,
shred, microwave, incinerate, paint, or insert foreign objects into iPod nano.
Avoiding water and wet locations Do not use iPod nano in rain, or near washbasins or
other wet locations. Take care not to spill any food or liquid on iPod nano. In case
iPod nano gets wet, unplug all cables, turn iPod nano off, and slide the Hold switch to
HOLD before cleaning, and allow it to dry thoroughly before turning it on again. Do not
attempt to dry iPod nano with an external heat source such as a microwave oven or
hair dryer. An iPod nano that has been damaged as a result of exposure to liquids is
not serviceable.
Read all safety information below and operating instructions before using
iPod to avoid injury.
WARNING: Failure to follow these safety instructions could result in fire, electric shock,
or other injury or damage.
Repairing iPod nano Never attempt to repair iPod nano yourself. iPod nano does not
contain any user-serviceable parts. If iPod nano has been submerged in water,
punctured, or subjected to a severe fall, do not use it until you take it to an Apple
Authorized Service Provider. For service information, choose iPod Help from the Help
menu in iTunes or go to www.apple.com/support/ipod. The rechargeable battery in
iPod nano should be replaced only by an Apple Authorized Service Provider. For more
information about batteries, go to www.apple.com/batteries.
Charging iPod nano To charge iPod nano, only use the included Apple Dock
Connector to USB Cable with an Apple USB Power Adapter, or a high-power USB port
on another device that is compliant with the USB 2.0 standard; another Apple branded
product or accessory designed to work with iPod; or a third-party accessory certified to
use the Apple “Made for iPod” logo.
Read all safety instructions for any products and accessories before using with
iPod nano. Apple is not responsible for the operation of third party accessories or their
compliance with safety and regulatory standards.
When you use the Apple USB Power Adapter (sold separately at www.apple.com/
ipodstore) to charge iPod nano, make sure that the power adapter is fully assembled
before you plug it into a power outlet. Then insert the Apple USB Power Adapter firmly
into the power outlet. Do not connect or disconnect the Apple USB Power Adapter
with wet hands. Do not use any power adapter other than an Apple iPod power
adapter to charge your iPod.
The Apple USB Power Adapter may become warm during normal use. Always allow
adequate ventilation around the Apple USB Power Adapter and use care when handling.
Unplug the Apple USB Power Adapter if any of the following conditions exist:
 The power cord or plug has become frayed or damaged.
 The adapter is exposed to rain, liquids, or excessive moisture.
 The adapter case has become damaged.
 You suspect the adapter needs service or repair.
 You want to clean the adapter.
Avoiding hearing damage Permanent hearing loss may occur if the internal speaker,
earbuds or headphones are used at high volume. Set the volume to a safe level. You
can adapt over time to a higher volume of sound that may sound normal but can be
damaging to your hearing. If you experience ringing in your ears or muffled speech,
stop listening and have your hearing checked. The louder the volume, the less time is
required before your hearing could be affected. Hearing experts suggest that to protect
your hearing:
 Limit the amount of time you use earbuds or headphones at high volume.
 Avoid turning up the volume to block out noisy surroundings.
 Turn the volume down if you can’t hear people speaking near you.
For information about how to set a maximum volume limit on iPod nano, see “Setting
the Maximum Volume Limit” on page 45.
Driving and riding safely Use of iPod nano alone, or with headphones (even if used
in only one ear) while operating a vehicle is not recommended and is illegal in some
areas. Check and obey the laws and regulations on the use of mobile devices like
iPod nano in areas where you drive or ride. Be careful and attentive while driving or
riding a bicycle. Stop using iPod nano if you find it disruptive or distracting while
operating any type of vehicle, or performing any other activity that requires your
full attention.
Seizures, blackouts, and eye strain A small percentage of people may be susceptible
to blackouts or seizures (even if they have never had one before) when exposed to
flashing lights or light patterns such as when playing games or watching video. If you
have experienced seizures or blackouts or have a family history of such occurrences,
please consult a physician before playing games (if available) or watching videos on
your iPod nano. Discontinue use and consult a physician if you experience: headaches,
blackouts, seizures, convulsion, eye or muscle twitching, loss of awareness, involuntary
movements, or disorientation. To reduce risk of headaches, blackouts, seizures, and
eyestrain, avoid prolonged use, hold iPod nano some distance from your eyes, use
iPod nano in a well lit room, and take frequent breaks.
Glass parts The outside cover of the display on iPod nano is made of glass. This glass
could break if iPod nano is dropped on a hard surface or receives a substantial impact.
If the glass chips or cracks, do not touch or attempt to remove the broken glass. Stop
using iPod nano until the glass is replaced by an Apple Authorized Service Provider.
Glass cracked due to misuse or abuse is not covered under the warranty.
Repetitive motion When you perform repetitive activities such as playing games on
iPod nano, you may experience occasional discomfort in your hands, arms, shoulders,
neck, or other parts of your body. Take frequent breaks and if you have discomfort
during or after such use, stop use and see a physician.
Exercising Before starting any exercise program, you should have a complete physical
examination by your physician. Do a warmup or stretching exercise before beginning
any workout. Be careful and attentive while exercising. Slow down, if necessary, before
adjusting your device while running. Stop exercising immediately if you feel pain, or
feel faint, dizzy, exhausted, or short of breath. By exercising, you assume the risks
inherent in physical exercise, including any injury that may result from such activity.
Important Handling Information
Carrying iPod nano iPod nano contains sensitive components, including, in some
cases, a hard drive. Do not bend, drop, or crush iPod nano. If you are concerned about
scratching iPod nano, you can use one of the many cases sold separately.
Using connectors and ports Never force a connector into a port. Check for
obstructions on the port. If the connector and port don’t join with reasonable ease,
they probably don’t match. Make sure that the connector matches the port and that
you have positioned the connector correctly in relation to the port.
Operating iPod nano in acceptable temperatures Operate iPod nano in a place where
the temperature is always between 0° and 35° C (32° to 95° F). In low-temperature
conditions, iPod nano play time may temporarily shorten and battery charge time may
temporarily lengthen.
Store iPod nano in a place where the temperature is always between -20° and 45° C
(-4° to 113° F). Don’t leave iPod nano in your car, because temperatures in parked cars
can exceed this range.
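The Fahrenheit figures quoted above follow directly from the standard Celsius-to-Fahrenheit conversion, which a few lines of arithmetic can confirm (this is just an illustration of the published limits):

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

OPERATING_C = (0, 35)    # operating range: 32 to 95 F
STORAGE_C = (-20, 45)    # storage range: -4 to 113 F

def in_range(temp_c, low, high):
    """Return True if temp_c falls within [low, high] degrees Celsius."""
    return low <= temp_c <= high

# The Fahrenheit limits in the text follow from the conversion:
assert c_to_f(35) == 95 and c_to_f(45) == 113
```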
When you’re using iPod nano or charging the battery, it is normal for iPod nano to get
warm. The exterior of iPod nano functions as a cooling surface that transfers heat from
inside the unit to the cooler air outside.
Keeping the outside of iPod nano clean To clean iPod nano, unplug all cables, turn
iPod nano off, and slide the Hold switch to HOLD. Then use a soft, slightly damp, lint-free cloth. Avoid getting moisture in openings. Don’t use window cleaners, household
cleaners, aerosol sprays, solvents, alcohol, ammonia, or abrasives to clean iPod nano.
Disposing of iPod nano properly For information about the proper disposal of
iPod nano, including other important regulatory compliance information, see
“Regulatory Compliance Information” on page 98.
NOTICE: Failure to follow these handling instructions could result in damage to
iPod nano or other property.
11 Learning More, Service,
and Support
You can find more information about using iPod nano in
onscreen help and on the web.
The following table describes where to get more iPod-related software and service
information.
To learn about Do this
Service and support,
discussions, tutorials, and
Apple software downloads
Go to: www.apple.com/support/ipodnano
Using iTunes Open iTunes and choose Help > iTunes Help.
For an online iTunes tutorial (available in some areas only), go to:
www.apple.com/support/itunes
Using iPhoto (on Mac OS X) Open iPhoto and choose Help > iPhoto Help.
Using iCal (on Mac OS X) Open iCal and choose Help > iCal Help.
The latest information on
iPod nano
Go to: www.apple.com/ipodnano
Registering iPod nano To register iPod nano, install iTunes on your computer and connect
iPod nano.
Finding the iPod nano serial
number
Look at the back of iPod nano or choose Settings > About and
press the Center button. In iTunes (with iPod nano connected to
your computer), select iPod nano in the device list and click the
Settings tab.
Obtaining warranty service First follow the advice in this booklet, the onscreen help, and
online resources. Then go to: www.apple.com/support/ipodnano
Regulatory Compliance Information
FCC Compliance Statement
This device complies with part 15 of the FCC rules.
Operation is subject to the following two conditions:
(1) This device may not cause harmful interference,
and (2) this device must accept any interference
received, including interference that may cause
undesired operation. See instructions if interference
to radio or TV reception is suspected.
Radio and TV Interference
This computer equipment generates, uses, and can
radiate radio-frequency energy. If it is not installed
and used properly—that is, in strict accordance with
Apple’s instructions—it may cause interference with
radio and TV reception.
This equipment has been tested and found to
comply with the limits for a Class B digital device in
accordance with the specifications in Part 15 of FCC
rules. These specifications are designed to provide
reasonable protection against such interference in a
residential installation. However, there is no
guarantee that interference will not occur in a
particular installation.
You can determine whether your computer system is
causing interference by turning it off. If the
interference stops, it was probably caused by the
computer or one of the peripheral devices.
If your computer system does cause interference to
radio or TV reception, try to correct the interference
by using one or more of the following measures:
 Turn the TV or radio antenna until the interference
stops.
 Move the computer to one side or the other of the
TV or radio.
 Move the computer farther away from the TV or
radio.
 Plug the computer in to an outlet that is on a
different circuit from the TV or radio. (That is, make
certain the computer and the TV or radio are on
circuits controlled by different circuit breakers or
fuses.)
If necessary, consult an Apple Authorized Service
Provider or Apple. See the service and support
information that came with your Apple product. Or,
consult an experienced radio/TV technician for
additional suggestions.
Important: Changes or modifications to this product
not authorized by Apple Inc. could void the EMC
compliance and negate your authority to operate
the product.
This product was tested for EMC compliance under
conditions that included the use of Apple peripheral
devices and Apple shielded cables and connectors
between system components.
It is important that you use Apple peripheral devices
and shielded cables and connectors between system
components to reduce the possibility of causing
interference to radios, TV sets, and other electronic
devices. You can obtain Apple peripheral devices and
the proper shielded cables and connectors through
an Apple Authorized Reseller. For non-Apple
peripheral devices, contact the manufacturer or
dealer for assistance.
Responsible party (contact for FCC matters only):
Apple Inc. Corporate Compliance
1 Infinite Loop, MS 26-A
Cupertino, CA 95014
Industry Canada Statement
This Class B device meets all requirements of the
Canadian interference-causing equipment
regulations.
Cet appareil numérique de la classe B respecte
toutes les exigences du Règlement sur le matériel
brouilleur du Canada.
VCCI Class B Statement
Korea Class B Statement
Russia
European Community
Battery Replacement
The rechargeable battery in iPod nano should be
replaced only by an authorized service provider. For
battery replacement services go to:
www.apple.com/support/ipod/service/battery
Disposal and Recycling Information
Your iPod must be disposed of properly according to
local laws and regulations. Because this product
contains a battery, the product must be disposed of
separately from household waste. When your iPod
reaches its end of life, contact Apple or your local
authorities to learn about recycling options.
For information about Apple’s recycling program,
go to: www.apple.com/environment/recycling
Deutschland: Dieses Gerät enthält Batterien. Bitte
nicht in den Hausmüll werfen. Entsorgen Sie dieses
Gerätes am Ende seines Lebenszyklus entsprechend
der maßgeblichen gesetzlichen Regelungen.
Nederlands: Gebruikte batterijen kunnen worden
ingeleverd bij de chemokar of in een speciale
batterijcontainer voor klein chemisch afval (kca)
worden gedeponeerd.
China:
Taiwan:
European Union—Disposal Information:
This symbol means that according to local laws and
regulations your product should be disposed of
separately from household waste. When this product
reaches its end of life, take it to a collection point
designated by local authorities. Some collection
points accept products for free. The separate
collection and recycling of your product at the time
of disposal will help conserve natural resources and
ensure that it is recycled in a manner that protects
human health and the environment.
Apple and the Environment
At Apple, we recognize our responsibility to
minimize the environmental impacts of our
operations and products.
For more information, go to:
www.apple.com/environment
© 2009 Apple Inc. All rights reserved. Apple, the Apple logo, iCal, iLife,
iPhoto, iPod, iPod nano, iPod Socks, iTunes, Mac, Macintosh, and Mac
OS are trademarks of Apple Inc., registered in the U.S. and other
countries. Finder, the FireWire logo, and Shuffle are trademarks of
Apple Inc. iTunes Store is a service mark of Apple Inc., registered in the
U.S. and other countries. NIKE is a trademark of NIKE, Inc. and its
affiliates and is used under license. Other company and product
names mentioned herein may be trademarks of their respective
companies.
Mention of third-party products is for informational purposes only and
constitutes neither an endorsement nor a recommendation. Apple
assumes no responsibility with regard to the performance or use of
these products. All understandings, agreements, or warranties, if any,
take place directly between the vendors and the prospective users.
Every effort has been made to ensure that the information in this
manual is accurate. Apple is not responsible for printing or clerical
errors.
019-1716/2009-11
Index
A
accessibility
using spoken menus 83
accessing additional options 6, 39
accessories for iPod 85
adding album artwork 21
adding menu items 10, 45
adding music
disconnecting iPod 13
from more than one computer 25, 27
manually 30
methods 24
On-The-Go playlists 42
tutorial 97
adding other content 30
adding photos
about 67
all or selected photos 67, 68
from computer to iPod 67
from iPod to computer 71
full-resolution image 68
address book, syncing 81
alarms
deleting 78
setting 78
album, browsing by 42
album artwork
adding 21
viewing 37
Apple USB Power Adapter 16
charging the battery 17
artist, browsing by 42
audiobooks 48
adding to iPod nano 29
setting play speed 48
automatic syncing 25, 27
AV cables 50, 51, 71
B
backlight
setting timer 11
turning on 6, 11
battery
charge states when disconnected 17
charging 16
Energy Saver 18
improving performance 18
rechargeable 18
replacing 18
very low 88
viewing charge status 16
brightness setting 11
browsing
by album 42
by artist 42
quickly 11, 38
songs 6, 34
videos 6
with Cover Flow 37, 38
buttons
Center 5
disabling with Hold switch 6
Eject 15
C
calendar events, syncing 81
Camera Roll 54
Center button, using 5, 34
Charging, Please Wait message 88
charging the battery
about 16
using the Apple USB Power Adapter 17
using your computer 16
when battery very low 88
cleaning iPod 96
Click Wheel
browsing songs 34
turning off the Click Wheel sound 11
using 5
clocks
adding for other time zones 77
settings 77
closed captions 51
compilations 45
component AV cable 50, 51, 71
composite AV cable 50, 51, 71
computer
adding photos to iPod 67
charging the battery 16
connecting iPod 13
getting photos from iPod 71
problems connecting iPod 88
requirements 91
connecting iPod
about 13
charging the battery 16
to a TV 51, 71
contacts
sorting 82
syncing 81
controls
disabling with Hold switch 8
using 5
converting unprotected WMA files 89
Cover Flow 37, 38
crossfading 47
customizing the Music menu 45
D
data files, storing on iPod 84
date and time
setting 77
viewing 77
determining battery charge 17
diamond icon on scrubber bar 6
digital photos. See photos
disconnecting iPod
about 13
during music update 13
ejecting first 14
instructions 15
troubleshooting 87
disk, using iPod as 84
displaying time in title bar 77
downloading
See also adding; syncing
E
Eject button 15
ejecting before disconnecting 13, 14
Energy Saver 18
external disk, using iPod as 55, 84
F
fast-forwarding a song or video 6
file formats, supported 89
finding your iPod serial number 8
fit video to screen 51
font size
setting 10
full-resolution images 68
G
games 76
buying games 76
Genius
creating a playlist 7, 40
Genius slider 35
playing a playlist 7, 40, 41
saving a playlist 7, 40
syncing to iPod nano 26
turning on in iTunes 22
using on iPod nano 39
Genius Mixes
playing 7, 40
syncing to iPod nano 26
getting help 97
getting information about your iPod 12
getting started with iPod 91
H
hearing loss warning 85
help, getting 97
Hold switch 6, 8
I
iCal, getting help 97
images. See photos
importing
video 22
importing contacts, calendars, and to-do lists. See
syncing
iPhoto
getting help 97
importing camera videos 56
recommended version 91
iPod Dock 13
iPod Dock Connector 13
iPod Updater application 92
iTunes
ejecting iPod 15
getting help 97
setting not to open automatically 84
Sound Check 47
iTunes U 29, 48, 50
L
language
resetting 90
specifying 10
letterbox 51
lightning bolt on battery icon 16
Live Pause 61
navigating 63
locating your iPod serial number 8
locking iPod screen 79, 80
lyrics
adding 21
viewing on iPod 37
M
Mac OS X operating system 91
main menu
adding or removing items 10
opening 5
returning to 6
settings 10, 45
using 9
managing iPod manually 25, 30
manually adding 30
maximum volume limit, setting 45
memos, recording 74
menu items
adding or removing 10, 45
choosing 6
returning to main menu 6
returning to previous menu 6
modifying playlists 31
movies
See also videos
music
iPod not playing 87
rating 36
setting for slideshows 70
tutorial 97
See also adding music; songs
Music menu, customizing 45
music videos
syncing 26
N
navigating quickly 11
notes, storing and reading 84
Now Playing screen
moving to any point in a song or video 6
scrubber bar 6
NTSC TV 50, 70
O
On-The-Go playlists
copying to computer 42
making 39, 41
rating songs 36
saving 42
operating system requirements 91
organizing your music 21
P
PAL TV 50, 70
pausing
a song 6
a video 6
Pedometer 72
settings 73
workout history 73
phone numbers, syncing 81
photo library 67
photos
adding to iPod nano 67
deleting 68, 71
full-resolution 68
importing 67
syncing 67, 68
viewing on iPod 69
viewing slideshows 70
playing
games 76
songs 6
videos 6
playlists
adding songs 6, 31
making on iPod 39, 41
modifying 31
On-The-Go 39, 41
plug on battery icon 16
podcasting 47
podcasts
listening 47
updating 28
ports
RCA video and audio 51, 71
USB 91
preview panel 10
previous menu, returning to 6
problems. See troubleshooting
Q
quick navigation 11
R
Radio 58
antenna 58, 90
Live Pause 61
Radio menu 65
screens 59
tagging songs 64
tuning 60
random play 6
rating songs 36
RCA video and audio ports 51, 71
rechargeable batteries 18
recorded videos
adding music 57
deleting from iPod nano 55
importing to your computer 55
playing 54
sharing with iPhoto 56
recording voice memos 74
registering iPod 97
relative volume, playing songs at 46
removing menu items 10, 45
repairing iPod 94
replacing battery 18
replaying a song or video 6
requirements
computer 91
operating system 91
reset all settings 12
resetting iPod 6, 86
resetting the language 90
restore message 88
restoring iPod software 92
rewinding a song or video 6
S
Safely Remove Hardware icon 15
safety considerations
setting up iPod 93
saving On-The-Go playlists 42
screen brightness, setting 11
screen lock 79
scrolling quickly 11
scrubber bar 6
searching
iPod 44
Select button. See Center button
serial number 8, 12
serial number, locating 97
service and support 97
sets of songs. See playlists
setting combination for iPod 79
settings
about your iPod 12
alarm 78
audiobook play speed 48
backlight timer 11
brightness 11
Click Wheel sound 11
date and time 77
font size 10
language 10
main menu 10, 45
PAL or NTSC TV 50, 70
playing songs at relative volume 46
repeating songs 44
reset all 12
shuffle songs 43
sleep timer 78
slideshow 70
TV 50
volume limit 45
setup 24
shake to shuffle 43
shuffle 35
shuffling songs on iPod 6, 43
sleep mode and charging the battery 16
sleep timer, setting 78
slideshows
background music 70
random order 70
settings 70
viewing on iPod 70
software
getting help 97
iPhoto 91
iPod Updater 92
updating 92
songs
adding to On-The-Go playlists 6
browsing 6
browsing and playing 34
fast-forwarding 6
pausing 6
playing 6
playing at relative volume 46
rating 36
repeating 44
replaying 6
rewinding 6
shuffling 6, 43
skipping ahead 6
viewing lyrics 21
See also music
sorting contacts 82
Sound Check 47
spoken menus 83
standard TV 50
stopwatch 79
storing
data files on iPod 84
notes on iPod 84
supported operating systems 91
suppressing iTunes from opening 84
syncing
address book 81
music 24
music videos 26
photos 67, 68
to-do lists 81
See also adding
system requirements 91
T
tagging songs 64
previewing and purchasing 64
time, displaying in title bar 77
timer, setting for backlight 11
time zones, clocks for 77
title bar, displaying time 77
to-do lists, syncing 81
transitions for slides 70
troubleshooting
connecting iPod to computer 88
cross-platform use 91
disconnecting iPod 87
iPod not playing music 87
iPod won’t respond 86
resetting iPod 86
restore message 88
safety considerations 93
setting incorrect language 90
slow syncing of music or data 89
software update and restore 92
TV slideshows 90
unlocking iPod screen 91
turning iPod on and off 6
tutorial 97
TV
connecting to iPod 51, 71
PAL or NTSC 50, 70
settings 50
viewing slideshows 51, 71
TV shows
See also videos
U
unlocking iPod screen 80, 91
unresponsive iPod 86
unsupported audio file formats 89
updating and restoring software 92
USB 2.0 port
recommendation 91
slow syncing of music or data 89
USB port on keyboard 13
Use iTunes to restore message in display 88
V
Video Camera 52
importing recorded videos 55
playing recorded videos 54
recording video 53
sharing recorded videos 56
special effects 53
video captions 51
video podcasts
viewing on a TV 50
videos
adding to iPod 27
browsing 6
fast-forwarding 6
importing from video camera 55
pausing 6
playing 6
playing recorded 54
renting 22
replaying 6
rewinding 6
skipping ahead 6
viewing on a TV 50
viewing on iPod 49
viewing album artwork 37
viewing lyrics 37
viewing photos 69
viewing slideshows
on a TV 51, 71
on iPod 70
settings 70
troubleshooting 90
Voice Memos
recording 74
syncing with your computer 76
VoiceOver
setting up 32
using 44
volume
changing 6
setting maximum limit 45
W
warranty service 97
widescreen TV 50
Windows
supported operating systems 91
troubleshooting 91
WMA files, converting 89
iPod touch
Features Guide

Contents
Chapter 1 4 Getting Started
4 What You Need
4 Setting Up iPod touch
5 Getting Music, Videos, and Other Content onto iPod touch
9 Disconnecting iPod touch from Your Computer
Chapter 2 10 Basics
10 iPod touch at a Glance
12 Home Screen
15 iPod touch Buttons and Touchscreen
21 Connecting to the Internet
22 Charging the Battery
23 Cleaning iPod touch
Chapter 3 24 Music and Video
24 Syncing Content from Your iTunes Library
25 Playing Music
30 Watching Videos
32 Setting a Sleep Timer
33 Changing the Buttons on the Music Screen
Chapter 4 34 Photos
34 Syncing Photos from Your Computer
35 Viewing Photos
37 Using a Photo as Wallpaper
Chapter 5 39 iTunes Wi-Fi Music Store
39 Browsing and Searching
42 Purchasing Songs and Albums
43 Syncing Purchased Content
44 Verifying Purchases
44 Changing Your iTunes Store Account Information
Chapter 6 45 Applications
45 Safari
50 Calendar
53 Mail
58 Contacts
60 YouTube
63 Stocks
64 Maps
69 Weather
70 Clock
72 Calculator
73 Notes
Chapter 7 74 Settings
74 Wi-Fi
75 Brightness
75 General
79 Music
80 Video
80 Photos
81 Mail
83 Safari
84 Contacts
84 Restoring or Transferring Your iPod touch Settings
Appendix A 86 Tips and Troubleshooting
86 General Suggestions
89 Updating and Restoring iPod touch Software
90 Using iPod touch Accessibility Features
Appendix B 91 Learning More, Service, and Support
Index 93

1 Getting Started
What You Need
To use iPod touch, you need:
• A Mac or a PC with a USB 2.0 port and one of the following operating systems:
  • Mac OS X version 10.4.10 or later
  • Windows XP Home or Professional with Service Pack 2 or later
  • Windows Vista Home Premium, Business, Enterprise, or Ultimate edition
• iTunes 7.6 or later, available at www.apple.com/itunes
• An iTunes Store account (to purchase music over Wi-Fi)
• An Internet connection
Setting Up iPod touch
Before you can use any of the iPod touch features, you must use iTunes to set up
iPod touch. You can also register iPod touch and create an iTunes Store account
(available in some countries) if you don’t already have one.
Set up iPod touch
1 Download and install the latest version of iTunes from www.apple.com/itunes.
2 Connect iPod touch to a USB 2.0 port on your Mac or PC using the included cable.
WARNING: To avoid injury, read all operating instructions in this guide and safety information in the Important Product Information Guide at www.apple.com/support/manuals/ipod before using iPod touch.
The USB port on most keyboards doesn’t provide enough power. Unless your keyboard
has a high-powered USB 2.0 port, you must connect iPod touch to a USB 2.0 port on
your computer.
3 Follow the onscreen instructions in iTunes to set up iPod touch and sync your music,
video, photos, and other content.
Your computer must be connected to the Internet.
By default, iTunes automatically syncs all songs and videos in your iTunes library to
iPod touch. If you have more content in your library than will fit on iPod touch, iTunes
alerts you that it can’t sync your content. You’ll need to use iTunes to select some of
your songs, videos, and other content to sync. The following section tells you how.
Getting Music, Videos, and Other Content onto
iPod touch
iPod touch lets you enjoy music, videos, photos, and much more, with its great sound
and stunning 3.5-inch widescreen display. You get media and other content onto
iPod touch by connecting iPod touch to your computer and using iTunes to sync your
iTunes library and other information on your computer.
You can set iTunes to sync any or all of the following:
• Music and audiobooks
• Movies
• TV shows
• Podcasts
• Photos
• Contacts—names, phone numbers, addresses, email addresses, and so on
• Calendars—appointments and events
• Email account settings
• Webpage bookmarks
Music, movies, TV shows, and podcasts are synced from your iTunes library. If you don’t
already have content in iTunes, the iTunes Store (part of iTunes and available in some
countries) makes it easy to purchase or subscribe to content and download it to iTunes.
You can also get music into iTunes from your CDs. To learn about iTunes and the iTunes
Store, open iTunes and choose Help > iTunes Help.
Photos, contacts, calendars, and webpage bookmarks are synced from applications on
your computer, as described below.
Email account settings are only synced from your computer’s email application to
iPod touch. This allows you to customize your email accounts on iPod touch without
affecting email account settings on your computer.
You can set iPod touch to sync with only a portion of what’s on your computer.
For example, you might want to sync certain playlists, the most recent unwatched
movie, the most recent episodes of your favorite TV shows, and all unplayed podcasts.
The sync settings make it easy to get just what you want onto iPod touch. You can
adjust sync settings whenever iPod touch is connected to your computer.
Important: You cannot connect and sync more than one iPod at a time. Disconnect
one before connecting another. You should be logged in to your own user account on
the computer before connecting iPod touch. On a PC, if you sync more than one iPod
to the same user account, use the same sync settings for each.
Syncing iPod touch
You use the iPod touch settings panes in iTunes to specify the iTunes content and other
information you want to sync to iPod touch.
Sync iPod touch
1 Connect iPod touch to your computer, and open iTunes (if it doesn’t open
automatically).
The USB port on most keyboards doesn’t provide enough power. You must connect
iPod touch to a USB 2.0 port on your computer, unless your keyboard has a high-powered USB 2.0 port.
2 Select iPod touch in the iTunes source list (below Devices, on the left).
3 Configure the sync settings in each of the settings panes.
4 Click Apply in the lower-right corner of the screen.
The following sections provide an overview of each of the iPod touch settings panes.
For more information, open iTunes and choose Help > iTunes Help.
Summary Pane
Select “Open iTunes when this iPod is connected” to have iTunes open and sync
iPod touch automatically whenever you connect it to your computer. Deselect this
option if you want to sync only by clicking the Sync button in iTunes. For more
information about preventing automatic syncing, see page 9.
Select “Sync only checked songs and videos” if you want to sync only items that are
checked in your iTunes library.
Select “Manually manage music and videos” to turn off syncing in the Music, Movies,
and TV Shows settings panes.
Music, Movies, TV Shows, and Podcasts Panes
Use these panes to specify the iTunes library content that you want to sync. You can
sync all music, movies, TV shows, and podcasts, or select the specific playlists and items
you want on iPod touch. Audiobooks and music videos are synced along with music.
If you want to watch rented movies on iPod touch, transfer them to iPod touch using
the Movies pane in iTunes.
If there’s not enough room on iPod touch for all the content you’ve specified,
iTunes asks if you want to create a special playlist and set it to sync with iPod touch.
Then iTunes randomly fills the playlist.
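The fitting behavior described above can be sketched as a random fill under a size budget. This is an illustrative model only, not Apple's actual algorithm; the function name, song names, and sizes are invented for the example:

```python
import random

def fill_playlist(library, capacity_mb, seed=None):
    """Randomly add songs from library (name -> size in MB) until
    no remaining song fits within the device's capacity."""
    rng = random.Random(seed)
    remaining = dict(library)
    playlist, used = [], 0
    while True:
        # Songs that still fit in the space left on the device.
        fits = [name for name, size in remaining.items()
                if size <= capacity_mb - used]
        if not fits:
            break
        pick = rng.choice(fits)  # random pick, like the special playlist
        playlist.append(pick)
        used += remaining.pop(pick)
    return playlist, used

songs = {"Song A": 4, "Song B": 6, "Song C": 5, "Song D": 3}
playlist, used = fill_playlist(songs, capacity_mb=10, seed=7)
```

The sketch only guarantees the invariant the manual implies: the result never exceeds capacity, and it stops only when nothing else fits.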
Photos Pane
You can sync photos from iPhoto 4.0.3 or later on a Mac, or from Adobe Photoshop
Album 2.0 or later or Adobe Photoshop Elements 3.0 or later on a PC. You can also sync
photos from any folder on your computer that contains images.
Info Pane
The Info pane lets you configure the sync settings for your contacts, calendars, and web
browser bookmarks.
Contacts
You can sync contacts with applications such as Mac OS X Address Book, Microsoft
Entourage, and Yahoo! Address Book on a Mac, or with Yahoo! Address Book, Windows
Address Book (Outlook Express), or Microsoft Outlook 2003 or 2007 on a PC. (On a Mac,
you can sync contacts on your computer with more than one application. On a PC, you
can sync contacts with only one application.)
If you sync with Yahoo! Address Book, you only need to click Configure to enter your
new login information when you change your Yahoo! ID or password after you’ve set
up syncing.
Note: Syncing won’t delete any contact in Yahoo! Address Book that contains a
Messenger ID, even if you’ve deleted the contact from your address book on your
computer. To delete a contact with a Messenger ID, log in to your Yahoo! account and
delete the contact using Yahoo! Address Book online.
Calendars
You can sync calendars from applications such as iCal and Microsoft Entourage on a
Mac, or Microsoft Outlook on a PC. (On a Mac, you can sync calendars on your
computer with more than one application on your computer. On a PC, you can sync
calendars with only one application.)
Mail Accounts
You can sync email account settings from Mail on a Mac, and from Microsoft Outlook
2003 or 2007 or Outlook Express on a PC. Account settings are only transferred from
your computer to iPod touch. Changes you make to an email account on iPod touch
don’t affect the account on your computer.
The password for your Yahoo! email account isn’t saved on your computer.
If you sync a Yahoo! email account, you must enter the password on iPod touch.
From the Home screen choose Settings > Mail, choose your Yahoo! account, then enter
your password in the password field.
Web Browser
You can sync bookmarks from Safari on a Mac, or Safari or Microsoft Internet Explorer
on a PC.
Advanced
These options let you replace the information on iPod touch with the information on
your computer during the next sync.
Preventing Automatic Syncing
You may want to prevent iPod touch from syncing automatically if you prefer to add
items manually, or when you connect iPod touch to a computer other than the one you
sync with.
Turn off automatic syncing for iPod touch
m Connect iPod touch to your computer, then select iPod touch in the iTunes source list
(below Devices, on the left) and click the Summary tab. Deselect “Open iTunes when
this iPod is connected.” You can still use iTunes to sync manually by clicking the
Sync button.
Prevent automatic syncing one time, without changing settings
m Open iTunes. Then, as you connect iPod touch to your computer, press and hold
Command-Option (if you’re using a Mac) or Shift-Control (if you’re using a PC) until you
see iPod touch in the iTunes source list (below Devices, on the left).
Sync manually
m Select iPod touch in the iTunes source list, then click Sync in the lower-right corner of
the window. Or, if you’ve changed any sync settings, click Apply.
Disconnecting iPod touch from Your Computer
Unless iPod touch is syncing with your computer, you can disconnect it from your
computer at any time.
When iPod touch is syncing with your computer, it shows “Sync in progress.” If you
disconnect iPod touch before it finishes syncing, some data may not be transferred.
When iPod touch finishes syncing, iTunes shows “iPod sync is complete.”
To cancel a sync so you can disconnect iPod touch, drag the “slide to cancel” slider.

2 Basics
iPod touch at a Glance
Sleep/Wake button
Headphones port
Dock connector
Wi-Fi antenna
Home button
Touch screen
Application icons
Status bar
iPod touch Accessories

Item                         What you can do with it
Stereo headphones            Listen to music and videos.
Dock connector to USB cable  Use the cable to connect iPod touch to your computer to sync and charge, or to the Apple USB Power Adapter (available separately) to charge. The cable can be used with the optional dock or plugged directly into iPod touch.
Stand                        Stand up iPod touch for viewing videos or photo slideshows.
Polishing cloth              Wipe the iPod touch screen.

Status Icons
The icons in the status bar at the top of the screen give information about iPod touch:

Status icon  What it means
Wi-Fi        Shows that iPod touch is connected to a Wi-Fi network. The more bars, the stronger the connection. See page 21.
Lock         Shows that iPod touch is locked. See page 15.
Play         Shows that a song is playing. See page 26.
Alarm        Shows that an alarm is set. See page 71.
Battery      Shows the battery level or charging status. See page 22.
Home Screen
Press the Home button at any time to see the applications on iPod touch. Tap any
application icon to get started.
iPod touch Applications
The following applications are included with iPod touch:
Music
Listen to your songs, podcasts, and audiobooks.
Videos
Watch movies, music videos, video podcasts, and TV shows.
Photos
View photos transferred from your computer. View them in portrait or landscape
mode. Zoom in on any photo for a closer look. Watch a slideshow. Use photos as
wallpaper.
iTunes
Search the iTunes Wi-Fi Music Store music catalog, or browse, preview, and purchase
new releases, top-ten songs and albums, and more.1 In select Starbucks locations,2 find out what song is playing in the café, then buy it instantly. Browse, preview, and purchase other songs from featured Starbucks Collections.
Safari
Browse websites over a Wi-Fi connection. Rotate iPod touch sideways for viewing in
landscape orientation. Double-tap to zoom in or out—Safari automatically fits sections
to the screen for easy reading. Add Safari Web Clips to the Home screen for fast access
to favorite websites.
Calendar
View your iCal, Microsoft Entourage, or Microsoft Outlook calendar synced from
your computer.
Mail
Send and receive email using your existing email accounts. iPod touch works with the
most popular email systems—including Yahoo! Mail, Google email, AOL, and .Mac
Mail—as well as most industry-standard POP3 and IMAP email systems.
Contacts
Get contact information synced from Mac OS X Address Book, Yahoo! Address Book,
Windows Address Book (Outlook Express), or Microsoft Outlook. Add, change,
or delete contacts, which get synced back to your computer.
YouTube
Play videos from YouTube’s online collection.3 Search for any video, or browse featured, most viewed, most recently updated, and top-rated videos.
Stocks
Watch your favorite stocks, updated automatically from the Internet.
Maps
See a street map, satellite, or hybrid view of locations around the world. Zoom in for a closer look. Find your current approximate location. Get detailed driving directions and see current highway traffic conditions. Find businesses in the area.4
Weather
Get current weather conditions and a six-day forecast. Store your favorite cities for a quick weather report anytime.
Clock
View the time in cities around the world—create clocks for your favorites. Set one or more alarms. Use the stopwatch, or set a countdown timer.
Calculator
Add, subtract, multiply, and divide.
Notes
Jot notes on the go—reminders, grocery lists, brilliant ideas. Send them in email.
Settings
Adjust all iPod touch settings in one convenient place. Join Wi-Fi networks. Set your wallpaper and screen brightness, and settings for music, video, photos, and more. Set auto-lock and a passcode for security.

1 Not available in all areas.
2 In the U.S. only.
3 Not available in all areas.
4 Some features or services not available in all areas.

Customizing the Home Screen Layout
You can customize the layout of icons on the Home screen—including the Dock icons along the bottom of the screen. If you want, arrange them over multiple Home screens.
Rearrange icons
1 Touch and hold any Home screen icon until all the icons begin to wiggle.
2 Arrange the icons by dragging them.
3 Press the Home button to save your arrangement.
You can also add links to your favorite webpages on the Home screen. See “Adding Safari Web Clips to the Home Screen” on page 49.
Create additional Home screens
m While arranging icons, drag a button to the edge of the screen until a new screen
appears. You can flick to return to the original screen and drag more icons to the new
screen.
You can create up to nine screens. The number of dots at the bottom shows the
number of screens you have, and indicates which screen you are viewing.
Switch to another Home screen
m Flick left or right.
Reset your Home screen to the default layout
m Choose Settings > General > Reset and tap Reset Home Screen Layout.
iPod touch Buttons and Touchscreen
A few simple buttons and a high-resolution touchscreen make it easy to learn and use
iPod touch.
Locking iPod touch and Turning It On or Off
When you’re not using iPod touch, you can lock it. When iPod touch is locked, nothing
happens if you touch the screen. By default, if you don’t touch the screen for a minute,
iPod touch locks automatically.
Locking iPod touch does not stop music playback, so you can lock iPod touch and
continue to listen to music. To temporarily display playback controls when iPod touch is
locked, double-click the Home button.
For information about locking iPod touch with a passcode, see “Passcode Lock” on
page 77.
To Do this
Lock iPod touch Press the Sleep/Wake button.
Unlock iPod touch Press the Home button or the Sleep/Wake button, then
drag the slider.
Turn iPod touch completely off Press and hold the Sleep/Wake button for a few seconds until
the red slider appears, then drag the slider.
Turn iPod touch on Press and hold the Sleep/Wake button until the Apple logo
appears.
Display playback controls when
iPod touch is locked
Double-click the Home button.
Using the Touchscreen
The controls on the touchscreen change dynamically depending on the task you are
performing.
m Tap any application to open it.
m Press the Home button below the display at any time to return to the Home screen
and see all the applications.
m Drag up or down to scroll.
Dragging your finger to scroll doesn’t choose or activate anything on the screen.
m Flick to scroll quickly.
You can wait for scrolling to stop, or tap or touch anywhere on the screen to stop it
immediately. Tapping or touching to stop scrolling doesn’t choose or activate anything
on the screen.
m Some lists have an index along the right side. Tap a letter to jump to items starting with
that letter. Drag your finger along the index to scroll quickly through the list.
m Tap an item in the list to choose it. Depending on the list, tapping an item can do
different things—for example, it may open a new list, play a song, or show someone’s
contact information.
m The back button in the upper-left corner shows the name of the previous list. Tap it to
go back.
m When viewing photos, web pages, email, or maps, you can zoom in and out. Pinch your
fingers together or apart. For photos and web pages, you can double-tap (tap twice
quickly) to zoom in, then double-tap again to zoom out. For maps, double-tap to zoom
in and tap once with two fingers to zoom out.
Onscreen Keyboard
You can use the onscreen keyboard to enter text, such as contact information.
The intelligent keyboard automatically suggests corrections as you type (some
languages only), to help prevent mistyped words.
iPod touch provides keyboards in multiple languages, and supports the following
keyboard formats:
• QWERTY
• QWERTZ
• AZERTY
• QZERTY
• Japanese IME
See “Keyboard” on page 78 for information about turning on keyboards for different
languages and other keyboard settings.
Entering text
Start by typing with just your index finger. As you get more proficient, you can type
more quickly by using your thumbs.
1 Tap a text field, such as in a note or new contact, to bring up the keyboard.
2 Tap keys on the keyboard.
As you type, each letter appears above your thumb or finger. If you touch the wrong
key, you can slide your finger to the correct key. The letter is not entered until you
release your finger from the key.
To Do this
Type uppercase Tap the Shift key before tapping a letter.
Quickly type a period and space Double-tap the space bar.
Turn caps lock on Enable caps lock (see page 78), then double-tap the
Shift key. The Shift key turns blue, and all letters you type
are uppercase. Tap the Shift key again to turn caps lock off.
Show numbers, punctuation, or symbols
Tap the Number key. Tap the Symbol key to see additional punctuation and symbols.
Accepting or Rejecting Dictionary Suggestions
iPod touch has dictionaries for English, English (UK), French, French (Canada), German,
Japanese, Spanish, Italian, and Dutch. The appropriate dictionary is activated
automatically when you select a keyboard on iPod touch.
iPod touch uses the active dictionary to suggest corrections or complete the word
you’re typing. If you’re using a keyboard that doesn’t have a dictionary, iPod touch
won’t make suggestions.
You don’t need to interrupt your typing to accept the suggested word.
• To use the suggested word, type a space, punctuation mark, or return character.
• To reject the suggested word, finish typing the word as you want it, then tap the “x” to dismiss the suggestion before typing anything else. Each time you reject a suggestion for the same word, iPod touch becomes more likely to accept your word.
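The accept/reject learning described above can be sketched as a toy model. The class, the rejection counter, and the threshold of two rejections are invented for illustration and do not reflect Apple's actual implementation:

```python
class SuggestionModel:
    """Toy model: each time the user rejects the suggestion for the
    same typed word, the keyboard becomes more likely to keep the
    user's word instead of auto-correcting it."""

    def __init__(self, threshold=2):
        # threshold is an assumed number of rejections, for illustration
        self.rejections = {}
        self.threshold = threshold

    def reject(self, typed_word):
        """Record that the user dismissed the suggestion for typed_word."""
        self.rejections[typed_word] = self.rejections.get(typed_word, 0) + 1

    def should_autocorrect(self, typed_word):
        """Auto-correct only until the word has been rejected enough times."""
        return self.rejections.get(typed_word, 0) < self.threshold

model = SuggestionModel()
model.reject("adress")                     # user dismissed the suggestion once
print(model.should_autocorrect("adress"))  # prints: True
model.reject("adress")                     # dismissed again
print(model.should_autocorrect("adress"))  # prints: False
```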
Editing text
m Touch and hold to see a magnified view, then drag to position the insertion point.
Connecting to the Internet
iPod touch connects to the Internet via Wi-Fi networks. iPod touch can join AirPort
and other Wi-Fi networks at home, at work, or at Wi-Fi hotspots around the world.
When joined to a Wi-Fi network that is connected to the Internet, iPod touch connects
to the Internet automatically whenever you use Mail, Safari, YouTube, Stocks, Maps,
Weather, or the iTunes Wi-Fi Music Store.
Many Wi-Fi networks can be used free of charge. Some Wi-Fi networks require a fee.
To join a Wi-Fi network at a hotspot where charges apply, you can usually open Safari
to see a webpage that allows you to sign up for service.
Joining a Wi-Fi Network
The Wi-Fi settings let you turn on Wi-Fi and join Wi-Fi networks.
Turn on Wi-Fi
m Choose Settings > Wi-Fi and turn Wi-Fi on.
Join a Wi-Fi network
m Choose Settings > Wi-Fi, wait a moment as iPod touch detects networks in range,
then select a network. If necessary, enter a password and tap Join (networks that
require a password appear with a lock icon).
Once you’ve joined a Wi-Fi network manually, iPod touch will automatically connect to
it whenever the network is in range. If more than one previously used network is in
range, iPod touch joins the one last used.
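As a rough model of the reconnection rule just described (join the most recently used of the known networks currently in range), one might write the following; the data structures, SSIDs, and timestamps are assumptions made for the sketch:

```python
def pick_network(last_used, in_range):
    """last_used maps SSID -> timestamp of the last successful join
    (higher = more recent). Return the most recently used known
    network that is currently in range, or None if none is visible."""
    candidates = [ssid for ssid in in_range if ssid in last_used]
    if not candidates:
        return None
    # Prefer the network joined most recently.
    return max(candidates, key=lambda ssid: last_used[ssid])

known = {"Home": 100, "Office": 250, "Cafe": 50}
print(pick_network(known, {"Home", "Cafe"}))    # prints: Home
print(pick_network(known, {"Office", "Home"}))  # prints: Office
```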
When iPod touch is connected to a Wi-Fi network, the Wi-Fi icon in the status bar at
the top of the screen shows connection strength. The more bars you see, the stronger
the connection.
For more information about joining Wi-Fi networks and configuring Wi-Fi settings,
see page 74.
Charging the Battery
iPod touch has an internal rechargeable battery.
Charge the battery and sync iPod touch
m Connect iPod touch to your computer (not your keyboard) using the included cable.
Note: If iPod touch is connected to a computer that’s turned off or in sleep or standby
mode, the iPod touch battery may drain instead of charge.
An icon in the upper-right corner of the screen shows battery charging status.
If you charge the battery while syncing or using iPod touch, it may take longer to
charge. You can also charge iPod touch using the Apple USB Power Adapter, available
separately.
WARNING: For important safety information about charging iPod touch, see the
Important Product Information Guide at www.apple.com/support/manuals/ipod.
Important: If iPod touch is very low on power, it may display one of the following
images indicating that iPod touch needs to charge for up to ten minutes before you
can use it. If iPod touch is extremely low on power, the display may be blank for up to
two minutes before one of the low-battery images appears.
Rechargeable batteries have a limited number of charge cycles and may eventually
need to be replaced. The iPod touch battery is not user replaceable; it can be replaced
only by an authorized service provider. For more information, go to:
www.apple.com/batteries
Cleaning iPod touch
Use the polishing cloth that came with iPod touch to gently wipe the glass screen and
the case.
You can also use a soft, slightly damp, lint-free cloth. Unplug and turn off iPod touch
(press and hold the Sleep/Wake button, then drag the onscreen red slider).
Avoid getting moisture in openings. Don’t use window cleaners, household cleaners,
aerosol sprays, solvents, alcohol, ammonia, or abrasives to clean iPod touch.
3 Music and Video
Tap Music to listen to songs, audiobooks, and podcasts,
or tap Video to watch TV shows, movies, and other video.
iPod touch syncs with iTunes on your computer to get the songs, movies, TV shows,
and other content you’ve collected in your iTunes library.
For information about using iTunes to get music and other media onto your computer,
open iTunes and choose Help > iTunes Help.
Syncing Content from Your iTunes Library
If you’ve turned on syncing, iTunes automatically syncs content from your iTunes library
to iPod touch each time you connect it to your computer. iTunes lets you sync all of
your media, or specific songs, movies, videos, and podcasts. For example, you could set
iTunes to sync selected music playlists, the most recent unwatched movie, and the
three most recent episodes of your favorite TV show.
If there are more songs in your iTunes library than can fit on iPod touch, iTunes asks if
you want to create a special playlist and set it to sync with iPod touch. Then iTunes
randomly fills the playlist. You can add or delete songs from the playlist and sync again.
If you set iTunes to sync more songs, videos, and other content than can fit on
iPod touch, you can have iTunes automatically delete random content from iPod touch
to make room, or you can stop the sync and reconfigure your sync settings.
When you sync podcasts or audiobooks on iPod touch with those on your computer,
both iTunes and iPod touch remember where you stopped listening and start playing
from that position.
For more information about syncing iPod touch with your iTunes library, see “Getting
Music, Videos, and Other Content onto iPod touch” on page 5.
Transferring Purchased Content from iPod touch to Another
Authorized Computer
Music, video, and podcasts sync from your iTunes library to iPod touch, but not from
iPod touch to your iTunes library. However, content you purchased using the iTunes
Wi-Fi Music Store on iPod touch is automatically copied to your iTunes library.
You can also transfer content on iPod touch that was purchased using iTunes on one
computer to an iTunes library on another authorized computer.
Transfer content from iPod touch to another computer
m Connect iPod touch to the other computer. iTunes asks if you want to transfer
purchased content. You can also connect iPod touch and, in iTunes, choose File >
Transfer Purchases.
To play the content, the computer must be authorized to play content from your iTunes
account.
Supported Music and Video Formats
Only songs and videos encoded in formats that iPod touch supports are transferred to
iPod touch. For information about which formats iPod touch supports, see page 88.
Converting Videos for iPod touch
You can add videos other than those purchased from iTunes to iPod touch, such as
videos you create in iMovie on a Macintosh or videos you download from the Internet.
If you try to add a video from iTunes to iPod touch and a message says the video can’t
play on iPod touch, you can convert the video.
Convert a video to work with iPod touch
m Select the video in your iTunes library and choose Advanced > “Convert Selection for
iPod.” Then add the converted video to iPod touch.
Playing Music
The high resolution multi-touch display makes listening to songs on iPod touch as
much a visual experience as a musical one. You can scroll through your playlists, or use
Cover Flow to browse through your album art.
WARNING: For important information about avoiding hearing loss, see the Important
Product Information Guide at www.apple.com/support/manuals/ipod.
Playing Songs, Audiobooks, and Podcasts
Browse your collection
m Tap Music, then tap Playlists, Artists, Songs, or Albums. Tap More to browse
Audiobooks, Compilations, Composers, Genres, or Podcasts.
Play a song
m Tap the song.
Controlling Song Playback
When you play a song, the Now Playing screen appears:
To Do this
Adjust the volume: Drag the volume slider.
Pause a song: Tap .
Resume playback: Tap .
Restart a song or a chapter in an audiobook or podcast: Tap .
Skip to the next or previous song or chapter in an audiobook or podcast: Tap twice to skip to the previous song. Tap to skip to the next song.
Rewind or fast-forward: Touch and hold or .
Return to the browse lists: Tap . Or swipe to the right over the album cover.
Return to the Now Playing screen: Tap Now Playing.
See the tracks in your collection from the current album: Tap . Tap any track to play it.
Display a song’s lyrics: Tap the album cover when playing a song. (Lyrics appear only if you’ve added them to the song using the song’s Info window in iTunes.)
[Now Playing screen callouts: Back, Previous/Rewind, Play/Pause, Next/Fast-forward, Volume, Track List]
Displaying playback controls at any time
You can display playback controls at any time when you’re listening to music and using
another application—or even when iPod touch is locked—by double-clicking the
Home button. If iPod touch is active, the playback controls appear over the
application you’re using. After using the controls, you can close them or tap Music to
go to the Now Playing screen. If iPod touch is locked, the controls appear onscreen,
then are dismissed automatically after you finish using them.
Additional Controls
m From the Now Playing screen tap the album cover.
The repeat and shuffle controls and the scrubber bar appear. You can see time elapsed,
time remaining, and the song number. The song’s lyrics appear also, if you’ve added
them to the song using iTunes.
To Do this
Set iPod touch to repeat songs: Tap . Tap again to set iPod touch to repeat only the current song.
= iPod touch is set to repeat all songs in the current album or list.
= iPod touch is set to repeat the current song over and over.
= iPod touch is not set to repeat songs.
Skip to any point in a song: Drag the playhead along the scrubber bar.
Set iPod touch to shuffle songs: Tap . Tap again to set iPod touch to play songs in order.
= iPod touch is set to shuffle songs.
= iPod touch is set to play songs in order.
Shuffle the tracks in any playlist, album, or other list of songs: Tap Shuffle at the top of the list. For example, to shuffle all the songs on iPod touch, choose Songs > Shuffle. Whether or not iPod touch is set to shuffle, if you tap Shuffle at the top of a list of songs, iPod touch plays the songs from that list in random order.
[Additional controls callouts: Repeat, Shuffle, Scrubber bar, Playhead]
Browsing Album Covers in Cover Flow
When you’re browsing music, you can rotate iPod touch sideways to see your iTunes
content in Cover Flow and browse your music by album artwork.
To Do this
See Cover Flow: Rotate iPod touch sideways.
Browse album covers: Drag or flick left or right.
See the tracks on an album: Tap a cover or .
To Do this
Play any track: Tap the track. Drag up or down to scroll through the tracks.
Return to the cover: Tap the title bar. Or tap again.
Play or pause the current song: Tap or .
Viewing All Tracks on an Album
See all the tracks on the album that contains the current song
m From the Now Playing screen tap . Tap a track to play it. Tap the album cover
thumbnail to return to the Now Playing screen.
In track list view, you can assign ratings to songs. You can use ratings to create
smart playlists in iTunes that dynamically update to show, for example, your highest
rated songs.
Rate a song
m Drag your finger across the ratings bar to give the song zero to five stars.
Making Playlists Directly on iPod touch
Make an on-the-go playlist
1 Tap Playlists and tap On-The-Go.
2 Browse for songs using the buttons at the bottom of the screen. Tap any song or video
to add it to the playlist. Tap Add All Songs at the top of any list of songs to add all the
songs in the list.
3 When you finish, tap Done.
When you make an on-the-go playlist and then sync iPod touch to your computer, the playlist is saved in your iTunes library and then deleted from iPod touch. The first such playlist is saved as “On-The-Go 1,” the second as “On-The-Go 2,” and so on. To get a playlist back on
iPod touch, select iPod touch in the iTunes source list, click the Music tab, and set the
playlist to sync.
Edit an on-the-go playlist
m Tap Playlists, tap On-The-Go, tap Edit, then do one of the following:
 To move a song higher or lower in the list, drag next to the song.
 To delete a song from the playlist, tap next to the song, then tap Delete. Deleting a
song from the on-the-go playlist doesn’t delete it from iPod touch.
 To clear the entire playlist, tap Clear Playlist.
 To add more songs, tap .
[Track list view callouts: Album tracks, Ratings bar, Back to Now Playing screen]
Watching Videos
With iPod touch, you can view video content such as movies, music videos, and video
podcasts. Videos play in widescreen to take full advantage of the display. If a video
contains chapters, you can skip to the next or previous chapter, or bring up a list and
start playing at any chapter that you choose. If a video provides alternate language
features, you can choose an audio language or display subtitles.
Playing Videos on iPod touch
Play a video
m Tap Videos and tap the video.
Display playback controls
m Tap the screen to show the controls. Tap again to hide them.
Say It Right by Nelly Furtado is available on iTunes in select countries.
[Video controls callouts: Restart/Rewind, Play/Pause, Fast-forward, Volume, Scale, Playhead, Scrubber bar]
To Do this
Play or pause a video: Tap or .
Raise or lower the volume: Drag the volume slider.
Start a video over: Drag the playhead on the scrubber bar all the way to the left, or tap if the video doesn’t contain chapters.
Skip to the previous or next chapter (when available): Tap to skip to the previous chapter. Tap to skip to the next chapter.
Start playing at a specific chapter: Tap , then choose the chapter from the list.
Rewind or fast-forward: Touch and hold or .
Skip to any point in a video: Drag the playhead along the scrubber bar.
Stop watching a video before it finishes playing: Tap Done. Or press the Home button.
Watching Rented Movies
You can rent movies from the iTunes Store and watch them on iPod touch. You use
iTunes to rent the movies and transfer them to iPod touch. (Rented movies are available
only in some regions. iTunes version 7.6 or later is required.)
Rented movies are playable only for a limited time. The remaining time in which you
must finish watching a rented movie appears near its title. Movies are automatically
deleted when they expire. Check the iTunes Store for the expiration times before
renting a movie.
Transfer rented movies to iPod touch
m Connect iPod touch to your computer. Then select iPod touch in the iTunes window
(below Devices, on the left), click Movies and select the rented movies you want to
transfer. Your computer must be connected to the Internet.
Note: Once a rented movie is transferred to iPod touch, you can’t transfer it back to
your computer to watch it there.
View a rented movie
m Tap Videos and select a movie.
The following additional video controls are also available:
Scale a video to fill the screen or fit to the screen: Tap to make the video fill the screen. Tap to make it fit the screen. You can also double-tap the video to toggle between fitting and filling the screen. When you scale a video to fill the screen, the sides or top may be cropped from view. When you scale it to fit the screen, you may see black bars above and below or on the sides of the video.
Select an alternate audio language (when available): Tap , then choose a language from the Audio list.
Show or hide subtitles (when available): Tap , then choose a language, or Off, from the Subtitles list.
Play the sound from a music video or video podcast without showing the video: Browse for the music video or podcast through Music lists. To play the music and video for a music video or podcast, browse for it through the Videos list.
Watching Videos on a TV Connected to iPod touch
You can connect iPod touch to your TV and watch your videos on the larger screen.
Use the Apple Component AV Cable, Apple Composite AV Cable, or other iPod touch
compatible cable. You can also use these cables with the Apple Universal Dock,
available separately, to connect iPod touch to your TV. (The Apple Universal Dock
includes a remote, which allows you to control playback from a distance.) Apple cables
and docks are available for purchase at www.apple.com/ipodstore.
Video Settings
Video settings let you set where to resume playing videos that you previously started,
turn closed captioning on or off, turn widescreen on or off, and set the TV signal to
NTSC or PAL. See page 80.
Set Video settings
m Choose Settings > Video.
Deleting Videos from iPod touch
You can delete videos directly from iPod touch to save space.
Delete a video
m In the Videos list, swipe left or right over the video, then tap Delete.
When you delete a video (not including rented movies) from iPod touch, it isn’t deleted
from your iTunes library and you can sync the video back to iPod touch later. If you
don’t want to sync the video back to iPod touch, set iTunes to not sync the video
(see page 6).
If you delete a rented movie from iPod touch, it is deleted permanently and can’t be
transferred back to your computer.
Setting a Sleep Timer
You can set iPod touch to stop playing music or videos after a period of time.
m From the Home screen choose Clock > Timer, then flick to set the number of hours and
minutes. Tap When Timer Ends and choose Sleep iPod, tap Set, then tap Start to start
the timer.
When the timer ends, iPod touch stops playing music or video, closes any other open
application, and then locks itself. Chapter 3 Music and Video 33
Changing the Buttons on the Music Screen
You can replace the Playlists, Artists, Songs, or Albums buttons at the bottom of the
screen with ones you use more frequently. For example, if you listen to podcasts a lot
and don’t browse by album, you can replace the Albums button with Podcasts.
m Tap More and tap Edit, then drag a button to the bottom of the screen, over the button
you want to replace.
You can drag the buttons at the bottom of the screen left or right to rearrange them.
When you finish, tap Done.
Tap More at any time to access the buttons you replaced.
4 Photos
Tap Photos to view your photos, use a photo as
wallpaper, and play slideshows.
iPod touch lets you sync photos from your computer so you can share them with your
family, friends, and associates on the high-resolution display.
Syncing Photos from Your Computer
If you’ve set up photo syncing, iTunes automatically copies or updates your photo
library (or selected albums) from your computer to iPod touch whenever you connect
iPod touch to your computer. iTunes can sync your photos from the following
applications:
 On a Mac: iPhoto 4.0.3 or later
 On a PC: Adobe Photoshop Album 2.0 or later or Adobe Photoshop Elements 3.0
or later
For information about syncing iPod touch with photos and other information on your
computer, see “Getting Music, Videos, and Other Content onto iPod touch” on page 5.
Viewing Photos
Photos synced from your computer can be viewed in Photos.
View photos
m Tap Photo Library to see all your photos, or tap an album to see just those photos.
See a photo at full screen
m Tap the thumbnail of a photo to see it at full screen. Tap the full screen photo to hide
the controls.
Tap the photo again to show the controls.
See the next or previous photo
m Flick left or right. Or tap the screen to show the controls, then tap or .
Changing the Size or Orientation
See a photo in landscape orientation
m Rotate iPod touch sideways. The photo automatically reorients and, if it’s in landscape
format, expands to fit the screen.
Zoom in on part of a photo
m Double-tap the part you want to zoom in on. Double-tap again to zoom out.
Zoom in or out
m Pinch to zoom in or out.
Pan around a photo
m Drag the photo.
Viewing Slideshows
View photos in a slideshow
m Choose an album and tap a photo, then tap . If you don’t see , tap the photo to
show the controls.
Stop a slideshow
m Tap the screen.
Set slideshow settings
1 From the Home screen choose Settings > Photos.
2 To set:
 The length of time each slide is shown, tap Play Each Slide For and choose a time.
 Transition effects when moving from photo to photo, tap Transition and choose a
transition type.
 Whether slideshows repeat, turn Repeat on or off.
 Whether photos are shown in random order, turn Shuffle on or off.
Play music during a slideshow
m From the Home screen choose Music, and play a song. Then choose Photos from the
Home screen and start a slideshow.
Using a Photo as Wallpaper
You see a wallpaper background picture as you unlock iPod touch.
Set a photo as wallpaper
1 Choose any photo.
2 Drag to pan, or pinch to zoom in or out, until the photo looks the way you want.
3 Tap the photo to display the controls, then tap and tap Set Wallpaper.
You can also choose from several wallpaper pictures included with iPod touch by
choosing Settings > General > Wallpaper > Wallpaper from the Home screen.
Emailing a Photo
Email a photo
m Choose any photo and tap , then tap Email Photo.
iPod touch must be set up for email (see “Setting Up Email Accounts” on page 53).
Sending a Photo to a Web Gallery
If you have a .Mac account, you can send photos directly from iPod touch to a Web
Gallery created with iPhoto ‘08. You can also send photos to someone else’s .Mac Web
Gallery if that person has enabled email contributions.
To send photos to a Web Gallery, you need to do the following:
 Set up your .Mac mail account on iPod touch
 Publish an iPhoto ‘08 album to a .Mac Web Gallery
 Select “Allow photo uploading by email” in the Publish Settings pane of iPhoto ‘08
For more information about creating a Web Gallery in iPhoto ‘08, open iPhoto ‘08,
choose Help, and search for Web Gallery.
Send a photo to your web gallery
m Choose any photo and tap , then tap Send to Web Gallery.
Assigning a Photo to a Contact
You can assign a photo to a contact.
Assign a photo to a contact
1 Choose any photo on iPod touch and tap .
2 Tap Assign to Contact and choose a contact.
3 Drag the photo to pan, or pinch the photo to zoom in or out, until it looks the way
you want.
4 Tap Set Photo.
You can also assign a photo to a contact in Contacts by tapping Edit and then tapping
the picture icon.
5 iTunes Wi-Fi Music Store
Tap iTunes to purchase songs and albums from the
iTunes Wi-Fi Music Store.
You can search for, browse, preview, purchase, and download songs and albums from
the iTunes Wi-Fi Music Store directly to iPod touch. Your purchased content is
automatically copied to your iTunes library the next time you sync iPod touch with your
computer.
To use the iTunes Wi-Fi Music Store, iPod touch must join a Wi-Fi network that is
connected to the Internet. For information about joining a Wi-Fi network, see page 21.
You’ll also need an iTunes Store account to purchase songs over Wi-Fi (available in
some countries). If you don’t already have an iTunes Store account, open iTunes and
choose Store > Account to set one up.
Browsing and Searching
You can browse featured selections, top-ten categories, or search the iTunes Wi-Fi
Music Store music catalog for the songs and albums you’re looking for. Use the
featured selections to see new releases and iTunes Wi-Fi Music Store recommendations.
Top Tens lets you see the most popular songs and albums in each of several categories.
If you’re looking for a specific song, album, or artist, use Search.
Browse featured songs and albums
m Tap Featured and select a category at the top of the screen.
Browse top ten songs and albums
m Tap Top Tens, then choose a category and tap Top Songs or Top Albums.
Search for songs and albums
m Tap Search, tap the search field and enter one or more words, then tap Search.
See the songs on an album
m Tap the album.
See the album a song is on
m Double-tap the song.
Browsing Starbucks Selections
If you’re in a select Starbucks location (available in the U.S. only), the Starbucks icon
appears at the bottom of the screen next to Featured. Tap the Starbucks icon to find
out what song is playing in the café and browse featured Starbucks Collections.
For a list of select Starbucks locations, go to:
www.apple.com/itunes/starbucks
Find out what song is playing
m Tap Starbucks.
The currently playing song appears at the top of the screen. Tap the song to see the
album the song is on and the other songs on the album.
View Recently Played and other Starbucks playlists
m Tap Starbucks, then choose Recently Played or one of the Starbucks playlists.
Purchasing Songs and Albums
When you find a song or album you like in the iTunes Wi-Fi Music Store, you can
purchase and download it to iPod touch. You can preview a song before you purchase
it to make sure it’s a song you want. In select Starbucks locations (available in the U.S.
only), you can also preview and purchase the currently playing and other songs from
featured Starbucks Collections.
Preview a song
m Tap the song.
Purchase and download a song or album
1 Tap the price, then tap Buy Now.
Note: To purchase songs on iPod touch, you must have been signed in to your iTunes
Store account in iTunes the last time you synced iPod touch.
2 Enter your password and tap OK.
Your purchase is charged to your iTunes Store account. For additional purchases made
within the next fifteen minutes, you don’t have to enter your password again.
An alert appears if you’ve previously purchased one or more songs from an album.
Tap Buy if you want to purchase the entire album including the songs you’ve already
purchased, or tap Cancel if you want to purchase the remaining songs individually.
Note: Some albums include bonus content, which is downloaded to your iTunes library
on your computer. Not all bonus content is downloaded directly to iPod touch.
See the status of downloading songs and albums
m Tap Downloads.
To pause a download, tap .
If you need to turn off iPod touch or leave the area of your Wi-Fi connection, don’t
worry about interrupting the download. iPod touch starts the download again the next
time iPod touch joins a Wi-Fi network with an Internet connection. Or if you open
iTunes on your computer, iTunes completes the download to your iTunes library.
Purchased songs are added to a Purchased playlist on iPod touch. If you delete the
Purchased playlist, iTunes creates a new one when you buy an item from the iTunes
Wi-Fi Music Store.
Syncing Purchased Content
iTunes automatically syncs songs and albums you’ve purchased on iPod touch to your
iTunes library when you connect iPod touch to your computer. This lets you listen to
the purchases on your computer and provides a backup if you delete purchases from
iPod touch. The songs are synced to the “Purchased on ” playlist.
iTunes creates the playlist if it doesn’t exist.
iTunes also copies your purchases to the Purchased playlist that iTunes uses for
purchases you make on your computer, if that playlist exists and is set to sync with
iPod touch.
Verifying purchases
You can use iTunes to verify that all the music, videos, and other items you bought
from the iTunes Wi-Fi Music Store are in your iTunes library. You might want to do
this if a download was interrupted.
Verify your purchases
1 Make sure your computer is connected to the Internet.
2 In iTunes, choose Store > Check for Purchases.
3 Enter your iTunes Store account ID and password, then click Check.
Purchases not yet on your computer will be downloaded.
The Purchased playlist displays all your purchases. However, because you can add or
remove items in this list, it might not be accurate. To see all your purchases, make
sure you’re signed in to your account, choose Store > View My Account, and click
Purchase History.
Changing Your iTunes Store Account Information
iPod touch gets your iTunes Store account information from iTunes, including whether
you get iTunes Plus music (when available). You can view and change your iTunes Store
account information using iTunes.
View and change your iTunes Store account information
m In iTunes, choose Store > View My Account.
You must be signed in to your iTunes Store account. If “View My Account” doesn’t
appear in the Store menu, choose Store > Sign In.
Purchase music from another iTunes Store account
m Sign in to that account when you connect to the iTunes Wi-Fi Music Store.
6 Applications
Safari
Surfing the Web
Safari lets you see webpages just as they were designed to be seen in computer-based
browsers. A simple double-tap lets you zoom in; rotate iPod touch sideways for a wider
view. Search using Google or Yahoo!—both are built in.
To use Safari, iPod touch must join a Wi-Fi network that is connected to the Internet.
For information about joining a Wi-Fi network, see page 21.
Opening and Navigating Webpages
Open a webpage
m Tap the address field at the top of the screen, type the web address—apple.com or
www.google.com, for example—and tap Go. If you don’t see the address field, tap the
status bar at the top of the screen.
As you type, any web address in your bookmarks or history list that contains those
letters appears below. Tap a web address to go to its webpage.
Erase all the text in the address field
m Tap the address field, then tap .
Follow a link on a webpage
m Tap the link.
Text links are typically underlined in blue. Many images are also links.
If a link leads to a sound or movie file supported by iPod touch, Safari plays the sound
or movie. For supported file types, see page 88.
Zooming In to See a Page More Easily
View a webpage in landscape orientation
m Rotate iPod touch sideways. Safari automatically reorients and expands the page.
To Do this
See a link’s destination address: Touch and hold the link. The address pops up next to your finger. You can touch and hold an image to see if it has a link.
Stop a page from loading if you change your mind: Tap .
Reload a webpage: Tap .
Return to the previous or next webpage: Tap or at the bottom of the screen.
Return to any of the last several webpages you’ve visited: Tap and tap History. To clear the history list, tap Clear.
Send a webpage address over email: Tap and tap Mail Link to this Page. You must have an email account set up on iPod touch (see page 53).
Resize any column to fit the screen
m Double-tap the column. The column expands so you can read it more easily.
Double-tap again to zoom out.
Zoom in on part of a webpage
m Double-tap the part of the page you want to zoom in on. Double-tap again to
zoom out.
Zoom in or out manually
m Pinch to zoom in or out.
Scroll around the page
m Drag up, down, or sideways. When scrolling, you can touch and drag anywhere on the
page without activating any links. If you tap a link, you follow the link, but if you drag a
link, the page scrolls.
Scroll within a frame on a webpage
m Use two fingers to scroll within a frame on a webpage. Use one finger to scroll the
entire webpage.
Jump to the top of a webpage
m Tap the status bar at the top of the iPod touch screen.
Searching the Web
By default, Safari searches using Google. You can set it to search using Yahoo! instead.
Search for anything on the web
1 Tap to go to the Google search field.
2 Type a word or phrase that describes what you’re looking for, then tap Google.
3 Tap a link in the list of search results to open a webpage.
Set Safari to search using Yahoo!
m From the Home screen choose Settings > Safari > Search Engine, then choose Yahoo!.
Opening Multiple Pages at Once
You can have more than one webpage open at a time. Some links automatically open a
new page instead of replacing the current one.
The number inside the pages icon at the bottom of the screen shows how many
pages are open. If there’s no number, just one page is open.
For example:
= one page is open
= three pages are open
Open a new page
m Tap and tap New Page.
See all open pages and go to another page that’s open
m Tap and flick left or right. When you get to the page you want, tap it.
Close a page
m Tap and tap . You can’t close a page if it’s the only one that’s open.
Typing in Text Fields
Some webpages have forms or text fields you can enter information in.
Bring up the keyboard
m Tap inside a text field.
Move to other text fields on the page
m Tap another text field. Or tap the Next or Previous button.
Submit the form
m Once you finish filling out the text fields on the page, tap Go or Search. Most pages
also have a link you can tap to submit the form.
Dismiss the keyboard without submitting the form
m Tap Done.
Adding Safari Web Clips to the Home Screen
You can add Web Clips for your favorite webpages to the Home screen for fast access.
Web Clips appear as icons, which you can arrange however you want on the Home
screen. See “Customizing the Home Screen Layout” on page 13.
Add a Web Clip to the Home screen
m Open the page and tap . Then tap “Add to Home Screen.”
Web Clips remember the displayed portion—zoom level and location—of webpages.
When you open a Web Clip, Safari automatically zooms and scrolls to that portion of
the webpage again. The displayed portion is also used to create the icon for the Web
Clip on the Home screen.
Before you add a Web Clip, you can edit its name. If the name is too long (more than
about 10 characters), it may appear abbreviated on the Home screen.
Delete a Web Clip from the Home screen
1 Touch and hold any Home screen icon until the icons begin to wiggle.
2 Tap the “x” in the corner of the Web Clip you want to delete.
3 Tap Delete, then press the Home button to save your arrangement.
Using Bookmarks
You can bookmark webpages so that you can quickly return to them at any time
without having to type the address.
Bookmark a webpage
m Open the page and tap . Then tap Add Bookmark.
Before you save a bookmark you can edit its title or choose where to save it. By default,
the bookmark is saved in the top-level Bookmarks folder. Tap Bookmarks to choose
another folder.
Open a bookmarked webpage
m Tap , then choose a bookmark or tap a folder to see the bookmarks inside.
Edit a bookmark or bookmark folder
m Tap , choose the folder that has the bookmark or folder you want to edit, then tap
Edit. Then do one of the following:
 To make a new folder, tap New Folder.
 To delete a bookmark or folder, tap next to the bookmark or folder, then tap
Delete.
 To reposition a bookmark or folder, drag next to the item you want to move.
 To edit the name or address of a bookmark or folder, or to put it in a different folder,
tap the bookmark or folder.
When you finish, tap Done.
Syncing Bookmarks
If you use Safari on a Mac, or Safari or Microsoft Internet Explorer on a PC, you can sync
bookmarks on iPod touch with bookmarks on your computer.
Sync bookmarks between iPod touch and your computer
m Connect iPod touch to your computer. If bookmarks are set to be synced (see page 9),
the sync begins.
Safari Settings
From the Home screen choose Settings > Safari to adjust security and other settings.
See page 83.
Calendar
Adding Calendar Events to iPod touch
If you’ve set iTunes to sync calendars, you can enter appointments and events on your
computer and sync them with iPod touch. You can also enter appointments and events
directly on iPod touch.
Entering Calendar Events on Your Computer
You can enter appointments and events in iCal and Microsoft Entourage on a Mac,
or in Microsoft Outlook 2003 or 2007 on a PC.
Syncing Calendars
Sync calendars between iPod touch and your computer
m Connect iPod touch to your computer. If iPod touch is set to sync calendars
automatically (see page 6), the update begins.
Adding and Editing Calendar Events Directly on iPod touch
Add an event
m Tap and enter event information. Then tap Done.
You can enter any of the following:
 Title
 Location
 Starting and ending times (or turn on All-day if it’s an all-day event)
 Repeat times—none, or every day, week, two weeks, month, or year
 Alert time—from five minutes to two days before the event
If you set an alert time, iPod touch gives you the option to set a second alert time,
in case you miss the first one.
 Notes
Set iPod touch to make a sound when you get a calendar alert
m In Settings, choose General > Sound Effects and select whether you want sound effects
to play over the internal speaker, through the headphones, or both. Select Off to turn
sound effects off.
If Sound Effects is off, iPod touch displays a message instead of making a sound when
you get a calendar alert.
Edit an event
m Tap the event and tap Edit.
Delete an event
m Tap the event, tap Edit, then scroll down and tap Delete Event.
Viewing Your Calendar
View your calendar
m Tap Calendar.
Switch views
m Tap List, Day, or Month.
 List view: All your appointments and events appear in an easy-to-scan list. Scroll up
or down to see previous or upcoming days.
 Day view: Scroll up or down to see hours earlier or later in the day. Tap or to see
the previous or next day.
 Month view: Days with events show a dot below the date. Tap a day to see its events
in a list below the calendar. Tap or to see the previous or next month.
See today’s events
m Tap Today.
See the details of an event
m Tap the event.
Set iPod touch to adjust event times for a selected time zone
m From the Home screen tap Settings > General > Date & Time, then turn Time Zone
Support on. Then tap Time Zone and search for a major city in the time zone you want.
When Time Zone Support is on, Calendar displays event dates and times in the time
zone set for your calendars. When Time Zone Support is off, Calendar displays events in
the time zone of your current location.
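The difference between the two modes can be sketched in a few lines of Python (the cities, date, and event below are hypothetical examples chosen for illustration, not part of the manual):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical event: a 9:00 a.m. meeting created in a calendar whose
# time zone is set to New York, viewed on a device currently in Paris.
calendar_tz = ZoneInfo("America/New_York")
local_tz = ZoneInfo("Europe/Paris")

event = datetime(2007, 7, 2, 9, 0, tzinfo=calendar_tz)

# Time Zone Support on: show the time in the calendar's own zone.
shown_on = event.astimezone(calendar_tz)

# Time Zone Support off: show the time in the current location's zone.
shown_off = event.astimezone(local_tz)

print(shown_on.strftime("%H:%M"))   # 09:00
print(shown_off.strftime("%H:%M"))  # 15:00
```

The same event reads six hours later in Paris, which is why travelers often prefer Time Zone Support on.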
[Figure: Month view, with callouts for switching views, a day marked with a dot to show scheduled events, the event list for the selected day, and the button to go to today.]
Mail
Mail is a rich HTML email client that retrieves your email in the background while you
do other things on iPod touch. iPod touch works with the most popular email
systems—including Yahoo! Mail, Google email, AOL, and .Mac Mail—as well as most
industry-standard POP3 and IMAP email systems. Mail lets you send and receive photos
and graphics, which are displayed in your message along with the text. You can also
get PDFs and other attachments and view them on iPod touch.
Setting Up Email Accounts
You must have an email address—which looks like “yourname@example.com”—to use
iPod touch for email. If you have Internet access, you most likely got an email address
from your Internet service provider.
If you chose automatic syncing during setup, your existing email accounts should be
already set up and ready to go. Otherwise, you can set iTunes to sync your email
accounts, or configure email accounts directly on iPod touch.
Syncing Email Accounts to iPod touch
You use iTunes to sync your email accounts to iPod touch. iTunes supports Mail and
Microsoft Entourage on a Mac, and Microsoft Outlook 2003 or 2007 and Outlook
Express on a PC. See “Getting Music, Videos, and Other Content onto iPod touch” on
page 5.
Note: Syncing an email account to iPod touch copies the email account setup, not the
messages themselves. Whether the messages in your inbox appear on both iPod touch
and your computer depends on the type of email account you have and how it’s
configured.
If You Don’t Have an Email Account
Email accounts are available from most Internet service providers. If you use a Mac,
you can get an email address, along with other services, at www.mac.com. Fees may
apply.
Free accounts are also available online:
 www.mail.yahoo.com
 www.google.com/mail
 www.aol.com
Setting Up an Email Account on iPod touch
You can set up and make changes to an email account directly on iPod touch.
Your email service provider can provide the account settings you need to enter.
Changes you make on iPod touch to an email account synced from your computer are
not copied to your computer.
To use the online Mail Setup Assistant, go to:
www.apple.com/support/ipodtouch/mailhelper
Enter account settings directly on iPod touch
1 If this is the first account you’re setting up on iPod touch, tap Mail. Otherwise, from the
Home screen choose Settings > Mail > Accounts > Add Account.
2 Choose your email account type: Y! Mail (for Yahoo!), Google email, .Mac, AOL,
or Other.
3 Enter your account information:
If you’re setting up a Yahoo!, Google email, .Mac, or AOL account, enter your name,
email address, and password. After that, you’re done.
Otherwise, tap Other, select a server type—IMAP, POP, or Exchange—and enter your
account information:
 Your email address
 The email server type (IMAP, POP, or Exchange)
 The Internet host name for your incoming mail server (which may look like
“mail.example.com”)
 The Internet host name for your outgoing mail server (which may look like
“smtp.example.com”)
 Your user name and password for incoming and outgoing servers (you may not need
to enter a user name and password for an outgoing server)
Note: Exchange email accounts must be configured for IMAP in order to work with
iPod touch. Contact your IT organization for more information.
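As a rough illustration of how the incoming and outgoing host names relate to an email address, here is a small Python sketch. The "mail." and "smtp." prefixes are only common conventions, as the examples above suggest; your provider's actual server names may differ, so treat the output as a starting point, not a rule:

```python
def guess_mail_servers(email_address: str) -> dict:
    """Guess conventional server host names from an address like
    'yourname@example.com'. Real providers may use other names."""
    domain = email_address.split("@", 1)[1]
    return {
        "incoming": f"mail.{domain}",  # IMAP or POP server
        "outgoing": f"smtp.{domain}",  # SMTP server
    }

servers = guess_mail_servers("yourname@example.com")
print(servers["incoming"])  # mail.example.com
print(servers["outgoing"])  # smtp.example.com
```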
Sending Email
You can send an email message to anyone who has an email address. You can send the
message to one person or to a group of people.
Compose and send a message
1 Tap .
2 Type one or more names or email addresses in the To or Cc (carbon copy) fields, or tap
and choose a contact to add the contact’s email address.
As you type an email address, comparable email addresses from your contacts list
appear below. Tap one to add it.
3 Type a subject, then type a message.
4 Tap Send.
Send a photo in a message
m From the Home screen choose Photos, then choose a photo. Then tap and tap
Email Photo.
If you have more than one email account on iPod touch, the photo is sent using the
default account (see page 83).
Save a message as a draft so you can work on it later
m Start composing the message and tap Cancel. Then tap Save. You can find the message
in the Drafts mailbox, add to it or change it, and then send it.
Reply to a message
m Open a message and tap . Tap Reply to reply to just the person who sent the
message. Tap Reply All to reply to the sender and the other recipients. Then add a
message of your own if you like, and tap Send.
When you reply to a message, files or images attached to the initial message aren’t sent
back.
Forward a message
m Open a message and tap , then tap Forward. Add one or more email addresses and
a message of your own if you like, then tap Send.
When you forward a message, you can include the files or images attached to the
original message.
Send a message to a recipient of a message you received
m Open the message and tap the recipient’s name or email address, then tap Email.
Checking and Reading Email
The Mail button shows the total number of unread messages in all of your inboxes.
You may have other unread messages in other mailboxes.
On each account screen, you can see the number of unread messages next to each
mailbox.
Tap a mailbox to see its messages. Unread messages have a blue dot next to them.
Read a message
m Tap a mailbox, then tap a message. Within a message, tap or to see the next or
previous message.
Delete a message
m Open the message and tap .
You can also delete a message directly from the mailbox message list by swiping left or
right over the message title and then tapping Delete.
Or you can tap Edit and tap next to a message.
[Figure: The Accounts list shows the number of unread messages next to each mailbox, with a button to see all your email accounts. In a mailbox, unread messages are marked, and swiping left or right over a message shows the Delete button.]
Check for new messages
m Choose a mailbox, or tap at any time.
Open an attached file
You can view or read some types of files and images attached to messages you receive.
For example, if someone sends you a PDF, Microsoft Word, or Microsoft Excel
document, you can read it on iPod touch.
m Tap the attachment. It downloads to iPod touch and then opens.
You can view attachments in both portrait and landscape orientation. If the format of
an attached file isn’t supported by iPod touch, you can see the name of the file but you
can’t open it. iPod touch supports the following email attachment file formats:
 .doc, .docx, .htm, .html, .pdf, .txt, .xls, .xlsx
See all the recipients of a message
m Open the message and tap Details.
Tap a name or email address to see the recipient’s contact information. Then tap an
email address to email the person. Tap Hide to hide the recipients.
Add an email recipient to your contacts list
m Tap the message and, if necessary, tap Details to see the recipients. Then tap a name or
email address and tap Create New Contact or “Add to Existing Contact.”
Mark a message as unread
m Open the message and tap “Mark as Unread.”
A blue dot appears next to the message in the mailbox list until you open it again.
Move a message to another mailbox
m Open the message and tap , then choose a mailbox.
Zoom in to a part of a message
m Double-tap the part you want to zoom in on. Double-tap again to zoom out.
Resize any column of text to fit the screen
m Double-tap the text.
Resize a message manually
m Pinch to zoom in or out.
Follow a link
m Tap the link.
Text links are typically underlined in blue. Many images also have links. A link can take
you to a webpage, open a map, or open a new preaddressed email message.
Web and map links open Safari or Maps on iPod touch. To return to your email, press
the Home button and tap Mail.
Mail Settings
From the Home screen choose Settings > Mail to set up and customize your email
accounts for iPod touch. See page 81.
Contacts
With Contacts, it’s easy to have all your contact information with you.
Syncing Contact Information from Your Computer
If you’ve set iTunes to sync contacts, iTunes automatically keeps your contacts up to
date—whether you make changes on your computer or on iPod touch. You can sync
contacts with applications such as:
 On a Mac: Mac OS X Address Book, Microsoft Entourage, and Yahoo! Address Book
 On a PC: Yahoo! Address Book, Windows Address Book (Outlook Express),
or Microsoft Outlook
For information about syncing iPod touch with your contacts, see “Getting Music,
Videos, and Other Content onto iPod touch” on page 5.
Viewing a Contact
m Tap Contacts, then tap a contact.
To view a specific group, tap the Group button.
Setting the Sort and Display Order
Use Contacts settings to set whether your contacts are sorted by first or last name, and
to set the order in which the names are displayed.
m Tap Settings > Contacts, then tap Sort Order or Display Order and select “First, Last” or
“Last, First.”
Adding and Editing Contacts Directly on iPod touch
You can enter new contacts on iPod touch, edit existing contacts, and delete contacts.
Add a contact to iPod touch
m Choose Contacts and tap , then enter the contact information.
Edit a contact’s phone number, address, and other information
m Tap Contacts and choose a contact, then tap Edit.
 To add an item—such as a web address or mobile phone number—tap next to
the item.
 To delete an item, tap next to it.
 To delete the contact from your contacts list, scroll down and tap Delete Contact.
Enter a pause in a number
m Tap , then tap Pause.
Pauses are sometimes required by phone systems—before an extension or password,
for example. Each pause lasts 2 seconds. You may need to enter more than one.
Assign a photo to a contact or change a contact’s photo
1 Tap Contacts and choose a contact.
2 Tap Edit and tap Add Photo, or tap the existing photo.
3 Choose a photo.
4 Move and scale the photo the way you want it. Drag the photo up, down, or sideways.
Pinch or double-tap to zoom in or out.
5 Tap Set Photo.
Delete a contact
1 Tap Contacts and choose a contact.
2 Tap Edit.
3 Scroll to the bottom of the contact information and tap Delete.
YouTube
Finding and Viewing Videos
YouTube features short videos submitted by people from around the world (not
available in all languages, may not be available in all locations).
To use YouTube, iPod touch must join a Wi-Fi network that is connected to the Internet.
For information about joining a Wi-Fi network, see page 21.
Browse videos
m Tap Featured, Most Viewed, or Bookmarks. Or tap More to browse by Most Recent,
Top Rated, or History.
 Featured: Videos reviewed and featured by YouTube staff.
 Most Viewed: Videos most seen by YouTube viewers. Tap All for all-time most viewed
videos, or Today or This Week for most viewed videos of the day or week.
 Bookmarks: Videos you’ve bookmarked.
 Most Recent: Videos most recently submitted to YouTube.
 Top Rated: Videos most highly rated by YouTube viewers. To rate videos, go to
www.youtube.com.
 History: Videos you’ve viewed most recently.
Search for a video
1 Tap Search, then tap the YouTube search field.
2 Type a word or phrase that describes what you’re looking for, then tap Search. YouTube
shows results based on video titles, descriptions, tags, and user names.
Play a video
m Tap the video. The video begins to download to iPod touch and a progress bar shows
progress. When enough of the video has downloaded, it begins to play. You can also
tap to start the video.
Controlling Video Playback
When a video starts playing, the controls disappear so they don’t obscure the video.
m Tap the screen to show or hide the controls.
To Do this
Play or pause a video Tap or .
Raise or lower the volume Drag the volume slider.
Start a video over Tap .
Skip to the next or previous video Tap twice to skip to the previous video. Tap to skip to
the next video.
Rewind or fast-forward Touch and hold or .
Skip to any point in a video Drag the playhead along the scrubber bar.
Stop watching a video before it
finishes playing
Tap Done. Or press the Home button.
Toggle between scaling a video to
fill the screen or fit the screen.
Double-tap the video. You can also tap to make the video
fill the screen, or tap to make it fit the screen.
Bookmark a video Tap next to a video and tap Bookmark. Or start playing a
video and tap . Tap Bookmarks to see your bookmarked
videos.
See details about a video and
browse related videos
Play the whole video, tap Done while a video is playing, or tap
next to any video in a list.
iPod touch shows the video’s rating, description, date added,
and other information. You also see a list of related videos that
you can tap to view.
[Figure: Video playback controls, with callouts for Previous/Rewind, Play/Pause, Next/Fast-forward, Volume, Bookmark, Scale, and the playhead, scrubber bar, and download progress indicator.]
Changing the Buttons at the Bottom of the Screen
You can replace the Featured, Most Viewed, Bookmarks, and Search buttons at the
bottom of the screen with ones you use more frequently. For example, if you often
watch top rated videos but don’t watch many featured videos, you could replace the
Featured button with Top Rated.
m Tap More and tap Edit, then drag a button to the bottom of the screen, over the button
you want to replace.
You can drag the buttons at the bottom of the screen left or right to rearrange them.
When you finish, tap Done.
When you’re browsing for videos, tap More to access the buttons that aren’t visible.
Add Your Own Videos to YouTube
For information about adding your own videos to YouTube, go to www.youtube.com
and tap Help.
Stocks
Viewing Stock Quotes
When you tap Stocks from the Home screen, the stock reader shows updated quotes
for all your stocks. Quotes are updated every time you open Stocks while connected to
the Internet. Quotes may be delayed by up to 20 minutes.
Add a stock, index, or fund to the stock reader
1 Tap , then tap .
2 Enter a symbol, company name, index, or fund name, then tap Search.
3 Choose an item in the search list.
Delete a stock
m Tap and tap next to a stock, then tap Delete.
Reorder stocks
m Tap . Then drag next to a stock to a new place in the list.
Switch between showing percentage change and change in monetary value
m Tap the number showing the change. Tap it again to switch back.
You can also tap and tap % or Numbers.
Show a stock’s progress over a longer or shorter time period
m Tap a stock symbol, then tap 1d, 1w, 1m, 3m, 6m, 1y, or 2y. The chart adjusts to show
progress over one day, one week, one, three, or six months, or one or two years.
See information about a stock at Yahoo.com
m Tap .
You can see news, information, websites related to the stock, and more.
Maps
Maps provides street maps, satellite photos, and hybrid views of locations in many of
the world’s countries. You can get detailed driving directions and, in some areas, traffic
information. Also in some areas, you can find your current approximate location, and
use that location to get driving directions to or from another place.1
Finding and Viewing Locations
Find a location and see a map
m Tap the search field to bring up the keyboard, then type an address, intersection,
general area, name of a landmark, bookmark name, name of someone in your contacts
list, or zip code. Then tap Search.
A pin marks the location on the map. Tap the pin to see the name or description of the
location.
Find your current approximate location on a map
m Tap . A circle appears to show your current approximate location. Your approximate
location is determined using information from some local Wi-Fi networks (if you have
Wi-Fi turned on). The more accurate the available information, the smaller the circle on
the map. This feature is not available in all areas.
1 Maps, directions, and location information depend on data collected and services provided by third parties.
These data services are subject to change and may not be available in all geographic areas, resulting in maps,
directions, or location information that may be unavailable, inaccurate, or incomplete. For more information, see
www.apple.com/ipodtouch. In order to provide your location, data is collected in a form that does not personally
identify you. If you do not want such data collected, don’t use the feature. Not using this feature will not impact the
functionality of your iPod touch.
WARNING: For important information about driving and navigating safely, see the
Important Product Information Guide at www.apple.com/support/manuals/ipod.
[Figure: A pin on the map; tap it to get information about the location, get directions, or add the location to your bookmarks or contacts list.]
Use the dropped pin
m Tap , then tap Drop Pin. A pin drops down on the map, which you can then drag to
any location you choose.
To quickly move the pin to the area currently displayed, tap , then tap Replace Pin.
Zoom in to a part of a map
m Pinch the map with two fingers. Or double-tap the part you want to zoom in on.
Double-tap again to zoom in even closer.
Zoom out
m Pinch the map. Or tap the map with two fingers. Tap with two fingers again to zoom
out further.
Pan or scroll to another part of the map
m Drag up, down, left, or right.
See a satellite or hybrid view
m Tap , then tap Satellite or Hybrid to see just a satellite view or a combined street
map and satellite view.
Tap Map to return to map view.
See the location of someone’s address in your contacts list
m Tap in the search field, then tap Contacts and choose a contact.
To locate an address in this way, the contact must include at least one address. If the
contact has more than one address, you must choose the one you want to locate.
You can also find the location of an address by tapping the address directly in Contacts.
Bookmark a location
m Find a location, tap the pin that points to it, tap next to the name or description,
then tap “Add to Bookmarks.”
See a bookmarked location or recently viewed location
m Tap in the search field, then tap Bookmarks or Recents.
Add a location to your contacts list
m Find a location, tap the pin that points to it, tap next to the name or description,
then tap Create New Contact or “Add to Existing Contact.”
Getting Directions
Get directions
1 Tap Directions.
2 Enter starting and ending locations in the Start and End fields. By default, iPod touch
starts with your current approximate location (when available). Tap in either field
and choose a location in Bookmarks (including your current approximate location and
the dropped pin, when available), Recents, or Contacts.
For example, if a friend’s address is in your contacts list, you can tap Contacts and tap
your friend’s name instead of having to type the address.
To reverse the directions, tap .
3 Tap Route, then do one of the following:
 To view directions one step at a time, tap Start, then tap to see the next leg of the
trip. Tap to go back.
 To view all the directions in a list, tap , then tap List. Tap any item in the list to see a
map showing that leg of the trip.
The approximate driving time appears at the top of the screen. If traffic data is
available, the driving time is adjusted accordingly.
You can also get directions by finding a location on the map, tapping the pin that
points to it, tapping next to the name, then tapping Directions To Here or
Directions From Here.
Show or hide traffic conditions
When available, you can show highway traffic conditions on the map.
m Tap , then tap Show Traffic or Hide Traffic.
Highways are color-coded according to the flow of traffic:
If you tap Show Traffic and don’t see color-coded highways, you may need to zoom out
to a level where you can see major roads, or traffic conditions may not be available for
that area.
Switch start and end points, for reverse directions
m Tap .
If you don’t see , tap List, then tap Edit.
See recently viewed directions
m Tap in the search field, then tap Recents.
Traffic color code:
 Gray: no data currently available
 Red: less than 25 miles per hour
 Yellow: 25–50 miles per hour
 Green: more than 50 miles per hour
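The color coding above can be sketched as a small function. This is an illustration of the mapping only, not how Maps actually implements it:

```python
def traffic_color(speed_mph):
    """Map highway traffic flow to the manual's color code.
    None means no traffic data is currently available."""
    if speed_mph is None:
        return "gray"
    if speed_mph < 25:
        return "red"
    if speed_mph <= 50:
        return "yellow"
    return "green"

print(traffic_color(None))  # gray
print(traffic_color(20))    # red
print(traffic_color(40))    # yellow
print(traffic_color(65))    # green
```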
Finding and Contacting Businesses
Find businesses in an area
1 Find a location—for example, a city and state or country, or a street address—or scroll
to a location on a map.
2 Type the kind of business in the text field and tap Search.
Pins appear for matching locations. For example, if you locate your city and then type
“movies” and tap Search, pins mark movie theatres in your city.
Tap the pin that marks a business to see its name or description.
Find businesses without finding the location first
m Type things like:
 restaurants san francisco ca
 apple inc new york
Contact a business or get directions
m Tap the pin that marks a business, then tap next to the name.
From there, you can do the following:
 Depending on what information is stored for that business, you can tap an email
address to email, or web address to visit a website.
 For directions, tap Directions To Here or Directions From Here.
 To add the business to your contacts list, scroll down and tap Create New Contact or
“Add to Existing Contact.”
See a list of the businesses found in the search
m From the Map screen, tap List. Tap a business to see its location on the map. Or tap
next to a business to see its information.
[Figure: A business's Info screen, with buttons to call, get directions, visit its website, and show its contact info.]
Weather
Viewing Weather Summaries
Tap Weather from the Home screen to see the current temperature and a six-day
forecast for a city of your choice. You can store multiple cities, for quick access.
If the weather board is light blue, it’s daytime in that city—between 6:00 a.m. and
6:00 p.m. If the board is dark purple, it’s nighttime—between 6:00 p.m. and 6:00 a.m.
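The day/night rule above amounts to a simple hour check, sketched here in Python. Treating the 6:00 p.m. boundary as nighttime is an assumption; the manual doesn't say which side the boundary falls on:

```python
def board_color(hour):
    """Return the weather board color for an hour of day (0-23).
    Daytime is 6:00 a.m. up to (but not including) 6:00 p.m."""
    return "light blue" if 6 <= hour < 18 else "dark purple"

print(board_color(12))  # light blue
print(board_color(22))  # dark purple
```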
Switch to another city
m Flick left or right. The number of dots below the weather board shows how many cities
are stored.
Reorder cities
m Tap . Then drag next to a city to a new place in the list.
Add a city
1 Tap , then tap .
2 Enter a city name or zip code, then tap Search.
3 Choose a city in the search list.
Delete a city
m Tap and tap next to a city, then tap Delete.
[Figure: The Weather screen, showing the current temperature and conditions, today's high and low, the six-day forecast, the number of cities stored, and a button to add and delete cities.]
Set whether iPod touch shows the temperature in Fahrenheit or Celsius
m Tap , then tap ºF or ºC.
See information about a city at Yahoo.com
m Tap .
You can see a more detailed weather report, news and websites related to the city,
and more.
Clock
Adding and Viewing Clocks for Locations Around the World
You can add multiple clocks to show the time in major cities and time zones around
the world.
View clocks
m Tap World Clock.
If the clock face is white, it’s daytime in that city. If it’s black, it’s nighttime. If you have
more than four clocks, scroll to see them all.
Add a clock
m Tap World Clock, then tap and type the name of a city. Cities matching what you’ve
typed appear below. Tap a city to add a clock for that city.
If you don’t see the city you’re looking for, try a major city in the same time zone.
Delete a clock
m Tap World Clock and tap Edit. Then tap next to a clock and tap Delete.
Rearrange clocks
m Tap World Clock and tap Edit. Then drag next to a clock to a new place in the list.
Setting Alarm Clocks
You can set multiple alarms. Set each alarm to repeat on days you specify, or to sound
only once.
Set an alarm
m Tap Alarm and tap , then adjust any of the following settings:
 To set the alarm to repeat on certain days, tap Repeat and choose the days.
 To choose the sound that’s played when the alarm goes off, tap Sound.
 To set whether the alarm gives you the option to snooze, turn Snooze on or off.
If Snooze is on and you tap Snooze when the alarm sounds, the alarm stops and then
sounds again in ten minutes.
 To give the alarm a description, tap Label. iPod touch displays the label when the
alarm sounds.
If at least one alarm is set and turned on, appears in the status bar at the top of the
screen.
Turn an alarm on or off
m Tap Alarm and turn any alarm on or off. If an alarm is turned off, it won’t sound again
unless you turn it back on.
If an alarm is set to sound only once, it turns off automatically after it sounds. You can
turn that alarm on again to reenable it.
Change settings for an alarm
m Tap Alarm and tap Edit, then tap next to the alarm you want to change.
Delete an alarm
m Tap Alarm and tap Edit, then tap next to the alarm and tap Delete.
Using the Stopwatch
Use the stopwatch to measure time
m Tap Stopwatch. Tap Start to start the stopwatch. To record lap times, tap Lap after each
lap. Tap Stop to pause the stopwatch, then tap Start to resume. Tap Reset to reset the
stopwatch to zero.
If you start the stopwatch and go to another iPod touch application, the stopwatch
continues running in the background.
Setting the Timer
Set the timer
m Tap Timer, then flick to set the number of hours and minutes. Tap When Timer Ends to
choose the sound iPod touch makes when the timer ends. Tap Start to start the timer.
Set a sleep timer
m Set the timer, then tap When Timer Ends and choose Sleep iPod.
When you set a sleep timer, iPod touch stops playing music or videos when the
timer ends.
If you start the timer and go to another iPod touch application, the timer continues
running in the background.
Calculator
Using the Calculator
m Add, subtract, multiply, and divide, as with a standard calculator.
When you tap the add, subtract, multiply, or divide button, a white ring appears around
the button to indicate the operation to be carried out.
Using the Memory Functions
m C: Tap to clear the displayed number.
m M+: Tap to add the displayed number to the number in memory. If no number is in
memory, tap to store the displayed number in memory.
m M–: Tap to subtract the displayed number from the number in memory.
m MR/MC: Tap once to replace the displayed number with the number in memory.
Tap twice to clear the memory. If the MR/MC button has a white ring around it, there is
a number stored in memory. If zero (“0”) is displayed, tap once to see the number
stored in memory.
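The behavior of the memory keys described above can be modeled in a short Python sketch. This is a simplified illustration of the M+, M–, and MR/MC semantics only; the real calculator also manages display state:

```python
class CalcMemory:
    """Model of the calculator's memory keys."""

    def __init__(self):
        self.memory = 0.0

    def m_plus(self, displayed):
        # M+: add the displayed number to memory
        # (storing it outright if memory was empty).
        self.memory += displayed

    def m_minus(self, displayed):
        # M-: subtract the displayed number from memory.
        self.memory -= displayed

    def mr(self):
        # First tap of MR/MC: recall the stored number.
        return self.memory

    def mc(self):
        # Second tap of MR/MC: clear the memory.
        self.memory = 0.0

mem = CalcMemory()
mem.m_plus(8)
mem.m_minus(3)
print(mem.mr())  # 5.0
mem.mc()
print(mem.mr())  # 0.0
```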
Notes
Writing, Reading, and Emailing Notes
Notes are listed by date added, with the most recent note at the top. You can see the
first few words of each note in the list.
Add a note
m Tap , then type your note and tap Done.
Read or edit a note
m Tap the note. Tap anywhere on the note to bring up the keyboard and edit the note.
Tap or to see the next or previous note.
Delete a note
m Tap the note, then tap .
Email a note
m Tap the note, then tap .
To email a note, iPod touch must be set up for email (see “Setting Up Email Accounts”
on page 53).
7 Settings
Tap Settings to adjust iPod touch settings.
Settings allows you to customize iPod touch applications, set the date and time,
configure Wi-Fi connections, and enter other preferences for iPod touch.
Wi-Fi
Wi-Fi settings determine when and how iPod touch joins a Wi-Fi network.
Turn Wi-Fi on or off
m Choose Wi-Fi and turn Wi-Fi on or off.
Join a Wi-Fi network
m Choose Wi-Fi, wait a moment as iPod touch detects networks in range, then select a
network. If necessary, enter the password and tap Join. (Networks that require a
password appear with a lock icon.)
Once you’ve joined a Wi-Fi network manually, iPod touch automatically joins it
whenever the network is in range. If more than one previously used network is in
range, iPod touch joins the one last used.
When iPod touch is joined to a Wi-Fi network, the Wi-Fi icon in the status bar at the
top of the screen shows signal strength. The more bars you see, the stronger the signal.
Set iPod touch to ask if you want to join a new network
This option tells iPod touch to look for another network when you aren’t in range of a
Wi-Fi network you’ve previously joined. iPod touch displays a list of all available Wi-Fi
networks that you can choose from. (Networks that require a password appear with a
lock icon.)
m Choose Wi-Fi and turn “Ask to Join Networks” on or off. If you turn “Ask to Join
Networks” off, you can still join new networks manually.
Forget a network, so iPod touch doesn’t join it automatically
m Choose Wi-Fi and tap next to a network you’ve joined before. Then tap “Forget this
Network.”
Join a closed Wi-Fi network (an available Wi-Fi network that isn’t shown in the list of
scanned networks)
m Choose Wi-Fi > Other and enter the network name. If the network requires a password,
tap Security, select the type of security the network uses, and then tap Other Network
and enter the password.
You must already know the network name, password, and security type to connect to a
closed network.
Some Wi-Fi networks may require you to enter or adjust additional settings, such as a
client ID or static IP address. Ask the network administrator which settings to use.
Adjust settings for joining a Wi-Fi network
m Choose Wi-Fi, then tap next to the network.
Brightness
Screen brightness affects battery life. Dim the screen to extend the time before you
need to recharge iPod touch. Or use Auto-Brightness, which is designed to conserve
battery life.
Adjust the screen brightness
m Choose Brightness and drag the slider.
Set whether iPod touch adjusts screen brightness automatically
m Choose Brightness and turn Auto-Brightness on or off. If Auto-Brightness is on,
iPod touch adjusts the screen brightness for current light conditions using the built-in
ambient light sensor.
General
The General settings include date and time, security, and other settings that affect
more than one application. This is also where you can find information about your
iPod touch and reset iPod touch to its original state.
About
Choose General > About to get information about iPod touch, including:
 number of songs
 number of videos
 number of photos
 total storage capacity
 storage available
 software version
 serial number
 model number
 Wi-Fi address
 legal information
Wallpaper
You see a wallpaper background picture when you unlock iPod touch. You can select
one of the images that came with iPod touch, or use a photo you’ve synced to
iPod touch from your computer.
Set wallpaper
m Choose General > Wallpaper and choose a picture.
Date and Time
These settings apply to the time shown in the status bar at the top of the screen,
world clocks, and your calendar.
Set whether iPod touch shows 24-hour time or 12-hour time
m Choose General > Date & Time and turn 24-Hour Time on or off.
Set the time zone
m Choose General > Date & Time > Time Zone and enter your location.
Set the date and time
1 Choose General > Date & Time > Set Date & Time.
2 Tap a button to select the date or time, then use the spinners to change the setting.
Calendar Settings
Turn on calendar time zone support
m Choose General > Date & Time and turn Time Zone Support on. When Time Zone
Support is on, Calendar displays event dates and times in the time zone set for your
calendars. When Time Zone Support is off, Calendar displays events in the time zone of
your current location.
Set calendar time zone
m Choose General > Date & Time > Time Zone and enter the time zone of your calendar.
International
Use the International settings to set the language for iPod touch, turn keyboards for
different languages on and off, and set the date, time, and telephone number formats
for your region.
Set the language for iPod touch
m Choose General > International > Language, choose the language you want to use, and
tap Done.
Turn international keyboards on or off
You can change the language for your keyboard on iPod touch, or make two or more
keyboards available.
m Choose General > International > Keyboards, and turn on the keyboards you want.
If more than one keyboard is turned on, tap the keyboard symbol to switch keyboards
when you’re typing. When you tap the symbol, the name of the newly active keyboard
appears briefly.
Set date, time, and telephone number formats
m Choose General > International > Region Format, and choose your region.
Auto-Lock
Locking iPod touch turns off the display to save your battery and to prevent
unintended operation of iPod touch.
Set the amount of time before iPod touch locks
m Choose General > Auto-Lock and choose a time.
Passcode Lock
By default, iPod touch doesn’t require you to enter a passcode to unlock it.
Set a passcode
m Choose General > Passcode Lock and enter a 4-digit passcode. iPod touch then requires
you to enter the passcode to unlock it.
Turn passcode lock off
m Choose General > Passcode Lock and tap Turn Passcode Off, then enter your passcode.
Change the passcode
m Choose General > Passcode Lock and tap Change Passcode, enter the current passcode,
then enter and reenter your new passcode.
If you forget your passcode, you must restore the iPod touch software. See page 89.
Set how long before your passcode is required
m Choose General > Passcode Lock > Require Passcode, then select how long iPod touch
can be locked before you need to enter a passcode to unlock it.
Sound Effects
iPod touch can play sound effects when you:
 have an appointment
 lock or unlock iPod touch
 type on the keyboard
Turn sound effects on or off
m Choose General > Sound Effects and select whether you want sound effects to play
over the internal speaker, through the headphones, or both. Select Off to turn sound
effects off.
Keyboard
Turn auto-capitalization on or off
By default, iPod touch automatically capitalizes the next word after you type
sentence-ending punctuation or a return character.
m Choose General > Keyboard and turn Auto-Capitalization on or off.
Set whether caps lock is enabled
If caps lock is enabled and you double-tap the Shift key on the keyboard, all letters
you type are uppercase. The Shift key turns blue when caps lock is on.
m Choose General > Keyboard and turn Enable Caps Lock on or off.
Turn “.” shortcut on or off
The “.” shortcut lets you double-tap the space bar to enter a period followed by a space
when you’re typing. It is on by default.
m Choose General > Keyboard and turn “.” Shortcut on or off.
Turn international keyboards on or off
You can change the language for your keyboard on iPod touch, or make two or more
keyboards available.
m Choose General > Keyboards > International Keyboards and turn on the keyboards
you want.
If more than one keyboard is turned on, tap the keyboard symbol to switch keyboards
when you’re typing. When you tap the symbol, the name of the newly active keyboard
appears briefly.
Resetting iPod touch Settings
Reset all settings
m Choose General > Reset and tap Reset All Settings.
All your preferences and settings are reset. Data (such as your contacts and calendars)
and media (such as your songs and videos) are not deleted.
Erase all content and settings
m Choose General > Reset and tap “Erase All Content and Settings.”
All your data and media are deleted. You must sync iPod touch with your computer to
restore contacts, songs, videos, and other data and media.
Reset the keyboard dictionary
m Choose General > Reset and tap Reset Keyboard Dictionary.
You add words to the keyboard dictionary by rejecting words iPod touch suggests as
you type. Tap a word to reject the correction and add the word to the keyboard
dictionary. Resetting the keyboard dictionary erases all words you’ve added.
Reset network settings
m Choose General > Reset and tap Reset Network Settings.
When you reset network settings, your list of previously used networks is removed.
Wi-Fi is turned off and then back on (disconnecting you from any network you’re on),
and the “Ask to Join Networks” setting is turned on.
Music
The Music settings apply to songs, podcasts, and audiobooks.
Set iTunes to play songs at the same sound level
iTunes can automatically adjust the volume of songs, so they play at the same relative
volume level.
m In iTunes, choose iTunes > Preferences if you’re using a Mac, or Edit > Preferences if
you’re using a PC, then click Playback and select Sound Check.
You can set iPod touch to use the iTunes volume settings.
Set iPod touch to use the iTunes volume settings (Sound Check)
m Choose Music and turn Sound Check on.
Set audiobook play speed
You can set audiobooks to play faster than normal so you can hear them more quickly,
or slower so you can hear them more clearly.
m Choose Music > Audiobook Speed, then choose Slower, Normal, or Faster.
Use the equalizer to change the sound on iPod touch to suit a particular sound
or style
m Choose Music > EQ and choose a setting.
Set a volume limit for music and videos
m Choose Music > Volume Limit and drag the slider to adjust the maximum volume.
Tap Lock Volume Limit to assign a code to prevent the setting from being changed.
Setting a volume limit only limits the volume of music (including podcasts and
audiobooks) and videos (including rented movies), and only when headphones,
earphones, or speakers are connected to the headphones port on iPod touch.
Video
Video settings apply to video content (including rented movies). You can set
where to resume playing videos that you previously started, turn closed captioning on
or off, and set up iPod touch to play videos on your TV.
Set where to resume playing
m Choose Video > Start Playing, then select whether you want videos that you previously
started watching to resume playing from the beginning or where you left off.
Turn closed captioning on or off
m Choose Video and turn Closed Captioning on or off.
TV Out Settings
Use these settings to set up how iPod touch plays videos on your TV. For more
information about using iPod touch to play videos on your TV, see “Watching Videos on
a TV Connected to iPod touch” on page 32.
Turn widescreen on or off
m Choose Video and turn Widescreen on or off.
Set TV signal to NTSC or PAL
m Choose Video > TV Signal and select NTSC or PAL.
NTSC and PAL are TV broadcast standards. NTSC displays 480i and PAL displays 576i.
Your TV might use either of these, depending on where it was sold. If you’re not sure
which to use, check the documentation that came with your TV.
Photos
Photos settings let you specify how slideshows display your photos.
Set the length of time each slide is shown
m Choose Photos > Play Each Slide For and select the length of time.
Set transition effect
m Choose Photos > Transition and select the transition effect.
WARNING: For important information about avoiding hearing loss, see the Important
Product Information Guide at www.apple.com/support/manuals/ipod.
Set whether to repeat slideshows
m Choose Photos and turn Repeat on or off.
Set photos to appear randomly or in order
m Choose Settings > Photos and turn Shuffle on or off.
Mail
Use Mail settings to customize your email account for iPod touch. Changes you make
to account settings are not synced to your computer, allowing you to configure email
to work with iPod touch without affecting email account settings on your computer.
Account Settings
The specific account settings that appear on iPod touch depend on the type of
account you have—POP or IMAP.
Note: Microsoft Outlook 2003 or 2007 email accounts must be configured for IMAP in
order to work with iPod touch.
Stop using an account
m Choose Mail, choose an account, then turn Account off.
If an account is off, iPod touch doesn’t display the account and doesn’t send or check
email from that account, until you turn it back on.
Adjust advanced settings
m Choose Mail > Accounts, choose an account, then do one of the following:
 To set whether drafts, sent messages, and deleted messages are stored on iPod touch or
remotely on your email server (IMAP accounts only), tap Advanced and choose Drafts
Mailbox, Sent Mailbox, or Deleted Mailbox.
If you store messages on iPod touch, you can see them even when iPod touch isn’t
connected to the Internet.
 To set when deleted messages are removed permanently from iPod touch, tap Advanced
and tap Remove, then choose a time: Never, or after one day, one week, or one
month.
 To adjust email server settings, tap Host Name, User Name, or Password under
Incoming Mail Server or Outgoing Mail Server. Ask your network administrator or
Internet service provider for the correct settings.
 To adjust SSL and password settings, tap Advanced. Ask your network administrator or
Internet service provider for the correct settings.
Delete an email account from iPod touch
m Choose Mail, tap an account, then scroll down and tap Delete Account.
Deleting an email account from iPod touch doesn’t delete it from your computer.
Settings for Email Messages
iPod touch checks for and retrieves new email in your accounts whenever you open
Mail. You can also set Mail to regularly check for email and download your messages
even when you don’t have Mail open.
Set whether iPod touch checks for new messages automatically
m Choose Mail > Auto-Check, then tap Manual, “Every 15 minutes,” “Every 30 minutes,”
or “Every hour.”
If you have a Yahoo! email account, email is instantly transferred to iPod touch as it
arrives at the Yahoo! server.
Set the number of messages shown on iPod touch
m Choose Mail > Show, then choose a setting. You can choose to see the most recent 25,
50, 75, 100, or 200 messages. To download additional messages when you’re in Mail,
scroll to the bottom of your inbox and tap “Download . . . more.”
Set how many lines of each message are previewed in the message list
m Choose Mail > Preview, then choose a setting. You can choose to see anywhere from
zero to five lines of each message. That way, you can scan a list of messages in a
mailbox and get an idea of what each message is about.
Set a minimum font size for messages
m Choose Mail > Minimum Font Size, then choose Small, Medium, Large, Extra Large,
or Giant.
Set whether iPod touch shows To and Cc labels in message lists
m Choose Mail, then turn Show To/Cc Label on or off.
If Show To/Cc Label is on, a To or Cc label next to each message in a list indicates
whether the message was sent directly to you or you were Cc’ed.
Set iPod touch to confirm that you want to delete a message
m Choose Mail and turn Ask Before Deleting on or off.
If Ask Before Deleting is on, to delete a message you must tap the delete icon, then
confirm by tapping Delete.
Settings for Sending Email
Set whether iPod touch sends you a copy of every message you send
m Choose Mail, then turn Always Bcc Myself on or off.
Add a signature to your messages
You can set iPod touch to add a signature—your favorite quote, or your name, title, and
phone number, for example—that appears in every message you send.
m Choose Mail > Signature, then type a signature.
Set the default email account
When you initiate sending a message from another iPod touch application, such as
sending a photo from Photos or tapping a business’ email address in Maps, the
message is sent from your default email account.
m Choose Mail > Default Account, then choose an account.
Safari
General Settings
You can use Google or Yahoo! to perform Internet searches.
Select a search engine
m Choose Safari > Search Engine and select the search engine you want to use.
Security Settings
By default, Safari is set to show some of the features of the web, like some movies,
animation, and web applications. You may wish to turn off some of these features to
help protect iPod touch from possible security risks on the Internet.
Change security settings
m Choose Safari, then do one of the following:
 To enable or disable JavaScript, turn JavaScript on or off.
JavaScript lets web programmers control elements of the page—for example, a page
that uses JavaScript might display the current date and time or cause a linked page
to appear in a new pop-up page.
 To enable or disable plug-ins, turn Plug-ins on or off. Plug-ins allow Safari to play some
types of audio and video files and to display Microsoft Word files and Microsoft Excel
documents.
 To block or allow pop-ups, turn Block Pop-ups on or off. Blocking pop-ups stops only
pop-ups that appear when you close a page or open a page by typing its address.
It doesn’t block pop-ups that open when you click a link.
 To set whether Safari accepts cookies, tap Accept Cookies and choose Never,
“From visited,” or Always.
A cookie is a piece of information that a website puts on iPod touch so the website
can remember you when you visit again. That way, webpages can be customized for
you based on information you may have provided.
Some pages won’t work correctly unless iPod touch is set to accept cookies.
 To clear the history of webpages you’ve visited, tap Clear History.
 To clear all cookies from Safari, tap Clear Cookies.
 To clear the browser cache, tap Clear Cache.
The browser cache stores the content of pages so the pages open faster the next
time you visit them. If a page you open isn’t showing new content, clearing the
cache may help.
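To illustrate the kind of page behavior the JavaScript setting enables or disables, here is a minimal sketch of a script that builds a date-and-time string for display. This is not part of Safari or this guide; the `formatTimestamp` name and the commented element ID are hypothetical.

```javascript
// Sketch of the kind of script a webpage might run when the
// JavaScript setting is on: show the current date and time.
function formatTimestamp(date) {
  // Build a simple "YYYY-MM-DD HH:MM" string for display.
  const pad = (n) => String(n).padStart(2, "0");
  return (
    date.getFullYear() + "-" + pad(date.getMonth() + 1) + "-" +
    pad(date.getDate()) + " " + pad(date.getHours()) + ":" +
    pad(date.getMinutes())
  );
}

// In a browser, a page could then insert the string into itself,
// for example (element ID is hypothetical):
//   document.getElementById("clock").textContent = formatTimestamp(new Date());
console.log(formatTimestamp(new Date()));
```

Turning JavaScript off in Safari simply prevents scripts like this from running; the rest of the page still loads.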
Developer Settings
The Debug Console can help you resolve webpage errors. When turned on, the console
appears automatically when a webpage error occurs.
Turn the debug console on or off
m Choose Safari > Developer, and turn Debug Console on or off.
Contacts
Use Contacts settings to determine the sort and display order of your contacts.
Set the sort order
m Choose Settings > Contacts > Sort Order, and select “First, Last” or “Last, First.”
Set the display order
m Choose Settings > Contacts > Display Order, and select “First, Last” or “Last, First.”
Restoring or Transferring Your iPod touch Settings
When you connect iPod touch to your computer, settings on iPod touch are
automatically backed up to your computer. You can restore this information if you need
to—if you get a new iPod touch, for example, and want to transfer your previous
settings to it. You may also want to reset the information on iPod touch if you’re having
trouble connecting to a Wi-Fi network.
Automatically backed-up information includes notes, contact favorites, sound settings,
and other preferences.
Restore or transfer settings
Do one of the following:
m Connect a new iPod touch to the same computer you used with your other iPod touch,
open iTunes, and follow the onscreen instructions.
m Reset the information on iPod touch. In Settings, choose General > Reset, then choose
“Reset All Settings,” “Erase All Content and Settings,” or “Reset Network Settings.” Then
connect iPod touch to your computer, open iTunes, and follow the onscreen
instructions.
When you reset network settings, your list of previously used networks is removed.
Wi-Fi is turned off and then back on, disconnecting you from any network you’re on.
The Wi-Fi and “Ask to Join Networks” settings are left turned on.
Delete a set of backed-up settings
m Open iTunes and choose iTunes > Preferences (on a Mac) or Edit > Preferences (on a
PC). Then click Syncing, select an iPod touch, and click “Remove Backup.”
iPod touch doesn’t need to be connected to your computer.
A Tips and Troubleshooting
Most problems with iPod touch can be solved quickly by
following the advice in this chapter.
General Suggestions
If the screen is blank or shows a low-battery image
iPod touch is low on power and needs to charge for up to ten minutes before you can
use it. For information about charging iPod touch, see “Charging the Battery” on
page 22.
If iPod touch doesn’t appear in iTunes or you can’t sync iPod touch
 The iPod touch battery might need to be recharged. For information about charging
iPod touch, see “Charging the Battery” on page 22.
 If that doesn’t work, disconnect other USB devices from your computer and connect
iPod touch to a different USB 2.0 port on your computer (not on your keyboard).
 If that doesn’t work, turn off iPod touch and turn it on again. Press and hold the
Sleep/Wake button on top of iPod touch for a few seconds until a red slider appears,
then drag the slider. Then press and hold the Sleep/Wake button until the Apple logo
appears.
 If that doesn’t work, restart your computer and reconnect iPod touch to your
computer.
 If that doesn’t work, download and install (or reinstall) the latest version of iTunes
from www.apple.com/itunes.
If iPod touch won’t turn on, or if the display freezes or doesn’t respond
 iPod touch may need charging. See “Charging the Battery” on page 22.
 Press and hold the Home button for at least six seconds, until the application you
were using quits.
 If that doesn’t work, turn off iPod touch and turn it on again. Press and hold the
Sleep/Wake button on top of iPod touch for a few seconds until a red slider appears,
and then drag the slider. Then press and hold the Sleep/Wake button until the Apple
logo appears.
 If that doesn’t work, reset iPod touch. Press and hold both the Sleep/Wake button
and the Home button for at least ten seconds, until the Apple logo appears.
If iPod touch continues to freeze or not respond after you reset it
 Reset iPod touch settings. From the Home screen choose Settings > General > Reset
> Reset All Settings. All your preferences are reset, but your data and media are left
untouched.
 If that doesn’t work, erase all content on iPod touch. From the Home screen choose
Settings > General > Reset > “Erase All Content and Settings.” All your preferences
are reset, and all your data and media are removed from iPod touch.
 If that doesn’t work, restore the iPod touch software. See “Updating and Restoring
iPod touch Software” on page 89.
If iPod touch isn’t playing sound
 Unplug and reconnect the headphones. Make sure the connector is pushed in all
the way.
 Make sure the volume isn’t turned down all the way.
 Music on iPod touch might be paused. From the Home screen tap Music, tap Now
Playing, then tap the Play button.
 Check to see if a volume limit is set. From the Home screen choose Settings > Music
> Volume Limit. For more information, see page 79.
 Make sure you are using iTunes 7.6 or later (go to www.apple.com/itunes). Songs
purchased from the iTunes Store using earlier versions of iTunes won’t play on
iPod touch until you upgrade iTunes.
 If you are using the optional dock’s line out port, make sure your stereo or external
speakers are turned on and working properly.
If iPod touch shows a message saying “This accessory is not supported by iPod”
The accessory you attached will not work with iPod touch.
If you can’t play a song you just purchased
Your purchase may still be downloading. Close and reopen Music, then try playing the
song again.
If you can’t add or play a song, video, or other item
The media may have been encoded in a format that iPod touch doesn’t support.
The following audio file formats are supported by iPod touch. These include formats for
audiobooks and podcasting:
 AAC (M4A, M4B, M4P, up to 320 Kbps)
 Apple Lossless (a high-quality compressed format)
 MP3 (up to 320 Kbps)
 MP3 Variable Bit Rate (VBR)
 WAV
 AA (audible.com spoken word, formats 2, 3, and 4)
 AAX (audible.com spoken word, AudibleEnhanced format)
 AIFF
The following video file formats are supported by iPod touch:
 H.264 (Baseline Profile Level 3.0)
 MPEG-4 (Simple Profile)
A song encoded using Apple Lossless format has full CD-quality sound, but takes up
only about half as much space as a song encoded using AIFF or WAV format. The same
song encoded in AAC or MP3 format takes up even less space. When you import music
from a CD using iTunes, it is converted to AAC format by default.
Using iTunes for Windows, you can convert nonprotected WMA files to AAC or MP3
format. This can be useful if you have a library of music encoded in WMA format.
iPod touch does not support WMA, MPEG Layer 1, MPEG Layer 2 audio files,
or audible.com format 1.
If you have a song or video in your iTunes library that isn’t supported by iPod touch,
you may be able to convert it to a format iPod touch supports. See iTunes Help for
more information.
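As a rough illustration of the format lists above, the sketch below maps common file extensions to the supported categories. This helper is hypothetical, not an Apple tool; an extension is only a hint (actual support depends on how the file is encoded), and the video extensions shown are assumptions, since the guide lists codecs rather than extensions.

```javascript
// Hypothetical helper: guess from a filename whether iPod touch is
// likely to support the file, based on the format lists above.
// Extensions are only a hint; actual support depends on the codec,
// bit rate, and profile of the audio or video inside the file.
const supportedAudio = ["m4a", "m4b", "m4p", "mp3", "wav", "aa", "aax", "aiff", "aif"];
// The guide lists video codecs (H.264, MPEG-4), not extensions;
// these common container extensions are assumptions for illustration.
const supportedVideo = ["m4v", "mp4", "mov"];

function likelySupported(filename) {
  const ext = filename.toLowerCase().split(".").pop();
  if (supportedAudio.includes(ext)) return "audio";
  if (supportedVideo.includes(ext)) return "video";
  return "unsupported"; // e.g. WMA files must be converted in iTunes first
}
```

For example, `likelySupported("song.wma")` returns "unsupported", matching the note above that unprotected WMA files must first be converted to AAC or MP3 using iTunes for Windows.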
If you can’t remember your passcode
You must restore the iPod touch software. See “Updating and Restoring iPod touch
Software” on page 89.
If you entered contacts on iPod touch that you don’t want to sync to your computer
Replace contacts on iPod touch with information from your computer.
1 Open iTunes.
2 As you connect iPod touch to your computer, press and hold Command-Option
(if you’re using a Mac) or Shift-Control (if you’re using a PC) until you see iPod touch in
the iTunes source list on the left. This prevents iPod touch from syncing automatically.
3 Select iPod touch in the iTunes source list and click the Info tab.
4 Under “Replace information on this iPod,” select Contacts. You can select more
than one.
5 Click Apply.
The contacts on iPod touch are replaced with the contacts on your computer. The next
time you sync, iPod touch syncs normally, adding data you’ve entered on iPod touch to
your computer, and vice versa.
If you can’t sync with Yahoo! Address Book
iTunes may not be able to connect to Yahoo!. Make sure your computer is connected to
the Internet and that you’ve entered the correct Yahoo! ID and password in iTunes.
Connect iPod touch to your computer, click the Info tab in iTunes, select “Sync Yahoo!
Address Book contacts,” then enter your current Yahoo! ID and password.
If contacts you deleted on iPod touch or your computer are not removed from
Yahoo! Address Book after syncing
Yahoo! Address Book does not allow contacts containing a Messenger ID to be deleted
through syncing. To delete a contact containing a Messenger ID, log in to your online
Yahoo! account and delete the contact using Yahoo! Address Book.
If you can’t access the iTunes Wi-Fi Music Store
To use the iTunes Wi-Fi Music Store, iPod touch must join a Wi-Fi network that is
connected to the Internet. For information about joining a Wi-Fi network, see page 21.
The iTunes Wi-Fi Music Store is not available in all countries.
If you can’t purchase music from the iTunes Wi-Fi Music Store
To purchase songs from the iTunes Wi-Fi Music Store (only available in some countries),
you must have an iTunes Store account, and you must have been signed in to that
account when you last synced iPod touch with iTunes. If you get a message that no
account information is found when you try to purchase music, open iTunes, sign in to
your iTunes Store account, and then connect and sync iPod touch.
Updating and Restoring iPod touch Software
You can use iTunes to update or restore iPod touch software. You should always update
iPod touch to use the latest software. You can also restore the software, which puts
iPod touch back to its original state.
 If you update, the iPod touch software is updated but your settings and songs are not
affected.
 If you restore, all data is erased from iPod touch, including songs, videos, contacts,
photos, calendar information, and any other data. All iPod touch settings are restored
to their original state.
Update or restore iPod touch
1 Make sure you have an Internet connection and have installed the latest version of
iTunes from www.apple.com/itunes.
2 Connect iPod touch to your computer.
3 In iTunes, select iPod touch in the source list and click the Summary tab.
4 Click “Check for Update.” iTunes tells you if there’s a newer version of the iPod touch
software available.
5 Click Update to install the latest version of the software. Or click Restore to restore
iPod touch to its original settings and erase all data and media on iPod touch. Follow
the onscreen instructions to complete the restore process.
Using iPod touch Accessibility Features
The following features may make it easier for you to use iPod touch if you have a
disability.
Closed captioning
When available, you can turn on closed captioning for videos. See “Turn closed
captioning on or off” on page 80.
Minimum font size for Mail messages
Set a minimum font size for Mail message text to Large, Extra Large, or Giant to
increase readability. See “Set a minimum font size for messages” on page 82.
Zooming
Double-tap or pinch webpages, photos, and maps to zoom in. See page 18.
Universal Access in Mac OS X
Take advantage of the Universal Access features in Mac OS X when you use iTunes to
sync information and content from your iTunes library to iPod touch. In the Finder,
choose Help > Mac Help, then search for “universal access.”
For more information about iPod touch and Mac OS X accessibility features, go to:
www.apple.com/accessibility
B Learning More, Service,
and Support
There’s more information about using iPod touch,
in onscreen help and on the web.
The following table describes where to get more iPod-related software and service
information.
Using iPod touch safely: Go to www.apple.com/support/manuals/ipod for the latest
Important Product Information Guide, including any updates to the safety and
regulatory information.
iPod touch support, tips, forums, and Apple software downloads: Go to
www.apple.com/support/ipodtouch.
The latest information about iPod touch: Go to www.apple.com/ipodtouch.
Using iTunes: Open iTunes and choose Help > iTunes Help. For an online iTunes
tutorial (available in some areas only), go to www.apple.com/support/itunes.
Using iPhoto in Mac OS X: Open iPhoto and choose Help > iPhoto Help.
Using Address Book in Mac OS X: Open Address Book and choose Help > Address Book
Help.
Using iCal on Mac OS X: Open iCal and choose Help > iCal Help.
Microsoft Outlook, Windows Address Book, Adobe Photoshop Album, and Adobe
Photoshop Elements: See the documentation that came with those applications.
Finding your iPod touch serial number: Look at the back of your iPod touch or
choose Settings > General > About from the Home screen.
Obtaining warranty service: First follow the advice in this guide and online
resources. Then go to www.apple.com/support or see the Important Product
Information Guide that comes with iPod touch.
Apple Inc.
© 2008 Apple Inc. All rights reserved.
Apple, the Apple logo, AirPort, Cover Flow, iCal, iPhoto,
iPod, iTunes, Mac, Macintosh, and Mac OS are
trademarks of Apple Inc., registered in the U.S. and other
countries. Finder, Safari, and Shuffle are trademarks of
Apple Inc. .Mac is a service mark of Apple Inc., registered
in the U.S. and other countries. iTunes Store is a service
mark of Apple Inc. Adobe and Photoshop are
trademarks or registered trademarks of Adobe Systems
Incorporated in the U.S. and/or other countries. Other
company and product names mentioned herein may be
trademarks of their respective companies.
Mention of third-party products is for informational
purposes only and constitutes neither an endorsement
nor a recommendation. Apple assumes no responsibility
with regard to the performance or use of these
products. All understandings, agreements, or warranties,
if any, take place directly between the vendors and the
prospective users. Every effort has been made to ensure
that the information in this manual is accurate. Apple is
not responsible for printing or clerical errors.
The product described in this manual incorporates
copyright protection technology that is protected by
method claims of certain U.S. patents and other
intellectual property rights owned by Macrovision
Corporation and other rights owners. Use of this
copyright protection technology must be authorized by
Macrovision Corporation and is intended for home and
other limited viewing uses only unless otherwise
authorized by Macrovision Corporation. Reverse
engineering or disassembly is prohibited.
Apparatus Claims of U.S. Patent Nos. 4,631,603, 4,577,216,
4,819,098 and 4,907,093 licensed for limited viewing
uses only.
019-1215/2008-03
Index
12-hour time 76
24-hour time 76
A
accessibility features 90
accounts
default email 83
email 81
Address Book 8, 58, 91
See also contacts
address field, erasing text 45
Adobe Photoshop Album 8, 34, 91
Adobe Photoshop Elements 8, 34, 91
alarms
deleting 71
setting 71
status icon 11
turning on or off 71
album covers 28
album tracks 29
alerts
calendar 51
alerts, turning on or off 78
alternate audio language 31
ambient light sensor 75
AOL free email account 53
attachments
email 57
audio, alternate language 31
audiobooks
play speed 79
syncing 5
See also music
audio file formats, supported 88
Auto-Brightness 75
auto-capitalization, turning on or off 78
auto-lock, setting time for 77
AV cables 32
B
battery
charging 22
low on power 23, 86
replacing 23
status icon 11
bookmarking
map locations 66
webpages 49
YouTube videos 61
bookmarks, syncing 9, 50
brightness, adjusting 75
browser cache, clearing 84
browsing
album covers 28
YouTube videos 60
browsing iTunes Wi-Fi Music Store 39
businesses, finding 68
C
cable, Dock Connector to USB 11
cache, clearing browser 84
Calculator 72
Calendar
about 50
settings 52
views 51
See also events
calendars, syncing 8, 50
capitalization, automatic 78
caps lock, enabling 78
Cc 82, 83
charging battery 22
cleaning iPod touch 11, 23
Clock 70
closed captioning, turning on or off 80
cloth, polishing 11
Component AV cable 32
Composite AV cable 32
computer requirements 4
connecting to Internet 21
contacts
adding and editing 59
adding email recipient 57
adding from Maps 66
assigning photo to 38
entering 58
seeing location of 66
settings 84
syncing 8
Yahoo! Address Book 8
controls, using 16
converting unprotected WMA files 88
cookies 84
Cover Flow 28
current approximate location 64, 66
cursor. See insertion point
D
date and time, setting 76
date format 77
Debug Console 84
deleting
alarms 71
all contents and settings 79
calendar events 51
clocks 70
contacts 60
email account 81
email messages 56
notes 73
songs from a playlist 29
videos 32
Yahoo! Address Book contacts 8
developer settings 84
directions, getting 66
disconnecting iPod touch from computer 9
display freezes 87
displaying playback controls 27
Dock Connector to USB cable 11
downloading songs from iTunes Wi-Fi Music
Store 43
drafts, email 55
dropped pin 65
E
editing text 20
effects sounds, turning on or off 78
email accounts
free 53
setting up 54
syncing 8
Entourage. See Microsoft Entourage
equalizer 79
erasing all content and settings 79
events, calendar 50
Exchange email accounts 54
F
file formats, supported 57, 88
forecast. See weather
formats
date, time, and telephone number 77
music and video 25
forwarding messages 55
G
general settings 75
getting help 91
getting started 4
Google
free email account 53
Google search engine 47, 83
H
headphones 11
help, getting 91
Home screen 12, 16
adding Web Clips 49
customizing 13
hybrid view 66
I
iCal 8, 91
icons
status 11
IMAP email accounts 54
information about iPod touch 75
insertion point, positioning 20
international keyboards 77, 78
Internet, connecting to 21
iPhoto 8, 34, 91
iTunes
getting help 91
iPod touch doesn’t appear in 86
iTunes Store account 44
iTunes Wi-Fi Music Store 39
browsing 39
K
keyboard dictionary, resetting 79
keyboards
international 77
typing on 19
L
landscape orientation
viewing photos 36
viewing webpages 46
legal information 76
light sensor 75
links
in email 58
links on webpages 45
location. See Maps
locking iPod touch 11, 15
lyrics, displaying 26
M
.Mac account 54
Mac OS X Address Book 58
Mac system requirements 4
Mail 53
account setup 81
adding recipient to contacts 57
attachments 57
Cc 82, 83
checking for new messages 57, 82
default email account 83
deleting email account 81
deleting messages 56
forwarding messages 55
links 58
marking messages as unread 57
organizing email 57
password settings 81
reading messages 56
replying to messages 55
resizing text column 58
saving drafts 55
seeing recipients 57
sending messages 54, 83
sending photos 55
sending webpage addresses 46
settings 81
signatures 83
storing email on iPod touch or server 81
syncing email account settings 8, 54
Yahoo! email account 9, 82
zooming in a message 58
Maps
adding location to a contact 66
bookmarking location 66
current approximate location 64, 66
dropped pin 65
finding businesses 68
finding location 64
getting directions 66
hybrid view 66
satellite view 66
seeing location of a contact 66
traffic conditions 67
zooming 65
mic button 26
Microsoft Entourage 8, 58
Microsoft Excel 83
Microsoft Internet Explorer 50
Microsoft Outlook 8, 58, 89, 91
Microsoft Word 83
model number 76
movies
rented 8
movies, rented 31
music
lyrics 26
managing manually 7
playing 25
previewing 42
purchasing 42
syncing 5, 7, 24
transferring purchased content 25
music settings 79
N
navigating. See panning, scrolling
networks 74
network settings, resetting 79
Notes 73
NTSC 80
O
on-the-go playlists 29
orientation, changing 36
Outlook. See Microsoft Outlook
Outlook Express. See Windows Address Book
P
PAL 80
panning
maps 65
panning photos 36
passcode 77, 88
PC system requirements 4
photo albums 37
Photos
assigning photos to contacts 38
changing size or orientation of photos 36
emailing photos 37
playing music during slideshow 37
sending photos in email 55
settings 37, 80
syncing 34
using photos as wallpaper 37
viewing slideshows 37
zooming photos 36
photos, syncing 8
playback controls, displaying 27
playing music and video 25
playlists, on-the-go 29
play speed, audiobooks 79
plug-ins 83
podcasts
syncing 7, 24
transferring purchased content 25
See also music
POP email accounts 54
pop-ups 83
power, low 23, 86
power adapter 11, 22
previewing music 42
problems. See troubleshooting
purchased music, syncing 43
purchasing music 39, 42
R
reading email 56
rechargeable batteries 23
rented movies 8, 31
repeating songs 27
replacing battery 23
replying to messages 55
requirements for using iPod touch 4
resetting network settings 79
resizing webpage columns 47
restoring iPod touch software 89
S
Safari
clearing cache 84
cookies 84
Debug Console 84
developer settings 84
erasing text in address field 45
Home screen Web Clips 49
navigating 46
opening webpages 45, 48
plug-ins 83
pop-ups 83
reloading webpages 46
resizing columns to fit screen 47
searching the web 47
security 83
sending webpage addresses in email 46
settings 81
stopping webpages from loading 46
typing in text fields 48
zooming webpages 46
satellite view 66
screen
adjusting brightness 75
using 16
scrolling
about 16
maps 65
webpages 47
search engine 83
searching
iTunes Wi-Fi Music Store 39
the web 47
YouTube videos 60
security
setting passcode for iPod touch 77
web 83
sending
email 54, 83
photos from Photos 37
serial number, finding 76, 91
settings
alarms 71
alerts 51
auto-lock 77
brightness 75
Calendar 51
calendar 52
contacts 84
date and time 52, 76
deleting 85
developer 84
email account 8, 54, 81
email server 81
general 75
international 76
keyboard 78
language 76
Mail 54, 81, 83
music 79
passcode lock 77
Photos 37, 80
resetting 78
restoring 84
Safari 47, 81
screen brightness 75
security 83
slideshow 37
sound 51
sound effects 78
sync 6
temperature 70
time zone 76
transferring 84
TV out 80
video 80
wallpaper 37, 76
Wi-Fi 74
shuffling songs 27
signatures, email 83
sleep. See locking iPod touch
sleep timer 32
slideshows 37
slideshow settings 80
software
getting help 91
updating and restoring 89
software version 76
songs. See music
sound
adjusting volume 26
no sound 87
setting limit 79
Sound Check 79
sound effects settings 78
sounds
calendar alert 51
turning on or off 78
SSL 81
stand 11
Starbucks, browsing and purchasing music 41
status icons 11
stock information, Yahoo! 63
Stocks, adding and deleting quotes 63
stopwatch, using 71
storage capacity 75
subtitles 31
support information 91
surfing the web 45
syncing
“Sync in progress” message 9
calendars 50
email account settings 54
photos 34
preventing 9, 88
setting up 6
webpage bookmarks 50
Yahoo! Address Book 89
syncing purchased songs 43
system requirements 4
T
telephone number format 77
temperature. See Weather
text, typing in webpages 48
time, setting 76
time format 77
timer
setting 72
sleep 72
time zone support 52, 76
touchscreen, using 16
traffic conditions, checking 67
transferring purchased content 25, 43
transition effects, setting 80
troubleshooting
can’t remember passcode 88
display freezes 87
iPod touch doesn’t appear in iTunes 86
iPod touch doesn’t respond 87
iPod touch doesn’t turn on 87
no sound 87
preventing syncing 88
problems playing songs or other content 88
software update and restore 89
turning iPod touch on or off 15
TV out settings 80
TV signal settings 80
typing
about 19
in webpage text fields 48
U
unlocking iPod touch 15
unread messages, marking 57
unsupported audio file formats 88
updating iPod touch software 89
USB
cable 11
power adapter 11, 22
V
videos
alternate audio language 31
deleting 32
playing 25
subtitles 31
syncing 7, 24
transferring purchased content 25
watching on a TV 32
See also music, YouTube
video settings 80
volume
adjusting 26
setting limit 79
W
waking iPod touch 15
wallpaper
choosing 76
settings 37
using photo as 37
warranty service 91
watching videos on a TV 32
Weather
adding cities 69
deleting cities 69
temperature settings 70
viewing 69
weather information, Yahoo! 70
web. See Safari
Web Clips, adding to Home screen 49
Wi-Fi
forgetting a network 75
joining networks 21, 74
settings 74
status icon 11
turning on or off 74
Wi-Fi address 76
Windows Address Book 8, 58, 91
WMA files, converting 88
World Clock 70
Y
Y! Mail account 54
Yahoo!
Address Book 8, 89
email accounts 9, 82
free email account 53
search engine 83
searching using 47
stock information 63
syncing email accounts 9
weather information 70
Y! Mail accounts 54
Yahoo! Address Book 58
YouTube
bookmarking videos 61
browsing videos 60
playing videos 61
searching for videos 60
Z
zooming
email messages 58
maps 65
photos 36
webpages 46
Apple Wireless Mighty Mouse
1 Setting Up Your Wireless Mighty Mouse
Congratulations on selecting the wireless Mighty Mouse as
your input device.
Using the Wireless Mighty Mouse
Follow the steps on the following pages to install batteries in your mouse, set up your
Mac, and use Setup Assistant to set up your mouse with your Mac.
Important: Don’t turn on your mouse until just before you are ready to start up your
Mac in step 3.
Step 1: Installing the Batteries
Follow the instructions below to install batteries in your wireless Mighty Mouse. You
can install either one or both of the nonrechargeable AA lithium batteries that came
with your mouse (see “About Your Batteries” on page 7 for more information).
To install batteries in your mouse:
1 Turn the mouse over and remove the bottom cover.
2 Slide the batteries into the battery compartment as shown in the illustration.
3 Replace the bottom cover and leave the mouse turned off.
Step 2: Setting Up Your Mac
Follow the instructions in the user’s guide that came with your Mac to set it up.
Because you have a wireless mouse, skip the instructions for connecting a USB mouse.
Wait to start up your Mac until instructed to do so in step 3.
Slide the switch
up to turn the
mouse off.
Push the latch
down to remove
the bottom cover.
Insert one or both AA batteries
with the positive (+) end up.
Step 3: Pairing Your Mouse
Before you can use your wireless Mighty Mouse, you have to pair it with your Mac.
Pairing allows your mouse and Mac to communicate wirelessly with each other. You
only have to pair them once.
The first time you start up your Mac, Setup Assistant guides you in setting up your
wireless Mighty Mouse and pairing it with your Mac.
To pair your mouse and your Mac:
1 Slide the switch down to turn the mouse on.
The laser used by the Mighty Mouse is not visible, but a small green indicator light on
the bottom of the mouse blinks when the mouse is on and the batteries are charged.
2 Turn on your Mac.
3 When your Mac starts up, follow the onscreen instructions in Setup Assistant.
Slide the switch down to
turn the mouse on.
The indicator light shows
that the mouse is on.
Using Your Mighty Mouse
The Mighty Mouse has laser tracking technology, so you can use it on most surfaces.
The Mighty Mouse comes with left and right buttons, a scroll ball (which can be
clicked) and a button on either side. To use your Mighty Mouse:
 Click the left or right button.
 Press the side buttons.
 Click or roll the scroll ball.
Either the left or right button can function as the primary button. Use the primary
button to click, double-click, and drag items. Either button can also function as the
secondary button. Use the secondary button to display an item’s shortcut menu. You
can assign a function to the side buttons, which work together as a single button, and
to the scroll ball, which also functions as a button.
Scroll ball (button)
Left button
Side button Side button
Right button
Customizing Your Mighty Mouse
Use the Mouse pane of Keyboard & Mouse preferences to change the way your Mighty
Mouse works.
To customize your mouse:
1 Choose Apple () > System Preferences.
2 Click Keyboard & Mouse.
3 Click Mouse.
Use the pop-up menu to assign an action to each button. You can set any of the
buttons to activate Dashboard, Exposé, Spotlight, switch applications, or open
applications. You can enable or disable scrolling and screen zoom, and adjust the
speed for tracking, scrolling, and double-clicking. You can also activate screen zoom by
simultaneously pressing a key on the keyboard and scrolling.
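These settings can also be inspected from the command line. The sketch below is a hypothetical config fragment, not a documented interface: the preference domain `com.apple.driver.AppleHIDMouse` and the `Button1`–`Button4` keys are assumptions based on common usage, and the Keyboard & Mouse pane remains the supported way to change them.

```shell
# Hypothetical sketch: view and change Mighty Mouse button mappings
# with the defaults tool. The domain and key names below are
# assumptions, not a documented API; prefer Keyboard & Mouse preferences.

# Show any mouse button preferences already set for this user.
defaults read com.apple.driver.AppleHIDMouse

# Assign the secondary (right-click) action to the right button.
# Here the assumed convention is: key Button2 = right button,
# integer value 2 = secondary click.
defaults write com.apple.driver.AppleHIDMouse Button2 -int 2
```

Changes written this way are typically picked up the next time you log in; the preference pane applies them immediately.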
More Information
For more information about using your wireless Mighty Mouse, open Mac Help and
search for “Mighty Mouse.”
Renaming Your Mouse
Your Mac automatically gives your wireless mouse a unique name the first time it’s
paired. You can rename your mouse using Keyboard & Mouse preferences. Choose
Apple () > System Preferences and click Keyboard & Mouse. Click the Bluetooth® tab
and enter a name in the Name field.
Cleaning Your Mouse and Scroll Ball
Follow these guidelines to clean the outside of your mouse and the scroll ball:
 Remove the batteries.
 Use a lint-free cloth that’s been lightly moistened with water to clean the mouse
exterior and the scroll ball.
 Don’t get moisture in any openings. Don’t use aerosol sprays, solvents, or abrasives.
If your mouse stops scrolling or if scrolling becomes rough, clean the mouse scroll ball.
Rotate the ball while cleaning for complete coverage. If scrolling still feels rough, hold
the mouse upside down and roll the ball vigorously while cleaning it to help dislodge any
particles that may have collected.
About Your Batteries
Your Mighty Mouse comes with two nonrechargeable AA lithium batteries. Lithium
batteries provide longer battery life, but you can also use alkaline or rechargeable AA
batteries. Your mouse works with either one or two batteries installed. To reduce the
weight of your mouse, install one battery; to extend the time between battery
replacements, install two.
WARNING: When you replace the batteries, replace them all at the same time. Also,
don’t mix old batteries with new batteries or mix battery types (for example, don’t mix
alkaline and lithium batteries). Don’t open or puncture the batteries, install them
backwards, or expose them to fire, high temperatures, or water. Don’t charge the
nonrechargeable AA lithium batteries that came with your mouse. Keep batteries out
of the reach of children.
Battery Disposal
Dispose of batteries according to your local environmental laws and guidelines.
Battery Indicator
You can use Keyboard & Mouse preferences to check the battery level. Choose
Apple () > System Preferences. Click Keyboard & Mouse and click Bluetooth.
Note: To conserve battery power, turn your mouse off when you are not using it. If you
are not planning to use your mouse for an extended period, remove the batteries.
Ergonomics
For information about ergonomics, health, and safety, visit the Apple ergonomics
website at www.apple.com/about/ergonomics.
Support
For support and troubleshooting information, user discussion boards, and the latest
Apple software downloads, go to www.apple.com/support.
Regulatory Compliance Information
Compliance Statement
This device complies with part 15 of the FCC rules.
Operation is subject to the following two conditions: (1)
This device may not cause harmful interference, and (2)
this device must accept any interference received,
including interference that may cause undesired
operation. See instructions if interference to radio or
television reception is suspected.
Radio and Television Interference
The equipment described in this manual generates,
uses, and can radiate radio-frequency energy. If it is not
installed and used properly—that is, in strict accordance
with Apple’s instructions—it may cause interference
with radio and television reception.
This equipment has been tested and found to comply
with the limits for a Class B digital device in accordance
with the specifications in Part 15 of FCC rules. These
specifications are designed to provide reasonable
protection against such interference in a residential
installation. However, there is no guarantee that
interference will not occur in a particular installation.
You can determine whether your computer system is
causing interference by turning it off. If the interference
stops, it was probably caused by the computer or one of
the peripheral devices.
If your computer system does cause interference to
radio or television reception, try to correct the
interference by using one or more of the following
measures:
● Turn the television or radio antenna until the
interference stops.
● Move the computer to one side or the other of the
television or radio.
● Move the computer farther away from the television or
radio.
● Plug the computer into an outlet that is on a different
circuit from the television or radio. (That is, make
certain the computer and the television or radio are on
circuits controlled by different circuit breakers or
fuses.)
If necessary, consult an Apple Authorized Service
Provider or Apple. See the service and support
information that came with your Apple product. Or,
consult an experienced radio or television technician for
additional suggestions.
Important: Changes or modifications to this product
not authorized by Apple Inc., could void the FCC
compliance and negate your authority to operate the
product. This product was tested for FCC compliance
under conditions that included the use of Apple
peripheral devices and Apple shielded cables and
connectors between system components. It is important
that you use Apple peripheral devices and shielded
cables and connectors between system components to
reduce the possibility of causing interference to radios,
television sets, and other electronic devices. You can
obtain Apple peripheral devices and the proper shielded
cables and connectors through an Apple-authorized
dealer. For non-Apple peripheral devices, contact the
manufacturer or dealer for assistance.
Responsible party (contact for FCC matters only):
Apple Inc., Product Compliance
1 Infinite Loop M/S 26-A
Cupertino, CA 95014-2084
Industry Canada Statements
Complies with the Canadian ICES-003 Class B
specifications. Cet appareil numérique de la classe B est
conforme à la norme NMB-003 du Canada. This device
complies with RSS 210 of Industry Canada.
This Class B device meets all requirements of the
Canadian interference-causing equipment regulations.
Cet appareil numérique de la Class B respecte toutes les
exigences du Règlement sur le matériel brouilleur du
Canada.
European Compliance Statement
This product complies with the requirements of
European Directives 72/23/EEC, 89/336/EEC, and
1999/5/EC.
Bluetooth Europe–EU Declaration of
Conformity
This wireless device complies with the specifications EN
300 328, EN 301-489, EN 50371, and EN 60950 following
the provisions of the R&TTE Directive.
Mighty Mouse Class 1 Laser Information
The Mighty Mouse is a Class 1 laser product in
accordance with IEC 60825-1 A1 A2 and 21 CFR 1040.10
and 1040.11 except for deviations pursuant to Laser
Notice No. 50, dated July 26, 2001.
Caution: Modification of this device may result in
hazardous radiation exposure. For your safety, have this
equipment serviced only by an Apple Authorized
Service Provider.
A Class 1 laser is safe under reasonably foreseeable
conditions per the requirements in IEC 60825-1 AND 21
CFR 1040. However, it is recommended that you do not
direct the laser beam at anyone’s eyes.
Korea MIC Statement
Korea Statements
Singapore Wireless Certification
Taiwan Wireless Statement
Taiwan Class B Statement
VCCI Class B Statement
Apple and the Environment
Apple Inc. recognizes its responsibility to minimize the
environmental impacts of its operations and products.
More information is available on the web at:
www.apple.com/environment
Disposal and Recycling Information
When this product reaches its end of life, please dispose
of it according to your local environmental laws and
guidelines.
For information about Apple’s recycling programs, visit:
www.apple.com/environment/recycling
Battery Disposal Information
Dispose of batteries according to your local
environmental laws and guidelines.
Deutschland: This device contains batteries. They do
not belong in household waste. You can return used
batteries free of charge to a retailer or to your local
municipal collection point. To avoid short circuits,
tape over the battery terminals as a precaution.
Nederlands: Used batteries can be handed in to the
mobile chemical-waste collection service or deposited
in a special battery container for small chemical
waste (kca).
Taiwan:
European Union—Disposal Information
The symbol above means that according to local laws
and regulations your product should be disposed of
separately from household waste. When this product
reaches its end of life, take it to a collection point
designated by local authorities. Some collection points
accept products for free. The separate collection and
recycling of your product at the time of disposal will
help conserve natural resources and ensure that it is
recycled in a manner that protects human health and
the environment.
© 2007 Apple Inc. All rights reserved. Apple, the Apple
logo, Exposé, Mac, and Mac OS are trademarks of Apple
Inc., registered in the U.S. and other countries. Spotlight
is a trademark of Apple Inc.
Mighty Mouse © Viacom International Inc. All rights
reserved. The Mighty Mouse trademark is used under
license.
The Bluetooth® word mark and logos are registered
trademarks owned by Bluetooth SIG, Inc. and any use of
such marks by Apple is under license.
www.apple.com
Printed in XXXX
*1Z034-4321-A*
Advanced Memory Management Programming Guide

Contents
About Memory Management 4
At a Glance 4
Good Practices Prevent Memory-Related Problems 5
Use Analysis Tools to Debug Memory Problems 6
Memory Management Policy 7
Basic Memory Management Rules 7
A Simple Example 8
Use autorelease to Send a Deferred release 8
You Don’t Own Objects Returned by Reference 9
Implement dealloc to Relinquish Ownership of Objects 10
Core Foundation Uses Similar but Different Rules 11
Practical Memory Management 12
Use Accessor Methods to Make Memory Management Easier 12
Use Accessor Methods to Set Property Values 13
Don’t Use Accessor Methods in Initializer Methods and dealloc 14
Use Weak References to Avoid Retain Cycles 15
Avoid Causing Deallocation of Objects You’re Using 16
Don’t Use dealloc to Manage Scarce Resources 17
Collections Own the Objects They Contain 18
Ownership Policy Is Implemented Using Retain Counts 19
Using Autorelease Pool Blocks 20
About Autorelease Pool Blocks 20
Use Local Autorelease Pool Blocks to Reduce Peak Memory Footprint 21
Autorelease Pool Blocks and Threads 23
Document Revision History 24
2012-07-17 | © 2012 Apple Inc. All Rights Reserved.
Figures
Practical Memory Management 12
Figure 1 An illustration of cyclical references 15
Application memory management is the process of allocating memory during your program’s runtime, using
it, and freeing it when you are done with it. A well-written program uses as little memory as possible. In
Objective-C, it can also be seen as a way of distributing ownership of limited memory resources among many
pieces of data and code. When you have finished working through this guide, you will have the knowledge
you need to manage your application’s memory by explicitly managing the life cycle of objects and freeing
them when they are no longer needed.
Although memory management is typically considered at the level of an individual object, your goal is actually
to manage object graphs. You want to make sure that you have no more objects in memory than you actually
need.
(Figure: object lifetimes under retain counting — an object is created with alloc/init at retain count 1; retain and copy messages from other objects (Class A, Class B, Class C) raise the count or duplicate ownership; each release decrements the count, and an object is destroyed when its count reaches 0.)
At a Glance
Objective-C provides two methods of application memory management.
1. In the method described in this guide, referred to as “manual retain-release” or MRR, you explicitly manage
memory by keeping track of objects you own. This is implemented using a model, known as reference
counting, that the Foundation class NSObject provides in conjunction with the runtime environment.
2. In Automatic Reference Counting, or ARC, the system uses the same reference counting system as MRR,
but it inserts the appropriate memory management method calls for you at compile time. You are strongly
encouraged to use ARC for new projects. If you use ARC, there is typically no need to understand the
underlying implementation described in this document, although it may in some situations be helpful.
For more about ARC, see Transitioning to ARC Release Notes.
If you plan on writing code for iOS, you must use explicit memory management (the subject of this guide).
Further, if you plan on writing library routines, plug-ins, or shared code—code that might be loaded into either
a garbage-collection or non-garbage-collection process—you want to write your code using the
memory-management techniques described throughout this guide. (Make sure that you then test your code
in Xcode, with garbage collection disabled and enabled.)
Good Practices Prevent Memory-Related Problems
There are two main kinds of problem that result from incorrect memory management:
● Freeing or overwriting data that is still in use
This causes memory corruption, and typically results in your application crashing, or worse, corrupted user
data.
● Not freeing data that is no longer in use causes memory leaks
A memory leak is where allocated memory is not freed, even though it is never used again. Leaks cause
your application to use ever-increasing amounts of memory, which in turn may result in poor system
performance or (in iOS) your application being terminated.
Thinking about memory management from the perspective of reference counting, however, is frequently
counterproductive, because you tend to consider memory management in terms of the implementation details
rather than in terms of your actual goals. Instead, you should think of memory management from the perspective
of object ownership and object graphs.
Cocoa uses a straightforward naming convention to indicate when you own an object returned by a method.
See “Memory Management Policy” (page 7).
Although the basic policy is straightforward, there are some practical steps you can take to make managing
memory easier, and to help to ensure your program remains reliable and robust while at the same time
minimizing its resource requirements.
See “Practical Memory Management” (page 12).
Autorelease pool blocks provide a mechanism whereby you can send an object a “deferred” release message.
This is useful in situations where you want to relinquish ownership of an object, but want to avoid the possibility
of it being deallocated immediately (such as when you return an object from a method). There are occasions
when you might use your own autorelease pool blocks.
See “Using Autorelease Pool Blocks” (page 20).
Use Analysis Tools to Debug Memory Problems
To identify problems with your code at compile time, you can use the Clang Static Analyzer that is built into
Xcode.
If memory management problems do nevertheless arise, there are other tools and techniques you can use to
identify and diagnose the issues.
● Many of the tools and techniques are described in Technical Note TN2239, iOS Debugging Magic, in
particular the use of NSZombie to help find over-released objects.
● You can use Instruments to track reference counting events and look for memory leaks. See “Collecting
Data on Your App”.
Memory Management Policy

The basic model used for memory management in a reference-counted environment is provided by a
combination of methods defined in the NSObject protocol and a standard method naming convention. The
NSObject class also defines a method, dealloc, that is invoked automatically when an object is deallocated.
This article describes all the basic rules you need to know to manage memory correctly in a Cocoa program,
and provides some examples of correct usage.
Basic Memory Management Rules
The memory management model is based on object ownership. Any object may have one or more owners.
As long as an object has at least one owner, it continues to exist. If an object has no owners, the runtime system
destroys it automatically. To make sure it is clear when you own an object and when you do not, Cocoa sets
the following policy:
● You own any object you create
You create an object using a method whose name begins with “alloc”, “new”, “copy”, or “mutableCopy”
(for example, alloc, newObject, or mutableCopy).
● You can take ownership of an object using retain
A received object is normally guaranteed to remain valid within the method it was received in, and that
method may also safely return the object to its invoker. You use retain in two situations: (1) In the
implementation of an accessor method or an init method, to take ownership of an object you want to
store as a property value; and (2) To prevent an object from being invalidated as a side-effect of some
other operation (as explained in “Avoid Causing Deallocation of Objects You’re Using” (page 16)).
● When you no longer need it, you must relinquish ownership of an object you own
You relinquish ownership of an object by sending it a release message or an autorelease message.
In Cocoa terminology, relinquishing ownership of an object is therefore typically referred to as “releasing”
an object.
● You must not relinquish ownership of an object you do not own
This is just a corollary of the previous policy rules, stated explicitly.
A Simple Example
To illustrate the policy, consider the following code fragment:
{
Person *aPerson = [[Person alloc] init];
// ...
NSString *name = aPerson.fullName;
// ...
[aPerson release];
}
The Person object is created using the alloc method, so it is subsequently sent a release message when it
is no longer needed. The person’s name is not retrieved using any of the owning methods, so it is not sent a
release message. Notice, though, that the example uses release rather than autorelease.
Use autorelease to Send a Deferred release
You use autorelease when you need to send a deferred release message—typically when returning an
object from a method. For example, you could implement the fullName method like this:
- (NSString *)fullName {
NSString *string = [[[NSString alloc] initWithFormat:@"%@ %@",
self.firstName, self.lastName]
autorelease];
return string;
}
You own the string returned by alloc. To abide by the memory management rules, you must relinquish
ownership of the string before you lose the reference to it. If you use release, however, the string will be
deallocated before it is returned (and the method would return an invalid object). Using autorelease, you
signify that you want to relinquish ownership, but you allow the caller of the method to use the returned string
before it is deallocated.
You could also implement the fullName method like this:
- (NSString *)fullName {
NSString *string = [NSString stringWithFormat:@"%@ %@",
self.firstName, self.lastName];
return string;
}
Following the basic rules, you don’t own the string returned by stringWithFormat:, so you can safely return
the string from the method.
By way of contrast, the following implementation is wrong:
- (NSString *)fullName {
NSString *string = [[NSString alloc] initWithFormat:@"%@ %@",
self.firstName, self.lastName];
return string;
}
According to the naming convention, there is nothing to denote that the caller of the fullName method owns
the returned string. The caller therefore has no reason to release the returned string, and it will thus be leaked.
You Don’t Own Objects Returned by Reference
Some methods in Cocoa specify that an object is returned by reference (that is, they take an argument of type
ClassName ** or id *). A common pattern is to use an NSError object that contains information about an
error if one occurs, as illustrated by initWithContentsOfURL:options:error: (NSData) and
initWithContentsOfFile:encoding:error: (NSString).
In these cases, the same rules apply as have already been described. When you invoke any of these methods,
you do not create the NSError object, so you do not own it. There is therefore no need to release it, as
illustrated in this example:
NSString *fileName = <#Get a file name#>;
NSError *error;
NSString *string = [[NSString alloc] initWithContentsOfFile:fileName
encoding:NSUTF8StringEncoding error:&error];
if (string == nil) {
// Deal with error...
}
// ...
[string release];
Implement dealloc to Relinquish Ownership of Objects
The NSObject class defines a method, dealloc, that is invoked automatically when an object has no owners
and its memory is reclaimed—in Cocoa terminology it is “freed” or “deallocated.” The role of the dealloc
method is to free the object's own memory, and to dispose of any resources it holds, including ownership of
any object instance variables.
The following example illustrates how you might implement a dealloc method for a Person class:
@interface Person : NSObject
@property (retain) NSString *firstName;
@property (retain) NSString *lastName;
@property (assign, readonly) NSString *fullName;
@end
@implementation Person
// ...
- (void)dealloc {
[_firstName release];
[_lastName release];
[super dealloc];
}
@end
Important: Never invoke another object’s dealloc method directly.
You must invoke the superclass’s implementation at the end of your implementation.
You should not tie management of system resources to object lifetimes; see “Don’t Use dealloc to Manage
Scarce Resources” (page 17).
When an application terminates, objects may not be sent a dealloc message. Because the process’s
memory is automatically cleared on exit, it is more efficient simply to allow the operating system to clean
up resources than to invoke all the memory management methods.
Core Foundation Uses Similar but Different Rules
There are similar memory management rules for Core Foundation objects (see Memory Management
Programming Guide for Core Foundation). The naming conventions for Cocoa and Core Foundation, however,
are different. In particular, Core Foundation’s Create Rule (see “The Create Rule” in Memory Management
Programming Guide for Core Foundation) does not apply to methods that return Objective-C objects. For
example, in the following code fragment, you are not responsible for relinquishing ownership of myInstance:
MyClass *myInstance = [MyClass createInstance];
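By way of contrast, the Create Rule does apply to Core Foundation functions whose names contain “Create” or “Copy”. A minimal sketch using the CFString API:

```objc
CFStringRef greeting = CFStringCreateWithCString(kCFAllocatorDefault,
                                                 "hello", kCFStringEncodingUTF8);
// The Create Rule applies: you own the object returned by a Create function...
// ... use greeting ...
CFRelease(greeting); // ...so you must balance it with CFRelease.
```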
Practical Memory Management

Although the fundamental concepts described in “Memory Management Policy” (page 7) are straightforward,
there are some practical steps you can take to make managing memory easier, and to help to ensure your
program remains reliable and robust while at the same time minimizing its resource requirements.
Use Accessor Methods to Make Memory Management Easier
If your class has a property that is an object, you must make sure that any object that is set as the value is not
deallocated while you’re using it. You must therefore claim ownership of the object when it is set. You must
also make sure you then relinquish ownership of any currently-held value.
Sometimes it might seem tedious or pedantic, but if you use accessor methods consistently, the chances of
having problems with memory management decrease considerably. If you are using retain and release
on instance variables throughout your code, you are almost certainly doing the wrong thing.
Consider a Counter object whose count you want to set.
@interface Counter : NSObject
@property (nonatomic, retain) NSNumber *count;
@end
The property declares two accessor methods. Typically, you should ask the compiler to synthesize the methods;
however, it’s instructive to see how they might be implemented.
In the “get” accessor, you just return the synthesized instance variable, so there is no need for retain or
release:
- (NSNumber *)count {
return _count;
}
In the “set” method, if everyone else is playing by the same rules you have to assume the new count may be
disposed of at any time so you have to take ownership of the object—by sending it a retain message—to
ensure it won’t be. You must also relinquish ownership of the old count object here by sending it a release
message. (Sending a message to nil is allowed in Objective-C, so the implementation will still work if _count
hasn’t yet been set.) You must send this after [newCount retain] in case the two are the same object—you
don’t want to inadvertently cause it to be deallocated.
- (void)setCount:(NSNumber *)newCount {
[newCount retain];
[_count release];
// Make the new assignment.
_count = newCount;
}
Use Accessor Methods to Set Property Values
Suppose you want to implement a method to reset the counter. You have a couple of choices. The first
implementation creates the NSNumber instance with alloc, so you balance that with a release.
- (void)reset {
NSNumber *zero = [[NSNumber alloc] initWithInteger:0];
[self setCount:zero];
[zero release];
}
The second uses a convenience constructor to create a new NSNumber object. There is therefore no need for
retain or release messages:
- (void)reset {
NSNumber *zero = [NSNumber numberWithInteger:0];
[self setCount:zero];
}
Note that both use the set accessor method.
The following will almost certainly work correctly for simple cases, but as tempting as it may be to eschew
accessor methods, doing so will almost certainly lead to a mistake at some stage (for example, when you forget
to retain or release, or if the memory management semantics for the instance variable change).
- (void)reset {
NSNumber *zero = [[NSNumber alloc] initWithInteger:0];
[_count release];
_count = zero;
}
Note also that if you are using key-value observing, then changing the variable in this way is not KVO compliant.
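If you nevertheless need to set the instance variable directly and remain KVO compliant, you would have to bracket the change with manual change notifications, as sketched below. (Using the accessor, as in the earlier implementations, gives you this for free.)

```objc
- (void)reset {
    NSNumber *zero = [[NSNumber alloc] initWithInteger:0];
    [self willChangeValueForKey:@"count"]; // manual KVO change notification
    [_count release];
    _count = zero;
    [self didChangeValueForKey:@"count"];
}
```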
Don’t Use Accessor Methods in Initializer Methods and dealloc
The only places you shouldn’t use accessor methods to set an instance variable are in initializer methods and
dealloc. To initialize a counter object with a number object representing zero, you might implement an init
method as follows:
- init {
self = [super init];
if (self) {
_count = [[NSNumber alloc] initWithInteger:0];
}
return self;
}
To allow a counter to be initialized with a count other than zero, you might implement an initWithCount:
method as follows:
- initWithCount:(NSNumber *)startingCount {
self = [super init];
if (self) {
_count = [startingCount copy];
}
return self;
}
Since the Counter class has an object instance variable, you must also implement a dealloc method. It should
relinquish ownership of any instance variables by sending them a release message, and ultimately it should
invoke super’s implementation:
- (void)dealloc {
[_count release];
[super dealloc];
}
Use Weak References to Avoid Retain Cycles
Retaining an object creates a strong reference to that object. An object cannot be deallocated until all of its
strong references are released. A problem, known as a retain cycle, can therefore arise if two objects have
cyclical references—that is, they have a strong reference to each other (either directly, or through a chain of
other objects each with a strong reference to the next leading back to the first).
The object relationships shown in Figure 1 (page 15) illustrate a potential retain cycle. The Document object
has a Page object for each page in the document. Each Page object has a property that keeps track of which
document it is in. If the Document object has a strong reference to the Page object and the Page object has
a strong reference to the Document object, neither object can ever be deallocated. The Document’s reference
count cannot become zero until the Page object is released, and the Page object won’t be released until the
Document object is deallocated.
Figure 1 An illustration of cyclical references
(Figure: a Document object retains (“retain”) its Page objects, and each Page retains its Paragraph objects; each Page and Paragraph holds a non-retained (“don’t retain”) parent reference back up the chain.)
The solution to the problem of retain cycles is to use weak references. A weak reference is a non-owning
relationship where the source object does not retain the object to which it has a reference.
To keep the object graph intact, however, there must be strong references somewhere (if there were only
weak references, then the pages and paragraphs might not have any owners and so would be deallocated).
Cocoa establishes a convention, therefore, that a “parent” object should maintain strong references to its
“children,” and that the children should have weak references to their parents.
So, in Figure 1 (page 15) the document object has a strong reference to (retains) its page objects, but the page
object has a weak reference to (does not retain) the document object.
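Under manual retain-release, a weak reference is conventionally declared with the assign attribute, so that setting it does not retain the referenced object. A minimal sketch of the relationship in Figure 1 (the class and property names are illustrative, not a real API):

```objc
@class Document;

@interface Page : NSObject
// Weak back-reference: assign does not retain, so no cycle is created.
@property (nonatomic, assign) Document *document;
@end

@interface Document : NSObject
// Strong downward reference: the parent owns its children.
@property (nonatomic, retain) NSMutableArray *pages;
@end
```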
Examples of weak references in Cocoa include, but are not restricted to, table data sources, outline view items,
notification observers, and miscellaneous targets and delegates.
You need to be careful about sending messages to objects for which you hold only a weak reference. If you
send a message to an object after it has been deallocated, your application will crash. You must have well-defined
conditions for when the object is valid. In most cases, the weak-referenced object is aware of the other object’s
weak reference to it, as is the case for circular references, and is responsible for notifying the other object when
it deallocates. For example, when you register an object with a notification center, the notification center stores
a weak reference to the object and sends messages to it when the appropriate notifications are posted. When
the object is deallocated, you need to unregister it with the notification center to prevent the notification
center from sending any further messages to the object, which no longer exists. Likewise, when a delegate
object is deallocated, you need to remove the delegate link by sending a setDelegate: message with a nil
argument to the other object. These messages are normally sent from the object’s dealloc method.
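For example, an object that observes notifications and also acts as a delegate might clean up both links in its dealloc method. This is a sketch; the _tableView instance variable is hypothetical:

```objc
- (void)dealloc {
    // Prevent the notification center from messaging a deallocated object.
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    // Clear the weak delegate reference that _tableView holds back to self.
    [_tableView setDelegate:nil];
    [_tableView release];
    [super dealloc];
}
```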
Avoid Causing Deallocation of Objects You’re Using
Cocoa’s ownership policy specifies that received objects should typically remain valid throughout the scope
of the calling method. It should also be possible to return a received object from the current scope without
fear of it being released. It should not matter to your application that the getter method of an object returns
a cached instance variable or a computed value. What matters is that the object remains valid for the time you
need it.
There are occasional exceptions to this rule, primarily falling into one of two categories.
1. When an object is removed from one of the fundamental collection classes.
heisenObject = [array objectAtIndex:n];
[array removeObjectAtIndex:n];
// heisenObject could now be invalid.
When an object is removed from one of the fundamental collection classes, it is sent a release (rather
than autorelease) message. If the collection was the only owner of the removed object, the removed
object (heisenObject in the example) is then immediately deallocated.
2. When a “parent object” is deallocated.
id parent = <#create a parent object#>;
// ...
heisenObject = [parent child];
[parent release]; // Or, for example: self.parent = nil;
// heisenObject could now be invalid.
In some situations you retrieve an object from another object, and then directly or indirectly release the
parent object. If releasing the parent causes it to be deallocated, and the parent was the only owner of
the child, then the child (heisenObject in the example) will be deallocated at the same time (assuming
that it is sent a release rather than an autorelease message in the parent’s dealloc method).
To protect against these situations, you retain heisenObject upon receiving it and you release it when you
have finished with it. For example:
heisenObject = [[array objectAtIndex:n] retain];
[array removeObjectAtIndex:n];
// Use heisenObject...
[heisenObject release];
Don’t Use dealloc to Manage Scarce Resources
You should typically not manage scarce resources such as file descriptors, network connections, and buffers
or caches in a dealloc method. In particular, you should not design classes so that dealloc will be invoked
when you think it will be invoked. Invocation of dealloc might be delayed or sidestepped, either because of
a bug or because of application tear-down.
Instead, if you have a class whose instances manage scarce resources, you should design your application such
that you know when you no longer need the resources and can then tell the instance to “clean up” at that
point. You would typically then release the instance, and dealloc would follow, but you will not suffer
additional problems if it does not.
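As a sketch of this approach (the class and method names here are hypothetical), an object wrapping a file descriptor might expose an explicit cleanup method rather than relying on dealloc:

```objc
@implementation FileLogger
- (void)closeLog {
    if (_fileDescriptor >= 0) {
        close(_fileDescriptor); // reclaim the scarce resource at a known point
        _fileDescriptor = -1;
    }
}
- (void)dealloc {
    [self closeLog]; // safety net only; clients should call closeLog themselves
    [super dealloc];
}
@end
```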
Problems may arise if you try to piggy-back resource management on top of dealloc. For example:
1. Order dependencies on object graph tear-down.
The object graph tear-down mechanism is inherently non-ordered. Although you might typically
expect—and get—a particular order, you are introducing fragility. If an object is unexpectedly autoreleased
rather than released, for example, the tear-down order may change, which may lead to unexpected results.
2. Non-reclamation of scarce resources.
Memory leaks are bugs that should be fixed, but they are generally not immediately fatal. If scarce resources
are not released when you expect them to be released, however, you may run into more serious problems.
If your application runs out of file descriptors, for example, the user may not be able to save data.
3. Cleanup logic being executed on the wrong thread.
If an object is autoreleased at an unexpected time, it will be deallocated on whatever thread’s autorelease
pool block it happens to be in. This can easily be fatal for resources that should only be touched from one
thread.
Collections Own the Objects They Contain
When you add an object to a collection (such as an array, dictionary, or set), the collection takes ownership of
it. The collection will relinquish ownership when the object is removed from the collection or when the collection
is itself released. Thus, for example, if you want to create an array of numbers you might do either of the
following:
NSMutableArray *array = <#Get a mutable array#>;
NSUInteger i;
// ...
for (i = 0; i < 10; i++) {
NSNumber *convenienceNumber = [NSNumber numberWithInteger:i];
[array addObject:convenienceNumber];
}
In this case, you didn’t invoke alloc, so there’s no need to call release. There is no need to retain the new
numbers (convenienceNumber), since the array will do so.
NSMutableArray *array = <#Get a mutable array#>;
NSUInteger i;
// ...
for (i = 0; i < 10; i++) {
NSNumber *allocedNumber = [[NSNumber alloc] initWithInteger:i];
[array addObject:allocedNumber];
[allocedNumber release];
}
In this case, you do need to send allocedNumber a release message within the scope of the for loop to
balance the alloc. Since the array retained the number when it was added by addObject:, it will not be
deallocated while it’s in the array.
To understand this, put yourself in the position of the person who implemented the collection class. You want
to make sure that no objects you’re given to look after disappear out from under you, so you send them a
retain message as they’re passed in. If they’re removed, you have to send a balancing release message,
and any remaining objects should be sent a release message during your own dealloc method.
Ownership Policy Is Implemented Using Retain Counts
The ownership policy is implemented through reference counting—typically called “retain count” after the
retain method. Each object has a retain count.
● When you create an object, it has a retain count of 1.
● When you send an object a retain message, its retain count is incremented by 1.
● When you send an object a release message, its retain count is decremented by 1.
● When you send an object an autorelease message, its retain count is decremented by 1 at the end of
the current autorelease pool block.
If an object’s retain count is reduced to zero, it is deallocated.
Important: There should be no reason to explicitly ask an object what its retain count is (see retainCount).
The result is often misleading, as you may be unaware of what framework objects have retained an object
in which you are interested. In debugging memory management issues, you should be concerned only
with ensuring that your code adheres to the ownership rules.
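The four rules above can be sketched under manual retain/release; the counts in the comments follow the policy rather than being queried with retainCount, which the note above discourages:

```objective-c
#import <Foundation/Foundation.h>

// Illustrates the ownership rules with a balanced retain/release sequence.
void OwnershipExample(void) {
    NSObject *object = [[NSObject alloc] init]; // created: retain count is 1
    [object retain];                            // retain: count becomes 2
    [object release];                           // release: count returns to 1
    [object release];                           // count reaches 0: object is deallocated
}
```

Every alloc and retain in the sequence is balanced by exactly one release, which is all the ownership rules require of your code.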
Using Autorelease Pool Blocks
Autorelease pool blocks provide a mechanism whereby you can relinquish ownership of an object, but avoid
the possibility of it being deallocated immediately (such as when you return an object from a method). Typically,
you don’t need to create your own autorelease pool blocks, but there are some situations in which either you
must or it is beneficial to do so.
About Autorelease Pool Blocks
An autorelease pool block is marked using @autoreleasepool, as illustrated in the following example:
@autoreleasepool {
// Code that creates autoreleased objects.
}
At the end of the autorelease pool block, objects that received an autorelease message within the block
are sent a release message—an object receives a release message for each time it was sent an autorelease
message within the block.
Like any other code block, autorelease pool blocks can be “nested:”
@autoreleasepool {
// . . .
@autoreleasepool {
// . . .
}
. . .
}
(You wouldn’t normally see code exactly as above; typically code within an autorelease pool block in one
source file would invoke code in another source file that is contained within another autorelease pool block.)
For a given autorelease message, the corresponding release message is sent at the end of the autorelease
pool block in which the autorelease message was sent.
Cocoa always expects code to be executed within an autorelease pool block; otherwise autoreleased objects
do not get released and your application leaks memory. (If you send an autorelease message outside of an
autorelease pool block, Cocoa logs a suitable error message.) The AppKit and UIKit frameworks process each
event-loop iteration (such as a mouse down event or a tap) within an autorelease pool block. Therefore you
typically do not have to create an autorelease pool block yourself, or even see the code that is used to create
one. There are, however, three occasions when you might use your own autorelease pool blocks:
● If you are writing a program that is not based on a UI framework, such as a command-line tool.
● If you write a loop that creates many temporary objects. You may use an autorelease pool block inside
the loop to dispose of those objects before the next iteration. Using an autorelease pool block in the loop
helps to reduce the maximum memory footprint of the application.
● If you spawn a secondary thread. You must create your own autorelease pool block as soon as the thread
begins executing; otherwise, your application will leak objects. (See “Autorelease Pool Blocks and
Threads” (page 23) for details.)
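For the first case, a minimal Foundation command-line tool wraps the work in its main function in an autorelease pool block. This is a generic sketch rather than code taken from this document:

```objective-c
#import <Foundation/Foundation.h>

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        // Autoreleased objects created here, such as this string,
        // are released when the block exits.
        NSString *greeting = [NSString stringWithFormat:@"Hello, %@", @"world"];
        NSLog(@"%@", greeting);
    }
    return 0;
}
```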
Use Local Autorelease Pool Blocks to Reduce Peak Memory Footprint
Many programs create temporary objects that are autoreleased. These objects add to the program’s memory
footprint until the end of the block. In many situations, allowing temporary objects to accumulate until the
end of the current event-loop iteration does not result in excessive overhead; in some situations, however,
you may create a large number of temporary objects that add substantially to memory footprint and that you
want to dispose of more quickly. In these latter cases, you can create your own autorelease pool block. At the
end of the block, the temporary objects are released, which typically results in their deallocation, thereby
reducing the program’s memory footprint.
The following example shows how you might use a local autorelease pool block in a for loop.
NSArray *urls = <# An array of file URLs #>;
for (NSURL *url in urls) {
    @autoreleasepool {
        NSError *error;
        NSString *fileContents = [NSString stringWithContentsOfURL:url
                                          encoding:NSUTF8StringEncoding error:&error];
        /* Process the string, creating and autoreleasing more objects. */
    }
}
The for loop processes one file at a time. Any object (such as fileContents) sent an autorelease message
inside the autorelease pool block is released at the end of the block.
After an autorelease pool block, you should regard any object that was autoreleased within the block as
“disposed of.” Do not send a message to that object or return it to the invoker of your method. If you must
use a temporary object beyond an autorelease pool block, you can do so by sending a retain message to
the object within the block and then sending it autorelease after the block, as illustrated in this example:
- (id)findMatchingObject:(id)anObject {
    id match = nil;
    while (match == nil) {
        @autoreleasepool {
            /* Do a search that creates a lot of temporary objects. */
            match = [self expensiveSearchForObject:anObject];
            if (match != nil) {
                [match retain]; /* Keep match around. */
            }
        }
    }
    return [match autorelease]; /* Let match go and return it. */
}
Sending retain to match within the autorelease pool block and sending autorelease to it after the
autorelease pool block extends the lifetime of match and allows it to receive messages outside the loop and
be returned to the invoker of findMatchingObject:.
Autorelease Pool Blocks and Threads
Each thread in a Cocoa application maintains its own stack of autorelease pool blocks. If you are writing a
Foundation-only program or if you detach a thread, you need to create your own autorelease pool block.
If your application or thread is long-lived and potentially generates a lot of autoreleased objects, you should
use autorelease pool blocks (like AppKit and UIKit do on the main thread); otherwise, autoreleased objects
accumulate and your memory footprint grows. If your detached thread does not make Cocoa calls, you do not
need to use an autorelease pool block.
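A thread entry point that follows this rule can be sketched as follows; Worker and its method names are hypothetical, invented for this example:

```objective-c
#import <Foundation/Foundation.h>

// Hypothetical worker class whose method serves as a thread entry point.
@interface Worker : NSObject
- (void)threadMain:(id)argument;
@end

@implementation Worker
- (void)threadMain:(id)argument {
    // The entry point of a detached thread creates its own autorelease
    // pool block before doing any other work.
    @autoreleasepool {
        // Autoreleased objects created during the thread's work are
        // released on this thread when the block exits.
        NSString *status = [NSString stringWithFormat:@"processing %@", argument];
        NSLog(@"%@", status);
    }
}
@end

// Detaching the thread, for example from the main thread:
// [NSThread detachNewThreadSelector:@selector(threadMain:)
//                          toTarget:worker
//                        withObject:@"job 1"];
```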
Note: If you create secondary threads using the POSIX thread APIs instead of NSThread, you cannot
use Cocoa unless Cocoa is in multithreading mode. Cocoa enters multithreading mode only after
detaching its first NSThread object. To use Cocoa on secondary POSIX threads, your application
must first detach at least one NSThread object, which can immediately exit. You can test whether
Cocoa is in multithreading mode with the NSThread class method isMultiThreaded.
Document Revision History
This table describes the changes to Advanced Memory Management Programming Guide.
Date        Notes
2012-07-17  Updated to describe autorelease in terms of @autoreleasepool blocks.
2011-09-28  Updated to reflect new status as a consequence of the introduction of ARC.
2011-03-24  Major revision for clarity and conciseness.
2010-12-21  Clarified the naming rule for mutable copy.
2010-06-24  Minor rewording to memory management fundamental rule, to emphasize simplicity. Minor additions to Practical Memory Management.
2010-02-24  Updated the description of handling memory warnings for iOS 3.0; partially rewrote "Object Ownership and Disposal."
2009-10-21  Augmented section on accessor methods in Practical Memory Management.
2009-08-18  Added links to related concepts.
2009-07-23  Updated guidance for declaring outlets on OS X.
2009-05-06  Corrected typographical errors.
2009-03-04  Corrected typographical errors.
2009-02-04  Updated "Nib Objects" article.
2008-11-19  Added section on use of autorelease pools in a garbage collected environment.
2008-10-15  Corrected missing image.
2008-02-08  Corrected a broken link to the "Carbon-Cocoa Integration Guide."
2007-12-11  Corrected typographical errors.
2007-10-31  Updated for OS X v10.5. Corrected minor typographical errors.
2007-06-06  Corrected minor typographical errors.
2007-05-03  Corrected typographical errors.
2007-01-08  Added article on memory management of nib files.
2006-06-28  Added a note about dealloc and application termination.
2006-05-23  Reorganized articles in this document to improve flow; updated "Object Ownership and Disposal."
2006-03-08  Clarified discussion of object ownership and dealloc. Moved discussion of accessor methods to a separate article.
2006-01-10  Corrected typographical errors. Updated title from "Memory Management."
2004-08-31  Changed Related Topics links and updated topic introduction.
2003-06-06  Expanded description of what gets released when an autorelease pool is released to include both explicitly and implicitly autoreleased objects in “Using Autorelease Pool Blocks” (page 20).
2002-11-12  Revision history was added to existing topic. It will be used to record changes to the content of the topic.
Apple Inc.
© 2012 Apple Inc.
All rights reserved.
No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any
form or by any means, mechanical, electronic,
photocopying, recording, or otherwise, without
prior written permission of Apple Inc., with the
following exceptions: Any person is hereby
authorized to store documentation on a single
computer for personal use only and to print
copies of documentation for personal use
provided that the documentation contains
Apple’s copyright notice.
No licenses, express or implied, are granted with
respect to any of the technology described in this
document. Apple retains all intellectual property
rights associated with the technology described
in this document. This document is intended to
assist application developers to develop
applications only for Apple-labeled computers.
Apple Inc.
1 Infinite Loop
Cupertino, CA 95014
408-996-1010
Apple, the Apple logo, Carbon, Cocoa,
Instruments, Mac, Objective-C, OS X, and Xcode
are trademarks of Apple Inc., registered in the
U.S. and other countries.
iOS is a trademark or registered trademark of
Cisco in the U.S. and other countries and is used
under license.
Even though Apple has reviewed this document,
APPLE MAKES NO WARRANTY OR REPRESENTATION,
EITHER EXPRESS OR IMPLIED, WITH RESPECT TO THIS
DOCUMENT, ITS QUALITY, ACCURACY,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR
PURPOSE. AS A RESULT, THIS DOCUMENT IS PROVIDED
“AS IS,” AND YOU, THE READER, ARE ASSUMING THE
ENTIRE RISK AS TO ITS QUALITY AND ACCURACY.
IN NO EVENT WILL APPLE BE LIABLE FOR DIRECT,
INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES RESULTING FROM ANY DEFECT OR
INACCURACY IN THIS DOCUMENT, even if advised of
the possibility of such damages.
THE WARRANTY AND REMEDIES SET FORTH ABOVE
ARE EXCLUSIVE AND IN LIEU OF ALL OTHERS, ORAL
OR WRITTEN, EXPRESS OR IMPLIED. No Apple dealer,
agent, or employee is authorized to make any
modification, extension, or addition to this warranty.
Some states do not allow the exclusion or limitation
of implied warranties or liability for incidental or
consequential damages, so the above limitation or
exclusion may not apply to you. This warranty gives
you specific legal rights, and you may also have other
rights which vary from state to state.
iOS App Programming Guide
Contents
About iOS App Programming 8
At a Glance 8
Translate Your Initial Idea into an Implementation Plan 8
UIKit Provides the Core of Your App 8
Apps Must Behave Differently in the Foreground and Background 9
iCloud Affects the Design of Your Data Model and UI Layers 9
Apps Require Some Specific Resources 9
Apps Should Restore Their Previous UI State at Launch Time 9
Many App Behaviors Can Be Customized 10
Apps Must Be Tuned for Performance 10
The iOS Environment Affects Many App Behaviors 10
How to Use This Document 10
Prerequisites 11
See Also 11
App Design Basics 12
Doing Your Initial Design 12
Learning the Fundamental iOS Design Patterns and Techniques 13
Translating Your Initial Design into an Action Plan 13
Starting the App Creation Process 14
Core App Objects 17
The Core Objects of Your App 17
The Data Model 20
Defining a Custom Data Model 21
Defining a Structured Data Model Using Core Data 24
Defining a Document-Based Data Model 24
Integrating iCloud Support Into Your App 26
The User Interface 26
Building an Interface Using UIKit Views 27
Building an Interface Using Views and OpenGL ES 29
The App Bundle 30
App States and Multitasking 33
2012-09-19 | © 2012 Apple Inc. All Rights Reserved.
Managing App State Changes 34
The App Launch Cycle 36
Responding to Interruptions 42
Moving to the Background 44
Returning to the Foreground 48
App Termination 51
The Main Run Loop 52
Background Execution and Multitasking 54
Determining Whether Multitasking Is Available 54
Executing a Finite-Length Task in the Background 55
Scheduling the Delivery of Local Notifications 56
Implementing Long-Running Background Tasks 58
Being a Responsible Background App 63
Opting out of Background Execution 65
Concurrency and Secondary Threads 66
State Preservation and Restoration 67
The Preservation and Restoration Process 67
Flow of the Preservation Process 74
Flow of the Restoration Process 75
What Happens When You Exclude Groups of View Controllers? 78
Checklist for Implementing State Preservation and Restoration 81
Enabling State Preservation and Restoration in Your App 82
Preserving the State of Your View Controllers 82
Marking Your View Controllers for Preservation 83
Restoring Your View Controllers at Launch Time 83
Encoding and Decoding Your View Controller’s State 85
Preserving the State of Your Views 86
UIKit Views with Preservable State 87
Preserving the State of a Custom View 88
Implementing Preservation-Friendly Data Sources 89
Preserving Your App’s High-Level State 89
Mixing UIKit’s State Preservation with Your Own Custom Mechanisms 90
Tips for Saving and Restoring State Information 91
App-Related Resources 93
App Store Required Resources 93
The Information Property List File 93
Declaring the Required Device Capabilities 94
Declaring Your App’s Supported Document Types 97
App Icons 98
App Launch (Default) Images 100
Providing Launch Images for Different Orientations 101
Providing Device-Specific Launch Images 103
Providing Launch Images for Custom URL Schemes 103
The Settings Bundle 104
Localized Resource Files 105
Loading Resources Into Your App 106
Advanced App Tricks 108
Creating a Universal App 108
Updating Your Info.plist Settings 108
Implementing Your View Controllers and Views 109
Updating Your Resource Files 110
Using Runtime Checks to Create Conditional Code Paths 110
Supporting Multiple Versions of iOS 111
Launching in Landscape Mode 112
Installing App-Specific Data Files at First Launch 113
Protecting Data Using On-Disk Encryption 113
Tips for Developing a VoIP App 115
Configuring Sockets for VoIP Usage 116
Installing a Keep-Alive Handler 117
Configuring Your App’s Audio Session 117
Using the Reachability Interfaces to Improve the User Experience 118
Communicating with Other Apps 118
Implementing Custom URL Schemes 119
Registering Custom URL Schemes 119
Handling URL Requests 120
Showing and Hiding the Keyboard 125
Turning Off Screen Locking 126
Performance Tuning 127
Make App Backups More Efficient 127
App Backup Best Practices 127
Files Saved During App Updates 128
Use Memory Efficiently 129
Observe Low-Memory Warnings 129
Reduce Your App’s Memory Footprint 130
Allocate Memory Wisely 131
Move Work off the Main Thread 131
Floating-Point Math Considerations 132
Reduce Power Consumption 132
Tune Your Code 134
Improve File Access Times 134
Tune Your Networking Code 135
Tips for Efficient Networking 135
Using Wi-Fi 136
The Airplane Mode Alert 136
The iOS Environment 137
Specialized System Behaviors 137
The Virtual Memory System 137
The Automatic Sleep Timer 137
Multitasking Support 138
Security 138
The App Sandbox 138
Keychain Data 140
Document Revision History 141
Figures, Tables, and Listings
Core App Objects 17
Figure 2-1 Key objects in an iOS app 18
Figure 2-2 Using documents to manage the content of files 25
Figure 2-3 Building your interface using view objects 28
Figure 2-4 Building your interface using OpenGL ES 29
Table 2-1 The role of objects in an iOS app 18
Table 2-2 Data classes in the Foundation framework 21
Table 2-3 A typical app bundle 30
Listing 2-1 Definition of a custom data object 23
App States and Multitasking 33
Figure 3-1 State changes in an iOS app 35
Figure 3-2 Launching an app into the foreground 37
Figure 3-3 Launching an app into the background 38
Figure 3-4 Handling alert-based interruptions 42
Figure 3-5 Moving from the foreground to the background 45
Figure 3-6 Transitioning from the background to the foreground 48
Figure 3-7 Processing events in the main run loop 52
Table 3-1 App states 34
Table 3-2 Notifications delivered to waking apps 49
Table 3-3 Common types of events for iOS apps 53
Listing 3-1 The main function of an iOS app 39
Listing 3-2 Checking for background support in earlier versions of iOS 54
Listing 3-3 Starting a background task at quit time 55
Listing 3-4 Scheduling an alarm notification 57
State Preservation and Restoration 67
Figure 4-1 A sample view controller hierarchy 69
Figure 4-2 Adding restoration identifiers to view controllers 72
Figure 4-3 High-level flow interface preservation 74
Figure 4-4 High-level flow for restoring your user interface 76
Figure 4-5 Excluding view controllers from the automatic preservation process 79
Figure 4-6 Loading the default set of view controllers 80
Figure 4-7 UIKit handles the root view controller 90
Listing 4-1 Creating a new view controller during restoration 84
Listing 4-2 Encoding and decoding a view controller’s state. 86
Listing 4-3 Preserving the selection of a custom text view 88
App-Related Resources 93
Figure 5-1 Custom preferences displayed by the Settings app 104
Table 5-1 Dictionary keys for the UIRequiredDeviceCapabilities key 95
Table 5-2 Sizes for images in the CFBundleIcons key 98
Table 5-3 Typical launch image dimensions 100
Table 5-4 Launch image orientation modifiers 101
Advanced App Tricks 108
Figure 6-1 Defining a custom URL scheme in the Info.plist file 120
Figure 6-2 Launching an app to open a URL 122
Figure 6-3 Waking a background app to open a URL 123
Table 6-1 Configuring stream interfaces for VoIP usage 116
Table 6-2 Keys and values of the CFBundleURLTypes property 120
Listing 6-1 Handling a URL request based on a custom scheme 124
Performance Tuning 127
Table 7-1 Tips for reducing your app’s memory footprint 130
Table 7-2 Tips for allocating memory 131
The iOS Environment 137
Figure A-1 Sandbox directories in iOS 139
About iOS App Programming
This document is the starting point for creating iOS apps. It describes the fundamental architecture of iOS
apps, including how the code you write fits together with the code provided by iOS. This document also offers
practical guidance to help you make better choices during your design and planning phase and guides you
to the other documents in the iOS developer library that contain more detailed information about how to
address a specific task.
The contents of this document apply to all iOS apps running on all types of iOS devices, including iPad, iPhone,
and iPod touch.
Note: Development of iOS apps requires an Intel-based Macintosh computer with the iOS SDK
installed. For information about how to get the iOS SDK, go to the iOS Dev Center.
At a Glance
The starting point for any new app is identifying the design choices you need to make and understanding how
those choices map to an appropriate implementation.
Translate Your Initial Idea into an Implementation Plan
Every great iOS app starts with a great idea, but translating that idea into actions requires some planning.
Every iOS app relies heavily on design patterns, and those design patterns influence much of the code you
need to write. So before you write any code, take the time to explore the possible techniques and technologies
available for writing that code. Doing so can save you a lot of time and frustration.
Relevant Chapter: “App Design Basics” (page 12)
UIKit Provides the Core of Your App
The core infrastructure of an iOS app is built from objects in the UIKit framework. The objects in this framework
provide all of the support for handling events, displaying content on the screen, and interacting with the rest
of the system. Understanding the role these objects play, and how you modify them to customize the default
app behavior, is therefore very important for writing apps quickly and correctly.
Relevant Chapter: “Core App Objects” (page 17)
Apps Must Behave Differently in the Foreground and Background
An iOS device runs multiple apps simultaneously but only one app—the foreground app—has the user’s
attention at any given time. The current foreground app is the only app allowed to present a user interface
and respond to touch events. Other apps remain in the background, usually asleep but sometimes running
additional code. Transitioning between the foreground and background states involves changing several
aspects of your app’s behavior.
Relevant Chapter: “App States and Multitasking” (page 33)
iCloud Affects the Design of Your Data Model and UI Layers
iCloud allows you to share the user’s data among multiple instances of your app running on different iOS and
Mac OS X devices. Incorporating support for iCloud into your app involves changing many aspects of how you
manage your files. Because files in iCloud are accessible by more than just your app, all file operations must
be synchronized to prevent data corruption. And depending on your app and how it presents its data, iCloud
can also require changes to portions of your user interface.
Relevant Chapter: “Integrating iCloud Support Into Your App” (page 26)
Apps Require Some Specific Resources
There are some resources that must be present in all iOS apps. Most apps include images, sounds, and other
types of resources for presenting the app’s content but the App Store also requires some specific resources
be present. The reason is that iOS uses several specific resources when presenting your app to the user and
when coordinating interactions with other parts of the system. So these resources are there to improve the
overall user experience.
Relevant Chapter: “App-Related Resources” (page 93)
Apps Should Restore Their Previous UI State at Launch Time
At launch time, your app should restore its user interface to the state it was in when it was last used. During
normal use, the system controls when apps are terminated. Normally when this happens, the app displays its
default user interface when it is relaunched. With state restoration, UIKit helps restore your app’s interface
to its previous state, which promotes a consistent user experience.
Relevant Chapter: “State Preservation and Restoration” (page 67)
Many App Behaviors Can Be Customized
The core architecture of all apps may be the same, but there are still ways for you to tweak the high-level
design of your app. Some of these tweaks are how you add specific high-level features, such as data protection
and URL handling. Others affect the design of specific types of apps, such as VoIP apps.
Relevant Chapter: “Advanced App Tricks” (page 108)
Apps Must Be Tuned for Performance
Great apps are always tuned for the best possible performance. For iOS apps, performance means more than
just writing fast code. It often means writing better code so that your user interface remains responsive to user
input, your app does not degrade battery life significantly, and your app does not impact other system resources.
Before you can tune your code, though, learn about the types of changes that are likely to provide the most
benefit.
Relevant Chapter: “Performance Tuning” (page 127)
The iOS Environment Affects Many App Behaviors
There are aspects of iOS itself that impact how you design and write applications. Because iOS is built for
mobile devices, it takes a more active role in providing security for apps. Other system behaviors also affect
everything from how memory is managed to how the system responds to hardware input. All of these system
behaviors affect the way you design your apps.
Relevant Appendix: “The iOS Environment” (page 137)
How to Use This Document
This document provides important information about the core objects of your app and how they work together.
This document does not address the creation of any specific type of iOS app. Instead, it provides a tour of the
architecture that is common to all iOS apps and highlights key places where you can modify that architecture
to meet your needs. Whenever possible, the document also offers tips and guidance about ways to implement
features related to the core app architecture.
Prerequisites
This document is the entry-point guide for designing an iOS app. This guide also covers many of the practical
aspects involved with implementing your app. However, this book assumes that you have already installed
the iOS SDK and configured your development environment. You must perform those steps before you can
start writing and building iOS apps.
If you are new to iOS app development, read Start Developing iOS Apps Today. This document offers a
step-by-step introduction to the development process to help you get up to speed quickly. It also includes a
hands-on tutorial that walks you through the app-creation process from start to finish, showing you how to
create a simple app and get it running quickly.
See Also
For additional information related to app design, see the following documents:
● For guidance about how to design an iOS app, read iOS Human Interface Guidelines. This book provides
you with tips and guidance about how to create a great experience for users of your app. It also conveys
the basic design philosophy surrounding iOS apps.
●
If you are not sure what is possible in an iOS app, read iOS Technology Overview. This book provides a
summary of iOS technologies and the situations where you might want to use them. This book is not
required reading but is a good reference during the brainstorming phase of your project.
App Design Basics
If you are new to developing iOS apps, you might be wondering where the app development process starts.
After devising your initial idea for an app, you need to turn that idea into an action plan for implementing your
app. From a design perspective, you need to make some high-level decisions about the best course of action
for implementing your ideas. You also need to set up your initial Xcode project in a way that makes it easy to
proceed with development.
If you are new to developing iOS apps altogether, spend some time familiarizing yourself with the basic
concepts. There are tutorials to help you jump right in if you want to start writing code, but iOS is a system
built from basic design patterns. Taking a little bit of time to learn those patterns will help you tremendously
later.
Doing Your Initial Design
There are many ways to design an app, and many of the best approaches do not involve writing any code. A
great app starts with a great idea that you then expand into a more full-featured product description. Early in
the design phase, it helps to understand just what you want your app to do. Write down the set of high-level
features that would be required to implement your idea. Prioritize those features based on what you think
your users will need. Do a little research into iOS itself so that you understand its capabilities and how you
might be able to use them to achieve your goals. And sketch out some rough interface designs on paper to
visualize how your app might look.
The goal of your initial design is to answer some very important questions about your app. The set of features
and the rough design of your interface help you think about what will be required later when you start writing
code. At some point, you need to translate the information displayed by your app into a set of data objects.
Similarly, the look of your app has an overwhelming influence on the choices you must make when implementing
your user interface code. Doing your initial design on paper (as opposed to on the computer) gives you the
freedom to come up with answers that are not limited by what is easy to do.
Of course, the most important thing you can do before starting your initial design is read iOS Human Interface
Guidelines. That book describes several strategies for doing your initial design. It also offers tips and guidance
about how to create apps that work well in iOS. You might also read iOS Technology Overview to understand
the capabilities of iOS and how you might use those capabilities to achieve your design goals.
Learning the Fundamental iOS Design Patterns and Techniques
No matter what type of app you are creating, there are a few fundamental design patterns and techniques
that you must know before you start writing code. In iOS, the system frameworks provide critical infrastructure
for your app and in most cases are the only way to access the underlying hardware. In turn, the frameworks
use many specific design patterns and assume that you are familiar with them. Understanding these design
patterns is therefore an important first step to understanding how the system can help you develop your app.
The most important design patterns you must know are:
● Model-View-Controller—This design pattern governs the overall structure of your app.
● Delegation—This design pattern facilitates the transfer of information and data from one object to another.
● Target-action—This design pattern translates user interactions with buttons and controls into code that
your app can execute.
● Block objects—You use blocks to implement callbacks and asynchronous code.
● Sandboxing—All iOS apps are placed in sandboxes to protect the system and other apps. The structure of
the sandbox affects the placement of your app’s files and has implications for data backups and some
app-related features.
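As a brief illustration, the target-action and block patterns might be used together as in the following sketch. The class and method names here are hypothetical examples, not from any real app:

```objective-c
#import <UIKit/UIKit.h>

// Illustrative sketch of the target-action and block patterns.
// MyViewController and buttonTapped: are hypothetical names.
@interface MyViewController : UIViewController
@end

@implementation MyViewController

- (void)viewDidLoad {
    [super viewDidLoad];

    // Target-action: the button calls -buttonTapped: when touched.
    UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    [button addTarget:self
               action:@selector(buttonTapped:)
     forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:button];
}

- (void)buttonTapped:(id)sender {
    // Block objects: dispatch work asynchronously, then call back
    // on the main queue when the work completes.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // ... perform background work ...
        dispatch_async(dispatch_get_main_queue(), ^{
            // ... update the user interface ...
        });
    });
}

@end
```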
Accurate and efficient memory management is important for iOS apps. Because iOS apps typically have less
usable memory than a comparable desktop computer, apps need to be aggressive about deleting unneeded
objects and be lazy about creating objects in the first place. Apps use the compiler’s Automatic Reference
Counting (ARC) feature to manage memory efficiently. Although using ARC is not required, it is highly
recommended. The alternative is to manage memory yourself by explicitly retaining and releasing objects.
There are other design patterns that you might see used occasionally or use yourself in your own code. For a
complete overview of the design patterns and techniques you will use to create iOS apps, see Start Developing
iOS Apps Today .
Translating Your Initial Design into an Action Plan
iOS assumes that all apps are built using the Model-View-Controller design pattern. Therefore, the first step
in translating your design into an action plan is to choose an approach for the data and view portions of your app.
● Choose a basic approach for your data model:
● Existing data model code—If you already have data model code written in a C-based language, you
can integrate that code directly into your iOS apps. Because iOS apps are written in Objective-C, they
work just fine with code written in other C-based languages. Of course, there is also benefit to writing
an Objective-C wrapper for any non-Objective-C code.
● Custom objects data model—A custom object typically combines some simple data (strings, numbers,
dates, URLs, and so on) with the business logic needed to manage that data and ensure its consistency.
Custom objects can store a combination of scalar values and pointers to other objects. For example,
the Foundation framework defines classes for many simple data types and for storing collections of
other objects. These classes make it much easier to define your own custom objects.
● Structured data model—If your data is highly structured—that is, it lends itself to storage in a
database—use Core Data (or SQLite) to store the data. Core Data provides a simple object-oriented
model for managing your structured data. It also provides built-in support for some advanced features
like undo and iCloud. (SQLite files cannot be used in conjunction with iCloud.)
● Decide whether you need support for documents:
The job of a document is to manage your app’s in-memory data model objects and coordinate the storage
of that data in a corresponding file (or set of files) on disk. Documents normally connote files that the user
created, but apps can use documents to manage non-user-facing files too. One big advantage of using
documents is that the UIDocument class makes interacting with iCloud and the local file system much
simpler. For apps that use Core Data to store their content, the UIManagedDocument class provides similar
support.
● Choose an approach for your user interface:
● Building block approach—The easiest way to create your user interface is to assemble it using existing
view objects. Views represent visual elements such as tables, buttons, text fields, and so on. You use
many views as-is, but you can also customize the appearance and behavior of standard views as needed.
You can also implement new visual elements using custom views and mix those
views freely with the standard views in your interface. The advantages of views are that they provide
views freely with the standard views in your interface. The advantages of views are that they provide
a consistent user experience and they allow you to define complex interfaces quickly and with relatively
little code.
● OpenGL ES-based approach—If your app requires frequent screen updates or sophisticated rendering,
you probably need to draw that content directly using OpenGL ES. The main use of OpenGL ES is for
games and apps that rely heavily on sophisticated graphics, and therefore need the best performance
possible.
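For example, the "existing data model code" option above might look like the following sketch, which wraps a hypothetical C structure in a thin Objective-C class. All names here are illustrative:

```objective-c
#import <Foundation/Foundation.h>

// Hypothetical existing C data model code.
typedef struct {
    double balance;
} CAccount;

static void CAccountDeposit(CAccount *account, double amount) {
    account->balance += amount;
}

// A thin Objective-C wrapper around the C structure, so the rest
// of the app can treat the account as an ordinary object.
@interface Account : NSObject {
    CAccount _account;
}
- (void)deposit:(double)amount;
@property (nonatomic, readonly) double balance;
@end

@implementation Account

- (void)deposit:(double)amount {
    // Forward to the existing C implementation.
    CAccountDeposit(&_account, amount);
}

- (double)balance {
    return _account.balance;
}

@end
```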
Starting the App Creation Process
After you formulate your action plan, it is time to start coding. If you are new to writing iOS apps, it is good to
take some time to explore the initial Xcode templates that are provided for development. These templates
greatly simplify the work you have to do and make it possible to have an app up and running in minutes. These
templates also allow you to customize your initial project to support your specific needs more precisely. To
that end, when creating your Xcode project, you should already have answers to the following questions in
mind:
● What is the basic interface style of your app? Different types of apps require different sets of initial views
and view controllers. Knowing how you plan to organize your user interface lets you select an initial project
template that is most suited to your needs. You can always change your user interface later, but choosing
the most appropriate template first makes starting your project much easier.
● Do you want to create a Universal app or one targeted specifically for iPad or iPhone? Creating a
universal app requires specifying different sets of views and view controllers for iPad and iPhone and
dynamically selecting the appropriate set at runtime. Universal apps are preferred because they support
more iOS devices but do require you to factor your code better for each platform. For information about
how a universal app affects the code you write, see “Creating a Universal App” (page 108).
● Do you want your app to use storyboards? Storyboards simplify the design process by showing both
the views and view controllers of your user interface and the transitions between them. Storyboards are
supported in iOS 5 and later and are enabled by default for new projects. If your app must run on earlier
versions of iOS, though, you cannot use storyboards and should continue to use nib files.
● Do you want to use Core Data for your data model? Some types of apps lend themselves naturally to a
structured data model, which makes them ideal candidates for using Core Data. For more information
about Core Data and the advantages it offers, see Core Data Programming Guide .
With the answers to these questions in mind, use Xcode to create your initial project files and start coding.
1. If you have not yet installed Xcode, do so and configure your iOS development team. For detailed
information about setting up your development team and preparing your Xcode environment, see
Developing for the App Store .
2. Create your initial Xcode project.
3. Before writing any code, build and run your new Xcode project. Target your app for iOS Simulator so that
you can see it run.
Every new Xcode project starts you with a fully functional (albeit featureless) app. The app itself should
run and display the default views found in the main storyboard or nib file, which are probably not very
interesting. The reason the app runs at all, though, is the infrastructure provided to you
by UIKit. This infrastructure initializes the app, loads the initial interface file, and checks the app in with
the system so that it can start handling events. For more information about this infrastructure and the
capabilities it provides, see “The Core Objects of Your App” (page 17) and “The App Launch Cycle” (page
36).
4. Start writing your app’s primary code.
For new apps, you probably want to start creating the classes associated with your app’s data model first.
These classes usually have no dependencies on other parts of your app and should be something you can
work on initially. For information about ways to build your data model, see “The Data Model” (page 20).
App Design Basics
Starting the App Creation Process
2012-09-19 | © 2012 Apple Inc. All Rights Reserved.
15You might also want to start playing around with designs for your user interface by adding views to your
main storyboard or nib file. From these views, you can also start identifying the places in your code where
you need to respond to interface-related changes. For an overview of user interfaces and where they fit
into your app’s code, see “The User Interface” (page 26).
If your app supports iCloud, you should incorporate support for iCloud into your classes at an early stage.
For information about adding iCloud support to your app, see “Integrating iCloud Support Into Your
App” (page 26).
5. Add support for app state changes.
In iOS, the state of an app determines what it is allowed to do and when. App states are managed by
high-level objects in your app but can affect many other objects as well. Therefore, you need to consider
how the current app state affects your data model and view code and update that code appropriately.
For information about app states and how apps run in the foreground and background, see “App States
and Multitasking” (page 33).
6. Create the resources needed to support your app.
Apps submitted to the App Store are expected to have specific resources such as icons and launch images
to make the overall user experience better. Well-factored apps also make heavy use of resource files to
keep their code separate from the data that code manipulates. This factoring makes it much easier to
localize your app, tweak its appearance, and perform other tasks without rewriting any code. For information
about the types of resources found in a typical iOS app and how they are used, see “The App Bundle” (page
30) and “App-Related Resources” (page 93).
7. As needed, implement any app-specific behaviors that are relevant for your app.
There are many ways to modify the way your app launches or interacts with the system. For information
about the most common types of app customizations, see “Advanced App Tricks” (page 108).
8. Add the advanced features that make your app unique.
iOS includes many other frameworks for managing multimedia, advanced rendering, game content, maps,
contacts, location tracking, and many other advanced features. For an overview of the frameworks and
features you can incorporate into your apps, see iOS Technology Overview.
9. Do some basic performance tuning for your app.
All iOS apps should be tuned for the best possible performance. Tuned apps run faster and use system
resources, such as memory and battery life, more efficiently. For information about areas to focus on during
the tuning process, see “Performance Tuning” (page 127).
10. Iterate.
App development is an iterative process. As you add new features, you might need to revisit some or all
of the preceding steps to make adjustments to your existing code.
Core App Objects
UIKit provides the infrastructure for all apps, but it is your custom objects that define the specific behavior of
your app. Your app consists of a handful of specific UIKit objects that manage the event loop and the primary
interactions with iOS. Through a combination of subclassing, delegation, and other techniques, you modify the
default behaviors defined by UIKit to implement your app.
In addition to customizing the UIKit objects, you are also responsible for providing or defining other key sets
of objects. The largest set of objects is your app’s data objects, the definition of which is entirely your
responsibility. You must also provide a set of user interface objects, but fortunately UIKit provides numerous
classes to make defining your interface easy. In addition to code, you must also provide the resources and data
files you need to deliver a shippable app.
The Core Objects of Your App
From the time your app is launched by the user, to the time it exits, the UIKit framework manages much of
the app’s core behavior. At the heart of the app is the UIApplication object, which receives events from
the system and dispatches them to your custom code for handling. Other UIKit classes play a part in managing
your app’s behavior too, and all of these classes have similar ways of calling your custom code to handle the
details.
To understand how UIKit objects work with your custom code, it helps to understand a little about the objects
that make up an iOS app. Figure 2-1 shows the objects that are most commonly found in an iOS app, and Table
2-1 describes the roles of each object. As you can see from the diagram, iOS apps are organized around the
model-view-controller design pattern. This pattern separates the data objects in the model from the views used
to present that data. This separation promotes code reuse by making it possible to swap out your views as
needed and is especially useful when creating universal apps—that is, apps that can run on both iPad and
iPhone.
Figure 2-1 Key objects in an iOS app
Table 2-1 The role of objects in an iOS app

UIApplication object—You use the UIApplication object essentially as-is—that is, without subclassing.
This controller object manages the app event loop and coordinates other high-level app behaviors. Your own
custom app-level logic resides in your app delegate object, which works in tandem with this object.

App delegate object—The app delegate is a custom object created at app launch time, usually by the
UIApplicationMain function. The primary job of this object is to handle state transitions within the app.
For example, this object is responsible for launch-time initialization and handling transitions to and from the
background. For information about how you use the app delegate to manage state transitions, see “Managing
App State Changes” (page 34).
In iOS 5 and later, you can use the app delegate to handle other app-related events. The Xcode project
templates declare the app delegate as a subclass of UIResponder. If the UIApplication object does not
handle an event, it dispatches the event to your app delegate for processing. For more information about the
types of events you can handle, see UIResponder Class Reference.

Documents and data model objects—Data model objects store your app’s content and are specific to your
app. For example, a banking app might store a database containing financial transactions, whereas a painting
app might store an image object or even the sequence of drawing commands that led to the creation of that
image. (In the latter case, an image object is still a data object because it is just a container for the image data.)
Apps can also use document objects (custom subclasses of UIDocument) to manage some or all of their data
model objects. Document objects are not required but offer a convenient way to group data that belongs in
a single file or file package. For more information about documents, see “Defining a Document-Based Data
Model” (page 24).

View controller objects—View controller objects manage the presentation of your app’s content on screen.
A view controller manages a single view and its collection of subviews. When presented, the view controller
makes its views visible by installing them in the app’s window.
The UIViewController class is the base class for all view controller objects. It provides default functionality
for loading views, presenting them, rotating them in response to device rotations, and several other standard
system behaviors. UIKit and other frameworks define additional view controller classes to implement standard
system interfaces such as the image picker, tab bar interface, and navigation interface.
For detailed information about how to use view controllers, see View Controller Programming Guide for iOS.

UIWindow object—A UIWindow object coordinates the presentation of one or more views on a screen. Most
apps have only one window, which presents content on the main screen, but apps may have an additional
window for content displayed on an external display.
To change the content of your app, you use a view controller to change the views displayed in the
corresponding window. You never replace the window itself.
In addition to hosting views, windows work with the UIApplication object to deliver events to your views
and view controllers.

View, control, and layer objects—Views and controls provide the visual representation of your app’s content.
A view is an object that draws content in a designated rectangular area and responds to events within that
area. Controls are a specialized type of view responsible for implementing familiar interface objects such as
buttons, text fields, and toggle switches.
The UIKit framework provides standard views for presenting many different types of content. You can also
define your own custom views by subclassing UIView (or its descendants) directly.
In addition to incorporating views and controls, apps can also incorporate Core Animation layers into their
view and control hierarchies. Layer objects are actually data objects that represent visual content. Views use
layer objects intensively behind the scenes to render their content. You can also add custom layer objects to
your interface to implement complex animations and other types of sophisticated visual effects.
What distinguishes one iOS app from another is the data it manages (and the corresponding business logic)
and how it presents that data to the user. Most interactions with UIKit objects do not define your app but help
you to refine its behavior. For example, the methods of your app delegate let you know when the app is
changing states so that your custom code can respond appropriately.
For information about the specific behaviors of a given class, see the corresponding class reference. For more
information about how events flow in your app and information about your app’s responsibilities at various
points during that flow, see “App States and Multitasking” (page 33).
The Data Model
Your app’s data model comprises your data structures and the business logic needed to keep that data in a
consistent state. You never want to design your data model in total isolation from your app’s user interface;
however, the implementation of your data model objects should be separate and not rely on the presence of
specific views or view controllers. Keeping your data separate from your user interface makes it easier to
implement a universal app—one that can run on both iPad and iPhone—and also makes it easier to reuse
portions of your code later.
If you have not yet defined your data model, the iOS frameworks provide help for doing so. The following
sections highlight some of the technologies you can use when defining specific types of data models.
Defining a Custom Data Model
When defining a custom data model, create custom objects to represent any high-level constructs but take
advantage of the system-supplied objects for simpler data types. The Foundation framework provides many
objects (most of which are listed in Table 2-2) for managing strings, numbers, and other types of simple data
in an object-oriented way. Using these objects is preferable to defining new objects both because it saves time
and because many other system routines expect you to use the built-in objects anyway.
Table 2-2 Data classes in the Foundation framework
Strings and text (NSString, NSMutableString, NSAttributedString, NSMutableAttributedString)—Strings in
iOS are Unicode based. The string classes provide support for creating and manipulating strings in a variety
of ways. The attributed string classes support stylized text and are used only in conjunction with Core Text.

Numbers (NSNumber, NSDecimalNumber, NSIndexPath)—When you want to store numerical values in a
collection, use number objects. The NSNumber class can represent integer, floating-point, Boolean, and char
values. The NSIndexPath class stores a sequence of numbers and is often used to specify multilayer selections
in hierarchical lists.

Raw bytes (NSData, NSMutableData, NSValue)—For times when you need to store raw streams of bytes, use
data objects. Data objects are also commonly used to store objects in an archived form. The NSValue class
is typically extended (using categories) and used to archive common data types such as points and rectangles.

Dates and times (NSDate, NSDateComponents)—Use date objects to store timestamps, calendar dates, and
other time-related information.

URLs (NSURL)—In addition to their traditional use for referring to network resources, URLs in iOS are the
preferred way to store paths to files. The NSURL class even provides support for getting and setting file-related
attributes.

Collections (NSArray, NSMutableArray, NSDictionary, NSMutableDictionary, NSIndexSet, NSMutableIndexSet,
NSOrderedSet, NSMutableOrderedSet, NSSet, NSMutableSet)—Use collections to group related objects
together in a single place. The Foundation framework provides several different types of collection classes.
In addition to data-related objects, there are some other data types that are commonly used by the iOS
frameworks to manage familiar types of data. You are encouraged to use these data types in your own custom
objects to represent similar types of data.
● NSInteger/NSUInteger—Abstractions for scalar signed and unsigned integers that define the integer
size based on the architecture.
● NSRange—A structure used to define a contiguous portion of a series. For example, you can use ranges
to define the selected characters in a string.
● NSTimeInterval—The number of seconds (whole and partial) in a given time interval.
● CGPoint—An x and y coordinate value that defines a location.
● CGSize—Coordinate values that define a set of horizontal and vertical extents.
● CGRect—Coordinate values that define a rectangular region.
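As a brief illustration, here is how these types might appear in code. The values are arbitrary examples:

```objective-c
#import <UIKit/UIKit.h>

// Arbitrary example values for the common scalar types.
static void ScalarTypeExamples(void) {
    NSRange selectedRange = NSMakeRange(3, 5);   // characters 3 through 7
    NSTimeInterval timeout = 2.5;                // 2.5 seconds
    NSUInteger count = 42;

    CGPoint origin = CGPointMake(10.0, 20.0);
    CGSize size = CGSizeMake(100.0, 50.0);
    CGRect frame = CGRectMake(origin.x, origin.y, size.width, size.height);
}
```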
Of course, when defining custom objects, you can always incorporate scalar values directly into your class
implementations. In fact, a custom data object can include a mixture of scalar and object types for its member
variables. Listing 2-1 shows a sample class definition for a collection of pictures. The class in this instance
contains an array of images and a list of the indexes into that array representing the selected items. The class
also contains a string for the collection’s title and a scalar Boolean variable indicating whether the collection
is currently editable.
Listing 2-1 Definition of a custom data object
@interface PictureCollection : NSObject {
NSMutableOrderedSet* pictures;
NSMutableIndexSet* selection;
NSString* title;
BOOL editable;
}
@property (nonatomic, strong) NSString *title;
@property (nonatomic, readonly) NSOrderedSet* pictures;
// Method definitions...
@end
Note: When defining data objects, it is strongly recommended that you declare properties for any
member variables that you want to expose to clients of the object. Synthesizing these properties in your
implementation file automatically creates appropriate accessor methods with the attributes you
require. This ensures that object relationships are maintained appropriately and that references to
objects are removed at appropriate times.
Consider how undo operations on your custom objects might be handled. Supporting undo means being able
to reverse changes made to your objects cleanly. If your objects incorporate complex business logic, you need
to factor that logic in a way that can be undone easily. Here are some tips for implementing undo support in
your custom objects:
● Define the methods you need to make sure that changes to your object are symmetrical. For example, if
you define a method to add an item, make sure you have a method for removing an item in a similar way.
● Factor out your business logic from the code you use to change the values of member variables.
● For multistep actions, use the current NSUndoManager object to group the steps together.
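The symmetry and grouping tips above might look like the following sketch, using a collection object similar to the PictureCollection class in Listing 2-1. The pictures member variable and the method names are illustrative, and the sketch assumes the object exposes an undoManager property (as UIDocument does):

```objective-c
// Sketch of symmetric, undoable mutations on a hypothetical collection class.
- (void)addPicture:(UIImage *)picture {
    // Register the inverse operation before making the change.
    [[self.undoManager prepareWithInvocationTarget:self] removePicture:picture];
    [pictures addObject:picture];
}

- (void)removePicture:(UIImage *)picture {
    [[self.undoManager prepareWithInvocationTarget:self] addPicture:picture];
    [pictures removeObject:picture];
}

- (void)replaceAllPicturesWith:(NSArray *)newPictures {
    // Group a multistep change so a single Undo reverses all of it.
    [self.undoManager beginUndoGrouping];
    for (UIImage *old in [pictures array]) {
        [self removePicture:old];
    }
    for (UIImage *picture in newPictures) {
        [self addPicture:picture];
    }
    [self.undoManager endUndoGrouping];
}
```

Because each mutation registers its inverse with the undo manager, undo and redo fall out of the same two methods.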
For more information about how to implement undo support in your app, see Undo Architecture . For more
information about the classes of the Foundation framework, see Foundation Framework Reference .
Defining a Structured Data Model Using Core Data
Core Data is a schema-driven object graph management and persistence framework. Fundamentally, Core
Data helps you to save model objects (in the sense of the model-view-controller design pattern) to a file and get
them back again. This is similar to archiving (see Archives and Serializations Programming Guide ), but Core
Data offers much more than that.
● Core Data provides an infrastructure for managing all the changes to your model objects. This gives you
automatic support for undo and redo, and for maintaining reciprocal relationships between objects.
● It allows you to keep just a subset of your model objects in memory at any given time, which is very
important for iOS apps.
● It uses a schema to describe the model objects. You define the principal features of your model
classes—including the relationships between them—in a GUI-based editor. This provides a wealth of basic
functionality “for free,” including setting of default values and attribute value validation.
● It allows you to maintain disjoint sets of edits of your objects. This is useful if you want to, for example,
allow the user to make edits in one view that may be discarded without affecting data displayed in another
view.
● It has an infrastructure for data store versioning and migration. This lets you easily upgrade an old version
of the user’s file to the current version.
● It allows you to store your data in iCloud and access it from multiple devices.
For information about how to use Core Data, see Core Data Programming Guide .
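For example, retrieving model objects from a Core Data store might look like the following sketch. The "Picture" entity and its editable and title attributes are hypothetical names that would be defined in your managed object model:

```objective-c
#import <CoreData/CoreData.h>

// Sketch: fetch "Picture" objects whose editable attribute is YES,
// sorted by title. Entity and attribute names are hypothetical.
static NSArray *EditablePictures(NSManagedObjectContext *context) {
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Picture"];
    request.predicate = [NSPredicate predicateWithFormat:@"editable == YES"];
    request.sortDescriptors = @[ [NSSortDescriptor sortDescriptorWithKey:@"title"
                                                               ascending:YES] ];

    NSError *error = nil;
    NSArray *results = [context executeFetchRequest:request error:&error];
    if (results == nil) {
        NSLog(@"Fetch failed: %@", error);
    }
    return results;
}
```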
Defining a Document-Based Data Model
A document-based data model is a convenient way to manage the files your app writes to disk. In this type of
data model, you use a document object to represent the contents of a single file (or file package) on disk. That
document object is responsible for reading and writing the contents of the file and working with your app’s
view controllers to present the document’s contents on screen. The traditional use for document objects is to
manage files containing user data. For example, an app that creates and manages text files would use a separate
document object to manage each text file. However, you can use document objects for private app data that
is also backed by a file.
Figure 2-2 illustrates the typical relationships between documents, files, and the objects in your app’s data
model. With few exceptions, each document is self-contained and does not interact directly with other
documents. The document manages a single file (or file package) and creates the in-memory representation
of any data found in that file. Because the contents of each file are unique, the data structures associated with
each document are also unique.
Figure 2-2 Using documents to manage the content of files
You use the UIDocument class to implement document objects in your iOS app. This class provides the basic
infrastructure needed to handle the file management aspects of the document. Other benefits of UIDocument
include:
● It provides support for autosaving the document contents at appropriate times.
● It handles the required file coordination for documents stored in iCloud. It also provides hooks for resolving
version conflicts.
● It provides support for undoing actions.
You must subclass UIDocument in order to implement the specific behavior required by your app’s documents.
For detailed information about how to implement a document-based app using UIDocument, see
Document-Based App Programming Guide for iOS .
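A minimal subclass might look like the following sketch for a plain-text document. The class name, property, and choice of UTF-8 encoding are illustrative:

```objective-c
#import <UIKit/UIKit.h>

// Minimal UIDocument subclass sketch for a plain-text document.
@interface TextDocument : UIDocument
@property (nonatomic, copy) NSString *text;
@end

@implementation TextDocument

// Called when the document saves; return the data to write to disk.
- (id)contentsForType:(NSString *)typeName error:(NSError **)outError {
    return [self.text dataUsingEncoding:NSUTF8StringEncoding];
}

// Called when the document opens; build the in-memory model from the file data.
- (BOOL)loadFromContents:(id)contents
                  ofType:(NSString *)typeName
                   error:(NSError **)outError {
    self.text = [[NSString alloc] initWithData:contents
                                      encoding:NSUTF8StringEncoding];
    return YES;
}

@end
```

The two overrides are the heart of the class: UIDocument handles the actual reading, writing, autosaving, and file coordination around them.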
Integrating iCloud Support Into Your App
No matter how you store your app’s data, iCloud is a convenient way to make that data available to all of the
user’s devices. Supporting iCloud in your app just means changing where you store your files. Instead of storing
them in your app’s sandbox directory, you store them in a designated portion of the user’s iCloud storage. In
both cases, your app just works with files and directories. However, with iCloud, you have to do a little extra
work because the data is now shared and accessible to multiple processes. Fortunately, when you use iOS
frameworks to manage your data, much of the hard work needed to support iCloud is done for you.
● Document-based apps get iCloud support through the UIDocument class. This class handles almost all of
the complex interactions required to manage iCloud-based files.
● Core Data apps also get iCloud support through the Core Data framework. This framework automatically
updates the data stores on all of the user’s devices to account for new and changed data objects, leaving
each device with a complete and up-to-date set of data.
● If you implement a custom data model and manage files yourself, you can use file presenters and file
coordinators to ensure that the changes you make are done safely and in concert with the changes made
on the user’s other devices.
● For apps that want to share preferences or small quantities of infrequently changing data, you can use
the NSUbiquitousKeyValueStore object to do so. This object supports the sharing of simple data types
such as strings, numbers, and dates in limited quantities.
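For instance, sharing a small preference through the key-value store might look like this sketch. The key name is illustrative, and the app must have the appropriate iCloud entitlements configured for this to work:

```objective-c
#import <Foundation/Foundation.h>

// Sketch of sharing a small preference via the iCloud key-value store.
static void SavePreferredLayoutStyle(void) {
    NSUbiquitousKeyValueStore *store = [NSUbiquitousKeyValueStore defaultStore];

    // Write a value; iOS propagates it to the user's other devices.
    [store setString:@"grid" forKey:@"PreferredLayoutStyle"];
    [store synchronize];
}

static NSString *PreferredLayoutStyle(void) {
    // Read the value back (it may have been set on another device).
    return [[NSUbiquitousKeyValueStore defaultStore]
            stringForKey:@"PreferredLayoutStyle"];
}
```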
For more information about incorporating iCloud support into your apps, see iCloud Design Guide .
The User Interface
Every iOS app has at least one window and one view for presenting its content. The window provides the area
in which to display the content and is an instance of the UIWindow class. Views are responsible for managing
the drawing of your content (and handling touch events) and are instances of the UIView class. For interfaces
that you build using view objects, your app’s window naturally contains multiple view objects. For interfaces
built using OpenGL ES, you typically have a single view and use that view to render your content.
View controllers also play a very important role in your app’s user interface. A view controller is an instance of
the UIViewController class and is responsible for managing a single set of views and the interactions
between those views and other parts of your app. Because iOS apps have a limited amount of space in which
to display content, view controllers also provide the infrastructure needed to swap out the views from one
view controller and replace them with the views of another view controller. Thus, view controllers are how you
implement transitions from one type of content to another.
You should always think of a view controller object as a self-contained unit. It handles the creation and
destruction of its own views, handles their presentation on the screen, and coordinates interactions between
the views and other objects in your app.
Building an Interface Using UIKit Views
Apps that use UIKit views for drawing are easy to create because you can assemble a basic interface quickly.
The UIKit framework provides many different types of views to help present and organize data. Controls—a
special type of view—provide a built-in mechanism for executing custom code whenever the user performs
appropriate actions. For example, tapping a button causes the button’s associated action method to be called.
The advantage of interfaces based on UIKit views is that you can assemble them graphically using Interface
Builder—the visual interface editor built in to Xcode. Interface Builder provides a library of the standard views,
controls, and other objects that you need to build your interface. After dragging these objects from the library,
you drop them onto the work surface and arrange them in any way you want. You then use inspectors to
configure those objects before saving them in a storyboard or nib file. The process of assembling your interface
graphically is much faster than writing the equivalent code and allows you to see the results immediately,
without the need to build and run your app.
Note: You can also incorporate custom views into your UIKit view hierarchies. A custom view is a
subclass of UIView in which you handle all of the drawing and event-handling tasks yourself. For
more information about creating custom views and incorporating them into your view hierarchies,
see View Programming Guide for iOS .
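As a sketch of the target-action mechanism described above, the following shows a control wired to custom code. The MyViewController class, the button geometry, and the buttonTapped: method are illustrative assumptions, not part of any template.

```objc
// A minimal sketch, assuming a hypothetical MyViewController class.
#import <UIKit/UIKit.h>

@interface MyViewController : UIViewController
@end

@implementation MyViewController
- (void)viewDidLoad {
    [super viewDidLoad];
    UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    button.frame = CGRectMake(20.0, 20.0, 280.0, 44.0);
    [button setTitle:@"Tap Me" forState:UIControlStateNormal];
    // When the user taps the button, UIKit calls buttonTapped: automatically.
    [button addTarget:self
               action:@selector(buttonTapped:)
     forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:button];
}

- (void)buttonTapped:(id)sender {
    NSLog(@"Button tapped");
}
@end
```

The same connection can of course be made graphically in Interface Builder by dragging from the button to an action method.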
Figure 2-3 shows the basic structure of an app whose interface is constructed solely using view objects. In this instance, the main view spans the visible area of the window (minus the scroll bar) and provides a simple white background. The main view also contains three subviews: an image view, a text view, and a button. Those subviews are what the app uses to present content to the user and respond to interactions. All of the views in the hierarchy are managed by a single view controller object.
Figure 2-3 Building your interface using view objects
In a typical view-based app, you coordinate the onscreen views using your view controller objects. An app
always has one view controller that is responsible for presenting all of the content on the screen. That view
controller has a content view, which itself may contain other views. Some view controllers can also act as
containers for content provided by other view controllers. For example, a split view controller displays the
content from two view controllers side by side. Because view controllers play a vital role in view management,
understand how they work and the benefits they provide by reading View Controller Programming Guide for iOS. For more information about views and the role they play in apps, see View Programming Guide for iOS.
Building an Interface Using Views and OpenGL ES
Games and other apps that need high frame rates or sophisticated drawing capabilities can add views specifically designed for OpenGL ES drawing to their view hierarchies. The simplest type of OpenGL ES app is one that has a window object, a single view for OpenGL ES drawing, and a view controller to manage the presentation and rotation of that content. More sophisticated apps can use a mixture of OpenGL ES views and UIKit views to implement their interfaces.
Figure 2-4 shows the configuration of an app that uses a single OpenGL ES view to draw its interface. Unlike
a UIKit view, the OpenGL ES view is backed by a different type of layer object (a CAEAGLLayer object) instead
of the standard layer used for view-based apps. The CAEAGLLayer object provides the drawing surface that
OpenGL ES can render into. To manage the drawing environment, the app also creates an EAGLContext object
and stores that object with the view to make it easy to retrieve.
Figure 2-4 Building your interface using OpenGL ES
For information on how to configure OpenGL ES for use in your app, see OpenGL ES Programming Guide for iOS.
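The CAEAGLLayer and EAGLContext configuration described above can be sketched as a UIView subclass. The MyGLView class name and the stored context property are illustrative assumptions.

```objc
// A minimal sketch of an OpenGL ES view, assuming a hypothetical MyGLView class.
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
#import <OpenGLES/EAGL.h>

@interface MyGLView : UIView
@property (nonatomic, strong) EAGLContext *context;
@end

@implementation MyGLView
// Back this view with a CAEAGLLayer instead of the standard layer.
+ (Class)layerClass {
    return [CAEAGLLayer class];
}

- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = YES;
        // Create the drawing environment and store it with the view
        // so that it is easy to retrieve at render time.
        _context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        [EAGLContext setCurrentContext:_context];
    }
    return self;
}
@end
```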
The App Bundle
When you build your iOS app, Xcode packages it as a bundle. A bundle is a directory in the file system that groups related resources together in one place. An iOS app bundle contains the app executable file and supporting resource files such as app icons, image files, and localized content. Table 2-3 lists the contents of a typical iOS app bundle, which for demonstration purposes is called MyApp. Some of the files listed in this table may not appear in your own app bundles.
Table 2-3  A typical app bundle

App executable
Example: MyApp
The executable file contains your app’s compiled code. The name of your app’s executable file is the same as your app name, minus the .app extension. This file is required.

The information property list file
Example: Info.plist
The Info.plist file contains configuration data for the app. The system uses this data to determine how to interact with the app. This file is required and must be called Info.plist. For more information, see “The Information Property List File” (page 93).

App icons
Examples: Icon.png, Icon@2x.png, Icon-Small.png, Icon-Small@2x.png
Your app icon is used to represent your app on the device’s Home screen. Other icons are used by the system in appropriate places. Icons with @2x in their filename are intended for devices with Retina displays. An app icon is required. For information about specifying icon image files, see “App Icons” (page 98).

Launch images
Examples: Default.png, Default-Portrait.png, Default-Landscape.png
The system uses this file as a temporary background while your app is launching. It is removed as soon as your app is ready to display its user interface. At least one launch image is required. For information about specifying launch images, see “App Launch (Default) Images” (page 100).

Storyboard files (or nib files)
Example: MainBoard.storyboard
Storyboards contain the views and view controllers that the app presents on screen. Views in a storyboard are organized according to the view controller that presents them. Storyboards also identify the transitions (called segues) that take the user from one set of views to another. The name of the main storyboard file is set by Xcode when you create your project. You can change the name by assigning a different value to the UIMainStoryboardFile key in the Info.plist file. Apps that use nib files instead of storyboards can replace the UIMainStoryboardFile key with the NSMainNibFile key and use that key to specify their main nib file. The use of storyboards (or nib files) is optional but recommended.

Ad hoc distribution icon
Example: iTunesArtwork
If you are distributing your app ad hoc, include a 512 x 512 pixel version of your app icon. This icon is normally provided by the App Store from the materials you submit to iTunes Connect. However, because apps distributed ad hoc do not go through the App Store, your icon must be present in your app bundle instead. iTunes uses this icon to represent your app. (The file you specify should be the same one you would have submitted to the App Store, if you were distributing your app that way.) The filename of this icon must be iTunesArtwork and must not include a filename extension. This file is required for ad hoc distribution but is optional otherwise.

Settings bundle
Example: Settings.bundle
If you want to expose custom app preferences through the Settings app, you must include a settings bundle. This bundle contains the property list data and other resource files that define your app preferences. The Settings app uses the information in this bundle to assemble the interface elements required by your app. This bundle is optional. For more information about preferences and specifying a settings bundle, see Preferences and Settings Programming Guide.

Nonlocalized resource files
Examples: sun.png, mydata.plist
Nonlocalized resources include things like images, sound files, movies, and custom data files that your app uses. All of these files should be placed at the top level of your app bundle.

Subdirectories for localized resources
Examples: en.lproj, fr.lproj, es.lproj
Localized resources must be placed in language-specific project directories, the names for which consist of an ISO 639-1 language abbreviation plus the .lproj suffix. (For example, the en.lproj, fr.lproj, and es.lproj directories contain resources localized for English, French, and Spanish.) An iOS app should be internationalized and have a language.lproj directory for each language it supports. In addition to providing localized versions of your app’s custom resources, you can also localize your app icon, launch images, and Settings icon by placing files with the same name in your language-specific project directories. For more information, see “Localized Resource Files” (page 105).
For more information about the structure of an iOS app bundle, see Bundle Programming Guide. For information about how to load resource files from your bundle, see “Loading Resources Into Your App” (page 106).
For iOS apps, it is crucial to know whether your app is running in the foreground or the background. Because system resources are more limited on iOS devices, an app must behave differently in the background than in the foreground. The operating system also limits what your app can do in the background in order to improve battery life and to improve the user’s experience with the foreground app. The operating system notifies your app whenever it moves between the foreground and background. These notifications are your chance to modify your app’s behavior.
While your app is in the foreground, the system sends touch events to it for processing. The UIKit infrastructure does most of the hard work of delivering events to your custom objects. All you have to do is override methods in the appropriate objects to process those events. For controls, UIKit simplifies things even further by handling the touch events for you and calling your custom code only when something interesting happens, such as when the value of a text field changes.
As you implement your app, follow these guidelines:
● (Required) Respond appropriately to the state transitions that occur. Not handling these transitions properly can lead to data loss and a bad user experience. For a summary of how to respond to state transitions, see “Managing App State Changes” (page 34).
● (Required) When moving to the background, make sure your app adjusts its behavior appropriately. For guidelines about what to do when your app moves to the background, see “Being a Responsible Background App” (page 63).
● (Recommended) Register for any notifications that report system changes your app needs. When an app is suspended, the system queues key notifications and delivers them when the app resumes execution. Apps should use these notifications to make a smooth transition back to execution. For more information, see “Processing Queued Notifications at Wakeup Time” (page 48).
● (Optional) If your app needs to do actual work while in the background, ask the system for the appropriate permissions to continue running. For more information about the types of background work you can do and how to request permission to do that work, see “Background Execution and Multitasking” (page 54).
App States and Multitasking

Managing App State Changes
At any given moment, your app is in one of the states listed in Table 3-1. The system moves your app from state to state in response to actions happening throughout the system. For example, when the user presses the Home button, a phone call comes in, or any of several other interruptions occurs, the currently running apps change state in response. Figure 3-1 (page 35) shows the paths that an app takes when moving from state to state.

Table 3-1  App states

Not running: The app has not been launched or was running but was terminated by the system.

Inactive: The app is running in the foreground but is currently not receiving events. (It may be executing other code, though.) An app usually stays in this state only briefly as it transitions to a different state.

Active: The app is running in the foreground and is receiving events. This is the normal mode for foreground apps.

Background: The app is in the background and executing code. Most apps enter this state briefly on their way to being suspended. However, an app that requests extra execution time may remain in this state for a period of time. In addition, an app being launched directly into the background enters this state instead of the inactive state. For information about how to execute code while in the background, see “Background Execution and Multitasking” (page 54).

Suspended: The app is in the background but is not executing code. The system moves apps to this state automatically and does not notify them before doing so. While suspended, an app remains in memory but does not execute any code. When a low-memory condition occurs, the system may purge suspended apps without notice to make more space for the foreground app.
Figure 3-1  State changes in an iOS app

Note: Apps running in iOS 3.2 and earlier do not enter the background or suspended states. In addition, some devices do not support multitasking or background execution at all, even when running iOS 4 or later. Apps running on those devices also do not enter the background or suspended states. Instead, apps are terminated upon leaving the foreground.
Most state transitions are accompanied by a corresponding call to the methods of your app delegate object.
These methods are your chance to respond to state changes in an appropriate way. These methods are listed
below, along with a summary of how you might use them.
● application:willFinishLaunchingWithOptions:—This method is your app’s first chance to execute code at launch time.
● application:didFinishLaunchingWithOptions:—This method allows you to perform any final initialization before your app is displayed to the user.
● applicationDidBecomeActive:—Lets your app know that it is about to become the foreground app. Use this method for any last-minute preparation.
● applicationWillResignActive:—Lets you know that your app is transitioning away from being the foreground app. Use this method to put your app into a quiescent state.
● applicationDidEnterBackground:—Lets you know that your app is now running in the background and may be suspended at any time.
● applicationWillEnterForeground:—Lets you know that your app is moving out of the background and back into the foreground, but that it is not yet active.
● applicationWillTerminate:—Lets you know that your app is being terminated. This method is not called if your app is suspended.
The App Launch Cycle

When your app is launched, it moves from the not running state to the active or background state, transitioning briefly through the inactive state. As part of the launch cycle, the system creates a process and main thread for your app and calls your app’s main function on that main thread. The default main function that comes with your Xcode project promptly hands control over to the UIKit framework, which does most of the work in initializing your app and preparing it to run.
Figure 3-2 shows the sequence of events that occurs when an app is launched into the foreground, including the app delegate methods that are called.

Figure 3-2  Launching an app into the foreground
[Figure: the user taps the app icon; main() calls UIApplicationMain(), which loads the main UI file, performs first initialization (application:willFinishLaunchingWithOptions:), restores UI state, performs final initialization (application:didFinishLaunchingWithOptions:), and activates the app (applicationDidBecomeActive:). The app then enters the event loop and handles events in your code until the user switches to a different app.]
If your app is launched into the background instead—usually to handle some type of background event—the launch cycle changes slightly to the one shown in Figure 3-3. The main difference is that instead of your app being made active, it enters the background state to handle the event and then is suspended shortly afterward. When launching into the background, the system still loads your app’s user interface files, but it does not display the app’s window.
Figure 3-3  Launching an app into the background

To determine whether your app is launching into the foreground or background, check the applicationState property of the shared UIApplication object in your application:willFinishLaunchingWithOptions: or application:didFinishLaunchingWithOptions: delegate method. When the app is launched into the foreground, this property contains the value UIApplicationStateInactive. When the app is launched into the background, the property contains the value UIApplicationStateBackground instead. You can use this difference to adjust the launch-time behavior of your delegate methods accordingly.
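The applicationState check described above might be sketched like this in the app delegate; the comments marking each branch are illustrative.

```objc
// A sketch of distinguishing a foreground launch from a background launch.
- (BOOL)application:(UIApplication *)application
    willFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    if (application.applicationState == UIApplicationStateInactive) {
        // The app is being launched into the foreground.
    }
    else if (application.applicationState == UIApplicationStateBackground) {
        // The app is being launched into the background,
        // most likely to handle a background event.
    }
    return YES;
}
```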
Note: When an app is launched so that it can open a URL, the sequence of startup events is slightly
different from those shown in Figure 3-2 and Figure 3-3. For information about the startup sequences
that occur when opening a URL, see “Handling URL Requests” (page 120).
About the main Function
Like any C-based app, the main entry point for an iOS app at launch time is the main function. In an iOS app,
the main function is used only minimally. Its main job is to hand control to the UIKit framework. Therefore,
any new project you create in Xcode comes with a default main function like the one shown in Listing 3-1.
With few exceptions, you should never change the implementation of this function.
Listing 3-1  The main function of an iOS app

#import <UIKit/UIKit.h>

int main(int argc, char *argv[])
{
    @autoreleasepool {
        return UIApplicationMain(argc, argv, nil,
                                 NSStringFromClass([MyAppDelegate class]));
    }
}
Note: An autorelease pool is a Cocoa memory-management mechanism used to defer the release of objects created during a block of code. For more information about autorelease pools, see Advanced Memory Management Programming Guide.
The UIApplicationMain function takes four parameters and uses them to initialize the app. You should never have to change the default values passed into this function. Still, it is valuable to understand their purpose and how they start the app.
● The argc and argv parameters contain any launch-time arguments passed to the app from the system. These arguments are parsed by the UIKit infrastructure and can otherwise be ignored.
● The third parameter identifies the name of the principal app class. This is the class responsible for running the app. It is recommended that you specify nil for this parameter, which causes UIKit to use the UIApplication class.
● The fourth parameter identifies the class of your custom app delegate. Your app delegate is responsible for managing the high-level interactions between the system and your code. The Xcode template projects set this parameter to an appropriate value automatically.
Another thing the UIApplicationMain function does is load the app’s main user interface file. The main interface file contains the initial view-related objects you plan to display in your app’s user interface. For apps that use storyboards, this function loads the initial view controller from your storyboard and installs it in the window provided by your app delegate. For apps that use nib files, the function loads the nib file contents into memory but does not install them in your app’s window; you must install them in the application:willFinishLaunchingWithOptions: method of your app delegate.

An app can have either a main storyboard file or a main nib file, but it cannot have both. Storyboards are the preferred way to specify your app’s user interface but are not supported on all versions of iOS. The name of your app’s main storyboard file goes in the UIMainStoryboardFile key of your app’s Info.plist file. (For nib-based apps, the name of your main nib file goes in the NSMainNibFile key instead.) Normally, Xcode sets the value of the appropriate key when you create your project, but you can change it later if needed.
For more information about the Info.plist file and how you use it to configure your app, see “The Information Property List File” (page 93).
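For example, the Info.plist entry for a storyboard-based app might look like the following fragment; the MainBoard storyboard name is the same illustrative name used in Table 2-3.

```xml
<key>UIMainStoryboardFile</key>
<string>MainBoard</string>
```

A nib-based app would instead use the NSMainNibFile key with the name of its main nib file as the string value.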
What to Do at Launch Time

When your app is launched (either into the foreground or background), use your app delegate’s application:willFinishLaunchingWithOptions: and application:didFinishLaunchingWithOptions: methods to do the following:
● Check the contents of the launch options dictionary for information about why the app was launched, and respond appropriately.
● Initialize the app’s critical data structures.
● Prepare your app’s window and views for display.
Apps that use OpenGL ES should not use these methods to prepare their drawing environment. Instead, they should defer any OpenGL ES drawing calls to the applicationDidBecomeActive: method.
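The launch-time checklist above might be sketched in the app delegate as follows. The remote-notification check is just one illustrative use of the launch options dictionary.

```objc
// A sketch of launch-time work in the app delegate.
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Check why the app was launched, and respond appropriately.
    NSDictionary *notification =
        [launchOptions objectForKey:UIApplicationLaunchOptionsRemoteNotificationKey];
    if (notification) {
        // The app was launched in response to a remote notification.
    }

    // Initialize the app’s critical data structures and prepare
    // the window and views for display here.
    return YES;
}
```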
If your app does not automatically load a main storyboard or nib file at launch time, you can use the application:willFinishLaunchingWithOptions: method to prepare your app’s window for display. For apps that support both portrait and landscape orientations, always set up the root view controller of your main window in a portrait orientation. If the device is in a different orientation at launch time, the system tells the root view controller to rotate your views to the correct orientation before displaying the window.

Your application:willFinishLaunchingWithOptions: and application:didFinishLaunchingWithOptions: methods should always be as lightweight as possible to reduce your app’s launch time. Apps are expected to launch, initialize themselves, and start handling events in less than 5 seconds. If an app does not finish its launch cycle in a timely manner, the system kills it for being unresponsive. Thus, any tasks that might slow down your launch (such as accessing the network) should be executed asynchronously on a secondary thread.
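One common way to move slow launch work off the main thread is Grand Central Dispatch, as in the following sketch; the URL is purely hypothetical.

```objc
// A sketch of deferring slow work so the launch methods return quickly.
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Perform slow, non-UI work (such as network access) off the main thread.
        NSURL *url = [NSURL URLWithString:@"http://example.com/config.plist"]; // hypothetical
        NSData *data = [NSData dataWithContentsOfURL:url];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Update the user interface with the results on the main thread.
        });
    });
    return YES; // Return quickly so the launch cycle finishes in time.
}
```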
When launching into the foreground, the system also calls the applicationDidBecomeActive: method
to finish the transition to the foreground. Because this method is called both at launch time and when
transitioning from the background, use it to perform any tasks that are common to the two transitions.
When launching into the background, there should not be much for your app to do except get ready to handle
whatever event arrived.
Responding to Interruptions
When an alert-based interruption occurs, such as an incoming phone call, the app moves temporarily to the inactive state so that the system can prompt the user about how to proceed. The app remains in this state until the user dismisses the alert. At this point, the app either returns to the active state or moves to the background state. Figure 3-4 shows the flow of events through your app when an alert-based interruption occurs.
Figure 3-4 Handling alert-based interruptions
In iOS 5, notifications that display a banner do not deactivate your app in the way that alert-based notifications do. Instead, the banner is laid along the top edge of your app window, and your app continues to receive touch events as before. However, if the user pulls down the banner to reveal the notification center, your app moves to the inactive state just as if an alert-based interruption had occurred. Your app remains in the inactive state until the user dismisses the notification center or launches another app. At this point, your app moves to the appropriate active or background state. The user can use the Settings app to configure which notifications display a banner and which display an alert.
Pressing the Sleep/Wake button is another type of interruption that causes your app to be deactivated temporarily. When the user presses this button, the system disables touch events, moves the app to the background but sets the value of the app’s applicationState property to UIApplicationStateInactive (as opposed to UIApplicationStateBackground), and finally locks the screen. A locked screen has additional consequences for apps that use data protection to encrypt files. Those consequences are described in “What to Do When an Interruption Occurs” (page 43).
What to Do When an Interruption Occurs
Alert-based interruptions result in a temporary loss of control by your app. Your app continues to run in the
foreground, but it does not receive touch events from the system. (It does continue to receive notifications
and other types of events, such as accelerometer events, though.) In response to this change, your app should
do the following in its applicationWillResignActive: method:
● Stop timers and other periodic tasks.
● Stop any running metadata queries.
● Do not initiate any new tasks.
● Pause movie playback (except when playing back over AirPlay).
● Enter into a pause state if your app is a game.
● Throttle back OpenGL ES frame rates.
● Suspend any dispatch queues or operation queues executing non-critical code. (You can continue processing
network requests and other time-sensitive background tasks while inactive.)
When your app is moved back to the active state, its applicationDidBecomeActive: method should
reverse any of the steps taken in the applicationWillResignActive: method. Thus, upon reactivation,
your app should restart timers, resume dispatch queues, and throttle up OpenGL ES frame rates again. However,
games should not resume automatically; they should remain paused until the user chooses to resume them.
When the user presses the Sleep/Wake button, apps with files protected by the NSFileProtectionComplete protection option must close any references to those files. For devices configured with an appropriate password, pressing the Sleep/Wake button locks the screen and forces the system to throw away the decryption keys for files with complete protection enabled. While the screen is locked, any attempts to access the corresponding files will fail. So if you have such files, you should close any references to them in your applicationWillResignActive: method and open new references in your applicationDidBecomeActive: method.
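For a protected file, that close-and-reopen cycle might be sketched as follows; the protectedFileHandle and protectedFilePath properties are illustrative assumptions.

```objc
// A sketch of relinquishing and reacquiring a file protected with
// NSFileProtectionComplete around screen lock.
- (void)applicationWillResignActive:(UIApplication *)application {
    // The file’s decryption keys go away when the screen locks,
    // so close the reference now.
    [self.protectedFileHandle closeFile];
    self.protectedFileHandle = nil;
}

- (void)applicationDidBecomeActive:(UIApplication *)application {
    // Reopen the file now that the screen is unlocked again.
    self.protectedFileHandle =
        [NSFileHandle fileHandleForReadingAtPath:self.protectedFilePath];
}
```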
Adjusting Your User Interface During a Phone Call
When the user takes a call and then returns to your app while on the call, the height of the status bar grows
to reflect the fact that the user is on a call. Similarly, when the user ends the call, the status bar height shrinks
back to its regular size.
The best way to handle status bar height changes is to use view controllers to manage your views. When installed in your interface, view controllers automatically adjust the height of their managed views when the status bar frame size changes.
If your app does not use view controllers for some reason, you must respond to status bar frame changes
manually by registering for the UIApplicationDidChangeStatusBarFrameNotification notification.
Your handler for this notification should get the status bar height and use it to adjust the height of your app’s
views appropriately.
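Such a handler might be registered and implemented as in the following sketch; the statusBarFrameDidChange: method name is illustrative.

```objc
// A sketch of responding to status bar frame changes manually.
- (void)viewDidLoad {
    [super viewDidLoad];
    [[NSNotificationCenter defaultCenter]
        addObserver:self
           selector:@selector(statusBarFrameDidChange:)
               name:UIApplicationDidChangeStatusBarFrameNotification
             object:nil];
}

- (void)statusBarFrameDidChange:(NSNotification *)notification {
    NSValue *frameValue =
        [[notification userInfo] objectForKey:UIApplicationStatusBarFrameUserInfoKey];
    CGRect statusBarFrame = [frameValue CGRectValue];
    // Use statusBarFrame.size.height to adjust the height of the app’s views.
}
```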
Moving to the Background

When the user presses the Home button, presses the Sleep/Wake button, or the system launches another app, the foreground app transitions to the inactive state and then to the background state. These transitions result in calls to the app delegate’s applicationWillResignActive: and applicationDidEnterBackground: methods, as shown in Figure 3-5. After returning from the applicationDidEnterBackground: method, most apps move to the suspended state shortly afterward. Apps that request specific background tasks (such as playing music) or that request a little extra execution time from the system may continue to run for a while longer.
Figure 3-5  Moving from the foreground to the background

Note: Apps are moved to the background only on devices that support multitasking and only if those devices are running iOS 4.0 or later. In all other cases, the app is terminated (and thus purged from memory) instead of moved to the background.
What to Do When Moving to the Background
Apps can use their applicationDidEnterBackground: method to prepare for moving to the background
state. When moving to the background, all apps should do the following:
● Prepare to have their picture taken. When the applicationDidEnterBackground: method returns, the system takes a picture of your app’s user interface and uses the resulting image for transition animations. If any views in your interface contain sensitive information, you should hide or modify those views before the applicationDidEnterBackground: method returns.
● Save user data and app state information. All unsaved changes should be written to disk when entering
the background. This step is necessary because your app might be quietly killed while in the background
for any number of reasons. You can perform this operation from a background thread as needed.
● Free up as much memory as possible. For more information about what to do and why this is important,
see “Memory Usage for Background Apps” (page 47).
Your app delegate’s applicationDidEnterBackground: method has approximately 5 seconds to finish
any tasks and return. In practice, this method should return as quickly as possible. If the method does not
return before time runs out, your app is killed and purged from memory. If you still need more time to perform
tasks, call the beginBackgroundTaskWithExpirationHandler: method to request background execution
time and then start any long-running tasks in a secondary thread. Regardless of whether you start any
background tasks, the applicationDidEnterBackground: method must still exit within 5 seconds.
Note: The UIApplicationDidEnterBackgroundNotification notification is also sent to let
interested parts of your app know that it is entering the background. Objects in your app can use
the default notification center to register for this notification.
Depending on the features of your app, there are other things your app should do when moving to the
background. For example, any active Bonjour services should be suspended and the app should stop calling
OpenGL ES functions. For a list of things your app should do when moving to the background, see “Being a
Responsible Background App” (page 63).
Memory Usage for Background Apps
Every app should free up as much memory as is practical upon entering the background. The system tries to
keep as many apps in memory at the same time as it can, but when memory runs low it terminates suspended
apps to reclaim that memory. Apps that consume large amounts of memory while in the background are the
first apps to be terminated.
Practically speaking, your app should remove strong references to objects as soon as they are no longer needed. Removing strong references gives the compiler the ability to release the objects right away so that the corresponding memory can be reclaimed. However, if you want to cache some objects to improve performance, you can wait until the app transitions to the background before removing references to them.
Some examples of objects that you should remove strong references to as soon as possible include:
● Image objects
● Large media or data files that you can load again from disk
● Any other objects that your app does not need and can recreate easily later
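For example, an app delegate might drop such references when entering the background (a sketch; the cachedImages and cachedData properties are hypothetical caches the app can rebuild later):

```objc
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    // Release recreatable objects so their memory can be reclaimed.
    self.cachedImages = nil;   // hypothetical image cache
    self.cachedData = nil;     // hypothetical data cache
}
```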
To help reduce your app’s memory footprint, the system automatically purges some data allocated on behalf
of your app when your app moves to the background.
● The system purges the backing store for all Core Animation layers. This effort does not remove your app's
layer objects from memory, nor does it change the current layer properties. It simply prevents the contents
of those layers from appearing onscreen, which given that the app is in the background should not happen
anyway.
● It removes any system references to cached images. (If your app does not have a strong reference to the
images, they are subsequently removed from memory.)
● It removes strong references to some other system-managed data caches.
Returning to the Foreground
Returning to the foreground is your app’s chance to restart the tasks that it stopped when it moved to the
background. The steps that occur when moving to the foreground are shown in Figure 3-6. The
applicationWillEnterForeground: method should undo anything that was done in your
applicationDidEnterBackground: method, and the applicationDidBecomeActive: method should
continue to perform the same activation tasks that it would at launch time.
Figure 3-6 Transitioning from the background to the foreground
Note: The UIApplicationWillEnterForegroundNotification notification is also available
for tracking when your app reenters the foreground. Objects in your app can use the default
notification center to register for this notification.
Processing Queued Notifications at Wakeup Time
An app in the suspended state must be ready to handle any queued notifications when it returns to a foreground
or background execution state. A suspended app does not execute any code and therefore cannot process
notifications related to orientation changes, time changes, preferences changes, and many others that would
affect the app's appearance or state. To make sure these changes are not lost, the system queues many relevant
notifications and delivers them to the app as soon as it starts executing code again (either in the foreground
or background). To prevent your app from becoming overloaded with notifications when it resumes, the system
coalesces events and delivers a single notification (of each relevant type) that reflects the net change since
your app was suspended.
Table 3-2 lists the notifications that can be coalesced and delivered to your app. Most of these notifications
are delivered directly to the registered observers. Some, like those related to device orientation changes, are
typically intercepted by a system framework and delivered to your app in another way.
Table 3-2 Notifications delivered to waking apps
● An accessory is connected or disconnected: EAAccessoryDidConnectNotification,
EAAccessoryDidDisconnectNotification
● The device orientation changes: UIDeviceOrientationDidChangeNotification. (In addition
to this notification, view controllers update their interface orientations automatically.)
● There is a significant time change: UIApplicationSignificantTimeChangeNotification
● The battery level or battery state changes: UIDeviceBatteryLevelDidChangeNotification,
UIDeviceBatteryStateDidChangeNotification
● The proximity state changes: UIDeviceProximityStateDidChangeNotification
● The status of protected files changes: UIApplicationProtectedDataWillBecomeUnavailable,
UIApplicationProtectedDataDidBecomeAvailable
● An external display is connected or disconnected: UIScreenDidConnectNotification,
UIScreenDidDisconnectNotification
● The screen mode of a display changes: UIScreenModeDidChangeNotification
● Preferences that your app exposes through the Settings app changed:
NSUserDefaultsDidChangeNotification
● The current language or locale settings changed: NSCurrentLocaleDidChangeNotification
● The status of the user's iCloud account changed: NSUbiquityIdentityDidChangeNotification
Queued notifications are delivered on your app's main run loop and are typically delivered before any touch
events or other user input. Most apps should be able to handle these events quickly enough that they would
not cause any noticeable lag when resumed. However, if your app appears sluggish when it returns from the
background state, use Instruments to determine whether your notification handler code is causing the delay.
An app returning to the foreground also receives view-update notifications for any views that were marked
dirty since the last update. An app running in the background can still call the setNeedsDisplay or
setNeedsDisplayInRect: methods to request an update for its views. However, because the views are not
visible, the system coalesces the requests and updates the views only after the app returns to the foreground.
Handling iCloud Changes
If the status of iCloud changes for any reason, the system delivers an
NSUbiquityIdentityDidChangeNotification notification to your app. The state of iCloud changes when
the user logs into or out of an iCloud account or enables or disables the syncing of documents and data. This
notification is your app's cue to update caches and any iCloud-related user interface elements to accommodate
the change. For example, when the user logs out of iCloud, you should remove references to all iCloud–based
files or data.
If your app has already prompted the user about whether to store files in iCloud, do not prompt again when
the status of iCloud changes. After prompting the user the first time, store the user’s choice in your app’s local
preferences. You might then want to expose that preference using a Settings bundle or as an option in your
app. But do not repeat the prompt again unless that preference is not currently in the user defaults database.
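A sketch of recording that choice in the user defaults database (the key name kUseICloudKey and the prompt helper are hypothetical):

```objc
static NSString * const kUseICloudKey = @"UseICloudStorage";

NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
if ([defaults objectForKey:kUseICloudKey] == nil) {
    // First time only: ask the user, then remember the answer.
    BOOL userWantsICloud = [self askUserAboutICloud]; // hypothetical prompt
    [defaults setBool:userWantsICloud forKey:kUseICloudKey];
}
// On later launches, read the stored choice instead of prompting again.
BOOL useICloud = [defaults boolForKey:kUseICloudKey];
```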
Handling Locale Changes Gracefully
If a user changes the current language while your app is suspended, you can use the
NSCurrentLocaleDidChangeNotification notification to force updates to any views containing
locale-sensitive information, such as dates, times, and numbers when your app returns to the foreground. Of
course, the best way to avoid language-related issues is to write your code in ways that make it easy to update
views. For example:
● Use the autoupdatingCurrentLocale class method when retrieving NSLocale objects. This method
returns a locale object that updates itself automatically in response to changes, so you never need to
recreate it. However, when the locale changes, you still need to refresh views that contain content derived
from the current locale.
● Re-create any cached date and number formatter objects whenever the current locale information changes.
For more information about internationalizing your code to handle locale changes, see Internationalization
Programming Topics.
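The second point might be sketched as follows, discarding a cached formatter when the locale changes (dateFormatter is a hypothetical property that is recreated lazily elsewhere):

```objc
// Observe locale changes on the main queue and drop the stale formatter.
[[NSNotificationCenter defaultCenter]
    addObserverForName:NSCurrentLocaleDidChangeNotification
                object:nil
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
        // Invalidate the cache; the next access recreates the formatter
        // with the new locale.
        self.dateFormatter = nil;
    }];
```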
Responding to Changes in Your App's Settings
If your app has settings that are managed by the Settings app, it should observe the
NSUserDefaultsDidChangeNotification notification. Because the user can modify settings while your
app is suspended or in the background, you can use this notification to respond to any important changes in
those settings. In some cases, responding to this notification can help close a potential security hole. For
example, an email program should respond to changes in the user’s account information. Failure to monitor
these changes could cause privacy or security issues. Specifically, the current user might be able to send email
using the old account information, even if the account no longer belongs to that person.
Upon receiving the NSUserDefaultsDidChangeNotification notification, your app should reload any
relevant settings and, if necessary, reset its user interface appropriately. In cases where passwords or other
security-related information has changed, you should also hide any previously displayed information and force
the user to enter the new password.
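For example, an observer method for this notification might recheck the stored account name and reset the interface when it changes (a sketch; the key, property, and refresh method are hypothetical):

```objc
- (void)defaultsChanged:(NSNotification *)notification
{
    NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
    NSString *account = [defaults stringForKey:@"AccountName"]; // hypothetical key
    if (![account isEqualToString:self.currentAccount]) {
        self.currentAccount = account;
        // Hide stale account info and force the user to authenticate again.
        [self refreshAccountUI]; // hypothetical helper
    }
}
```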
App Termination
Although apps are generally moved to the background and suspended, if any of the following conditions are
true, your app is terminated and purged from memory instead:
● The app is linked against a version of iOS earlier than 4.0.
● The app is deployed on a device running a version of iOS earlier than 4.0.
● The current device does not support multitasking; see "Determining Whether Multitasking Is Available" (page
54).
● The app includes the UIApplicationExitsOnSuspend key in its Info.plist file; see “Opting out of
Background Execution” (page 65).
If your app is running (either in the foreground or background) at termination time, the system calls your app
delegate’s applicationWillTerminate: method so that you can perform any required cleanup. You can
use this method to save user data or app state information that you would use to restore your app to its current
state on a subsequent launch. Your method has approximately 5 seconds to perform any tasks and return. If
it does not return in time, the app is killed and removed from memory.
Important: The applicationWillTerminate: method is not called if your app is currently suspended.
Even if you develop your app using iOS SDK 4 and later, you must still be prepared for your app to be killed
without any notification. The user can kill apps explicitly using the multitasking UI. In addition, if memory
becomes constrained, the system might remove apps from memory to make more room. Suspended apps are
not notified of termination but if your app is currently running in the background state (and not suspended),
the system calls the applicationWillTerminate: method of your app delegate. Your app cannot request
additional background execution time from this method.
The Main Run Loop
The main run loop of your app is responsible for processing all user-related events. The UIApplication
object sets up the main run loop at launch time and uses it to process events and handle updates to view-based
interfaces. As the name suggests, the main run loop executes on the app’s main thread. This behavior ensures
that user-related events are processed serially in the order in which they were received.
Figure 3-7 shows the architecture of the main run loop and how user events result in actions taken by your
app. As the user interacts with a device, events related to those interactions are generated by the system and
delivered to the app via a special port set up by UIKit. Events are queued internally by the app and dispatched
one-by-one to the main run loop for execution. The UIApplication object is the first object to receive the
event and make the decision about what needs to be done. A touch event is usually dispatched to the main
window object, which in turn dispatches it to the view in which the touch occurred. Other events might take
slightly different paths through various app objects.
Figure 3-7 Processing events in the main run loop
Many types of events can be delivered in an iOS app. The most common ones are listed in Table 3-3. Many of
these event types are delivered using the main run loop of your app, but some are not. For example,
accelerometer events are delivered directly to the accelerometer delegate object that you specify. For information
about how to handle most types of events—including touch, remote control, motion, accelerometer, and
gyroscopic events—see Event Handling Guide for iOS .
Table 3-3 Common types of events for iOS apps
● Touch: delivered to the view object in which the event occurred. Views are responder objects.
Any touch events not handled by the view are forwarded down the responder chain for processing.
● Remote control: delivered to the first responder object. Remote control events are for
controlling media playback and are generated by headphones and other accessories.
● Motion: delivered to the first responder object. Motion events reflect specific motion-related
events (such as shaking a device) and are handled separately from other accelerometer-based events.
● Accelerometer / Core Motion: delivered to the object you designate. Events related to the
accelerometer and gyroscope hardware are delivered to the object you designate.
● Redraw: delivered to the view that needs the update. Redraw events do not involve an event
object but are simply calls to the view to draw itself. The drawing architecture for iOS is
described in Drawing and Printing Guide for iOS.
● Location: delivered to the object you designate. You register to receive location events using
the Core Location framework. For more information about using Core Location, see Location
Awareness Programming Guide.
Some events, such as touch and remote control events, are handled by your app’s responder objects. Responder
objects are everywhere in your app. (The UIApplication object, your view objects, and your view controller
objects are all examples of responder objects.) Most events target a specific responder object but can be passed
to other responder objects (via the responder chain) if needed to handle an event. For example, a view that
does not handle an event can pass the event to its superview or to a view controller.
Touch events occurring in controls (such as buttons) are handled differently than touch events occurring in
many other types of views. There are typically only a limited number of interactions possible with a control,
and so those interactions are repackaged into action messages and delivered to an appropriate target object.
This target-action design pattern makes it easy to use controls to trigger the execution of custom code in your
app.
Background Execution and Multitasking
In iOS 4 and later, multitasking allows apps to continue running in the background even after the user switches
to another app while still preserving battery life as much as possible. Most apps are moved to the suspended
state shortly after entering the background. Only apps that provide important services to the user are allowed
to continue running for any amount of time.
As much as possible, you are encouraged to avoid executing in the background and let your app be suspended.
If you find you need to perform background tasks, here are some guidelines for when that is appropriate:
● You need to implement at least one of several specific user services.
● You need to perform a single finite-length task.
● You need to use notifications to alert the user to some relevant piece of information when your app is not
running.
The system keeps suspended apps in memory for as long as possible, removing them only when the amount
of free memory gets low. Remaining in memory means that subsequent launches of your app are much faster.
At the same time, being suspended means your app does not drain the device’s battery as fast.
Determining Whether Multitasking Is Available
Apps must be prepared to handle situations where multitasking (and therefore background execution) is not
available. Even if your app is specifically built for iOS 4 and later, some devices running iOS 4 may not support
multitasking. And multitasking is never available on devices running iOS 3 and earlier. If your app supports
these earlier versions of iOS, it must be prepared to run without multitasking.
If the presence or absence of multitasking changes the way your app behaves, check the
multitaskingSupported property of the UIDevice class to determine whether multitasking is available
before performing the relevant task. For apps built for iOS 4 and later, this property is always available. However,
if your app supports earlier versions of the system, you must check to see whether the property itself is available
before accessing it, as shown in Listing 3-2.
Listing 3-2 Checking for background support in earlier versions of iOS
UIDevice* device = [UIDevice currentDevice];
BOOL backgroundSupported = NO;
if ([device respondsToSelector:@selector(isMultitaskingSupported)])
    backgroundSupported = device.multitaskingSupported;
Executing a Finite-Length Task in the Background
Apps that are transitioning to the background can request an extra amount of time to finish any important
last-minute tasks. To request background execution time, call the
beginBackgroundTaskWithExpirationHandler: method of the UIApplication class. If your app moves
to the background while the task is in progress, or if your app was already in the background, this method
delays the suspension of your app. This can be important if your app is performing some important task, such
as writing user data to disk or downloading an important file from a network server.
The way to use the beginBackgroundTaskWithExpirationHandler: method is to call it before starting
the task you want to protect. Every call to this method must be balanced by a corresponding call to the
endBackgroundTask: method to mark the end of the task. Because apps are given only a limited amount
of time to finish background tasks, you must call this method before time expires; otherwise the system will
terminate your app. To avoid termination, you can also provide an expiration handler when starting a task and
call the endBackgroundTask: method from there. (You can use the value in the backgroundTimeRemaining
property of the app object to see how much time is left.)
Important: An app can have any number of tasks running at the same time. Each time you start a task,
the beginBackgroundTaskWithExpirationHandler: method returns a unique identifier for the task.
You must pass this same identifier to the endBackgroundTask: method when it comes time to end the
task.
Listing 3-3 shows how to start a long-running task when your app transitions to the background. In this example,
the request to start a background task includes an expiration handler just in case the task takes too long. The
task itself is then submitted to a dispatch queue for asynchronous execution so that the
applicationDidEnterBackground: method can return normally. The use of blocks simplifies the code
needed to maintain references to any important variables, such as the background task identifier. The bgTask
variable is a member variable of the class that stores a pointer to the current background task identifier and
is initialized prior to its use in this method.
Listing 3-3 Starting a background task at quit time
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    bgTask = [application beginBackgroundTaskWithExpirationHandler:^{
        // Clean up any unfinished task business by marking where you
        // stopped or ending the task outright.
        [application endBackgroundTask:bgTask];
        bgTask = UIBackgroundTaskInvalid;
    }];

    // Start the long-running task and return immediately.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Do the work associated with the task, preferably in chunks.

        [application endBackgroundTask:bgTask];
        bgTask = UIBackgroundTaskInvalid;
    });
}
Note: Always provide an expiration handler when starting a task, but if you want to know how
much time your app has left to run, get the value of the backgroundTimeRemaining property of
UIApplication.
In your own expiration handlers, you can include additional code needed to close out your task. However, any
code you include must not take too long to execute because, by the time your expiration handler is called,
your app is already very close to its time limit. For this reason, perform only minimal cleanup of your state
information and end the task.
Scheduling the Delivery of Local Notifications
Notifications are a way for an app that is suspended, is in the background, or is not running to get the user’s
attention. Apps can use local notifications to display alerts, play sounds, badge the app's icon, or a combination
of the three. For example, an alarm clock app might use local notifications to play an alarm sound and display
an alert to disable the alarm. When a notification is delivered to the user, the user must decide if the information
warrants bringing the app back to the foreground. (If the app is already running in the foreground, local
notifications are delivered quietly to the app and not to the user.)
To schedule the delivery of a local notification, create an instance of the UILocalNotification class, configure
the notification parameters, and schedule it using the methods of the UIApplication class. The local
notification object contains information about the type of notification to deliver (sound, alert, or badge) and
the time (when applicable) at which to deliver it. The methods of the UIApplication class provide options
for delivering notifications immediately or at the scheduled time.
Listing 3-4 shows an example that schedules a single alarm using a date and time that is set by the user. This
example configures only one alarm at a time and cancels the previous alarm before scheduling a new one.
(Your own apps can have no more than 128 local notifications active at any given time, any of which can be
configured to repeat at a specified interval.) The alarm itself consists of an alert box and a sound file that is
played if the app is not running or is in the background when the alarm fires. If the app is active and therefore
running in the foreground, the app delegate’s application:didReceiveLocalNotification: method
is called instead.
Listing 3-4 Scheduling an alarm notification
- (void)scheduleAlarmForDate:(NSDate*)theDate
{
    UIApplication* app = [UIApplication sharedApplication];
    NSArray* oldNotifications = [app scheduledLocalNotifications];

    // Clear out the old notification before scheduling a new one.
    if ([oldNotifications count] > 0)
        [app cancelAllLocalNotifications];

    // Create a new notification.
    UILocalNotification* alarm = [[UILocalNotification alloc] init];
    if (alarm)
    {
        alarm.fireDate = theDate;
        alarm.timeZone = [NSTimeZone defaultTimeZone];
        alarm.repeatInterval = 0;
        alarm.soundName = @"alarmsound.caf";
        alarm.alertBody = @"Time to wake up!";
        [app scheduleLocalNotification:alarm];
    }
}
Sound files used with local notifications have the same requirements as those used for push notifications.
Custom sound files must be located inside your app’s main bundle and support one of the following formats:
Linear PCM, MA4, µ-Law, or a-Law. You can also specify the sound name default to play the default alert
sound for the device. When the notification is sent and the sound is played, the system also triggers a vibration
on devices that support it.
You can cancel scheduled notifications or get a list of notifications using the methods of the UIApplication
class. For more information about these methods, see UIApplication Class Reference. For additional information
about configuring local notifications, see Local and Push Notification Programming Guide .
Implementing Long-Running Background Tasks
For tasks that require more execution time to implement, you must request specific permissions to run them
in the background without their being suspended. In iOS, only specific app types are allowed to run in the
background:
● Apps that play audible content to the user while in the background, such as a music player app
● Apps that keep users informed of their location at all times, such as a navigation app
● Apps that support Voice over Internet Protocol (VoIP)
● Newsstand apps that need to download and process new content
● Apps that receive regular updates from external accessories
Apps that implement these services must declare the services they support and use system frameworks to
implement the relevant aspects of those services. Declaring the services lets the system know which services
you use, but in some cases it is the system frameworks that actually prevent your application from being
suspended.
Declaring Your App’s Supported Background Tasks
Support for some types of background execution must be declared in advance by the app that uses them. An
app declares support for a service using its Info.plist file. Add the UIBackgroundModes key to your
Info.plist file and set its value to an array containing one or more of the following strings:
● audio—The app plays audible content to the user while in the background. (This content includes
streaming audio or video content using AirPlay.)
● location—The app keeps users informed of their location, even while it is running in the background.
● voip—The app provides the ability for the user to make phone calls using an Internet connection.
● newsstand-content—The app is a Newsstand app that downloads and processes magazine or newspaper
content in the background.
● external-accessory—The app works with a hardware accessory that needs to deliver updates on a
regular schedule through the External Accessory framework.
● bluetooth-central—The app works with a Bluetooth accessory that needs to deliver updates on a
regular schedule through the Core Bluetooth framework.
● bluetooth-peripheral—The app supports Bluetooth communication in peripheral mode through the
Core Bluetooth framework.
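As an illustration, a streaming music app that also provides continuous location updates might declare both modes in the XML form of its Info.plist file (the combination shown is an example, not a requirement):

```xml
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
    <string>location</string>
</array>
```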
Each of the preceding values lets the system know that your app should be woken up at appropriate times to
respond to relevant events. For example, an app that begins playing music and then moves to the background
still needs execution time to fill the audio output buffers. Including the audio key tells the system frameworks
that they should continue playing and make the necessary callbacks to the app at appropriate intervals. If the
app does not include this key, any audio being played by the app stops when the app moves to the background.
Tracking the User’s Location
There are several ways to track the user’s location in the background, most of which do not actually require
your app to run continuously in the background:
● The significant-change location service (Recommended)
● Foreground-only location services
● Background location services
The significant-change location service is highly recommended for apps that do not need high-precision
location data. With this service, location updates are generated only when the user’s location changes
significantly; thus, it is ideal for social apps or apps that provide the user with noncritical, location-relevant
information. If the app is suspended when an update occurs, the system wakes it up in the background to
handle the update. If the app starts this service and is then terminated, the system relaunches the app
automatically when a new location becomes available. This service is available in iOS 4 and later, and it is
available only on devices that contain a cellular radio.
The foreground-only and background location services both use the standard location service in Core Location
to retrieve location data. The only difference is that the foreground-only location services stop delivering
updates if the app is ever suspended, which is likely to happen if the app does not support other background
services or tasks. Foreground-only location services are intended for apps that only need location data while
they are in the foreground.
An app that provides continuous location updates to the user (even when in the background) can enable
background location services by including the UIBackgroundModes key (with the location value) in its
Info.plist file. The inclusion of this value in the UIBackgroundModes key does not preclude the system
from suspending the app, but it does tell the system that it should wake up the app whenever there is new
location data to deliver. Thus, this key effectively lets the app run in the background to process location updates
whenever they occur.
Important: You are encouraged to use the standard services sparingly or use the significant location change
service instead. Location services require the active use of an iOS device's onboard radio hardware. Running
this hardware continuously can consume a significant amount of power. If your app does not need to
provide precise and continuous location information to the user, it is best to minimize the use of location
services.
For information about how to use each of the different location services in your app, see Location Awareness
Programming Guide .
Playing Background Audio
An app that plays audio continuously (even while the app is running in the background) can register as a
background audio app by including the UIBackgroundModes key (with the value audio) in its Info.plist
file. Apps that include this key must play audible content to the user while in the background.
Typical examples of background audio apps include:
● Music player apps
● Apps that support audio or video playback over AirPlay
● VoIP apps
When the UIBackgroundModes key contains the audio value, the system’s media frameworks automatically
prevent the corresponding app from being suspended when it moves to the background. As long as it is
playing audio or video content, the app continues to run in the background. However, if the app stops playing
the audio or video, the system suspends it.
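For example, a playback app typically configures its shared audio session before starting audio (a sketch using the AV Foundation audio session API; error handling is minimal):

```objc
#import <AVFoundation/AVFoundation.h>

// The playback category keeps audio running when the app moves to the
// background (provided the audio value is declared in UIBackgroundModes).
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
if (![session setCategory:AVAudioSessionCategoryPlayback error:&error])
    NSLog(@"Could not set audio category: %@", error);
if (![session setActive:YES error:&error])
    NSLog(@"Could not activate audio session: %@", error);
```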
You can use any of the system audio frameworks to initiate the playback of background audio, and the process
for using those frameworks is unchanged. (For video playback over AirPlay, you can use the Media Player or
AV Foundation framework to present your video.) Because your app is not suspended while playing media
files, callbacks operate normally while your app is in the background. In your callbacks, though, you should do
only the work necessary to provide data for playback. For example, a streaming audio app would need to
download the music stream data from its server and push the current audio samples out for playback. You
should not perform any extraneous tasks that are unrelated to playback.
Because more than one app may support audio, the system limits which apps can play audio at any given time.
The foreground app always has permission to play audio. In addition, one or more background apps may also
be allowed to play some audio content depending on the configuration of their audio session objects. You
should always configure your app's audio session object appropriately and work carefully with the system
frameworks to handle interruptions and other types of audio-related notifications. For information on how to
configure audio session objects for background execution, see Audio Session Programming Guide .
Implementing a VoIP App
A Voice over Internet Protocol (VoIP) app allows the user to make phone calls using an Internet connection
instead of the device’s cellular service. Such an app needs to maintain a persistent network connection to its
associated service so that it can receive incoming calls and other relevant data. Rather than keep VoIP apps
awake all the time, the system allows them to be suspended and provides facilities for monitoring their sockets
for them. When incoming traffic is detected, the system wakes up the VoIP app and returns control of its sockets
to it.
To configure a VoIP app, you must do the following:
1. Add the UIBackgroundModes key to your app’s Info.plist file. Set the value of this key to an array
that includes the voip value.
2. Configure one of the app’s sockets for VoIP usage.
3. Before moving to the background, call the setKeepAliveTimeout:handler: method to install a handler
to be executed periodically. Your app can use this handler to maintain its service connection.
4. Configure your audio session to handle transitions to and from active use.
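Step 3 above might be sketched as follows; MyVoIPService and its keep-alive method are hypothetical stand-ins for your app’s own service layer:

```objc
// Install a periodic keep-alive handler before moving to the background.
// 600 seconds is the minimum timeout the system accepts.
BOOL installed = [[UIApplication sharedApplication]
    setKeepAliveTimeout:600
                handler:^{
                    // Do only the minimal work needed to keep the
                    // connection to the VoIP service alive.
                    [[MyVoIPService sharedService] sendKeepAlive];
                }];
if (!installed) {
    NSLog(@"Keep-alive handler was not installed");
}
```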
Including the voip value in the UIBackgroundModes key lets the system know that it should allow the app
to run in the background as needed to manage its network sockets. An app with this key is also relaunched in
the background immediately after system boot to ensure that the VoIP services are always available.
Most VoIP apps also need to be configured as background audio apps to deliver audio while in the background.
Therefore, you should include both the audio and voip values in the UIBackgroundModes key. If you do
not do this, your app cannot play audio while it is in the background. For more information about the
UIBackgroundModes key, see Information Property List Key Reference .
For specific information about the steps you must take to implement a VoIP app, see “Tips for Developing a
VoIP App” (page 115).
Downloading Newsstand Content in the Background
A Newsstand app that downloads new magazine or newspaper issues can register to perform those downloads
in the background. When your server sends a push notification to indicate that a new issue is available, the
system checks to see whether your app has the UIBackgroundModes key with the newsstand-content
value. If it does, the system launches your app, if it is not already running, so that it can initiate the downloading
of the new issue.
When you use the Newsstand Kit framework to initiate a download, the system handles the download process
for your app. The system continues to download the file even if your app is suspended or terminated. When
the download operation is complete, the system transfers the file to your app sandbox and notifies your app.
If the app is not running, this notification wakes it up and gives it a chance to process the newly downloaded
file. If there are errors during the download process, your app is similarly woken up to handle them.
For information about how to download content using the Newsstand Kit framework, see Newsstand Kit
Framework Reference .
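A minimal sketch of initiating such a download might look like this; the issue name and content URL are hypothetical:

```objc
#import <NewsstandKit/NewsstandKit.h>

// Sketch: registering a new issue and asking the system to download it.
NKLibrary *library = [NKLibrary sharedLibrary];
NKIssue *issue = [library addIssueWithName:@"2012-09" date:[NSDate date]];
NSURLRequest *request = [NSURLRequest requestWithURL:
    [NSURL URLWithString:@"https://example.com/issues/2012-09.zip"]];
NKAssetDownload *asset = [issue addAssetWithRequest:request];
// The system manages the download; the delegate's
// connectionDidFinishDownloading:destinationURL: is called when it completes.
[asset downloadWithDelegate:self];
```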
Communicating with an External Accessory
Apps that work with external accessories can ask to be woken up if the accessory delivers an update when the
app is suspended. This support is important for some types of accessories that deliver data at regular intervals,
such as heart-rate monitors. When an app includes the UIBackgroundModes key with the
external-accessory value in its Info.plist file, the external accessory framework keeps open any active
sessions for the corresponding accessories. (In iOS 4 and earlier, these sessions are closed automatically when
the app is suspended.) In addition, new data arriving from the accessory causes the system to wake up the
app to process that data. The system also wakes up the app to process accessory connection and disconnection
notifications.
Any app that supports the background processing of accessory updates must follow a few basic guidelines:
● Apps must provide an interface that allows the user to start and stop the delivery of accessory update
events. That interface should then open or close the accessory session as appropriate.
● Upon being woken up, the app has around 10 seconds to process the data. Ideally, it should process the
data as fast as possible and allow itself to be suspended again. However, if more time is needed, the app
can use the beginBackgroundTaskWithExpirationHandler: method to request additional time; it
should do so only when absolutely necessary, though.
Communicating with a Bluetooth Accessory
Apps that work with Bluetooth peripherals can ask to be woken up if the peripheral delivers an update when
the app is suspended. This support is important for Bluetooth low energy (LE) accessories that deliver data at regular intervals,
such as a Bluetooth heart rate belt. When an app includes the UIBackgroundModes key with the
bluetooth-central value in its Info.plist file, the Core Bluetooth framework keeps open any active
sessions for the corresponding peripheral. In addition, new data arriving from the peripheral causes the system
to wake up the app so that it can process the data. The system also wakes up the app to process accessory
connection and disconnection notifications.
In iOS 6, an app can also operate in peripheral mode with Bluetooth accessories. If the app wants to respond
to accessory-related changes using peripheral mode in the background, it must link against the Core Bluetooth
framework and include the UIBackgroundModes key with the bluetooth-peripheral value in its
Info.plist file. This key lets the Core Bluetooth framework wake the app up briefly in the background so
that it can handle accessory-related requests. Apps woken up for these events should process them and return
as quickly as possible so that the app can be suspended again.
Any app that supports the background processing of Bluetooth data must be session-based and follow a few
basic guidelines:
● Apps must provide an interface that allows the user to start and stop the delivery of Bluetooth events.
That interface should then open or close the session as appropriate.
● Upon being woken up, the app has around 10 seconds to process the data. Ideally, it should process the
data as fast as possible and allow itself to be suspended again. However, if more time is needed, the app
can use the beginBackgroundTaskWithExpirationHandler: method to request additional time; it
should do so only when absolutely necessary, though.
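The extra-time request mentioned in the guidelines above might be sketched like this; processPeripheralData is a hypothetical method:

```objc
// Sketch: requesting extra time when more than ~10 seconds is needed to
// process data delivered by an accessory or Bluetooth peripheral.
__block UIBackgroundTaskIdentifier taskID;
UIApplication *app = [UIApplication sharedApplication];
taskID = [app beginBackgroundTaskWithExpirationHandler:^{
    // Clean up if time expires before the processing finishes.
    [app endBackgroundTask:taskID];
    taskID = UIBackgroundTaskInvalid;
}];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [self processPeripheralData];          // hypothetical processing method
    [app endBackgroundTask:taskID];        // always end the task explicitly
    taskID = UIBackgroundTaskInvalid;
});
```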
Being a Responsible Background App
The foreground app always has precedence over background apps when it comes to the use of system resources
and hardware. Apps running in the background need to be prepared for this discrepancy and adjust their
behavior when running in the background. Specifically, apps moving to the background should follow these
guidelines:
● Do not make any OpenGL ES calls from your code. You must not create an EAGLContext object or issue
any OpenGL ES drawing commands of any kind while running in the background. Using these calls causes
your app to be killed immediately. Apps must also ensure that any previously submitted commands have
completed before moving to the background. For information about how to handle OpenGL ES when
moving to and from the background, see “Implementing a Multitasking-aware OpenGL ES Application” in
OpenGL ES Programming Guide for iOS.
● Cancel any Bonjour-related services before being suspended. When your app moves to the background,
and before it is suspended, it should unregister from Bonjour and close listening sockets associated with
any network services. A suspended app cannot respond to incoming service requests anyway. Closing out
those services prevents them from appearing to be available when they actually are not. If you do not
close out Bonjour services yourself, the system closes out those services automatically when your app is
suspended.
● Be prepared to handle connection failures in your network-based sockets. The system may tear down
socket connections while your app is suspended for any number of reasons. As long as your socket-based
code is prepared for other types of network failures, such as a lost signal or network transition, this should
not lead to any unusual problems. When your app resumes, if it encounters a failure upon using a socket,
simply reestablish the connection.
● Save your app state before moving to the background. During low-memory conditions, background
apps may be purged from memory to free up space. Suspended apps are purged first, and no notice is
given to the app before it is purged. As a result, apps should take advantage of the state preservation
mechanism in iOS 6 and later to save their interface state to disk. For information about how to support
this feature, see “State Preservation and Restoration” (page 67).
● Remove strong references to unneeded objects when moving to the background. If your app maintains
a large in-memory cache of objects (especially images), remove all strong references to those caches when
moving to the background. For more information, see “Memory Usage for Background Apps” (page 47).
● Stop using shared system resources before being suspended. Apps that interact with shared system
resources such as the Address Book or calendar databases should stop using those resources before being
suspended. Priority for such resources always goes to the foreground app. When your app is suspended,
if it is found to be using a shared resource, the app is killed.
● Avoid updating your windows and views. While in the background, your app’s windows and views are
not visible, so you should not try to update them. Although creating and manipulating window and view
objects in the background does not cause your app to be killed, consider postponing this work until you
return to the foreground.
● Respond to connect and disconnect notifications for external accessories. For apps that communicate
with external accessories, the system automatically sends a disconnection notification when the app moves
to the background. The app must register for this notification and use it to close out the current accessory
session. When the app moves back to the foreground, a matching connection notification is sent, giving
the app a chance to reconnect. For more information on handling accessory connection and disconnection
notifications, see External Accessory Programming Topics.
● Clean up resources for active alerts when moving to the background. In order to preserve context when
switching between apps, the system does not automatically dismiss action sheets (UIActionSheet) or
alert views (UIAlertView) when your app moves to the background. It is up to you to provide the
appropriate cleanup behavior prior to moving to the background. For example, you might want to cancel
the action sheet or alert view programmatically or save enough contextual information to restore the view
later (in cases where your app is terminated).
For apps linked against a version of iOS earlier than 4.0, action sheets and alerts are still dismissed at quit
time so that your app’s cancellation handler has a chance to run.
● Remove sensitive information from views before moving to the background. When an app transitions
to the background, the system takes a snapshot of the app’s main window, which it then presents briefly
when transitioning your app back to the foreground. Before returning from your
applicationDidEnterBackground: method, you should hide or obscure passwords and other sensitive
personal information that might be captured as part of the snapshot.
● Do minimal work while running in the background. The execution time given to background apps is
more constrained than the amount of time given to the foreground app. If your app plays background
audio or monitors location changes, you should focus on that task only and defer any nonessential tasks
until later. Apps that spend too much time executing in the background can be throttled back by the
system or killed.
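The snapshot guideline above might be sketched as follows; passwordField is a hypothetical outlet:

```objc
// Sketch: obscuring a sensitive field before the system snapshots the window.
- (void)applicationDidEnterBackground:(UIApplication *)application {
    self.passwordField.hidden = YES;   // keep it out of the snapshot
}

- (void)applicationWillEnterForeground:(UIApplication *)application {
    self.passwordField.hidden = NO;    // show it again on return
}
```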
If you are implementing a background audio app, or any other type of app that is allowed to run in the
background, your app responds to incoming messages in the usual way. In other words, the system may notify
your app of low-memory warnings when they occur. And in situations where the system needs to terminate
apps to free even more memory, the app calls its delegate’s applicationWillTerminate: method to
perform any final tasks before exiting.
Opting out of Background Execution
If you do not want your app to run in the background at all, you can explicitly opt out of background execution by adding
the UIApplicationExitsOnSuspend key (with the value YES) to your app’s Info.plist file. When an app
opts out, it cycles between the not-running, inactive, and active states and never enters the background or
suspended states. When the user presses the Home button to quit the app, the applicationWillTerminate:
method of the app delegate is called and the app has approximately 5 seconds to clean up and exit before it
is terminated and moved back to the not-running state.
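For example, the opt-out key is a one-line addition to the Info.plist file:

```xml
<key>UIApplicationExitsOnSuspend</key>
<true/>
```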
Opting out of background execution is strongly discouraged but may be the preferred option under certain
conditions. Specifically, if coding for the background adds significant complexity to your app, terminating the
app might be a simpler solution. Also, if your app consumes a large amount of memory and cannot easily
release any of it, the system might kill your app quickly anyway to make room for other apps. Thus, opting to
terminate, instead of switching to the background, might yield the same results and save you development
time and effort.
Note: Explicitly opting out of background execution is necessary only if your app is linked against
iOS SDK 4 and later. Apps linked against earlier versions of the SDK do not support background
execution as a rule and therefore do not need to opt out explicitly.
For more information about the keys you can include in your app’s Info.plist file, see Information Property
List Key Reference .
Concurrency and Secondary Threads
The system creates your app’s main thread but your app can create additional threads as needed to perform
other tasks. The preferred way to create threads is to let the system do it for you by using Grand Central Dispatch
queues and operation queues. Both types of queue provide an asynchronous execution model for tasks that
you define. When you submit a task to a queue, the system spins up a thread and executes your task on that
thread. Letting the system manage the threads simplifies your code and allows the system to manage the
threads in the most efficient way available.
You should use queues whenever possible to move work off of your app’s main thread. Because the main
thread is responsible for processing touch and drawing events, you should never perform lengthy tasks on it.
For example, you should never wait for a network response on your app’s main thread. It is much better to
make the request asynchronously using a queue and process the results when they arrive.
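A hedged sketch of that pattern using a Grand Central Dispatch queue; the URL and the update method are hypothetical:

```objc
// Sketch: moving a network request off the main thread with a dispatch queue.
NSURL *url = [NSURL URLWithString:@"https://example.com/data.json"];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // The blocking call is safe here because it runs off the main thread.
    NSData *data = [NSData dataWithContentsOfURL:url];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Hop back to the main thread to update the user interface.
        [self updateInterfaceWithData:data];   // hypothetical method
    });
});
```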
Another good time to move tasks to secondary threads is launch time. Launched apps have a limited amount
of time (around 5 seconds) to do their initialization and start processing events. If you have launch-time tasks
that can be deferred or executed on a secondary thread, you should move them off the main thread right away
and use the main thread only to present your user interface and start handling events.
For more information about using dispatch and operation queues to execute tasks, see Concurrency Programming
Guide.
State Preservation and Restoration

Even if your app supports background execution, it cannot run forever. At some point, the system might need
to terminate your app to free up memory for the current foreground app. However, the user should never
have to care if an app is already running or was terminated. From the user’s perspective, quitting an app should
just seem like a temporary interruption. When the user returns to an app, that app should always return the
user to the last point of use, so that the user can continue with whatever task was in progress. This behavior
provides a better experience for the user and, with the state restoration support built in to UIKit, is relatively
easy to achieve.
The state preservation system in UIKit provides a simple but flexible infrastructure for preserving and restoring
the state of your app’s view controllers and views. The job of the infrastructure is to drive the preservation and
restoration processes at the appropriate times. To do that, UIKit needs help from your app. Only you understand
the content of your app, and so only you can write the code needed to save and restore that content. And
when you update your app’s UI, only you know how to map older preserved content to the newer objects in
your interface.
There are three places where you have to think about state preservation in your app:
● Your app delegate object, which manages the app’s top-level state
● Your app’s view controller objects, which manage the overall state for your app’s user interface
● Your app’s custom views, which might have some custom data that needs to be preserved
UIKit allows you to choose which parts of your user interface you want to preserve. And if you already have
custom code for handling state preservation, you can continue to use that code and migrate portions to the
UIKit state preservation system as needed.
The Preservation and Restoration Process
State preservation and restoration is an opt-in feature and requires help from your app to work. Your app
essentially provides UIKit with a list of objects and lets UIKit handle the tedious aspects of preserving and
restoring those objects at appropriate times. Because UIKit handles so much of the process, it helps to understand
what it does behind the scenes so that you know how your custom code fits into the overall scheme.
When thinking about state preservation and restoration, it helps to separate the two processes first. State
preservation occurs when your app moves to the background. At that time, UIKit queries your app’s views and
view controllers to see which ones should be preserved and which ones should not. For each object that should
be preserved, UIKit writes preservation-related data to an on-disk file. The next time your app launches from
scratch, UIKit looks for that file and, if it is present, uses it to try and restore your app’s state. During the
restoration process, UIKit uses the preserved data to reconstitute your interface. The creation of actual objects
is handled by your code. Because your app might load objects from a storyboard file automatically, only your
code knows which objects need to be created and which might already exist and can simply be returned. After
those objects are created, UIKit uses the on-disk data to restore the objects to their previous state.
During the preservation and restoration process, your app has a handful of responsibilities.
● During preservation, your app is responsible for:
● Telling UIKit that it supports state preservation.
● Telling UIKit which view controllers and views should be preserved.
● Encoding relevant data for any preserved objects.
● During restoration, your app is responsible for:
● Telling UIKit that it supports state restoration.
● Providing (or creating) the objects that are requested by UIKit.
● Decoding the state of your preserved objects and using it to return the object to its previous state.
Of your app’s responsibilities, the most significant are telling UIKit which objects to preserve and providing
those objects during subsequent launches. Those two behaviors are where you should spend most of your
time when designing your app’s preservation and restoration code. They are also where you have the most
control over the actual process. To understand why that is the case, it helps to look at an example.
Figure 4-1 shows the view controller hierarchy of a tab bar interface after the user has interacted with several
of the tabs. As you can see, some of the view controllers are loaded automatically as part of the app’s main
storyboard file but some of the view controllers were presented or pushed onto the view controllers in different
tabs. Without state restoration, only the view controllers from the main storyboard file would be restored
during subsequent launches. By adding support for state restoration to your app, you can preserve all of the
view controllers.
Figure 4-1 A sample view controller hierarchy
[Diagram: a UITabBarController loaded from MainStoryboard.storyboard containing navigation controllers and their root view controllers, all loaded at launch time from the main storyboard file; MyPresentedController and additional pushed view controllers (Nav 1, Nav 2) are added by the app after launch.]
UIKit preserves only those objects that have a restoration identifier. A restoration identifier is a string that
identifies the view or view controller to UIKit and your app. The value of this string is significant only to your
code but the presence of thisstring tells UIKit that it needsto preserve the tagged object. During the preservation
process, UIKit walks your app’s view controller hierarchy and preserves all objects that have a restoration
identifier. If a view controller does not have a restoration identifier, that view controller and all of its views and
child view controllers are not preserved. Figure 4-2 shows an updated version of the previous view hierarchy,
now with restoration identifiers applied to most (but not all) of the view controllers.
Figure 4-2 Adding restoration identifiers to view controllers
[Diagram: the same view controller hierarchy, with asterisks marking the objects that have restoration identifiers; one navigation controller added after launch is marked X because it has no restoration identifier and is therefore not restored.]
Depending on your app, it might or might not make sense to preserve every view controller. If a view controller
presents transitory information, you might not want to return to that same point on restore, opting instead
to return the user to a more stable point in your interface.
For each view controller you choose to preserve, you also need to decide on how you want to restore it later.
UIKit offers two ways to recreate objects. You can let your app delegate recreate it or you can assign a restoration
class to the view controller and let that class recreate it. A restoration class implements the
UIViewControllerRestoration protocol and is responsible for finding or creating a designated object at
restore time. Here are some tips for when to use each one:
● If the view controller is always loaded from your app’s main storyboard file at launch time, do not
assign a restoration class. Instead, let your app delegate find the object or take advantage of UIKit’s
support for implicitly finding restored objects.
● For view controllers that are not loaded from your main storyboard file at launch time, assign a
restoration class. The simplest option is to make each view controller its own restoration class.
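Assuming a view controller named MyViewController that is created in code (not loaded from the main storyboard), acting as its own restoration class might be sketched as:

```objc
// Sketch: a view controller serving as its own restoration class.
@interface MyViewController : UIViewController <UIViewControllerRestoration>
@end

@implementation MyViewController

+ (UIViewController *)viewControllerWithRestorationIdentifierPath:(NSArray *)identifierComponents
                                                            coder:(NSCoder *)coder {
    // Create a fresh instance; UIKit decodes its saved state afterward.
    MyViewController *vc = [[MyViewController alloc] initWithNibName:nil bundle:nil];
    vc.restorationIdentifier = [identifierComponents lastObject];
    vc.restorationClass = [MyViewController class];
    return vc;
}

@end
```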
During the preservation process, UIKit identifies the objects to save and writes each affected object’s state to
disk. Each view controller object is given a chance to write out any data it wants to save. For example, a tab
view controller saves the identity of the selected tab. UIKit also saves information such as the view controller’s
restoration class to disk. And if any of the view controller’s views has a restoration identifier, UIKit asks them
to save their state information too.
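A sketch of the encode and decode overrides for a hypothetical view controller that preserves a scroll position (the key name is an assumption):

```objc
// Sketch: preserving a scroll position as part of a controller's saved state.
static NSString * const kScrollOffsetKey = @"scrollOffsetY";   // hypothetical key

- (void)encodeRestorableStateWithCoder:(NSCoder *)coder {
    [super encodeRestorableStateWithCoder:coder];
    [coder encodeFloat:self.scrollView.contentOffset.y forKey:kScrollOffsetKey];
}

- (void)decodeRestorableStateWithCoder:(NSCoder *)coder {
    [super decodeRestorableStateWithCoder:coder];
    CGFloat y = [coder decodeFloatForKey:kScrollOffsetKey];
    self.scrollView.contentOffset = CGPointMake(0, y);
}
```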
The next time the app is launched, UIKit loads the app’s main storyboard or nib file as usual, calls the app
delegate’s application:willFinishLaunchingWithOptions: method, and then tries to restore the
app’s previous state. The first thing it does is ask your app to provide the set of view controller objects that
match the ones that were preserved. If a given view controller had an assigned restoration class, that class is
asked to provide the object; otherwise, the app delegate is asked to provide it.
Flow of the Preservation Process
Figure 4-3 shows the high-level events that happen during state preservation and shows how the objects of
your app are affected. Before preservation even occurs, UIKit asks your app delegate if it should occur by calling
the application:shouldSaveApplicationState: method. If that method returns YES, UIKit begins
gathering and encoding your app’s views and view controllers. When it is finished, it writes the encoded data
to disk.
Figure 4-3 High-level flow for interface preservation
[Diagram: UIKit starts app preservation by asking the app delegate application:shouldSaveApplicationState:. If the answer is YES, it calls application:willEncodeRestorableState:, gathers the restorable objects (those with a restorationIdentifier property; view controllers may also have a restorationClass), asks each one to encodeRestorableStateWithCoder:, and writes the encoded state to disk.]
The next time your app launches, the system automatically looks for a preserved state file, and if present, uses
it to restore your interface. Because this state information is only relevant between the previous and current
launch cycles of your app, the file is typically discarded after your app finishes launching. The file is also discarded
any time there is an error restoring your app. For example, if your app crashes during the restoration process,
the system automatically throws away the state information during the next launch cycle to avoid another
crash.
Flow of the Restoration Process
Figure 4-4 shows the high-level events that happen during state restoration and shows how the objects of
your app are affected. After the standard initialization and UI loading is complete, UIKit asks your app delegate
if state restoration should occur at all by calling the application:shouldRestoreApplicationState:
method. This is your app delegate’s opportunity to examine the preserved data and determine if state restoration
is possible. If it is, UIKit uses the app delegate and restoration classes to obtain references to your app’s view
controllers. Each object is then provided with the data it needs to restore itself to its previous state.
Figure 4-4 High-level flow for restoring your user interface
[Diagram: at launch, UIKit loads the initial UI, calls the app delegate’s application:willFinishLaunchingWithOptions: and then application:shouldRestoreApplicationState:. If the answer is YES, it obtains view controllers by calling application:viewControllerWithRestorationIdentifierPath:coder: on the app delegate or viewControllerWithRestorationIdentifierPath:coder: on the restoration classes, asks each view and view controller object to decodeRestorableStateWithCoder:, calls application:didDecodeRestorableState:, and finishes initialization with application:didFinishLaunchingWithOptions:.]
Although UIKit helps restore the individual view controllers, it does not automatically restore the relationships
between those view controllers. Instead, each view controller is responsible for encoding enough state
information to return itself to its previous state. For example, a navigation controller encodes information
about the order of the view controllers on its navigation stack. It then uses this information later to return
those view controllers to their previous positions on the stack. Other view controllers that have embedded
child view controllers are similarly responsible for encoding any information they need to restore their children
later.
Note: Not all view controllers need to encode their child view controllers. For example, tab bar
controllers do not encode information about their child view controllers. Instead, it is assumed that
your app follows the usual pattern of creating the appropriate child view controllers prior to creating
the tab bar controller itself.
Because you are responsible for recreating your app’s view controllers, you have some flexibility to change
your interface during the restoration process. For example, you could reorder the tabs in a tab bar controller
and still use the preserved data to return each tab to its previous state. Of course, if you make dramatic changes
to your view controller hierarchy, such as during an app update, you might not be able to use the preserved
data.
What Happens When You Exclude Groups of View Controllers?
When the restoration identifier of a view controller is nil, that view controller and any child view controllers
it manages are not preserved automatically. For example, in Figure 4-5, because a navigation controller did
not have a restoration identifier, it and all of its child view controllers and views are omitted from the preserved
data.
Figure 4-5 Excluding view controllers from the automatic preservation process
[Diagram: the same hierarchy from MainStoryboard.storyboard; the tab bar controller and most view controllers are marked with asterisks (has restoration identifier) and are preserved automatically, but one navigation controller is marked X (does not have a restoration identifier), so it and its children are not preserved automatically.]
Even if you decide not to preserve view controllers, that does not mean all of those view controllers disappear
from the view hierarchy altogether. At launch time, your app might still create the view controllers as part of
its default setup. For example, if any view controllers are loaded automatically from your app’s storyboard file,
they would still appear, albeit in their default configuration, as shown in Figure 4-6.
Figure 4-6 Loading the default set of view controllers
[Diagram: the UITabBarController, its navigation controllers, and their root view controllers are loaded with the main storyboard file in their default configuration; MyPresentedController and the additional pushed view controllers are created during the restoration process.]
Something else to realize is that even if a view controller is not preserved automatically, you can still encode
a reference to that view controller and preserve it manually. In Figure 4-5 (page 79), the three child view
controllers of the first navigation controller have restoration identifiers, even though their parent navigation
controller does not. If your app delegate (or any preserved object) encodes a reference to those view controllers,
their state is preserved. Even though their order in the navigation controller is not saved, you could still use
those references to recreate the view controllers and install them in the navigation controller during subsequent
launch cycles.
Checklist for Implementing State Preservation and Restoration
Supporting state preservation and restoration requires modifying your app delegate and view controller objects
to encode and decode the state information. If your app has any custom views that also have preservable state
information, you need to modify those objects too.
When adding state preservation and restoration to your code, use the following list to remind you of the code
you need to write.
● (Required) Implement the application:shouldSaveApplicationState: and
application:shouldRestoreApplicationState: methods in your app delegate; see “Enabling
State Preservation and Restoration in Your App” (page 82).
● (Required) Assign restoration identifiers to each view controller you want to preserve by assigning a
non-empty string to its restorationIdentifier property; see “Marking Your View Controllers for
Preservation” (page 83).
If you want to save the state of specific views too, assign non-empty strings to their
restorationIdentifier properties; see “Preserving the State of Your Views” (page 86).
● (Required) Assign restoration classes to the appropriate view controllers. (If you do not do this, your app
delegate is asked to provide the corresponding view controller at restore time.) See “Restoring Your View
Controllers at Launch Time” (page 83).
● (Recommended) Encode and decode the state of your views and view controllers using the
encodeRestorableStateWithCoder: and decodeRestorableStateWithCoder: methods of those
objects; see “Encoding and Decoding Your View Controller’s State” (page 85).
● Encode and decode any version information or additional state information for your app using the
application:willEncodeRestorableStateWithCoder: and
application:didDecodeRestorableStateWithCoder: methods of your app delegate; see “Preserving
Your App’s High-Level State” (page 89).
● Objects that act as data sources for table views and collection views should implement the
UIDataSourceModelAssociation protocol. Although not required, this protocol helps preserve the
selected and visible items in those types of views. See “Implementing Preservation-Friendly Data
Sources” (page 89).
Enabling State Preservation and Restoration in Your App
State preservation and restoration is not an automatic feature; apps must opt in to use it. Apps indicate
their support for the feature by implementing the following methods in their app delegate:
application:shouldSaveApplicationState:
application:shouldRestoreApplicationState:
Normally, your implementations of these methods just return YES to indicate that state preservation and
restoration can occur. However, apps that want to preserve and restore their state conditionally can return NO
in situations where the operations should not occur. For example, after releasing an update to your app, you
might want to return NO from your application:shouldRestoreApplicationState: method if your
app is unable to usefully restore the state from a previous version.
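A minimal opt-in might look like the following Objective-C sketch of an app delegate, which compares the bundle version that preserved the state against the current one. UIApplicationStateRestorationBundleVersionKey is supplied by UIKit in the coder; the strict equality check is illustrative.

```objc
- (BOOL)application:(UIApplication *)application
        shouldSaveApplicationState:(NSCoder *)coder {
    return YES;    // Always allow preservation.
}

- (BOOL)application:(UIApplication *)application
        shouldRestoreApplicationState:(NSCoder *)coder {
    // Restore only if the state was saved by the same bundle version.
    NSString *savedVersion =
        [coder decodeObjectForKey:UIApplicationStateRestorationBundleVersionKey];
    NSString *currentVersion =
        [[NSBundle mainBundle] objectForInfoDictionaryKey:@"CFBundleVersion"];
    return [savedVersion isEqualToString:currentVersion];
}
```

A real app might instead parse the version strings and allow restoration across compatible updates.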
Preserving the State of Your View Controllers
Preserving the state of your app’s view controllers should be your main goal. View controllers define the
structure of your user interface. They manage the views needed to present that interface and they coordinate
the getting and setting of the data that backs those views. To preserve the state of a single view controller,
you must do the following:
● (Required) Assign a restoration identifier to the view controller; see “Marking Your View Controllers for
Preservation” (page 83).
● (Required) Provide code to create or locate new view controller objects at launch time; see “Restoring
Your View Controllers at Launch Time” (page 83).
● (Optional) Implement the encodeRestorableStateWithCoder: and
decodeRestorableStateWithCoder: methods to encode and restore any state information that cannot
be recreated during a subsequent launch; see “Encoding and Decoding Your View Controller’s State” (page
85).
Marking Your View Controllers for Preservation
UIKit preserves only those view controllers whose restorationIdentifier property contains a valid string
object. For view controllers that you know you want to preserve, set the value of this property when you
initialize the view controller object. If you load the view controller from a storyboard or nib file, you can set
the restoration identifier there.
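For example, a view controller created programmatically might be marked for preservation at creation time. A sketch (the identifier string is illustrative):

```objc
MyViewController *vc = [[MyViewController alloc] initWithNibName:nil bundle:nil];
// A non-nil restoration identifier opts this controller into preservation.
vc.restorationIdentifier = @"MyViewController";
// Optional: name a restoration class so UIKit asks it, rather than the
// app delegate, to recreate this controller at restore time.
vc.restorationClass = [MyViewController class];
```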
Choosing an appropriate value for restoration identifiers is important. During the restoration process, your
code uses the restoration identifier to determine which view controller to retrieve or create. If every view
controller object is based on a different class, you can use the class name for the restoration identifier. However,
if your view controller hierarchy contains multiple instances of the same class, you might need to choose
different names based on how each instance is used.
When it asks you to provide a view controller, UIKit provides you with the restoration path of the view controller
object. A restoration path is the sequence of restoration identifiers starting at the root view controller and
walking down the view controller hierarchy to the current object. For example, imagine you have a tab bar
controller whose restoration identifier is TabBarControllerID, and the first tab contains a navigation
controller whose identifier is NavControllerID and whose root view controller’s identifier is
MyViewController. The full restoration path for the root view controller would be
TabBarControllerID/NavControllerID/MyViewController.
The restoration path for every object must be unique. If a view controller has two child view controllers, each
child must have a different restoration identifier. However, two view controllers with different parent objects
may use the same restoration identifier because the rest of the restoration path provides the needed uniqueness.
Some UIKit view controllers, such as navigation controllers, automatically disambiguate their child view
controllers, allowing you to use the same restoration identifiers for each child. For more information about the
behavior of a given view controller, see the corresponding class reference.
At restore time, you use the provided restoration path to determine which view controller to return to UIKit.
For more information on how you use restoration identifiers and restoration paths to restore view controllers,
see “Restoring Your View Controllers at Launch Time” (page 83).
Restoring Your View Controllers at Launch Time
During the restoration process, UIKit asks your app to create (or locate) the view controller objects that comprise
your preserved user interface. UIKit adheres to the following process when trying to locate view controllers:
1. If the view controller had a restoration class, UIKit asks that class to provide the view controller. UIKit
calls the viewControllerWithRestorationIdentifierPath:coder: method of the associated
restoration class to retrieve the view controller. If that method returns nil, it is assumed that the app does
not want to recreate the view controller and UIKit stops looking for it.
2. If the view controller did not have a restoration class, UIKit asks the app delegate to provide the view
controller. UIKit calls the application:viewControllerWithRestorationIdentifierPath:coder:
method of your app delegate to look for view controllers without a restoration class. If that method returns
nil, UIKit tries to find the view controller implicitly.
3. If a view controller with the correct restoration path already exists, UIKit uses that object. If your app
creates view controllers at launch time (either programmatically or by loading them from a resource file)
and assigns restoration identifiers to them, UIKit finds them implicitly through their restoration paths.
4. If the view controller was originally loaded from a storyboard file, UIKit uses the saved storyboard
information to locate and create it. UIKit saves information about a view controller’s storyboard inside
the restoration archive. At restore time, it uses that information to locate the same storyboard file and
instantiate the corresponding view controller if the view controller was not found by any other means.
It is worth noting that if you specify a restoration class for a view controller, UIKit does not try to find your view
controller implicitly. If the viewControllerWithRestorationIdentifierPath:coder: method of your
restoration class returns nil, UIKit stops trying to locate your view controller. This gives you control over
whether you really want to create the view controller. If you do not specify a restoration class, UIKit does
everything it can to find the view controller for you, creating it as necessary from your app’s storyboard files.
If you choose to use a restoration class, the implementation of your
viewControllerWithRestorationIdentifierPath:coder: method should create a new instance of
the class, perform some minimal initialization, and return the resulting object. Listing 4-1 shows an example
of how you might use this method to load a view controller from a storyboard. Because the view controller
was originally loaded from a storyboard, this method uses the
UIStateRestorationViewControllerStoryboardKey key to get the name of the original storyboard
file from the archive. Note that this method does not try to configure the view controller’s data fields. That
step occurs later when the view controller’s state is decoded.
Listing 4-1  Creating a new view controller during restoration
+ (UIViewController *)viewControllerWithRestorationIdentifierPath:(NSArray *)identifierComponents
                                                            coder:(NSCoder *)coder {
    MyViewController *vc = nil;
    NSString *storyboardName = [coder
        decodeObjectForKey:UIStateRestorationViewControllerStoryboardKey];
    if (storyboardName) {
        UIStoryboard *sb = [UIStoryboard storyboardWithName:storyboardName bundle:nil];
        vc = (MyViewController *)[sb
            instantiateViewControllerWithIdentifier:@"MyViewController"];
        vc.restorationIdentifier = [identifierComponents lastObject];
        vc.restorationClass = [MyViewController class];
    }
    return vc;
}
Reassigning the restoration identifier and restoration class, as in the preceding example, is a good habit to
adopt when creating new view controllers. The simplest way to restore the restoration identifier is to grab the
last item in the identifierComponents array and assign it to your view controller.
For objects that were already loaded from your app’s main storyboard file at launch time, do not create a new
instance of each object. Instead, implement the
application:viewControllerWithRestorationIdentifierPath:coder: method of your app delegate
and use it to return the appropriate objects, or let UIKit find those objects implicitly.
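For example, if the tab bar controller from your main storyboard already exists at launch time, the app delegate might simply return it. In this sketch, the identifier string and the window lookup are assumptions:

```objc
- (UIViewController *)application:(UIApplication *)application
viewControllerWithRestorationIdentifierPath:(NSArray *)identifierComponents
                                      coder:(NSCoder *)coder {
    if ([[identifierComponents lastObject] isEqualToString:@"TabBarControllerID"]) {
        // Already created when the main storyboard was loaded.
        return self.window.rootViewController;
    }
    // Returning nil lets UIKit continue looking for the controller implicitly.
    return nil;
}
```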
Encoding and Decoding Your View Controller’s State
For each object slated for preservation, UIKit calls the object’s encodeRestorableStateWithCoder: method
to give it a chance to save its state. During the decode process, a matching call to the
decodeRestorableStateWithCoder: method is made to decode that state and apply it to the object. The
implementation of these methods is optional, but recommended, for your view controllers. You can use them
to save and restore the following types of information:
● References to any data being displayed (not the data itself)
● For a container view controller, references to its child view controllers
● Information about the current selection
● For view controllers with a user-configurable view, information about the current configuration of that
view
In your encode and decode methods, you can encode any values supported by the coder, including other
objects. For all objects except views and view controllers, the object must adopt the NSCoding protocol and
use the methods of that protocol to write its state. For views and view controllers, the coder does not use the
methods of the NSCoding protocol to save the object’s state. Instead, the coder saves the restoration identifier
of the object and adds it to the list of preservable objects, which results in that object’s
encodeRestorableStateWithCoder: method being called.
The encodeRestorableStateWithCoder: and decodeRestorableStateWithCoder: methods of your
view controllers should always call super at some point in their implementation. Calling super gives the
parent class a chance to save and restore any additional information. Listing 4-2 shows a sample implementation
of these methods that saves a numerical value used to identify the specified view controller.
Listing 4-2  Encoding and decoding a view controller’s state
- (void)encodeRestorableStateWithCoder:(NSCoder *)coder {
    [super encodeRestorableStateWithCoder:coder];
    [coder encodeInt:self.number forKey:MyViewControllerNumber];
}

- (void)decodeRestorableStateWithCoder:(NSCoder *)coder {
    [super decodeRestorableStateWithCoder:coder];
    self.number = [coder decodeIntForKey:MyViewControllerNumber];
}
Coder objects are not shared during the encode and decode process. Each object with preservable state receives
its own coder that it can use to read or write data. The use of unique coders means that you do not have to
worry about key namespace collisions among your own objects. However, you must still avoid using some
special key names that UIKit provides. Specifically, each coder contains the
UIApplicationStateRestorationBundleVersionKey and
UIApplicationStateRestorationUserInterfaceIdiomKey keys, which provide information about the
bundle version and current user interface idiom. Coders associated with view controllers may also contain the
UIStateRestorationViewControllerStoryboardKey key, which identifies the storyboard from which
that view controller originated.
For more information about implementing your encode and decode methods for your view controllers, see
UIViewController Class Reference.
Preserving the State of Your Views
If a view has state information worth preserving, you can save that state with the rest of your app’s view
controllers. Because they are usually configured by their owning view controller, most views do not need to
save state information. The only time you need to save a view’s state is when the view itself can be altered by
the user in a way that is independent of its data or the owning view controller. For example, scroll views save
the current scroll position, which is information that is not interesting to the view controller but which does
affect how the view presents itself.
To designate that a view’s state should be saved, you do the following:
● Assign a valid string to the view’s restorationIdentifier property.
● Use the view from a view controller that also has a valid restoration identifier.
● For table views and collection views, assign a data source that adopts the
UIDataSourceModelAssociation protocol.
As with view controllers, assigning a restoration identifier to a view tells the system that the view object has
state that your app wants to save. The restoration identifier can also be used to locate the view later.
Like view controllers, views define methods for encoding and decoding their custom state. If you create a view
with state worth saving, you can use these methods to read and write any relevant data.
UIKit Views with Preservable State
In order to save the state of any view, including both custom and standard system views, you must assign a
restoration identifier to the view. Views without a restoration identifier are not added to the list of preservable
objects by UIKit.
The following UIKit views have state information that can be preserved:
● UICollectionView
● UIImageView
● UIScrollView
● UITableView
● UITextField
● UITextView
● UIWebView
Other frameworks may also have views with preservable state. For information about whether a view saves
state information and what state it saves, see the reference for the corresponding class.
Preserving the State of a Custom View
If you are implementing a custom view that has restorable state, implement the
encodeRestorableStateWithCoder: and decodeRestorableStateWithCoder: methods and use them
to encode and decode that state. Use those methods to save only the data that cannot be easily reconfigured
by other means. For example, use these methods to save data that is modified by user interactions with the
view. Do not use these methods to save the data being presented by the view or any data that the owning
view controller can configure easily.
Listing 4-3 shows an example of how to preserve and restore the selection for a custom view that contains
editable text. In the example, the range is accessible using the selectionRange and setSelectionRange:
methods, which are custom methods the view uses to manage the selection. Encoding the data only requires
writing it to the provided coder object. Restoring the data requires reading it and applying it to the view.
Listing 4-3 Preserving the selection of a custom text view
// Preserve the text selection
- (void)encodeRestorableStateWithCoder:(NSCoder *)coder {
    [super encodeRestorableStateWithCoder:coder];
    NSRange range = [self selectionRange];
    [coder encodeInt:range.length forKey:kMyTextViewSelectionRangeLength];
    [coder encodeInt:range.location forKey:kMyTextViewSelectionRangeLocation];
}

// Restore the text selection.
- (void)decodeRestorableStateWithCoder:(NSCoder *)coder {
    [super decodeRestorableStateWithCoder:coder];
    if ([coder containsValueForKey:kMyTextViewSelectionRangeLength] &&
        [coder containsValueForKey:kMyTextViewSelectionRangeLocation]) {
        NSRange range;
        range.length = [coder decodeIntForKey:kMyTextViewSelectionRangeLength];
        range.location = [coder decodeIntForKey:kMyTextViewSelectionRangeLocation];
        if (range.length > 0)
            [self setSelectionRange:range];
    }
}
Implementing Preservation-Friendly Data Sources
Because the data displayed by a table or collection view can change, both classes save information about the
current selection and visible cells only if their data source implements the UIDataSourceModelAssociation
protocol. This protocol provides a way for a table or collection view to identify the content it contains without
relying on the index path of that content. Thus, regardless of where the data source places an item during the
next launch cycle, the view still has all the information it needs to locate that item.
In order to implement the UIDataSourceModelAssociation protocol successfully, your data source object
must be able to identify items between subsequent launches of the app. This means that any identification
scheme you devise must be invariant for a given piece of data. This is essential because the data source must
be able to retrieve the same piece of data for the same identifier each time it is requested. Implementing the
protocol itself is a matter of mapping from a data item to its unique ID and back again.
Apps that use Core Data can implement the protocol by taking advantage of object identifiers. Each object in
a Core Data store has a unique object identifier that can be converted into a URI and used to locate the object
later. If your app does not use Core Data, you need to devise your own form of unique identifiers if you want
to support state preservation for your views.
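For a Core Data-backed data source, one possible sketch maps index paths to object-identifier URIs and back. The fetched results controller and managed object context properties here are assumptions, and the objects must have permanent IDs (newly inserted objects have temporary IDs until saved):

```objc
- (NSString *)modelIdentifierForElementAtIndexPath:(NSIndexPath *)idx
                                            inView:(UIView *)view {
    NSManagedObject *object = [self.fetchedResultsController objectAtIndexPath:idx];
    // A permanent object ID's URI is stable across launches.
    return [[[object objectID] URIRepresentation] absoluteString];
}

- (NSIndexPath *)indexPathForElementWithModelIdentifier:(NSString *)identifier
                                                 inView:(UIView *)view {
    NSURL *uri = [NSURL URLWithString:identifier];
    NSManagedObjectID *objectID = [self.managedObjectContext.persistentStoreCoordinator
        managedObjectIDForURIRepresentation:uri];
    if (!objectID)
        return nil;
    NSManagedObject *object = [self.managedObjectContext objectWithID:objectID];
    return [self.fetchedResultsController indexPathForObject:object];
}
```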
Note: Remember that implementing the UIDataSourceModelAssociation protocol is only
necessary to preserve attributes such as the current selection in a table or collection view. This
protocol is not used to preserve the actual data managed by your data source. It is your app’s
responsibility to ensure that its data is saved at appropriate times.
Preserving Your App’s High-Level State
In addition to the data preserved by your app’s view controllers and views, UIKit provides hooks for you to
save any miscellaneous data needed by your app. Specifically, the UIApplicationDelegate protocol includes
the following methods for you to override:
● application:willEncodeRestorableStateWithCoder:
● application:didDecodeRestorableStateWithCoder:
If your app contains state that does not live in a view controller, but that needs to be preserved, you can use
the preceding methods to save and restore it. The application:willEncodeRestorableStateWithCoder:
method is called at the very beginning of the preservation process so that you can write out any high-level
app state, such as the current version of your user interface. The
application:didDecodeRestorableStateWithCoder: method is called at the end of the restoration
process so that you can decode any data and perform any final cleanup that your app requires.
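A sketch of saving an interface version in these methods follows; the key name and version constant are illustrative:

```objc
static NSString * const MyAppInterfaceVersionKey = @"MyAppInterfaceVersion";
static const NSInteger MyAppInterfaceVersion = 2;

- (void)application:(UIApplication *)application
        willEncodeRestorableStateWithCoder:(NSCoder *)coder {
    [coder encodeInteger:MyAppInterfaceVersion forKey:MyAppInterfaceVersionKey];
}

- (void)application:(UIApplication *)application
        didDecodeRestorableStateWithCoder:(NSCoder *)coder {
    NSInteger savedVersion = [coder decodeIntegerForKey:MyAppInterfaceVersionKey];
    if (savedVersion < MyAppInterfaceVersion) {
        // Perform any final migration or cleanup for older interface layouts.
    }
}
```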
Mixing UIKit’s State Preservation with Your Own Custom Mechanisms
If your app already implements its own custom state preservation and restoration mechanism, you can continue
to use that mechanism and migrate your code to use UIKit’s support over time. The design of UIKit’s preservation
mechanism allows you to pick and choose which view controllers you want to preserve. Thus, you can designate
that only portions of your interface should be restored by UIKit, leaving the rest to be handled by your app’s
existing code.
Figure 4-7 shows a sample view hierarchy containing a tab bar controller and the view controllers in its assorted
tabs. In this sample, because the tab bar controller has a restoration identifier associated with it, UIKit saves
the state of the tab bar controller and all other child view controllers that also have a restoration identifier.
Your app’s custom code would then need to preserve the state of the remaining view controllers. During
restoration, a similar process occurs: UIKit restores all of the view controllers that it preserved, while your custom
code restores the rest.
Figure 4-7  UIKit handles the root view controller
[Figure: a tab bar controller whose tabs contain a navigation controller and several view controllers, each presenting another view controller; some objects are restored by UIKit and the rest by custom restoration code.]
If you prefer to have your own code manage the root view controller of your app, the save and restore process
differs slightly. Because UIKit would not automatically save any view controllers, you need to encode them
manually in the application:willEncodeRestorableStateWithCoder: method of your app delegate.
When you use the encodeObject:forKey: method of the coder to encode a view controller object, the
coder uses the view controller’s encodeRestorableStateWithCoder: method to do the encoding. This
process allows you to write arbitrary view controllers to the state preservation archive managed by UIKit.
When you decode archived view controllers during the next launch cycle, you must still be prepared to provide
an instance of each view controller to UIKit. When you call the decodeObjectForKey: method to decode
your view controller, UIKit first calls the
application:viewControllerWithRestorationIdentifierPath:coder: method of your app delegate
to retrieve the view controller object. Only after UIKit has the view controller object does it call the
decodeRestorableStateWithCoder: method to return the view controller to its previous state. Your code
can use the application:viewControllerWithRestorationIdentifierPath:coder: method to
create the view controller and install it in your app’s view controller hierarchy.
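A sketch of that process, assuming the root view controller has a restoration identifier; the archive key is illustrative:

```objc
- (void)application:(UIApplication *)application
        willEncodeRestorableStateWithCoder:(NSCoder *)coder {
    // UIKit records the controller's restoration identifier and class and
    // then calls its encodeRestorableStateWithCoder: method.
    [coder encodeObject:self.window.rootViewController forKey:@"MyRootVC"];
}

- (void)application:(UIApplication *)application
        didDecodeRestorableStateWithCoder:(NSCoder *)coder {
    // Decoding triggers application:viewControllerWithRestorationIdentifierPath:coder:
    // and then the controller's decodeRestorableStateWithCoder: method.
    UIViewController *root = [coder decodeObjectForKey:@"MyRootVC"];
    if (root)
        self.window.rootViewController = root;
}
```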
Tips for Saving and Restoring State Information
As you add support for state preservation and restoration to your app, consider the following guidelines:
● Encode version information along with the rest of your app’s state. During the preservation process,
it is recommended that you encode a version string or number that identifies the current revision of your
app’s user interface. You can encode this state in the
application:willEncodeRestorableStateWithCoder: method of your app delegate. When your
app delegate’s application:shouldRestoreApplicationState: method is called, you can retrieve
this information from the provided coder and use it to determine if state restoration is possible.
● Do not include objects from your data model in your app’s state. Apps should continue to save their
data separately in iCloud or to local files on disk. Never use the state restoration mechanism to save that
data. Preserved interface data may be deleted if problems occur during a restore operation. Therefore,
any preservation-related data you write to disk should be considered purgeable.
● The state preservation system expects you to use view controllers in the ways they were designed to
be used. The view controller hierarchy is created through a combination of view controller containment
and by presenting one view controller from another. If your app displays the view of a view controller by
another means—for example, by adding it to another view without creating a containment relationship
between the corresponding view controllers—the preservation system will not be able to find your view
controller to preserve it.
● Remember that you might not want to preserve all view controllers. In some cases, it might not make
sense to preserve a view controller. For example, if the user left your app while it was displaying a view
controller to change the user’s password, you might want to cancel the operation and restore the app to
the previous screen. In such a case, you would not preserve the view controller that asks for the new
password information.
● Avoid swapping view controller classes during the restoration process. The state preservation system
encodes the class of the view controllers it preserves. During restoration, if your app returns an object
whose class does not match (or is not a subclass of) the original object, the system does not ask the view
controller to decode any state information. Thus, swapping out the old view controller for a completely
different one does not restore the full state of the object.
● Be aware that the system automatically deletes an app’s preserved state when the user force quits
the app. Deleting the preserved state information when the app is killed is a safety precaution. (The system
also deletes preserved state if the app crashes at launch time as a similar safety precaution.) If you want
to test your app’s ability to restore its state, you should not use the multitasking bar to kill the app during
debugging. Instead, use Xcode to kill the app or kill the app programmatically by installing a temporary
command or gesture to call exit on demand.
App-Related Resources
Aside from the images and media files your app presents on screen, there are some specific resources that iOS
itself requires your app to provide. The system uses these resources to determine how to present your app on
the user’s home screen and, in some cases, how to facilitate interactions with other parts of the system.
App Store Required Resources
There are several things that you are required to provide in your app bundle before submitting it to the App
Store:
● Your app must have an Info.plist file. This file contains information that the system needs to interact
with your app. Xcode creates a version of this file automatically but most apps need to modify this file in
some way. For information on how to configure this file, see “The Information Property List File” (page
93).
● Your app’s Info.plist file must include the UIRequiredDeviceCapabilities key. The App Store
uses this key to determine whether or not a user can run your app on a specific device. For information
on how to configure this key, see “Declaring the Required Device Capabilities” (page 94).
● You must include one or more icons in your app bundle. The system uses these icons when presenting
your app on the device’s home screen. For information about how to specify app icons, see “App
Icons” (page 98).
● Your app must include at least one image to be displayed while your app is launching. The system displays
this image to provide the user with immediate feedback that your app is launching. For information about
launch images, see “App Launch (Default) Images” (page 100).
The Information Property List File
The information property list (Info.plist) file contains critical information about your app’s configuration and
must be included in your app bundle. Every new project you create in Xcode has a default Info.plist file
configured with some basic information about your project. You can modify this file to specify additional
configuration details for your app.
Your app’s Info.plist file must include the following keys:
● UIRequiredDeviceCapabilities—The App Store uses this key to determine the capabilities of your
app and to prevent it from being installed on devices that do not support features your app requires. For
more information about this key, see “Declaring the Required Device Capabilities” (page 94).
● CFBundleIcons—This is the preferred key for specifying your app’s icon files. Older projects might include
the CFBundleIconFiles key instead. Both keys have essentially the same purpose, but the
CFBundleIcons key is preferred because it allows you to organize your icons more efficiently. (The
CFBundleIcons key is also required for Newsstand apps.)
● UISupportedInterfaceOrientations—This key is included by Xcode automatically and is set to an
appropriate set of default values. However, you should add or remove values based on the orientations
that your app actually supports.
You might also want to include the following keys in your app’s Info.plist file, depending on the behavior
of your app:
● UIBackgroundModes—Include this key if your app supports executing in the background using one of
the defined modes; see “Implementing Long-Running Background Tasks” (page 58).
● UIFileSharingEnabled—Include this key if you want to expose the contents of your sandbox’s
Documents directory in iTunes.
● UIRequiresPersistentWiFi—Include this key if your app requires a Wi-Fi connection.
● UINewsstandApp—Include this key if your app presents content from the Newsstand app.
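As a sketch, a minimal Info.plist fragment might combine several of the keys above like this. (The armv7 capability, the orientation values, and the file-sharing setting shown here are illustrative choices for a hypothetical app, not requirements.)

```xml
<!-- Illustrative Info.plist fragment; the specific capability and
     orientation values are example choices for a hypothetical app. -->
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>armv7</string>
</array>
<key>UISupportedInterfaceOrientations</key>
<array>
    <string>UIInterfaceOrientationPortrait</string>
    <string>UIInterfaceOrientationLandscapeLeft</string>
</array>
<key>UIFileSharingEnabled</key>
<true/>
```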
The Info.plist file itself is a property list file that you can edit manually or using Xcode. Each new Xcode
project contains a file called <project>-Info.plist, where <project> is the name of your Xcode
project. This file is the template that Xcode uses to generate the actual Info.plist file at build time. When
you select this file, Xcode displays the property list editor that you can use to add or remove keys or change
the value of a key. For information about how to configure the contents of this file, see Property List Editor
Help.
For details about the keys you can include in the Info.plist file, see Information Property List Key Reference .
Declaring the Required Device Capabilities
The UIRequiredDeviceCapabilities key lets you declare the hardware or specific capabilities that your
app needs in order to run. All apps are required to have this key in their Info.plist file. The App Store uses
the contents of this key to prevent users from downloading your app onto a device that cannot possibly run
it.
The value of the UIRequiredDeviceCapabilities key is either an array or a dictionary that contains additional
keys identifying features your app requires (or specifically prohibits). If you specify the value of the key using
an array, the presence of a key indicates that the feature is required; the absence of a key indicates that the
feature is not required and that the app can run without it. If you specify a dictionary instead, each key in the
dictionary must have a Boolean value that indicates whether the feature is required or prohibited. A value of
true indicates the feature is required and a value of false indicates that the feature must not be present on
the device. If a given capability is optional for your app, do not include the corresponding key in the dictionary.
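To illustrate the two forms, here is a sketch showing both alternatives side by side; the specific capabilities chosen (still-camera, location-services, telephony) are examples only, and a real Info.plist file would contain just one form of the key.

```xml
<!-- Two ALTERNATIVE forms of the same key; use one or the other. -->

<!-- Array form: every listed capability is required. -->
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>still-camera</string>
</array>

<!-- Dictionary form: true = required, false = prohibited. -->
<key>UIRequiredDeviceCapabilities</key>
<dict>
    <key>location-services</key>
    <true/>
    <key>telephony</key>
    <false/>
</dict>
```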
Table 5-1 lists the keys that you can include in the array or dictionary for the UIRequiredDeviceCapabilities
key. You should include keys only for the features that your app absolutely requires. If your app can run without
a specific feature, do not include the corresponding key.
Table 5-1 Dictionary keys for the UIRequiredDeviceCapabilities key
● accelerometer—Include this key if your app requires (or specifically prohibits) the presence of accelerometers on the device. Apps use the Core Motion framework to receive accelerometer events. You do not need to include this key if your app detects only device orientation changes.
● armv6—Include this key if your app is compiled only for the armv6 instruction set. (iOS 3.1 and later)
● armv7—Include this key if your app is compiled only for the armv7 instruction set. (iOS 3.1 and later)
● auto-focus-camera—Include this key if your app requires (or specifically prohibits) autofocus capabilities in the device’s still camera. Although most developers should not need to include this key, you might include it if your app supports macro photography or requires sharper images in order to perform some sort of image processing.
● bluetooth-le—Include this key if your app requires (or specifically prohibits) the presence of Bluetooth low-energy hardware on the device. (iOS 5 and later)
● camera-flash—Include this key if your app requires (or specifically prohibits) the presence of a camera flash for taking pictures or shooting video. Apps use the UIImagePickerController interface to control the enabling of this feature.
● front-facing-camera—Include this key if your app requires (or specifically prohibits) the presence of a forward-facing camera. Apps use the UIImagePickerController interface to capture video from the device’s camera.
● gamekit—Include this key if your app requires (or specifically prohibits) Game Center. (iOS 4.1 and later)
● gps—Include this key if your app requires (or specifically prohibits) the presence of GPS (or AGPS) hardware when tracking locations. (You should include this key only if you need the higher accuracy offered by GPS hardware.) If you include this key, you should also include the location-services key. You should require GPS only if your app needs location data more accurate than the cellular or Wi-Fi radios might otherwise provide.
● gyroscope—Include this key if your app requires (or specifically prohibits) the presence of a gyroscope on the device. Apps use the Core Motion framework to retrieve information from gyroscope hardware.
● location-services—Include this key if your app requires (or specifically prohibits) the ability to retrieve the device’s current location using the Core Location framework. (This key refers to the general location services feature. If you specifically need GPS-level accuracy, you should also include the gps key.)
● magnetometer—Include this key if your app requires (or specifically prohibits) the presence of magnetometer hardware. Apps use this hardware to receive heading-related events through the Core Location framework.
● microphone—Include this key if your app uses the built-in microphone or supports accessories that provide a microphone.
● opengles-1—Include this key if your app requires (or specifically prohibits) the presence of the OpenGL ES 1.1 interfaces.
● opengles-2—Include this key if your app requires (or specifically prohibits) the presence of the OpenGL ES 2.0 interfaces.
● peer-peer—Include this key if your app requires (or specifically prohibits) peer-to-peer connectivity over a Bluetooth network. (iOS 3.1 and later)
● sms—Include this key if your app requires (or specifically prohibits) the presence of the Messages app. You might require this feature if your app opens URLs with the sms scheme.
● still-camera—Include this key if your app requires (or specifically prohibits) the presence of a camera on the device. Apps use the UIImagePickerController interface to capture images from the device’s still camera.
● telephony—Include this key if your app requires (or specifically prohibits) the presence of the Phone app. You might require this feature if your app opens URLs with the tel scheme.
● video-camera—Include this key if your app requires (or specifically prohibits) the presence of a camera with video capabilities on the device. Apps use the UIImagePickerController interface to capture video from the device’s camera.
● wifi—Include this key if your app requires (or specifically prohibits) access to the networking features of the device.
For detailed information on how to create and edit property lists, see Information Property List Key Reference .
Declaring Your App’s Supported Document Types
If your app is able to open existing or custom file types, your Info.plist file should include information
about those types. Declaring file types is how you let the system know that your app is able to open files of
the corresponding type. The system uses this information to direct file requests to your app at appropriate
times. For example, if the Mail app receives an attachment, the system can direct that attachment to your app
to open.
When declaring your app’s supported file types, you typically do not configure keys in your Info.plist file
directly. In the Info tab of your target settings, there is a Document Settings section that you can use to specify
your app’s supported types. Each document type that you add to this section can represent one file type or
several file types. For example, you can define a single document type that represents only PNG images or one
that represents PNG, JPG, and GIF images. The decision to represent one file type or multiple file types depends
on how your app presents the files. If it presents all of the files in the same way—that is, with the same icon
and with the same basic code path—then you can use one document type for multiple file types. If the code
paths or icons are different for each file type, you should declare different document types for each.
For each document type, you must provide the following information at a minimum:
● A name. This is a localizable string that can be displayed to the user if needed.
● An icon. All files associated with a document type share the same icon.
● The file types. These are uniform type identifier (UTI) strings that identify the supported file types. For
example, to specify the PNG file type, you would specify the public.png UTI. UTIs are the preferred way
to specify file types because they are less fragile than filename extensions and other techniques used to
identify files.
Ultimately, Xcode converts your document type information into a set of keys and adds them to the
CFBundleDocumentTypes key in your app’s Info.plist file. The CFBundleDocumentTypes key contains
an array of dictionaries, where each dictionary represents one of your declared document types and includes
the name, icon, file type, and other information you specified.
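For example, a document type for PNG images might appear in the Info.plist file along these lines. (The type name and icon filename are illustrative; the keys shown are those Xcode generates from the Document Settings section.)

```xml
<!-- Illustrative fragment; "PNG Image" and "DocIcon.png" are example values. -->
<key>CFBundleDocumentTypes</key>
<array>
    <dict>
        <key>CFBundleTypeName</key>
        <string>PNG Image</string>
        <key>CFBundleTypeIconFiles</key>
        <array>
            <string>DocIcon.png</string>
        </array>
        <key>LSItemContentTypes</key>
        <array>
            <string>public.png</string>
        </array>
        <key>LSHandlerRank</key>
        <string>Owner</string>
    </dict>
</array>
```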
For more information on the keys you use to declare your app’s document types, see Information Property List
Key Reference . For information about how to open files passed to your app by the system, see “Handling URL
Requests” (page 120).
App Icons
Every app must provide an icon to be displayed on a device’s Home screen and in the App Store. An app may
actually specify several different icons for use in different situations. For example, an app can provide a small
icon to use when displaying search results and can provide a high-resolution icon for devices with Retina
displays.
Regardless of how many different icons your app has, you specify them using the CFBundleIcons key in the
Info.plist file. The value of that key is an array of strings, each of which contains the filename of one of
your icons. The filenames can be anything you want, but all image files must be in the PNG format and must
reside in the top level of your app bundle. (Avoid using interlaced PNGs.) When the system needs an icon, it
chooses the image file whose size most closely matches the intended usage.
Table 5-2 lists the dimensions of the icons you can include with your app. For apps that run on devices with
Retina displays, two versions of each icon should be provided, with the second one being a high-resolution
version of the original. The names of the two icons should be the same except for the inclusion of the string
@2x in the filename of the high-resolution image. You can find out more about specifying and loading
high-resolution image resources in Drawing and Printing Guide for iOS . For a complete list of app-related icons
and detailed information about the usage and preparation of your icons, see iOS Human Interface Guidelines.
Table 5-2 Sizes for images in the CFBundleIcons key
● App icon (required)—iPhone—57 x 57 pixels; 114 x 114 pixels (@2x). This is the main icon for apps running on iPhone and iPod touch.
● App icon (required)—iPad—72 x 72 pixels; 144 x 144 pixels (@2x). This is the main icon for apps running on iPad.
● App icon for the App Store (required)—iPhone/iPad—512 x 512; 1024 x 1024 (@2x). iTunes uses this icon when presenting your app for distribution. These files must be included at the top level of your app bundle, and the names of the files must be iTunesArtwork and iTunesArtwork@2x (no filename extension).
● Small icon for Spotlight search results and Settings (recommended)—iPhone—29 x 29 pixels; 58 x 58 pixels (@2x). This is the icon displayed in conjunction with search results on iPhone and iPod touch. This icon is also used by the Settings app on all devices.
● Small icon for Spotlight search results and Settings (recommended)—iPad—50 x 50 pixels; 100 x 100 pixels (@2x). This is the icon displayed in conjunction with search results on iPad.
When specifying icon files using the CFBundleIcons key, it is best to omit the filename extensions of your
image files. If you include a filename extension, you must explicitly add the names of all image files (including
any high-resolution variants). When you omit the filename extension, the system automatically detects
high-resolution variants of your file, even if they are not included in the array.
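As a sketch using the older CFBundleIconFiles key (mentioned earlier as the alternative found in older projects), whose value is a flat array of filenames; the extensions are omitted so that the system can detect the @2x variants automatically:

```xml
<!-- Older-style icon declaration; filenames follow the fixed names
     listed below and omit the .png extension. -->
<key>CFBundleIconFiles</key>
<array>
    <string>Icon</string>
    <string>Icon-72</string>
    <string>Icon-Small</string>
    <string>Icon-Small-50</string>
</array>
```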
In iOS 3.1.3 and earlier, the system does not look for icons using your Info.plist file. (The
CFBundleIconFiles key was introduced in iOS 3.2, and the CFBundleIcons key was introduced in iOS 5.0.)
Instead of using those keys, the system looks for icon files with specific filenames. Although the sizes
of the icons are the same as those in Table 5-2 (page 98), if your app supports deployment on iOS 3.1.3 and
earlier, you must use the following filenames when naming your icons:
● Icon.png. The name for the app icon on iPhone or iPod touch.
● Icon-72.png. The name for the app icon on iPad.
● Icon-Small.png. The name for the search results icon on iPhone and iPod touch. This file is also used
for the Settings icon on all devices.
● Icon-Small-50.png. The name of the search results icon on iPad.
Important: The use of fixed filenames for your app icons is for compatibility with earlier versions of iOS
only. Even if you use these fixed icon filenames, your app should continue to include the CFBundleIcons
or CFBundleIconFiles key in your app’s Info.plist file.
For more information about the CFBundleIcons key, see Information Property List Key Reference . For
information about creating your app icons, see iOS Human Interface Guidelines.
App Launch (Default) Images
When the system launches an app, it temporarily displays a static launch image on the screen. Your app provides
this image, with the image contents usually containing a prerendered version of your app’s default user
interface. The purpose of this image is to give the user immediate feedback that the app launched. It also gives
your app time to initialize itself and prepare its initial set of views for display. When your app is ready to run,
the system removes the image and displays your app’s windows and views.
Every app must provide at least one launch image. This image is typically in a file named Default.png that
displays your app’s initial screen in a portrait orientation. However, you can also provide other launch images
to be used under different launch conditions. All launch images must be PNG files and must reside in the top
level of your app’s bundle directory. (Avoid using interlaced PNGs.) The name of each launch image indicates
its purpose and how it is used. The format for launch image filenames is as follows:
<basename><scale_modifier>.png
The <basename> portion of the filename is either the string Default or a custom string that you specify using
the UILaunchImageFile key in your app’s Info.plist file. The <scale_modifier> portion is the optional
string @2x and should be included only for images intended for use on Retina displays. Other optional modifiers
may also be included in the name, and several standard modifiers are discussed in the sections that follow.
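To illustrate, declaring a custom base name is a single Info.plist entry; a sketch (MyLaunchImage is an arbitrary example name, matching the naming discussion below):

```xml
<!-- MyLaunchImage is an example base name, not a required value. -->
<key>UILaunchImageFile</key>
<string>MyLaunchImage</string>
```

With this entry in place, the system looks for MyLaunchImage.png and MyLaunchImage@2x.png instead of Default.png and Default@2x.png.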
Table 5-3 lists the dimensions for launch images in iOS apps. For all dimensions, the image width is listed first,
followed by the image height. For precise information about which size launch image to use and how to
prepare your launch images, see iOS Human Interface Guidelines.
Table 5-3 Typical launch image dimensions
● iPhone and iPod touch—Portrait: 320 x 480 pixels; 640 x 960 pixels (@2x). Landscape: Not supported.
● iPhone 5 and iPod touch (5th generation)—Portrait: 640 x 1136 pixels (@2x). Landscape: Not supported.
● iPad—Portrait: 768 x 1004 pixels; 1536 x 2008 pixels (@2x). Landscape: 1024 x 748 pixels; 2048 x 1496 pixels (@2x).
To demonstrate the naming conventions, suppose your iOS app’s Info.plist file included the
UILaunchImageFile key with the value MyLaunchImage. The standard resolution version of the launch
image would be named MyLaunchImage.png and would be in a portrait orientation (320 x 480). The
high-resolution version of the same launch image would be named MyLaunchImage@2x.png. If you did not
specify a custom launch image name, these files would need to be named Default.png and Default@2x.png,
respectively.
To specify default launch images for iPhone 5 and iPod touch (5th generation) devices, include the modifier
string -568h immediately after the <basename> portion of the filename. Because these devices have Retina
displays, the @2x modifier must always be included with launch images for the devices. For example, the default
launch image name for a device is Default-568h@2x.png. (If your app has the UILaunchImageFile key
in its Info.plist file, replace the Default portion of the string with your custom string.) The -568h modifier
should always be the first one in the list. You can also insert other modifiers after the -568h string as described
below.
For more information about the UILaunchImageFile key, see Information Property List Key Reference .
Providing Launch Images for Different Orientations
In iOS 3.2 and later, an iPad app can provide both landscape and portrait versions of its launch images. Each
orientation-specific launch image must include a special modifier string in its filename. The format for
orientation-specific launch image filenames is as follows:
<basename><orientation_modifier><scale_modifier>.png
Table 5-4 lists the possible modifiers you can specify for the <orientation_modifier> value in your image
filenames. As with all launch images, each file must be in the PNG format. These modifiers are supported for
launch images used in iPad apps only; they are not supported for apps running on iPhone or iPod touch devices.
Table 5-4 Launch image orientation modifiers
● -PortraitUpsideDown—Specifies an upside-down portrait version of the launch image. A file with this modifier takes precedence over a file with the -Portrait modifier for this specific orientation.
● -LandscapeLeft—Specifies a left-oriented landscape version of the launch image. A file with this modifier takes precedence over a file with the -Landscape modifier for this specific orientation.
● -LandscapeRight—Specifies a right-oriented landscape version of the launch image. A file with this modifier takes precedence over a file with the -Landscape modifier for this specific orientation.
● -Portrait—Specifies the generic portrait version of the launch image. This image is used for right-side-up portrait orientations and takes precedence over the Default.png image file (or your custom-named replacement for that file). If a file with the -PortraitUpsideDown modifier is not specified, this file is also used for upside-down portrait orientations.
● -Landscape—Specifies the generic landscape version of the launch image. If a file with the -LandscapeLeft or -LandscapeRight modifier is not specified, this image is used instead. This image takes precedence over the Default.png image file (or your custom-named replacement for that file).
● (none)—If you provide a launch image file with no orientation modifier, that file is used when no other orientation-specific launch image is available. For apps running on systems earlier than iOS 3.2, you must name this file Default.png.
For example, if you specify the value MyLaunchImage in the UILaunchImageFile key, the custom landscape
and portrait launch images for your iPad app would be named MyLaunchImage-Landscape.png and
MyLaunchImage-Portrait.png. If you do not specify a custom launch image filename, you would use the
names Default-Landscape.png and Default-Portrait.png.
No matter which launch image is displayed by the system, your app always launches in a portrait orientation
initially and then rotates as needed to the correct orientation. Therefore, your
application:didFinishLaunchingWithOptions: method should always assume a portrait orientation
when setting up your window and views. Shortly after the
application:didFinishLaunchingWithOptions: method returns, the system sends any necessary
orientation-change notifications to your app’s window, giving it and your app’s view controllers a chance to
reorient views using the standard process.
For more information about how your view controllers manage the rotation process, see “Creating Custom
Content View Controllers” in View Controller Programming Guide for iOS .
Providing Device-Specific Launch Images
Universal apps must provide launch images for both the iPhone and iPad idioms. Because iPhone apps require
only one launch image (Default.png), whereas iPad apps typically require different images for portrait and
landscape orientations, you can usually do without device-specific modifiers. However, if you create multiple
launch images for each idiom, the names of device-specific image files are likely to collide. In that situation,
you can append a device modifier to filenames to indicate that they are for a specific platform only. The
following device modifiers are recognized for launch images in iOS 4.0 and later:
● ~ipad. The launch image should be loaded on iPad devices only.
● ~iphone. The launch image should be loaded on iPhone or iPod touch devices only.
Because device modifiers are not supported in iOS 3.2, the minimal set of launch images needed for a universal
app (running in iOS 3.2 and later) would need to be named Default.png and Default~iphone.png. In
that case, the Default.png file would contain the iPad launch image (for all orientations) and the
Default~iphone.png file would contain the iPhone version of the image. (To support high-resolution displays,
you would also need to include a Default@2x~iphone.png launch image.)
Note: If you are using the UILaunchImageFile key in your Info.plist file to specify a custom
base name for your launch image files, add device-specific versions as needed to differentiate the
launch images on different devices. For example, specify a UILaunchImageFile~ipad key to
specify a different base name for iPad launch images. Specifying different base names lets a universal
app avoid naming conflicts among its launch images. For more information on how to apply device
modifiers to keys in the Info.plist file, see Information Property List Key Reference .
Providing Launch Images for Custom URL Schemes
If your app supports one or more custom URL schemes, it can also provide a custom launch image for each
URL scheme. When the system launches your app to handle a URL, it displays the launch image associated
with the scheme of the given URL. In this case, the format for your launch image filenames is as follows:
<basename>-<url_scheme>.png
The <url_scheme> modifier is a string representing the name of your URL scheme. For example, if your
app supports a URL scheme with the name myscheme, the system looks for an image with the name
Default-myscheme.png (or Default-myscheme@2x.png for Retina displays) in the app’s bundle. If the
app’s Info.plist file includes the UILaunchImageFile key, the base name portion changes from Default
to the custom string you provide in that key.
Note: You can combine a URL scheme modifier with orientation modifiers. If you do this, the format
for the filename is <basename>-<url_scheme><orientation_modifier><scale_modifier>.png. For
more information about the launch orientation modifiers, see “Providing Launch Images for Different
Orientations” (page 101).
In addition to including the launch images at the top level of your bundle, you can also include localized
versions of your launch images in your app’s language-specific project subdirectories. For more information on
localizing resources in your app, see “Localized Resource Files” (page 105).
The Settings Bundle
Apps that want to display preferences in the Settings app must include a Settings bundle resource. A Settings
bundle is a specially formatted bundle that sits at the top of your app’s bundle directory and contains the data
needed to display your app’s preferences. Figure 5-1 shows an example of custom preferences displayed for
an app.
Figure 5-1 Custom preferences displayed by the Settings app
Note: Because changing preferences in the Settings app requires leaving your app, you should use
a Settings bundle only for preferences that the user changes infrequently. Frequently changed
settings should be included directly inside your app.
Xcode provides support for creating a Settings bundle resource and adding it to your app. Inside the Settings
bundle, you place one or more property list files and any images associated with your preferences. Each
property-list file contains special keys and values that tell the Settings app how to display different pages of
your preferences. Changes to your app’s preferences are stored in the user defaults database and are accessible
to your app using an NSUserDefaults object.
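As a hedged sketch, the Root.plist inside a Settings bundle might declare a single toggle preference along these lines; the title and the sounds_enabled key name are illustrative choices for a hypothetical app:

```xml
<!-- Illustrative Root.plist fragment; "Enable Sounds" and
     "sounds_enabled" are example values. -->
<key>PreferenceSpecifiers</key>
<array>
    <dict>
        <key>Type</key>
        <string>PSToggleSwitchSpecifier</string>
        <key>Title</key>
        <string>Enable Sounds</string>
        <key>Key</key>
        <string>sounds_enabled</string>
        <key>DefaultValue</key>
        <true/>
    </dict>
</array>
```

The app would then read the toggle’s value from the user defaults database under the key sounds_enabled (for example, with the boolForKey: method of NSUserDefaults).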
For detailed information about how to create a Settings bundle, see Preferences and Settings Programming
Guide .
Localized Resource Files
Because iOS apps are distributed in many countries, localizing your app’s content can help you reach many
more customers. Users are much more likely to use an app when it is localized for their native language. When
you factor your user-facing content into resource files, localizing that content is a relatively simple process.
Before you can localize your content, you must internationalize your app in order to facilitate the localization
process. Internationalizing your app involves factoring out any user-facing content into localizable resource
files and providing language-specific project (.lproj) directories for storing that content. It also means using
appropriate technologies (such as date and number formatters) when working with language-specific and
locale-specific content.
For a fully internationalized app, the localization process creates new sets of language-specific resource files
for you to add to your project. A typical iOS app requires localized versions of the following types of resource
files:
● Storyboard files (or nib files)—Storyboards can contain text labels and other content that need to be
localized. You might also want to adjust the position of interface items to accommodate changes in text
length. (Similarly, nib files can contain text that needs to be localized or layout that needs to be updated.)
● Strings files—Strings files (so named because of their .strings filename extension) contain localized
text that you plan to display in your app.
● Image files—You should avoid localizing images unless the images contain culture-specific content. And
you should never store text directly in your image files. Instead, store text in a strings file and composite
that text with your image-based content at runtime.
● Video and audio files—You should avoid localizing multimedia files unless they contain language-specific
or culture-specific content. For example, you would want to localize a video file that contained a voice-over
track.
For information about the internationalization and localization process, see Internationalization Programming
Topics. For information about the proper way to use resource files in your app, see Resource Programming
Guide .
Loading Resources Into Your App
Putting resources into your bundle is the first step, but at runtime you also need to be able to load those
resources into memory and use them. Resource management breaks down into three basic steps:
1. Locate the resource.
2. Load the resource.
3. Use the resource.
To locate resources, you use an NSBundle object. A bundle object understands the structure of your app’s
bundle and knows how to locate resources inside it. Bundle objects even use the current language settings to
choose an appropriately localized version of the resource. The pathForResource:ofType: method is one
of several NSBundle methods that you can use to retrieve the location of resource files.
Once you have the location of a resource file, you have to decide the most appropriate way to load it into
memory. Common resource types usually have a corresponding class that you use to load the resource:
● To load view controllers (and their corresponding views) from a storyboard, use the UIStoryboard class.
● To load an image, use the methods of the UIImage class.
● To load string resources, use the NSLocalizedString and related macros defined in Foundation
framework.
● To load the contents of a property list, use the dictionaryWithContentsOfURL: method of
NSDictionary, or use the NSPropertyListSerialization class.
● To load binary data files, use the methods of the NSData class.
● To load audio and video resources, use the classes of the Assets Library, Media Player, or AV Foundation
frameworks.
The following example shows how to load an image stored in a resource file in the app’s bundle. The first line
gets the location of the file in the app’s bundle (also known as the main bundle here). The second line creates
a UIImage object using the data in the file at that location.
NSString* imagePath = [[NSBundle mainBundle] pathForResource:@"sun" ofType:@"png"];
UIImage* sunImage = [[UIImage alloc] initWithContentsOfFile:imagePath];
For more information about resources and how to access them from your app, see Resource Programming
Guide .
Advanced App Tricks
Many app-related tasks depend on the type of app you are trying to create. This chapter shows you how to
implement some of the common behaviors found in iOS apps.
Creating a Universal App
A universal app is a single app that is optimized for iPhone, iPod touch, and iPad devices. Providing a single
binary that adapts to the current device offers the best user experience but, of course, involves extra work on
your part. Because of the differences in device screen sizes, most of your window, view, and view controller
code for iPad is likely to be very different from the code for iPhone and iPod touch. In addition, there are things
you must do to ensure your app runs correctly on each device type.
Xcode provides built-in support for configuring universal apps. When you create a new project, you can select
whether you want to create a device-specific project or a universal project. After you create your project, you
can change the supported set of devices for your app target using the Summary pane. When changing from
a single-device project to a universal project, you must fill in the information for the device type for which you
are adding support.
The following sections highlight the changes you must make to an existing app to ensure that it runs smoothly
on any type of device.
Updating Your Info.plist Settings
Most of the existing keys in a universal app’s Info.plist file should remain the same. However, for any keys
that require different values on iPhone versus iPad devices, you can add device modifiers to the key name.
When reading the keys of your Info.plist file, the system interprets each key using the following format:
key_root-<platform>~<device>
In this format, the key_root portion represents the original name of the key. The <platform> and <device>
portions are both optional endings that you can use for keys that are specific to a platform or device. For apps
that run only on iOS, you can omit the platform string. (The iphoneos platform string is used to distinguish
apps written for iOS from those written for Mac OS X.) To apply a key to a specific device, use one of the
following values:
● iphone—The key applies to iPhone devices.
● ipod—The key applies to iPod touch devices.
● ipad—The key applies to iPad devices.
For example, to indicate that you want your app to launch in a portrait orientation on iPhone and iPod touch
devices but in landscape-right on iPad, you would configure your Info.plist with the following keys:
<key>UIInterfaceOrientation</key>
<string>UIInterfaceOrientationPortrait</string>
<key>UIInterfaceOrientation~ipad</key>
<string>UIInterfaceOrientationLandscapeRight</string>
Notice that in the preceding example, there is an iPad-specific key and a default key without any device
modifiers. Continue to use the default key to specify the most common (or default) value and add a specific
version with a device-specific modifier when you need to change that value. This guarantees that there is
always a value available for the system to examine. For example, if you were to replace the default key with
an iPhone-specific and iPad-specific version of the UIInterfaceOrientation key, the system would not
know the preferred starting orientation for iPod touch devices.
For more information about the keys you can include in your Info.plist file, see Information Property List Key Reference.
Implementing Your View Controllers and Views
The largest amount of effort that goes into creating universal apps is designing your user interface. Because
of the different screen sizes, apps often need completely separate versions of their interface for each device
idiom. This means creating new view hierarchies but might also mean creating completely different view
controller objects to manage those views.
For views, the main modification is to redesign your view hierarchies to support the larger screen. Simply
scaling existing views may work but often yields poor results. Your new interface should make use of the
available space and take advantage of new interface elements where appropriate. Doing so is better because
it results in an interface that feels more natural to the user, and does not just feel like an iPhone app on a larger
screen.
For view controllers, follow these guidelines:
● Consider defining separate view controller classes for iPhone and iPad devices. Using separate view
controllers is often easier than trying to create one view controller that supports both platforms. If there
is a significant amount of shared code, you could always put the shared code in a base class and then
implement custom subclasses to address device-specific issues.
● If you use a single view controller class for both platforms, your code must support both iPhone and iPad
screen sizes. (For an app that uses nib files, this might mean choosing which nib file to load based on the
current device idiom.) Similarly, your view controller code must be able to handle differences between
the two platforms.
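For example, a single view controller class might choose its nib file based on the current device idiom. This is only a sketch; MyViewController and the nib names are hypothetical:

```objc
// Pick the interface file that matches the current device idiom.
NSString *nibName = (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) ?
                        @"MyView_iPad" : @"MyView_iPhone";
MyViewController *viewController =
    [[MyViewController alloc] initWithNibName:nibName bundle:nil];
```

The rest of the class can then stay device-agnostic, with any remaining differences handled in layout code.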
For views, follow these guidelines:
● Consider using separate sets of views for iPhone and iPad devices. For custom views, this means defining
different versions of your class for each device.
● If you choose to use the same custom view for both devices, make sure that your drawRect: and layoutSubviews methods, in particular, work properly on both devices.
For information about the view controllers you can use in your apps, see View Controller Programming Guide for iOS.
Updating Your Resource Files
Because resource files are often used to implement portions of your app’s user interface, you need to make
the following changes:
● In addition to the Default.png file displayed when your app launches on iPhone devices, you must add new launch images for iPad devices as described in “Providing Launch Images for Different Orientations” (page 101).
● If you use images, you may need to add larger (or higher-resolution) versions to support iPad devices.
● If you use storyboard or nib files, you need to provide a new set of files for iPad devices.
● You must size your app icons appropriately for iPad, as described in “App Icons” (page 98).
When using different resource files for each platform, you can conditionally load those resources just as you would conditionally execute code. For more information about how to use runtime checks, see “Using Runtime Checks to Create Conditional Code Paths” (page 110).
Using Runtime Checks to Create Conditional Code Paths
If your code needs to follow a different path depending on the underlying device type, use the
userInterfaceIdiom property of UIDevice to determine which path to take. This property provides an
indication of the style of interface to create: iPad or iPhone. Because this property is available only in iOS 3.2
and later, apps that support earlier versions of iOS need to check for the availability of this property before
accessing it. Of course, the simplest way to check this property is to use the UI_USER_INTERFACE_IDIOM
macro, which performs the necessary runtime checks for you.
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
// The device is an iPad running iOS 3.2 or later.
}
else {
// The device is an iPhone or iPod touch.
}
Supporting Multiple Versions of iOS
Any app that supports a range of iOS versions must use runtime checks to prevent the use of newer APIs on
older versions of iOS that do not support them. For example, if you build your app using new features in iOS
6 but your app still supports iOS 5, runtime checks allow you to use recently introduced features when they
are available and to follow alternate code paths when they are not. Failure to include such checks will cause
your app to crash when it tries to use new symbols that are not available on the older operating system.
There are several types of checks that you can make:
● To determine whether a method is available on an existing class, use the instancesRespondToSelector:
class method or the respondsToSelector: instance method.
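A minimal sketch of this first kind of check, using the viewPrintFormatter method that UIKit added to UIView in iOS 4.2 (myView here stands for any view instance in your app):

```objc
// Call viewPrintFormatter only if the running system implements it.
if ([myView respondsToSelector:@selector(viewPrintFormatter)]) {
    UIViewPrintFormatter *formatter = [myView viewPrintFormatter];
    // Use the formatter for printing.
}
else {
    // Fall back to behavior that works on earlier versions of iOS.
}
```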
● Apps that link against iOS SDK 4.2 and later can use the weak linking support introduced in that version
of the SDK. This support lets you check for the existence of a given Class object to determine whether
you can use that class. For example:
if ([UIPrintInteractionController class]) {
// Create an instance of the class and use it.
}
else {
// The print interaction controller is not available.
}
To use this feature, you must build your app using LLVM and Clang and the app’s deployment target must
be set to iOS 3.1 or later.
● Apps that link against iOS SDK 4.1 and earlier must use the NSClassFromString function to see whether
a class is defined. If the function returns a value other than nil, you may use the class. For example:
Class splitVCClass = NSClassFromString(@"UISplitViewController");
if (splitVCClass)
{
    UISplitViewController* mySplitViewController = [[splitVCClass alloc] init];
// Configure the split view controller.
}
● To determine whether a C-based function is available, perform a Boolean comparison of the function name
to NULL. If the symbol is not NULL, you can use the function. For example:
if (UIGraphicsBeginPDFPage != NULL)
{
UIGraphicsBeginPDFPage();
}
For more information and examples of how to write code that supports multiple deployment targets, see SDK Compatibility Guide.
Launching in Landscape Mode
Apps that use only landscape orientations for their interface must explicitly ask the system to launch the app
in that orientation. Normally, apps launch in portrait mode and rotate their interface to match the device
orientation as needed. For apps that support both portrait and landscape orientations, always configure your
views for portrait mode and then let your view controllers handle any rotations. If, however, your app supports
landscape but not portrait orientations, perform the following tasks to make it launch in landscape mode
initially:
● Add the UIInterfaceOrientation key to your app’s Info.plist file and set the value of this key to
either UIInterfaceOrientationLandscapeLeft or UIInterfaceOrientationLandscapeRight.
● Lay out your views in landscape mode and make sure that their layout or autosizing options are set
correctly.
● Override your view controller’s shouldAutorotateToInterfaceOrientation: method and return
YES for the left or right landscape orientations and NO for portrait orientations.
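The third step can be sketched as a one-line override; UIInterfaceOrientationIsLandscape is a UIKit macro that returns YES for either landscape orientation:

```objc
// Allow only landscape orientations so the app launches and stays in landscape.
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
    return UIInterfaceOrientationIsLandscape(interfaceOrientation);
}
```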
Important: Apps should always use view controllers to manage their window-based content.
The UIInterfaceOrientation key in the Info.plist file tells iOS that it should configure the orientation
of the app status bar (if one is displayed) as well as the orientation of views managed by any view controllers
at launch time. In iOS 2.1 and later, view controllers respect this key and set their view’s initial orientation to
match. Using this key is equivalent to calling the setStatusBarOrientation:animated: method of
UIApplication early in the execution of your applicationDidFinishLaunching: method.
Note: To launch a view controller–based app in landscape mode in versions of iOS before 2.1, you
need to apply a 90-degree rotation to the transform of the app’s root view in addition to all the
preceding steps.
Installing App-Specific Data Files at First Launch
You can use your app’s first launch cycle to set up any data or configuration files required to run. App-specific
data files should be created in the Library/Application Support/<bundleID>/ directory of your app
sandbox, where <bundleID> is your app’s bundle identifier. You can further subdivide this directory to organize
your data files as needed. You can also create files in other directories, such as your app’s iCloud container
directory or the local Documents directory, depending on your needs.
If your app’s bundle contains data files that you plan to modify, you must copy those files out of the app bundle
and modify the copies. You must not modify any files inside your app bundle. Because iOS apps are code
signed, modifying files inside your app bundle invalidates your app’s signature and prevents your app from
launching in the future. Copying those files to the Application Support directory (or another writable
directory in your sandbox) and modifying them there is the only way to use such files safely.
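A sketch of this first-launch copy, following the directory layout described above; the Defaults.plist file name is hypothetical, and error handling is kept minimal for brevity:

```objc
// Copy a writable template out of the app bundle on first launch.
NSString *bundleID = [[NSBundle mainBundle] bundleIdentifier];
NSString *appSupport = [NSSearchPathForDirectoriesInDomains(NSApplicationSupportDirectory,
                            NSUserDomainMask, YES) objectAtIndex:0];
NSString *dirPath = [appSupport stringByAppendingPathComponent:bundleID];
NSString *destPath = [dirPath stringByAppendingPathComponent:@"Defaults.plist"];

NSFileManager *fm = [NSFileManager defaultManager];
if (![fm fileExistsAtPath:destPath]) {
    [fm createDirectoryAtPath:dirPath withIntermediateDirectories:YES
                   attributes:nil error:NULL];
    NSString *srcPath = [[NSBundle mainBundle] pathForResource:@"Defaults"
                                                        ofType:@"plist"];
    [fm copyItemAtPath:srcPath toPath:destPath error:NULL];
}
// From here on, read and modify the copy at destPath, never the bundle original.
```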
For more information about the directories of the iOS app sandbox and the proper location for files, see File
System Programming Guide .
Protecting Data Using On-Disk Encryption
In iOS 4 and later, apps can use the data protection feature to add a level of security to their on-disk data. Data
protection uses the built-in encryption hardware present on specific devices (such as the iPhone 3GS and
iPhone 4) to store files in an encrypted format on disk. While the user’s device is locked, protected files are
inaccessible even to the app that created them. The user must explicitly unlock the device (by entering the
appropriate passcode) at least once before your app can access one of its protected files.
Data protection is available on most iOS devices and is subject to the following requirements:
● The file system on the user’s device must support data protection. This is true for newer devices, but for
some earlier devices, the user might have to reformat the device’s disk and restore any content from a
backup.
● The user must have an active passcode lock set for the device.
To protect a file, your app must add an attribute to the file indicating the desired level of protection. Add this
attribute using either the NSData class or the NSFileManager class. When writing new files, you can use the
writeToFile:options:error: method of NSData with the appropriate protection value as one of the
write options. For existing files, you can use the setAttributes:ofItemAtPath:error: method of
NSFileManager to set or change the value of the NSFileProtectionKey. When using these methods, your
app can specify one of the following protection levels for the file:
● No protection—The file is not encrypted on disk. You can use this option to remove data protection from
an accessible file. Specify the NSDataWritingFileProtectionNone option (NSData) or the
NSFileProtectionNone attribute (NSFileManager).
● Complete—The file is encrypted and inaccessible while the device is locked. Specify the
NSDataWritingFileProtectionComplete option (NSData) or the NSFileProtectionComplete
attribute (NSFileManager).
● Complete unless already open—The file is encrypted. A closed file is inaccessible while the device is locked.
After the user unlocks the device, your app can open the file and use it. If the user locks the device while
the file is open, though, your app can continue to access it. Specify the
NSDataWritingFileProtectionCompleteUnlessOpen option (NSData) or the
NSFileProtectionCompleteUnlessOpen attribute (NSFileManager).
● Complete until first login—The file is encrypted and inaccessible until after the device has booted and the
user has unlocked it once. Specify the
NSDataWritingFileProtectionCompleteUntilFirstUserAuthentication option (NSData) or
the NSFileProtectionCompleteUntilFirstUserAuthentication attribute (NSFileManager).
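Both routes might look like this in practice; filePath is assumed to point at a writable location in your sandbox, and the error handling is elided:

```objc
// Write a new file with complete protection using NSData.
NSData *fileData = [@"private notes" dataUsingEncoding:NSUTF8StringEncoding];
[fileData writeToFile:filePath
              options:NSDataWritingFileProtectionComplete
                error:NULL];

// Raise the protection level of an existing file using NSFileManager.
NSDictionary *attrs = [NSDictionary dictionaryWithObject:NSFileProtectionComplete
                                                  forKey:NSFileProtectionKey];
[[NSFileManager defaultManager] setAttributes:attrs
                                 ofItemAtPath:filePath
                                        error:NULL];
```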
If you protect a file, your app must be prepared to lose access to that file. When complete file protection is
enabled, even your app loses the ability to read and write the file’s contents when the user locks the device.
Your app has several options for tracking when access to protected files might change, though:
● The app delegate can implement the applicationProtectedDataWillBecomeUnavailable: and
applicationProtectedDataDidBecomeAvailable: methods.
● Any object can register for the UIApplicationProtectedDataWillBecomeUnavailable and
UIApplicationProtectedDataDidBecomeAvailable notifications.
● Any object can check the value of the protectedDataAvailable property of the shared UIApplication
object to determine whether files are currently accessible.
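For the notification-based option, any object can register in the usual way; the selector names here are hypothetical methods you would define:

```objc
// Observe changes in protected-data availability.
NSNotificationCenter *center = [NSNotificationCenter defaultCenter];
[center addObserver:self
           selector:@selector(protectedDataWillBecomeUnavailable:)  // hypothetical
               name:UIApplicationProtectedDataWillBecomeUnavailable
             object:nil];
[center addObserver:self
           selector:@selector(protectedDataDidBecomeAvailable:)     // hypothetical
               name:UIApplicationProtectedDataDidBecomeAvailable
             object:nil];
```

In the first handler, close protected files and save state; in the second, reopen them.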
For new files, it is recommended that you enable data protection before writing any data to them. If you are
using the writeToFile:options:error: method to write the contents of an NSData object to disk, this
happens automatically. For existing files, adding data protection replaces an unprotected file with a new
protected version.
Tips for Developing a VoIP App
A Voice over Internet Protocol (VoIP) app allows the user to make phone calls using an Internet connection
instead of the device’s cellular service. Such an app needs to maintain a persistent network connection to its
associated service so that it can receive incoming calls and other relevant data. Rather than keep VoIP apps
awake all the time, the system allows them to be suspended and provides facilities for monitoring their sockets
for them. When incoming traffic is detected, the system wakes up the VoIP app and returns control of its sockets
to it.
There are several requirements for implementing a VoIP app:
1. Add the UIBackgroundModes key to your app’s Info.plist file. Set the value of this key to an array
that includes the voip string.
2. Configure one of the app’s sockets for VoIP usage.
3. Before moving to the background, call the setKeepAliveTimeout:handler: method to install a handler
to be executed periodically. Your app can use this handler to maintain its service connection.
4. Configure your audio session to handle transitions to and from active use.
5. To ensure a better user experience on iPhone, use the Core Telephony framework to adjust your behavior
in relation to cell-based phone calls; see Core Telephony Framework Reference .
6. To ensure good performance for your VoIP app, use the System Configuration framework to detect network
changes and allow your app to sleep as much as possible.
Including the voip value in the UIBackgroundModes key lets the system know that it should allow the app
to run in the background as needed to manage its network sockets. This key also permits your app to play
background audio (although including the audio value for the UIBackgroundModes key is still encouraged).
An app with this key is also relaunched in the background immediately after system boot to ensure that the
VoIP services are always available. For more information about the UIBackgroundModes key, see Information
Property List Key Reference .
Configuring Sockets for VoIP Usage
In order for your app to maintain a persistent connection while it is in the background, you must tag your app’s
main communication socket specifically for VoIP usage. Tagging this socket tells the system that it should take
over management of the socket when your app is suspended. The handoff itself is totally transparent to your
app. And when new data arrives on the socket, the system wakes up the app and returns control of the socket
so that the app can process the incoming data.
You need to tag only the socket you use for communicating with your VoIP service. This is the socket you use
to receive incoming calls or other data relevant to maintaining your VoIP service connection. Upon receipt of
incoming data, the handler for this socket needs to decide what to do. For an incoming call, you likely want
to post a local notification to alert the user to the call. For other noncritical data, though, you might just process
the data quietly and allow the system to put your app back into the suspended state.
In iOS, most sockets are managed using streams or other high-level constructs. To configure a socket for VoIP
usage, the only thing you have to do beyond the normal configuration is add a special key that tags the
interface as being associated with a VoIP service. Table 6-1 lists the stream interfaces and the configuration
for each.
Table 6-1 Configuring stream interfaces for VoIP usage
● NSInputStream and NSOutputStream: For Cocoa streams, use the setProperty:forKey: method to add the NSStreamNetworkServiceType property to the stream. The value of this property should be set to NSStreamNetworkServiceTypeVoIP.
● NSURLRequest: When using the URL loading system, use the setNetworkServiceType: method of your NSMutableURLRequest object to set the network service type of the request. The service type should be set to NSURLNetworkServiceTypeVoIP.
● CFReadStreamRef and CFWriteStreamRef: For Core Foundation streams, use the CFReadStreamSetProperty or CFWriteStreamSetProperty function to add the kCFStreamNetworkServiceType property to the stream. The value for this property should be set to kCFStreamNetworkServiceTypeVoIP.
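The Cocoa stream case from Table 6-1 reduces to one call per stream; inputStream and outputStream here are assumed to be the already-created streams for your signaling connection:

```objc
// Tag the main signaling streams as VoIP so the system manages the
// underlying socket while the app is suspended.
[inputStream setProperty:NSStreamNetworkServiceTypeVoIP
                  forKey:NSStreamNetworkServiceType];
[outputStream setProperty:NSStreamNetworkServiceTypeVoIP
                   forKey:NSStreamNetworkServiceType];
```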
Note: When configuring your sockets, you need to configure only your main signaling channel with
the appropriate service type key. You do not need to include this key when configuring your voice
channels.
Because VoIP apps need to stay running in order to receive incoming calls, the system automatically relaunches
the app if it exits with a nonzero exit code. (This type of exit could happen when there is memory pressure
and your app is terminated as a result.) However, terminating the app also releases all of its sockets, including
the one used to maintain the VoIP service connection. Therefore, when the app is launched, it always needs
to create its sockets from scratch.
For more information about configuring Cocoa stream objects, see Stream Programming Guide. For information
about using URL requests, see URL Loading System Programming Guide. And for information about configuring
streams using the CFNetwork interfaces, see CFNetwork Programming Guide.
Installing a Keep-Alive Handler
To prevent the loss of its connection, a VoIP app typically needs to wake up periodically and check in with its
server. To facilitate this behavior, iOS lets you install a special handler using the
setKeepAliveTimeout:handler: method of UIApplication. You typically install this handler in the
applicationDidEnterBackground: method of your app delegate. Once installed, the system calls your
handler at least once before the timeout interval expires, waking up your app as needed to do so.
Your keep-alive handler executes in the background and should return as quickly as possible. Handlers are
given a maximum of 10 seconds to perform any needed tasks and return. If a handler has not returned after
10 seconds, or has not requested extra execution time before that interval expires, the system suspends the
app.
When installing your handler, specify the largest timeout value that is practical for your app’s needs. The
minimum allowable interval for running your handler is 600 seconds, and attempting to install a handler with
a smaller timeout value will fail. Although the system promises to call your handler block before the timeout
value expires, it does not guarantee the exact call time. To improve battery life, the system typically groups
the execution of your handler with other periodic system tasks, thereby processing all tasks in one quick burst.
As a result, your handler code must be prepared to run earlier than the actual timeout period you specified.
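Installing the handler might look like this; sendServerKeepAlive is a hypothetical method on your app delegate that performs the actual check-in:

```objc
// In applicationDidEnterBackground: install a keep-alive handler.
// 600 seconds is the minimum allowable timeout described above.
BOOL installed = [[UIApplication sharedApplication]
    setKeepAliveTimeout:600 handler:^{
        // Runs in the background; must finish well inside the 10-second limit.
        [self sendServerKeepAlive];  // hypothetical check-in method
    }];
if (!installed) {
    // The requested timeout was below the allowed minimum.
}
```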
Configuring Your App’s Audio Session
As with any background audio app, the audio session for a VoIP app must be configured properly to ensure
the app works smoothly with other audio-based apps. Because audio playback and recording for a VoIP app
are not used all the time, it is especially important that you create and configure your app’s audio session
object only when it is needed. For example, you would create the audio session to notify the user of an incoming
call or while the user was actually on a call. As soon as the call ends, you would then remove strong references
to the audio session and give other audio apps the opportunity to play their audio.
For information about how to configure and manage an audio session for a VoIP app, see Audio Session
Programming Guide .
Using the Reachability Interfaces to Improve the User Experience
Because VoIP apps rely heavily on the network, they should use the reachability interfaces of the System
Configuration framework to track network availability and adjust their behavior accordingly. The reachability
interfaces allow an app to be notified whenever network conditions change. For example, a VoIP app could
close its network connections when the network becomes unavailable and recreate them when it becomes
available again. The app could also use those kinds of changes to keep the user apprised about the state of
the VoIP connection.
To use the reachability interfaces, you must register a callback function with the framework and use it to track
changes. To register a callback function:
1. Create a SCNetworkReachabilityRef structure for your target remote host.
2. Assign a callback function to your structure (using the SCNetworkReachabilitySetCallback function)
that processes changes in your target’s reachability status.
3. Add that target to an active run loop of your app (such as the main run loop) using the
SCNetworkReachabilityScheduleWithRunLoop function.
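The three steps above can be sketched as follows; the host name is hypothetical, and MyReachabilityCallback is a function you define:

```objc
// Called whenever the reachability of the target host changes.
static void MyReachabilityCallback(SCNetworkReachabilityRef target,
                                   SCNetworkReachabilityFlags flags,
                                   void *info) {
    // React to the change: for example, tear down connections when the
    // network is gone and recreate them when it returns.
}

// 1. Create the reachability target for the remote host.
SCNetworkReachabilityRef reachability =
    SCNetworkReachabilityCreateWithName(NULL, "voip.example.com");

// 2. Assign the callback (a NULL context is acceptable here).
SCNetworkReachabilitySetCallback(reachability, MyReachabilityCallback, NULL);

// 3. Schedule the target on an active run loop.
SCNetworkReachabilityScheduleWithRunLoop(reachability, CFRunLoopGetMain(),
                                         kCFRunLoopCommonModes);
```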
Adjusting your app’s behavior based on the availability of the network can also help improve the battery life
of the underlying device. Letting the system track the network changes means that your app can let itself go
to sleep more often.
For more information about the reachability interfaces, see System Configuration Framework Reference .
Communicating with Other Apps
Apps that support custom URL schemes can use those schemes to receive messages. Some apps use URL
schemes to initiate specific requests. For example, an app that wants to show an address in the Maps app can
use a URL to launch that app and display the address. You can implement your own URL schemes to facilitate
similar types of communications in your apps.
Apple provides built-in support for the http, mailto, tel, and sms URL schemes. It also supports http-based
URLs targeted at the Maps, YouTube, and iPod apps. The handlers for these schemes are fixed and cannot be
changed. If your URL type includes a scheme that is identical to one defined by Apple, the Apple-provided app
is launched instead of your app.
Note: If more than one third-party app registers to handle the same URL scheme, there is currently
no process for determining which app will be given that scheme.
To communicate with an app using a custom URL, create an NSURL object with some properly formatted
content and pass that object to the openURL: method of the shared UIApplication object. The openURL:
method launches the app that registered to receive URLs of that type and passes it the URL. At that point,
control passes to the new app.
The following code fragment illustrates how one app can request the services of another app (“todolist” in this
example is a hypothetical custom scheme registered by an app):
NSURL *myURL = [NSURL
URLWithString:@"todolist://www.acme.com?Quarterly%20Report#200806231300"];
[[UIApplication sharedApplication] openURL:myURL];
If your app defines a custom URL scheme, it should implement a handler for that scheme as described in
“Implementing Custom URL Schemes” (page 119). For more information about the system-supported URL
schemes, including information about how to format the URLs, see Apple URL Scheme Reference .
Implementing Custom URL Schemes
If your app can receive specially formatted URLs, you should register the corresponding URL schemes with the
system. A custom URL scheme is a mechanism through which third-party apps can communicate with each
other. Apps often use custom URL schemes to vend services to other apps. For example, the Maps app supports
URLs for displaying specific map locations.
Registering Custom URL Schemes
To register a URL type for your app, include the CFBundleURLTypes key in your app’s Info.plist file. The
CFBundleURLTypes key contains an array of dictionaries, each of which defines a URL scheme the app
supports. Table 6-2 describes the keys and values to include in each dictionary.
Table 6-2 Keys and values of the CFBundleURLTypes property
● CFBundleURLName: A string containing the abstract name of the URL scheme. To ensure uniqueness, it is recommended that you specify a reverse-DNS style of identifier, for example, com.acme.myscheme. The string you specify is also used as a key in your app’s InfoPlist.strings file. The value of the key is the human-readable scheme name.
● CFBundleURLSchemes: An array of strings containing the URL scheme names—for example, http, mailto, tel, and sms.
Figure 6-1 shows the Info.plist file of an app that supports a custom scheme for creating “to-do” items.
The URL types entry corresponds to the CFBundleURLTypes key added to the Info.plist file. Similarly,
the “URL identifier” and “URL Schemes” entries correspond to the CFBundleURLName and
CFBundleURLSchemes keys.
Figure 6-1 Defining a custom URL scheme in the Info.plist file
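In property-list XML, entries like those Figure 6-1 shows in the Xcode editor would look roughly like this; the todolist scheme follows this chapter’s example, and the identifier value is illustrative:

```xml
<key>CFBundleURLTypes</key>
<array>
    <dict>
        <key>CFBundleURLName</key>
        <string>com.acme.todolist</string>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>todolist</string>
        </array>
    </dict>
</array>
```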
Handling URL Requests
An app that has its own custom URL scheme must be able to handle URLs passed to it. All URLs are passed to
your app delegate, either at launch time or while your app is running or in the background. To handle incoming
URLs, your delegate should implement the following methods:
● Use the application:willFinishLaunchingWithOptions: and
application:didFinishLaunchingWithOptions: methods to retrieve information about the URL
and decide whether you want to open it. If either method returns NO, your app’s URL handling code is not
called.
● In iOS 4.2 and later, use the application:openURL:sourceApplication:annotation: method to open the file.
● In iOS 4.1 and earlier, use the application:handleOpenURL: method to open the file.
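The launch-time half of this process can be sketched as follows, using the todolist scheme from this chapter’s example; the actual opening still happens later in your URL-handling method:

```objc
// Decide at launch whether the incoming URL (if any) can be opened.
- (BOOL)application:(UIApplication *)application
        didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    NSURL *url = [launchOptions objectForKey:UIApplicationLaunchOptionsURLKey];
    if (url && ![[url scheme] isEqualToString:@"todolist"]) {
        return NO;  // refuse URLs the app does not understand
    }
    return YES;     // the URL-handling method is called next
}
```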
If your app is not running when a URL request arrives, it is launched and moved to the foreground so that it
can open the URL. The implementation of your application:willFinishLaunchingWithOptions: or
application:didFinishLaunchingWithOptions: method should retrieve the URL from its options
dictionary and determine whether the app can open it. If it can, return YES and let your
application:openURL:sourceApplication:annotation: (or application:handleOpenURL:)
method handle the actual opening of the URL. (If you implement both methods, both must return YES before
the URL can be opened.) Figure 6-2 shows the modified launch sequence for an app that is asked to open a
URL.
Figure 6-2 Launching an app to open a URL
If your app is running but is in the background or suspended when a URL request arrives, it is moved to the
foreground to open the URL. Shortly thereafter, the system calls the delegate’s
application:openURL:sourceApplication:annotation: to check the URL and open it. If your delegate
does not implement this method (or the current system version is iOS 4.1 or earlier), the system calls your
delegate’s application:handleOpenURL: method instead. Figure 6-3 shows the modified process for
moving an app to the foreground to open a URL.
Figure 6-3 Waking a background app to open a URL
Note: Apps that support custom URL schemes can specify different launch images to be displayed
when launching the app to handle a URL. For more information about how to specify these launch
images, see “Providing Launch Images for Custom URL Schemes” (page 103).
All URLs are passed to your app in an NSURL object. It is up to you to define the format of the URL, but the
NSURL class conforms to the RFC 1808 specification and therefore supports most URL formatting conventions.
Specifically, the class includes methods that return the various parts of a URL as defined by RFC 1808, including
the user, password, query, fragment, and parameter strings. The “protocol” for your custom scheme can use
these URL parts for conveying various kinds of information.
In the implementation of application:handleOpenURL: shown in Listing 6-1, the passed-in URL object
conveys app-specific information in its query and fragment parts. The delegate extracts this information—in
this case, the name of a to-do task and the date the task is due—and with it creates a model object of the app.
This example assumes that the user is using a Gregorian calendar. If your app supports non-Gregorian calendars,
you need to design your URL scheme accordingly and be prepared to handle those other calendar types in
your code.
Listing 6-1 Handling a URL request based on a custom scheme
- (BOOL)application:(UIApplication *)application handleOpenURL:(NSURL *)url {
    if ([[url scheme] isEqualToString:@"todolist"]) {
        ToDoItem *item = [[ToDoItem alloc] init];
        NSString *taskName = [url query];
        if (!taskName || ![self isValidTaskString:taskName]) { // must have a task name
            return NO;
        }
        taskName = [taskName
            stringByReplacingPercentEscapesUsingEncoding:NSUTF8StringEncoding];
        item.toDoTask = taskName;
        NSString *dateString = [url fragment];
        if (!dateString || [dateString isEqualToString:@"today"]) {
            item.dateDue = [NSDate date];
        } else {
            if (![self isValidDateString:dateString]) {
                return NO;
            }
            // format: yyyymmddhhmm (24-hour clock)
            NSString *curStr = [dateString substringWithRange:NSMakeRange(0, 4)];
            NSInteger yeardigit = [curStr integerValue];
            curStr = [dateString substringWithRange:NSMakeRange(4, 2)];
            NSInteger monthdigit = [curStr integerValue];
            curStr = [dateString substringWithRange:NSMakeRange(6, 2)];
            NSInteger daydigit = [curStr integerValue];
            curStr = [dateString substringWithRange:NSMakeRange(8, 2)];
            NSInteger hourdigit = [curStr integerValue];
            curStr = [dateString substringWithRange:NSMakeRange(10, 2)];
            NSInteger minutedigit = [curStr integerValue];
            NSDateComponents *dateComps = [[NSDateComponents alloc] init];
            [dateComps setYear:yeardigit];
            [dateComps setMonth:monthdigit];
            [dateComps setDay:daydigit];
            [dateComps setHour:hourdigit];
            [dateComps setMinute:minutedigit];
            NSCalendar *calendar = [[NSCalendar alloc]
                initWithCalendarIdentifier:NSGregorianCalendar];
            NSDate *itemDate = [calendar dateFromComponents:dateComps];
            if (!itemDate) {
                return NO;
            }
            item.dateDue = itemDate;
        }
        [(NSMutableArray *)self.list addObject:item];
        return YES;
    }
    return NO;
}
Be sure to validate the input you get from URLs passed to your app; see “Validating Input and Interprocess
Communication” in Secure Coding Guide to find out how to avoid problems related to URL handling. To learn
about URL schemes defined by Apple, see Apple URL Scheme Reference .
Showing and Hiding the Keyboard
The appearance of the keyboard is tied to the responder status of views. If a view is able to become the first
responder, the system shows the keyboard whenever that view actually becomes the first responder. When
the user taps another view that does not support becoming the first responder, the system hides the keyboard
if it is currently visible. In UIKit, only views that support text entry can become the first responder by default.
Other views must override the canBecomeFirstResponder method and return YES if they want the keyboard
to be shown.
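As a sketch, a hypothetical custom view that wants to begin text entry when tapped might override canBecomeFirstResponder as follows; the CanvasView name and the choice to request first-responder status from the touch handler are assumptions for illustration (a non-text view must also adopt a text-input protocol such as UIKeyInput for the system keyboard to actually appear):

```objc
// A hypothetical custom view that asks to become the first responder when tapped.
@interface CanvasView : UIView
@end

@implementation CanvasView
- (BOOL)canBecomeFirstResponder {
    return YES;  // allow this view to become the first responder
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesEnded:touches withEvent:event];
    [self becomeFirstResponder];  // triggers display of the keyboard or input view
}
@end
```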
When a view becomes the first responder, the keyboard is shown by default, but you can replace the keyboard
for views that support custom forms of input. Every responder object has an inputView property that contains
the view to be displayed when the responder becomes the first responder. When this property is nil, the
system displays the standard keyboard. When this property is not nil, the system displays the view you provide
instead.
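For example, a text field can be given a custom input view in place of the keyboard. This minimal sketch substitutes a date picker; the field's frame and purpose are assumptions for illustration:

```objc
// Replace the standard keyboard with a date picker for this text field.
UITextField *dueDateField = [[UITextField alloc]
    initWithFrame:CGRectMake(20.0, 20.0, 280.0, 31.0)];
UIDatePicker *picker = [[UIDatePicker alloc] init];
dueDateField.inputView = picker;  // shown instead of the keyboard when editing begins
```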
Normally, user taps dictate which view becomes the first responder in your app, but you can force a view to
become the first responder too. Calling the becomeFirstResponder method of any responder object causes
that object to try to become the first responder. If that responder object is able to become the first responder,
the custom input view (or the standard keyboard) is shown automatically.
For more information about using the keyboard, see Text, Web, and Editing Programming Guide for iOS .
Turning Off Screen Locking
If an iOS-based device does not receive touch events for a specified period of time, the system turns off the
screen and disables the touch sensor. Locking the screen is an important way to save power. As a result, you
should generally leave this feature enabled. However, for an app that does not rely on touch events, such as
a game that uses the accelerometers for input, disable screen locking to prevent the screen from going dark
while the app is running. However, even in this case, disable screen locking only while the user is actively
engaged with the app. For example, if the user pauses a game, reenable screen locking to allow the screen to
turn off.
To disable screen locking, set the idleTimerDisabled property of the shared UIApplication object to
YES. Be sure to reset this property to NO when your app does not need to prevent screen locking.
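As a sketch, a game might toggle the idle timer around active play; the gameDidStart and gameDidPause method names are hypothetical:

```objc
// Keep the screen on while the user is actively playing.
- (void)gameDidStart {
    [UIApplication sharedApplication].idleTimerDisabled = YES;
}

// Reenable screen locking when play stops, so the screen can turn off again.
- (void)gameDidPause {
    [UIApplication sharedApplication].idleTimerDisabled = NO;
}
```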
Performance Tuning

At each step in the development of your app, you should consider the implications of your design choices on
the overall performance of your app. The operating environment for iOS apps is more constrained than that
for Mac OS X apps. The following sections describe the factors you should consider throughout the development
process.
Make App Backups More Efficient
Backups occur wirelessly via iCloud or when the user syncs the device with iTunes. During backups, files are
transferred from the device to the user’s computer or iCloud account. The location of files in your app sandbox
determines whether or not those files are backed up and restored. If your application creates many large files
that change regularly and puts them in a location that is backed up, backups could be slowed down as a result.
As you write your file-management code, you need to be mindful of this fact.
App Backup Best Practices
You do not have to prepare your app in any way for backup and restore operations. Devices with an active
iCloud account have their app data backed up to iCloud at appropriate times. And for devices that are plugged
into a computer, iTunes performs an incremental backup of the app’s data files. However, iCloud and iTunes
do not back up the contents of the following directories:
● /AppName.app
● /Library/Caches
● /tmp
To prevent the syncing process from taking a long time, be selective about where you place files inside your
app’s home directory. Apps that store large files can slow down the process of backing up to iTunes or iCloud.
These apps can also consume a large amount of a user's available storage, which may encourage the user to
delete the app or disable backup of that app's data to iCloud. With this in mind, you should store app data
according to the following guidelines:
● Critical data should be stored in the /Documents directory. Critical data is any data
that cannot be recreated by your app, such as user documents and other user-generated content.
● Support files include files your application downloads or generates and that your application can recreate
as needed. The location for storing your application’s support files depends on the current iOS version.
● In iOS 5.1 and later, store support files in the /Library/Application Support
directory and add the NSURLIsExcludedFromBackupKey attribute to the corresponding NSURL
object using the setResourceValue:forKey:error: method. (If you are using Core Foundation,
add the kCFURLIsExcludedFromBackupKey key to your CFURLRef object using the
CFURLSetResourcePropertyForKey function.) Applying this attribute prevents the files from being
backed up to iTunes or iCloud. If you have a large number of support files, you may store them in a
custom subdirectory and apply the extended attribute to just the directory.
● In iOS 5.0 and earlier, store support files in the /Library/Caches directory to
prevent them from being backed up. If you are targeting iOS 5.0.1, see How do I prevent files from
being backed up to iCloud and iTunes? for information about how to exclude files from backups.
● Cached data should be stored in the /Library/Caches directory. Examples of files
you should put in the Caches directory include (but are not limited to) database cache files and
downloadable content, such as that used by magazine, newspaper, and map apps. Your app should be
able to gracefully handle situations where cached data is deleted by the system to free up disk space.
● Temporary data should be stored in the /tmp directory. Temporary data comprises
any data that you do not need to persist for an extended period of time. Remember to delete those files
when you are done with them so that they do not continue to consume space on the user's device.
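The iOS 5.1 guideline above can be sketched as follows; the method name is hypothetical, and the file URL is a placeholder for one of your app's support files:

```objc
// Mark a support file so that it is not backed up to iTunes or iCloud (iOS 5.1+).
- (BOOL)addSkipBackupAttributeToItemAtURL:(NSURL *)fileURL {
    NSError *error = nil;
    BOOL success = [fileURL setResourceValue:[NSNumber numberWithBool:YES]
                                      forKey:NSURLIsExcludedFromBackupKey
                                       error:&error];
    if (!success) {
        NSLog(@"Error excluding %@ from backup: %@",
              [fileURL lastPathComponent], error);
    }
    return success;
}
```

If you have many such files, applying the attribute to a containing subdirectory, as noted above, avoids setting it on each file individually.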
Although iTunes backs up the app bundle itself, it does not do this during every sync operation. Apps purchased
directly from a device are backed up when that device is next synced with iTunes. Apps are not backed up
during subsequent sync operations, though, unless the app bundle itself has changed (because the app was
updated, for example).
For additional guidance about how you should use the directories in your app, see File System Programming
Guide .
Files Saved During App Updates
When a user downloads an app update, iTunes installs the update in a new app directory. It then moves the
user’s data files from the old installation over to the new app directory before deleting the old installation.
Files in the following directories are guaranteed to be preserved during the update process:
● /Documents
● /Library
Although files in other user directories may also be moved over, you should not rely on them being present
after an update.
Use Memory Efficiently
Because the iOS virtual memory model does not include disk swap space, apps are more limited in the amount
of memory they have available for use. Using large amounts of memory can seriously degrade system
performance and potentially cause the system to terminate your app. In addition, apps running under
multitasking must share system memory with all other running apps. Therefore, make it a high priority to
reduce the amount of memory used by your app.
There is a direct correlation between the amount of free memory available and the relative performance of
your app. Less free memory means that the system is more likely to have trouble fulfilling future memory
requests. If that happens, the system can always remove suspended apps, code pages, or other nonvolatile
resources from memory. However, removing those apps and resources from memory may be only a temporary
fix, especially if they are needed again a short time later. Instead, minimize your memory use in the first place,
and clean up the memory you do use in a timely manner.
The following sections provide more guidance on how to use memory efficiently and how to respond when
there is only a small amount of available memory.
Observe Low-Memory Warnings
When the system dispatches a low-memory warning to your app, respond immediately. iOS notifies all running
apps whenever the amount of free memory dips below a safe threshold. (It does not notify suspended apps.)
If your app receives this warning, it must free up as much memory as possible. The best way to do this is to
remove strong references to caches, image objects, and other data objects that can be recreated later.
UIKit provides several ways to receive low-memory warnings, including the following:
● Implement the applicationDidReceiveMemoryWarning: method of your app delegate.
● Override the didReceiveMemoryWarning method in your custom UIViewController subclass.
● Register to receive the UIApplicationDidReceiveMemoryWarningNotification notification.
Upon receiving any of these warnings, your handler method should respond by immediately freeing up any
unneeded memory. For example, the default behavior of the UIViewController class is to purge its view if
that view is not currently visible; subclasses can supplement the default behavior by purging additional data
structures. An app that maintains a cache of images might respond by releasing any images that are not
currently onscreen.
If your data model includes known purgeable resources, you can have a corresponding manager object register
for the UIApplicationDidReceiveMemoryWarningNotification notification and remove strong references
to its purgeable resources directly. Handling this notification directly avoids the need to route all memory
warning calls through the app delegate.
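A minimal sketch of that pattern follows; the ImageCacheManager class and its purge behavior are hypothetical:

```objc
// A hypothetical cache manager that observes low-memory warnings directly,
// avoiding the need to route the warning through the app delegate.
@interface ImageCacheManager : NSObject
@property (nonatomic, strong) NSMutableDictionary *cachedImages;
@end

@implementation ImageCacheManager
- (id)init {
    self = [super init];
    if (self) {
        _cachedImages = [NSMutableDictionary dictionary];
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(purgeCache:)
                   name:UIApplicationDidReceiveMemoryWarningNotification
                 object:nil];
    }
    return self;
}

- (void)purgeCache:(NSNotification *)notification {
    [self.cachedImages removeAllObjects];  // drop images that can be recreated later
}
@end
```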
Note: You can test your app’s behavior under low-memory conditions using the Simulate Memory
Warning command in iOS Simulator.
Reduce Your App’s Memory Footprint
Starting off with a low footprint gives you more room for expanding your app later. Table 7-1 lists some tips
on how to reduce your app’s overall memory footprint.
Table 7-1 Tips for reducing your app’s memory footprint
Tip: Eliminate memory leaks.
Actions to take: Because memory is a critical resource in iOS, your app should have no memory
leaks. You can use the Instruments app to track down leaks in your code, both in Simulator and
on actual devices. For more information on using Instruments, see Instruments User Guide.

Tip: Make resource files as small as possible.
Actions to take: Files reside on disk but must be loaded into memory before they can be used.
Property list files and images can be made smaller with some very simple actions. To reduce the
space used by property list files, write those files out in a binary format using the
NSPropertyListSerialization class. For images, compress all image files to make them as small
as possible. (To compress PNG images, the preferred image format for iOS apps, use the
pngcrush tool.)

Tip: Use Core Data or SQLite for large data sets.
Actions to take: If your app manipulates large amounts of structured data, store it in a Core
Data persistent store or in a SQLite database instead of in a flat file. Both Core Data and
SQLite provide efficient ways to manage large data sets without requiring the entire set to be
in memory all at once. (The Core Data framework was introduced in iOS 3.0.)

Tip: Load resources lazily.
Actions to take: You should never load a resource file until it is actually needed. Prefetching
resource files may seem like a way to save time, but this practice actually slows down your app
right away. In addition, if you end up not using the resource, loading it wastes memory for no
good purpose.

Tip: Build your program using the Thumb option.
Actions to take: Adding the -mthumb compiler flag can reduce the size of your code by up to 35%.
However, if your app contains floating-point-intensive code modules and you are building your
app for ARMv6, you should disable the Thumb option. If you are building your code for ARMv7,
you should leave Thumb enabled.
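The property-list tip above might be applied like this; the sample dictionary and file path are placeholders for your app's own data:

```objc
// Write a property list in binary format to reduce its on-disk footprint.
NSDictionary *settings = @{ @"theme" : @"dark", @"volume" : @(0.8) };  // sample data
NSString *plistPath = @"...";  // placeholder: a path inside your app's sandbox
NSString *errorDesc = nil;
NSData *plistData = [NSPropertyListSerialization
    dataFromPropertyList:settings
                  format:NSPropertyListBinaryFormat_v1_0
        errorDescription:&errorDesc];
if (plistData) {
    [plistData writeToFile:plistPath atomically:YES];
} else {
    NSLog(@"Serialization failed: %@", errorDesc);
}
```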
Allocate Memory Wisely
Table 7-2 lists tips for improving memory usage in your app.
Table 7-2 Tips for allocating memory
Tip: Reduce your use of autoreleased objects.
Actions to take: With automatic reference counting (ARC), it is better to alloc/init objects and
let the compiler release them for you at the appropriate time. This is true even for temporary
objects that in the past you might have autoreleased to prevent them from living past the scope
of the current method.

Tip: Impose size limits on resources.
Actions to take: Avoid loading a large resource file when a smaller one will do. Instead of using
a high-resolution image, use one that is appropriately sized for iOS-based devices. If you must
use large resource files, find ways to load only the portion of the file that you need at any
given time. For example, rather than load the entire file into memory, use the mmap and munmap
functions to map portions of the file into and out of memory. For more information about mapping
files into memory, see File-System Performance Guidelines.

Tip: Avoid unbounded problem sets.
Actions to take: Unbounded problem sets might require an arbitrarily large amount of data to
compute. If the set requires more memory than is available, your app may be unable to complete
the calculations. Your app should avoid such sets whenever possible and work on problems with
known memory limits.
For detailed information about ARC and memory management, see Transitioning to ARC Release Notes.
Move Work off the Main Thread
Be sure to limit the type of work you do on the main thread of your app. The main thread is where your app
handles touch events and other user input. To ensure that your app is always responsive to the user, you should
never use the main thread to perform long-running or potentially unbounded tasks, such as tasks that access
the network. Instead, you should always move those tasks onto background threads. The preferred way to do
so is to use Grand Central Dispatch (GCD) or operation objects to perform tasks asynchronously.
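Using GCD, that pattern might be sketched as follows; the fetchDataFromNetwork and updateUIWithData: methods are hypothetical:

```objc
// Perform long-running work on a background queue, then hop back
// to the main queue to update the user interface.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *result = [self fetchDataFromNetwork];  // potentially unbounded task
    dispatch_async(dispatch_get_main_queue(), ^{
        [self updateUIWithData:result];  // UI work stays on the main thread
    });
});
```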
Moving tasks into the background leaves your main thread free to continue processing user input, which is
especially important when your app is starting up or quitting. During these times, your app is expected to
respond to events in a timely manner. If your app’s main thread is blocked at launch time, the system could
kill the app before it even finishes launching. If the main thread is blocked at quitting time, the system could
similarly kill the app before it has a chance to write out crucial user data.
For more information about using GCD, operation objects, and threads, see Concurrency Programming Guide .
Floating-Point Math Considerations
The processors found in iOS-based devices are capable of performing floating-point calculations in hardware.
If you have an existing program that performs calculations using a software-based fixed-point math library,
you should consider modifying your code to use floating-point math instead. Hardware-based floating-point
computations are typically much faster than their software-based fixed-point equivalents.
Important: If you build your app for ARMv6 and your code uses floating-point math extensively, compile
that code without the -mthumb compiler option. The Thumb option can reduce the size of code modules,
but it can also degrade the performance of floating-point code. If you build your app for ARMv7, you should
always enable the Thumb option.
In iOS 4 and later, you can also use the functions of the Accelerate framework to perform complex mathematical
calculations. This framework contains high-performance vector-accelerated libraries for digital signal processing
and linear algebra mathematics. You can apply these libraries to problems involving audio and video processing,
physics, statistics, cryptography, and complex algebraic equations.
Reduce Power Consumption
Power consumption on mobile devices is always an issue. The power management system in iOS conserves
power by shutting down any hardware features that are not currently being used. You can help improve battery
life by optimizing your use of the following features:
● The CPU
● Wi-Fi, Bluetooth, and baseband (EDGE, 3G) radios
● The Core Location framework
● The accelerometers
● The disk
The goal of your optimizations should be to do the most work you can in the most efficient way possible. You
should always optimize your app’s algorithms using Instruments. But even the most optimized algorithm can
still have a negative impact on a device’s battery life. You should therefore consider the following guidelines
when writing your code:
● Avoid doing work that requires polling. Polling prevents the CPU from going to sleep. Instead of polling,
use the NSRunLoop or NSTimer classes to schedule work as needed.
● Leave the idleTimerDisabled property of the shared UIApplication object set to NO whenever
possible. The idle timer turns off the device’s screen after a specified period of inactivity. If your app does
not need the screen to stay on, let the system turn it off. If your app experiences side effects as a result of
the screen being turned off, you should modify your code to eliminate the side effects rather than disable
the idle timer unnecessarily.
● Coalesce work whenever possible to maximize idle time. It generally takes less power to perform a set of
calculations all at once than it does to perform them in small chunks over an extended period of time.
Doing small bits of work periodically requires waking up the CPU more often and getting it into a state
where it can perform your tasks.
● Avoid accessing the disk too frequently. For example, if your app saves state information to the disk, do
so only when that state information changes, and coalesce changes whenever possible to avoid writing
small changes at frequent intervals.
● Do not draw to the screen faster than is needed. Drawing is an expensive operation when it comes to
power. Do not rely on the hardware to throttle your frame rates. Draw only as many frames as your app
actually needs.
● If you use the UIAccelerometer class to receive regular accelerometer events, disable the delivery of
those events when you do not need them. Similarly, set the frequency of event delivery to the smallest
value that is suitable for your needs. For more information, see Event Handling Guide for iOS .
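The first guideline above, scheduling work instead of polling, can be sketched with a timer; the interval and the checkForUpdates: method are hypothetical:

```objc
// Schedule periodic work instead of polling in a loop;
// the CPU can sleep between timer firings.
NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:60.0
                                                  target:self
                                                selector:@selector(checkForUpdates:)
                                                userInfo:nil
                                                 repeats:YES];
```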
The more data you transmit to the network, the more power must be used to run the radios. In fact, accessing
the network is the most power-intensive operation you can perform. You can minimize that time by following
these guidelines:
● Connect to external network servers only when needed, and do not poll those servers.
● When you must connect to the network, transmit the smallest amount of data needed to do the job. Use
compact data formats, and do not include excess content that will simply be ignored.
● Transmit data in bursts rather than spreading out transmission packets over time. The system turns off
the Wi-Fi and cell radios when it detects a lack of activity. When it transmits data over a longer period of
time, your app uses much more power than when it transmits the same amount of data in a shorter amount
of time.
● Connect to the network using the Wi-Fi radios whenever possible. Wi-Fi uses less power and is preferred
over cellular radios.
● If you use the Core Location framework to gather location data, disable location updates as soon as you
can and set the distance filter and accuracy levels to appropriate values. Core Location uses the available
GPS, cell, and Wi-Fi networks to determine the user’s location. Although Core Location works hard to
minimize the use of these radios, setting the accuracy and filter values gives Core Location the option to
turn off hardware altogether in situations where it is not needed. For more information, see Location
Awareness Programming Guide .
The Instruments app includes several instruments for gathering power-related information. You can use these
instruments to gather general information about power consumption and to gather specific measurements
for hardware such as the Wi-Fi and Bluetooth radios, GPS receiver, display, and CPU. For more information
about using these instruments, see Instruments User Guide .
Tune Your Code
Several tools are available for tuning the performance of your app. Most of these tools run on Mac OS X and
are suitable for tuning some aspects of your code while it runs in iOS Simulator. For example, you can use
Simulator to eliminate memory leaks and make sure your overall memory usage is as low as possible. You can
also remove any computational hotspots in your code that might be caused by an inefficient algorithm or a
previously unknown bottleneck.
After you have tuned your code in Simulator, you should then use the Instruments app to further tune your
code on a device. Running your code on an actual device is the only way to tune your code fully. Because
Simulator runs in Mac OS X, it has the advantage of a faster CPU and more usable memory, so its performance
is generally much better than the performance on an actual device. And using Instruments to trace your code
on an actual device may point out additional performance bottlenecks that need tuning.
For more information on using Instruments, see Instruments User Guide .
Improve File Access Times
Minimize the amount of data you write to the disk. File operations are relatively slow and involve writing to
the flash drive, which has a limited lifespan. Some specific tips to help you minimize file-related operations
include:
● Write only the portions of the file that changed, and aggregate changes when you can. Avoid writing out
the entire file just to change a few bytes.
● When defining your file format, group frequently modified content together to minimize the overall
number of blocks that need to be written to disk each time.
● If your data consists of structured content that is randomly accessed, store it in a Core Data persistent
store or a SQLite database, especially if the amount of data you are manipulating could grow to more than
a few megabytes.
Avoid writing cache files to disk. The only exception to this rule is when your app quits and you need to write
state information that can be used to put your app back into the same state when it is next launched.
Tune Your Networking Code
The networking stack in iOS includes several interfaces for communicating over the radio hardware of iOS
devices. The main programming interface is the CFNetwork framework, which builds on top of BSD sockets and
opaque types in the Core Foundation framework to communicate with network entities. You can also use the
NSStream classes in the Foundation framework and the low-level BSD sockets found in the Core OS layer of
the system.
For information about how to use the CFNetwork framework for network communication, see CFNetwork
Programming Guide and CFNetwork Framework Reference . For information about using the NSStream class,
see Foundation Framework Reference .
Tips for Efficient Networking
Implementing code to receive or transmit data across the network is one of the most power-intensive operations
on a device. Minimizing the amount of time spent transmitting or receiving data helps improve battery life.
To that end, you should consider the following tips when writing your network-related code:
● For protocols you control, define your data formats to be as compact as possible.
● Avoid using chatty protocols.
● Transmit data packets in bursts whenever you can.
Cellular and Wi-Fi radios are designed to power down when there is no activity. Depending on the radio,
though, doing so can take several seconds. If your app transmits small bursts of data every few seconds, the
radios may stay powered up and continue to consume power, even when they are not actually doing anything.
Rather than transmit small amounts of data more often, it is better to transmit a larger amount of data once
or at relatively large intervals.
When communicating over the network, packets can be lost at any time. Therefore, when writing your
networking code, you should be sure to make it as robust as possible when it comes to failure handling. It is
perfectly reasonable to implement handlers that respond to changes in network conditions, but do not be
surprised if those handlers are not called consistently. For example, the Bonjour networking callbacks may not
always be called immediately in response to the disappearance of a network service. The Bonjour system
service immediately invokes browsing callbacks when it receives a notification that a service is going away,
but network services can disappear without notification. This situation might occur if the device providing the
network service unexpectedly loses network connectivity or the notification is lost in transit.
Using Wi-Fi
If your app accesses the network using the Wi-Fi radios, you must notify the system of that fact by including
the UIRequiresPersistentWiFi key in the app’s Info.plist file. The inclusion of this key lets the system
know that it should display the network selection dialog if it detects any active Wi-Fi hot spots. It also lets the
system know that it should not attempt to shut down the Wi-Fi hardware while your app is running.
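In the app's Info.plist, the key is a Boolean; this fragment shows only the relevant entry within the top-level dictionary:

```xml
<!-- Tell the system this app needs the Wi-Fi radios to stay powered. -->
<key>UIRequiresPersistentWiFi</key>
<true/>
```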
To prevent the Wi-Fi hardware from using too much power, iOS has a built-in timer that turns off the hardware
completely after 30 minutes if no running app has requested its use through the UIRequiresPersistentWiFi
key. If the user launches an app that includes the key, iOS effectively disables the timer for the duration of the
app’s life cycle. As soon as that app quits or is suspended, however, the system reenables the timer.
Note: Even when UIRequiresPersistentWiFi has a value of true, it has no effect
when the device is idle (that is, screen-locked). The app is considered inactive, and although it may
function on some levels, it has no Wi-Fi connection.
For more information on the UIRequiresPersistentWiFi key and the keys of the Info.plist file, see
Figure 6-1 (page 120).
The Airplane Mode Alert
If your app launches while the device is in airplane mode, the system may display an alert to notify the user
of that fact. The system displays this alert only when all of the following conditions are met:
● Your app’s information property list (Info.plist) file contains the UIRequiresPersistentWiFi key
and the value of that key is set to true.
● Your app launches while the device is currently in airplane mode.
● Wi-Fi on the device has not been manually reenabled after the switch to airplane mode.
The iOS Environment

The iOS environment affects several aspects of how you design your app. Understanding some key aspects
should help you when writing your code.
Specialized System Behaviors
The iOS system is based on the same technologies used by Mac OS X, namely the Mach kernel and BSD
interfaces. Thus, iOS apps run in a UNIX-based system and have full support for threads, sockets, and many of
the other technologies typically available at that level. However, there are places where the behavior of iOS
differs from that of Mac OS X.
The Virtual Memory System
To manage program memory, iOS uses essentially the same virtual memory system found in Mac OS X. In iOS,
each program still has its own virtual address space, but unlike Mac OS X, the amount of usable virtual memory
is constrained by the amount of physical memory available. This is because iOS does not support paging to
disk when memory gets full. Instead, the virtual memory system simply releases read-only memory pages, such
as code pages, when it needs more space. Such pages can always be loaded back into memory later if they
are needed again.
If memory continues to be constrained, the system may send low-memory notifications to any running apps,
asking them to free up additional memory. All apps should respond to this notification and do their part to
help relieve the memory pressure. For information on how to handle such notifications in your app, see “Observe
Low-Memory Warnings” (page 129).
The Automatic Sleep Timer
One way iOS saves battery power is through the automatic sleep timer. When the system does not detect
touch events for an extended period of time, it dims the screen initially and eventually turns it off altogether.
If you are creating an app that does not use touch inputs, such as a game that relies on the accelerometers,
you can disable the automatic sleep timer to prevent the screen from dimming. You should use this timer
sparingly and reenable it as soon as possible to conserve power. Only apps that display visual content and do
not rely on touch inputs should ever disable the timer. Audio apps or apps that do not need to present visual
content should not disable the timer.
2012-09-19 | © 2012 Apple Inc. All Rights Reserved.
The process for disabling the timer is described in “Turning Off Screen Locking” (page 126). For additional tips
on how to save power in your app, see “Reduce Power Consumption” (page 132).
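As a hedged sketch, disabling and reenabling the timer comes down to one UIKit property (`idleTimerDisabled` on UIApplication); the surrounding game-mode logic is assumed for illustration:

```objc
// Sketch: keep the screen awake while accelerometer-driven gameplay
// is active, then restore the sleep timer as soon as that mode ends.
[UIApplication sharedApplication].idleTimerDisabled = YES;

// ... accelerometer-based interaction runs here ...

[UIApplication sharedApplication].idleTimerDisabled = NO;  // conserve power again
```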
Multitasking Support
In iOS 4 and later, multitasking allows apps to run in the background even when they are not visible on the
screen. Most background apps reside in memory but do not actually execute any code. These apps are suspended
by the system shortly after entering the background to preserve battery life. Apps can ask the system for
background execution time in a number of ways, though.
For an overview of multitasking and what you need to do to support it, see “Background Execution and
Multitasking” (page 54).
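One of those ways is the finite-length task API on UIApplication. The following is a sketch of the canonical begin/end pattern, not a complete implementation; the work itself is elided:

```objc
// Sketch: requesting extra background execution time for a finite task.
UIApplication *app = [UIApplication sharedApplication];
__block UIBackgroundTaskIdentifier taskID;
taskID = [app beginBackgroundTaskWithExpirationHandler:^{
    // Called if the time grant expires before the task completes.
    [app endBackgroundTask:taskID];
    taskID = UIBackgroundTaskInvalid;
}];

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // ... perform the finite-length work here ...

    // Always balance the begin call when the work is done.
    [app endBackgroundTask:taskID];
    taskID = UIBackgroundTaskInvalid;
});
```

Every `beginBackgroundTaskWithExpirationHandler:` call must be balanced by `endBackgroundTask:`, including inside the expiration handler.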
Security
The security infrastructure in iOS is there to protect your app’s data and the system as a whole. Security breaches
can and will happen, so the first line of defense in iOS is to minimize the damage caused by such breaches by
securing each app separately in its own sandbox. But iOS provides other technologies, such as encryption and
certificate support, to help you protect your data at an even more fundamental level.
For an introduction to security and how it impacts the design of your app, see Security Overview.
The App Sandbox
For security reasons, iOS places each app (including its preferences and data) in a sandbox at install time. A
sandbox is a set of fine-grained controls that limit the app’s access to files, preferences, network resources,
hardware, and so on. As part of the sandboxing process, the system installs each app in its own sandbox
directory, which acts as the home for the app and its data.
To help apps organize their data, each sandbox directory contains several well-known subdirectories for placing
files. Figure A-1 shows the basic layout of a sandbox directory. For detailed information about the sandbox
directory and what belongs in each of its subdirectories, see File System Programming Guide.
Figure A-1 Sandbox directories in iOS
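As a brief sketch, the standard Foundation functions return the well-known sandbox subdirectories at runtime, so paths never need to be hard-coded:

```objc
// Sketch: locating well-known directories inside the app's sandbox.
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                     NSUserDomainMask, YES);
NSString *documentsDir = [paths objectAtIndex:0];  // <sandbox>/Documents
NSString *tmpDir = NSTemporaryDirectory();         // <sandbox>/tmp

NSLog(@"Documents: %@ tmp: %@", documentsDir, tmpDir);
```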
Important: The purpose of a sandbox is to limit the damage that a compromised app can cause to the
system. Sandboxes do not prevent attacks from happening to a particular app, and it is still your responsibility
to code defensively to prevent attacks. For example, if your app does not validate user input and there is
an exploitable buffer overflow in your input-handling code, an attacker could still hijack your app or cause
it to crash. The sandbox only prevents the hijacked app from affecting other apps and other parts of the
system.
Keychain Data
A keychain is a secure, encrypted container for passwords and other secrets. The keychain is intended for
storing small amounts of sensitive data that are specific to your app. It is not intended as a general-purpose
mechanism for encrypting and storing data.
Keychain data for an app is stored outside of the app’s sandbox. When the user backs up app data using iTunes,
the keychain data is also backed up. Before iOS 4.0, keychain data could only be restored to the device from
which the backup was made. In iOS 4.0 and later, a keychain item that is password protected can be restored
to a different device only if its accessibility is not set to kSecAttrAccessibleAlwaysThisDeviceOnly or
any other value that restricts it to the current device. Upgrading an app does not affect that app’s keychain
data.
For more on the iOS keychain, see “Keychain Services Concepts” in Keychain Services Programming Guide.
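Storing an item uses the Security framework’s C API. The sketch below shows one common shape for adding a generic-password item; the service and account strings are placeholders, and the `(id)` casts assume manual reference counting (under ARC, use `__bridge`):

```objc
// Sketch: storing a small secret with Keychain Services.
NSData *secret = [@"s3cret" dataUsingEncoding:NSUTF8StringEncoding];
NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
    (id)kSecClassGenericPassword,       (id)kSecClass,
    @"com.example.myapp",               (id)kSecAttrService,   // placeholder
    @"username",                        (id)kSecAttrAccount,   // placeholder
    secret,                             (id)kSecValueData,
    (id)kSecAttrAccessibleWhenUnlocked, (id)kSecAttrAccessible,
    nil];

OSStatus status = SecItemAdd((CFDictionaryRef)attributes, NULL);
if (status != errSecSuccess) {
    // Handle the error (for example, errSecDuplicateItem).
}
```

The `kSecAttrAccessible` value chosen here also governs the restore behavior described above; the `...ThisDeviceOnly` variants prevent migration to another device.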
Document Revision History

This table describes the changes to iOS App Programming Guide.

Date        Notes
2012-09-19  Contains information about new features in iOS 6.
2012-03-07  Added information about the NSURL and CFURL keys used to prevent a file from being backed up.
2012-01-09  Updated the section that describes the behavior of apps in the background.
2011-10-12  Added information about features introduced in iOS 5.0. Reorganized book and added more design-level information. Added high-level information about iCloud and how it impacts the design of applications.
2011-02-24  Added information about using AirPlay in the background.
2010-12-13  Made minor editorial changes.
2010-11-15  Incorporated additional iPad-related design guidelines into this document.
2010-08-20  Updated the information about how keychain data is preserved and restored. Fixed several typographical errors and updated the code sample on initiating background tasks.
2010-06-30  Updated the guidance related to specifying application icons and launch images. Changed the title from iPhone Application Programming Guide.
2010-06-14  Reorganized the book so that it focuses on the design of the core parts of your application. Added information about how to support multitasking in iOS 4 and later; for more information, see “Core App Objects” (page 17). Updated the section describing how to determine what hardware is available. Added information about how to support devices with high-resolution screens. Incorporated iPad-related information.
2010-02-24  Made minor corrections.
2010-01-20  Updated the “Multimedia Support” chapter with improved descriptions of audio formats and codecs.
2009-10-19  Moved the iPhone-specific Info.plist keys to Information Property List Key Reference. Updated the “Multimedia Support” chapter for iOS 3.1.
2009-06-17  Added information about using the compass interfaces. Moved information about OpenGL support to OpenGL ES Programming Guide for iOS. Updated the list of supported Info.plist keys.
2009-03-12  Updated for iOS 3.0. Added code examples to “Copy and Paste Operations” in the Event Handling chapter. Added a section on keychain data to the Files and Networking chapter. Added information about how to display map and email interfaces. Made various small corrections.
2009-01-06  Fixed several typos and clarified the creation process for child pages in the Settings application.
2008-11-12  Added guidance about floating-point math considerations. Updated information related to what is backed up by iTunes.
2008-10-15  Reorganized the contents of the book. Moved the high-level iOS information to iOS Technology Overview. Moved information about the standard system URL schemes to Apple URL Scheme Reference. Moved information about the development tools and how to configure devices to Tools Workflow Guide for iOS. Created the Core Application chapter, which now introduces the application architecture and covers much of the guidance for creating iPhone applications. Added a Text and Web chapter to cover the use of text and web classes and the manipulation of the onscreen keyboard. Created a separate chapter for Files and Networking and moved existing information into it. Changed the title from iPhone OS Programming Guide.
2008-07-08  New document that describes iOS and the development process for iPhone applications.
Apple Inc.
© 2012 Apple Inc.
All rights reserved.
No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any
form or by any means, mechanical, electronic,
photocopying, recording, or otherwise, without
prior written permission of Apple Inc., with the
following exceptions: Any person is hereby
authorized to store documentation on a single
computer for personal use only and to print
copies of documentation for personal use
provided that the documentation contains
Apple’s copyright notice.
No licenses, express or implied, are granted with
respect to any of the technology described in this
document. Apple retains all intellectual property
rights associated with the technology described
in this document. This document is intended to
assist application developers to develop
applications only for Apple-labeled computers.
Apple Inc.
1 Infinite Loop
Cupertino, CA 95014
408-996-1010
Apple, the Apple logo, AirPlay, Bonjour, Cocoa,
Instruments, iPad, iPhone, iPod, iPod touch,
iTunes, Keychain, Mac, Mac OS, Macintosh,
Numbers, Objective-C, OS X, Sand, Spotlight, and
Xcode are trademarks of Apple Inc., registered in
the U.S. and other countries.
Retina is a trademark of Apple Inc.
iCloud is a service mark of Apple Inc., registered
in the U.S. and other countries.
App Store is a service mark of Apple Inc.
Intel and Intel Core are registered trademarks of
Intel Corporation or its subsidiaries in the United
States and other countries.
OpenGL is a registered trademark of Silicon
Graphics, Inc.
Times is a registered trademark of Heidelberger
Druckmaschinen AG, available from Linotype
Library GmbH.
UNIX is a registered trademark of The Open
Group.
iOS is a trademark or registered trademark of
Cisco in the U.S. and other countries and is used
under license.
Even though Apple has reviewed this document,
APPLE MAKES NO WARRANTY OR REPRESENTATION,
EITHER EXPRESS OR IMPLIED, WITH RESPECT TO THIS
DOCUMENT, ITS QUALITY, ACCURACY,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR
PURPOSE. AS A RESULT, THIS DOCUMENT IS PROVIDED
“AS IS,” AND YOU, THE READER, ARE ASSUMING THE
ENTIRE RISK AS TO ITS QUALITY AND ACCURACY.
IN NO EVENT WILL APPLE BE LIABLE FOR DIRECT,
INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES RESULTING FROM ANY DEFECT OR
INACCURACY IN THIS DOCUMENT, even if advised of
the possibility of such damages.
THE WARRANTY AND REMEDIES SET FORTH ABOVE
ARE EXCLUSIVE AND IN LIEU OF ALL OTHERS, ORAL
OR WRITTEN, EXPRESS OR IMPLIED. No Apple dealer,
agent, or employee is authorized to make any
modification, extension, or addition to this warranty.
Some states do not allow the exclusion or limitation
of implied warranties or liability for incidental or
consequential damages, so the above limitation or
exclusion may not apply to you. This warranty gives
you specific legal rights, and you may also have other
rights which vary from state to state.
Concurrency
Programming Guide

Contents
Introduction 7
Organization of This Document 7
A Note About Terminology 8
See Also 8
Concurrency and Application Design 9
The Move Away from Threads 10
Dispatch Queues 10
Dispatch Sources 11
Operation Queues 12
Asynchronous Design Techniques 12
Define Your Application’s Expected Behavior 13
Factor Out Executable Units of Work 13
Identify the Queues You Need 14
Tips for Improving Efficiency 14
Performance Implications 15
Concurrency and Other Technologies 15
OpenCL and Concurrency 15
When to Use Threads 16
Operation Queues 17
About Operation Objects 17
Concurrent Versus Non-concurrent Operations 18
Creating an NSInvocationOperation Object 19
Creating an NSBlockOperation Object 20
Defining a Custom Operation Object 21
Performing the Main Task 21
Responding to Cancellation Events 22
Configuring Operations for Concurrent Execution 24
Maintaining KVO Compliance 27
Customizing the Execution Behavior of an Operation Object 28
Configuring Interoperation Dependencies 29
Changing an Operation’s Execution Priority 29
Changing the Underlying Thread Priority 30
2012-07-17 | © 2012 Apple Inc. All Rights Reserved.
Setting Up a Completion Block 30
Tips for Implementing Operation Objects 31
Managing Memory in Operation Objects 31
Handling Errors and Exceptions 32
Determining an Appropriate Scope for Operation Objects 32
Executing Operations 33
Adding Operations to an Operation Queue 33
Executing Operations Manually 34
Canceling Operations 36
Waiting for Operations to Finish 36
Suspending and Resuming Queues 37
Dispatch Queues 38
About Dispatch Queues 38
Queue-Related Technologies 41
Implementing Tasks Using Blocks 41
Creating and Managing Dispatch Queues 43
Getting the Global Concurrent Dispatch Queues 43
Creating Serial Dispatch Queues 44
Getting Common Queues at Runtime 45
Memory Management for Dispatch Queues 45
Storing Custom Context Information with a Queue 46
Providing a Clean Up Function For a Queue 46
Adding Tasks to a Queue 47
Adding a Single Task to a Queue 48
Performing a Completion Block When a Task Is Done 49
Performing Loop Iterations Concurrently 50
Performing Tasks on the Main Thread 51
Using Objective-C Objects in Your Tasks 51
Suspending and Resuming Queues 52
Using Dispatch Semaphores to Regulate the Use of Finite Resources 52
Waiting on Groups of Queued Tasks 53
Dispatch Queues and Thread Safety 54
Dispatch Sources 56
About Dispatch Sources 56
Creating Dispatch Sources 57
Writing and Installing an Event Handler 58
Installing a Cancellation Handler 60
Changing the Target Queue 61
Associating Custom Data with a Dispatch Source 61
Memory Management for Dispatch Sources 62
Dispatch Source Examples 62
Creating a Timer 62
Reading Data from a Descriptor 64
Writing Data to a Descriptor 66
Monitoring a File-System Object 68
Monitoring Signals 70
Monitoring a Process 71
Canceling a Dispatch Source 72
Suspending and Resuming Dispatch Sources 73
Migrating Away from Threads 74
Replacing Threads with Dispatch Queues 74
Eliminating Lock-Based Code 76
Implementing an Asynchronous Lock 76
Executing Critical Sections Synchronously 77
Improving on Loop Code 77
Replacing Thread Joins 79
Changing Producer-Consumer Implementations 80
Replacing Semaphore Code 81
Replacing Run-Loop Code 81
Compatibility with POSIX Threads 82
Glossary 84
Document Revision History 87
Tables and Listings
Operation Queues 17
Table 2-1 Operation classes of the Foundation framework 17
Table 2-2 Methods to override for concurrent operations 24
Listing 2-1 Creating an NSInvocationOperation object 19
Listing 2-2 Creating an NSBlockOperation object 20
Listing 2-3 Defining a simple operation object 22
Listing 2-4 Responding to a cancellation request 23
Listing 2-5 Defining a concurrent operation 25
Listing 2-6 The start method 26
Listing 2-7 Updating an operation at completion time 27
Listing 2-8 Executing an operation object manually 35
Dispatch Queues 38
Table 3-1 Types of dispatch queues 39
Table 3-2 Technologies that use dispatch queues 41
Listing 3-1 A simple block example 42
Listing 3-2 Creating a new serial queue 45
Listing 3-3 Installing a queue clean up function 46
Listing 3-4 Executing a completion callback after a task 49
Listing 3-5 Performing the iterations of a for loop concurrently 51
Listing 3-6 Waiting on asynchronous tasks 54
Dispatch Sources 56
Table 4-1 Getting data from a dispatch source 59
Listing 4-1 Creating a timer dispatch source 63
Listing 4-2 Reading data from a file 65
Listing 4-3 Writing data to a file 67
Listing 4-4 Watching for filename changes 68
Listing 4-5 Installing a block to monitor signals 70
Listing 4-6 Monitoring the death of a parent process 71
Migrating Away from Threads 74
Listing 5-1 Modifying protected resources asynchronously 77
Listing 5-2 Running critical sections synchronously 77
Listing 5-3 Replacing a for loop without striding 78
Listing 5-4 Adding a stride to a dispatched for loop 78
Introduction

Concurrency is the notion of multiple things happening at the same time. With the proliferation of multicore
CPUs and the realization that the number of cores in each processor will only increase, software developers
need new ways to take advantage of them. Although operating systems like OS X and iOS are capable of
running multiple programs in parallel, most of those programs run in the background and perform tasks that
require little continuous processor time. It is the current foreground application that both captures the user’s
attention and keeps the computer busy. If an application has a lot of work to do but keeps only a fraction of
the available cores occupied, those extra processing resources are wasted.
In the past, introducing concurrency to an application required the creation of one or more additional threads.
Unfortunately, writing threaded code is challenging. Threads are a low-level tool that must be managed
manually. Given that the optimal number of threads for an application can change dynamically based on the
current system load and the underlying hardware, implementing a correct threading solution becomes extremely
difficult, if not impossible to achieve. In addition, the synchronization mechanisms typically used with threads
add complexity and risk to software designs without any guarantees of improved performance.
Both OS X and iOS adopt a more asynchronous approach to the execution of concurrent tasks than is traditionally
found in thread-based systems and applications. Rather than creating threads directly, applications need only
define specific tasks and then let the system perform them. By letting the system manage the threads,
applications gain a level of scalability not possible with raw threads. Application developers also gain a simpler
and more efficient programming model.
This document describes the techniques and technologies you should use to implement concurrency in
your applications. The technologies described in this document are available in both OS X and iOS.
Organization of This Document
This document contains the following chapters:
● “Concurrency and Application Design” (page 9) introduces the basics of asynchronous application design
and the technologies for performing your custom tasks asynchronously.
● “Operation Queues” (page 17) shows you how to encapsulate and perform tasks using Objective-C objects.
● “Dispatch Queues” (page 38) shows you how to execute tasks concurrently in C-based applications.
● “Dispatch Sources” (page 56) shows you how to handle system events asynchronously.
● “Migrating Away from Threads” (page 74) provides tips and techniques for migrating your existing
thread-based code over to use newer technologies.
This document also includes a glossary that defines relevant terms.
A Note About Terminology
Before entering into a discussion about concurrency, it is necessary to define some relevant terminology to
prevent confusion. Developers who are more familiar with UNIX systems or older OS X technologies may find
the terms “task”, “process”, and “thread” used somewhat differently in this document. This document uses
these terms in the following way:
● The term thread is used to refer to a separate path of execution for code. The underlying implementation
for threads in OS X is based on the POSIX threads API.
● The term process is used to refer to a running executable, which can encompass multiple threads.
● The term task is used to refer to the abstract concept of work that needs to be performed.
For complete definitions of these and other key terms used by this document, see “Glossary” (page 84).
See Also
This document focuses on the preferred technologies for implementing concurrency in your applications and
does not cover the use of threads. If you need information about using threads and other thread-related
technologies, see Threading Programming Guide.
Concurrency and Application Design

In the early days of computing, the maximum amount of work per unit of time that a computer could perform
was determined by the clock speed of the CPU. But as technology advanced and processor designs became
more compact, heat and other physical constraints started to limit the maximum clock speeds of processors.
And so, chip manufacturers looked for other ways to increase the total performance of their chips. The solution
they settled on was increasing the number of processor cores on each chip. By increasing the number of cores,
a single chip could execute more instructions per second without increasing the CPU speed or changing the
chip size or thermal characteristics. The only problem was how to take advantage of the extra cores.
In order to take advantage of multiple cores, a computer needs software that can do multiple things
simultaneously. For a modern, multitasking operating system like OS X or iOS, there can be a hundred or more
programs running at any given time, so scheduling each program on a different core should be possible.
However, most of these programs are either system daemons or background applications that consume very
little real processing time. Instead, what is really needed is a way for individual applications to make use of the
extra cores more effectively.
The traditional way for an application to use multiple cores is to create multiple threads. However, as the
number of cores increases, there are problems with threaded solutions. The biggest problem is that threaded
code does not scale very well to arbitrary numbers of cores. You cannot create as many threads as there are
cores and expect a program to run well. What you would need to know is the number of cores that can be
used efficiently, which is a challenging thing for an application to compute on its own. Even if you manage to
get the numbers correct, there is still the challenge of programming for so many threads, of making them run
efficiently, and of keeping them from interfering with one another.
So, to summarize the problem, there needs to be a way for applications to take advantage of a variable number
of computer cores. The amount of work performed by a single application also needs to be able to scale
dynamically to accommodate changing system conditions. And the solution has to be simple enough so as to
not increase the amount of work needed to take advantage of those cores. The good news is that Apple’s
operating systems provide the solution to all of these problems, and this chapter takes a look at the technologies
that comprise this solution and the design tweaks you can make to your code to take advantage of them.
The Move Away from Threads
Although threads have been around for many years and continue to have their uses, they do not solve the
general problem of executing multiple tasks in a scalable way. With threads, the burden of creating a scalable
solution rests squarely on the shoulders of you, the developer. You have to decide how many threads to create
and adjust that number dynamically as system conditions change. Another problem is that your application
assumes most of the costs associated with creating and maintaining any threads it uses.
Instead of relying on threads, OS X and iOS take an asynchronous design approach to solving the concurrency
problem. Asynchronous functions have been present in operating systems for many years and are often used
to initiate tasks that might take a long time, such as reading data from the disk. When called, an asynchronous
function does some work behind the scenes to start a task running but returns before that task might actually
be complete. Typically, this work involves acquiring a background thread, starting the desired task on that
thread, and then sending a notification to the caller (usually through a callback function) when the task is
done. In the past, if an asynchronous function did not exist for what you want to do, you would have to write
your own asynchronous function and create your own threads. But now, OS X and iOS provide technologies
to allow you to perform any task asynchronously without having to manage the threads yourself.
One of the technologies for starting tasks asynchronously is Grand Central Dispatch (GCD). This technology
takes the thread management code you would normally write in your own applications and moves that code
down to the system level. All you have to do is define the tasks you want to execute and add them to an
appropriate dispatch queue. GCD takes care of creating the needed threads and of scheduling your tasks to
run on those threads. Because the thread management is now part of the system, GCD provides a holistic
approach to task management and execution, providing better efficiency than traditional threads.
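The pattern the paragraph describes can be sketched in a few lines of GCD. This assumes the common case of doing background work and then updating the UI on the main queue; the work itself is elided:

```c
// Sketch: dispatch a task to a global concurrent queue, then hop back
// to the main queue to report the result.
dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_async(queue, ^{
    // ... perform work on a system-managed background thread ...

    dispatch_async(dispatch_get_main_queue(), ^{
        // ... update the user interface with the result ...
    });
});
```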
Operation queues are Objective-C objects that act very much like dispatch queues. You define the tasks you
want to execute and then add them to an operation queue, which handles the scheduling and execution of
those tasks. Like GCD, operation queues handle all of the thread management for you, ensuring that tasks are
executed as quickly and as efficiently as possible on the system.
The following sections provide more information about dispatch queues, operation queues, and some other
related asynchronous technologies you can use in your applications.
Dispatch Queues
Dispatch queues are a C-based mechanism for executing custom tasks. A dispatch queue executes tasks either
serially or concurrently but always in a first-in, first-out order. (In other words, a dispatch queue always dequeues
and starts tasks in the same order in which they were added to the queue.) A serial dispatch queue runs only
one task at a time, waiting until that task is complete before dequeuing and starting a new one. By contrast,
a concurrent dispatch queue starts as many tasks as it can without waiting for already started tasks to finish.
Dispatch queues have other benefits:
● They provide a straightforward and simple programming interface.
● They offer automatic and holistic thread pool management.
● They provide the speed of tuned assembly.
● They are much more memory efficient (because thread stacks do not linger in application memory).
● They do not trap to the kernel under load.
● The asynchronous dispatching of tasks to a dispatch queue cannot deadlock the queue.
● They scale gracefully under contention.
● Serial dispatch queues offer a more efficient alternative to locks and other synchronization primitives.
The tasks you submit to a dispatch queue must be encapsulated inside either a function or a block object. Block
objects are a C language feature introduced in OS X v10.6 and iOS 4.0 that are similar to function pointers
conceptually, but have some additional benefits. Instead of defining blocks in their own lexical scope, you
typically define blocks inside another function or method so that they can access other variables from that
function or method. Blocks can also be moved out of their original scope and copied onto the heap, which is
what happens when you submit them to a dispatch queue. All of these semantics make it possible to implement
very dynamic tasks with relatively little code.
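A minimal illustration of the capture semantics described above (plain C with the blocks extension; the names are illustrative):

```c
// Sketch: a block defined inside a function captures local variables.
// When submitted to a dispatch queue, the block is copied to the heap.
int multiplier = 7;

int (^multiply)(int) = ^(int value) {
    return value * multiplier;  // captures the value of multiplier
};

printf("%d\n", multiply(6));    // prints 42
```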
Dispatch queues are part of the Grand Central Dispatch technology and are part of the C runtime. For more
information about using dispatch queues in your applications, see “Dispatch Queues” (page 38). For more
information about blocks and their benefits, see Blocks Programming Topics.
Dispatch Sources
Dispatch sources are a C-based mechanism for processing specific types of system events asynchronously. A
dispatch source encapsulates information about a particular type of system event and submits a specific block
object or function to a dispatch queue whenever that event occurs. You can use dispatch sources to monitor
the following types of system events:
● Timers
● Signal handlers
● Descriptor-related events
● Process-related events
● Mach port events
● Custom events that you trigger
Dispatch sources are part of the Grand Central Dispatch technology. For information about using dispatch
sources to receive events in your application, see “Dispatch Sources” (page 56).
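For a flavor of the API, here is a hedged sketch of the first event type in the list, a timer source; the interval and handler body are illustrative:

```c
// Sketch: a timer dispatch source that fires every 5 seconds on a
// global concurrent queue, with 1 second of leeway.
dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_source_t timer =
    dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);

dispatch_source_set_timer(timer,
                          dispatch_time(DISPATCH_TIME_NOW, 0),
                          5ull * NSEC_PER_SEC,   // interval
                          1ull * NSEC_PER_SEC);  // leeway
dispatch_source_set_event_handler(timer, ^{
    // ... respond to the timer firing ...
});

dispatch_resume(timer);  // sources are created in a suspended state
```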
Operation Queues
An operation queue is the Cocoa equivalent of a concurrent dispatch queue and is implemented by the
NSOperationQueue class. Whereas dispatch queues always execute tasks in first-in, first-out order, operation
queues take other factors into account when determining the execution order of tasks. Primary among these
factors is whether a given task depends on the completion of other tasks. You configure dependencies when
defining your tasks and can use them to create complex execution-order graphs for your tasks.
The tasks you submit to an operation queue must be instances of the NSOperation class. An operation object
is an Objective-C object that encapsulates the work you want to perform and any data needed to perform it.
Because the NSOperation class is essentially an abstract base class, you typically define custom subclasses
to perform your tasks. However, the Foundation framework does include some concrete subclasses that you
can create and use as is to perform tasks.
Operation objects generate key-value observing (KVO) notifications, which can be a useful way of monitoring
the progress of your task. Although operation queues always execute operations concurrently, you can use
dependencies to ensure they are executed serially when needed.
For more information about how to use operation queues, and how to define custom operation objects, see
“Operation Queues” (page 17).
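The dependency mechanism can be sketched with two of the concrete NSOperation subclasses; the download/process split is an assumed example, not from the text:

```objc
// Sketch: two block operations with a dependency, so that "process"
// runs only after "download" finishes, even on a concurrent queue.
NSBlockOperation *download = [NSBlockOperation blockOperationWithBlock:^{
    // ... fetch the data ...
}];
NSBlockOperation *process = [NSBlockOperation blockOperationWithBlock:^{
    // ... work on the fetched data ...
}];
[process addDependency:download];

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue addOperation:download];
[queue addOperation:process];
```

Chaining dependencies this way is also how you force a strictly serial order among operations when you need one.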
Asynchronous Design Techniques
Before you even consider redesigning your code to support concurrency, you should ask yourself whether
doing so is necessary. Concurrency can improve the responsiveness of your code by ensuring that your main
thread is free to respond to user events. It can even improve the efficiency of your code by leveraging more
cores to do more work in the same amount of time. However, it also adds overhead and increases the overall
complexity of your code, making it harder to write and debug your code.
Because it adds complexity, concurrency is not a feature that you can graft onto an application at the end of
your product cycle. Doing it right requires careful consideration of the tasks your application performs and the
data structures used to perform those tasks. Done incorrectly, you might find your code runs slower than before
and is less responsive to the user. Therefore, it is worthwhile to take some time at the beginning of your design
cycle to set some goals and to think about the approach you need to take.
Every application has different requirements and a different set of tasks that it performs. It is impossible for a
document to tell you exactly how to design your application and its associated tasks. However, the following
sections try to provide some guidance to help you make good choices during the design process.
Define Your Application’s Expected Behavior
Before you even think about adding concurrency to your application, you should always start by defining what
you deem to be the correct behavior of your application. Understanding your application’s expected behavior
gives you a way to validate your design later. It should also give you some idea of the expected performance
benefits you might receive by introducing concurrency.
The first thing you should do is enumerate the tasks your application performs and the objects or data structures
associated with each task. Initially, you might want to start with tasks that are performed when the user selects
a menu item or clicks a button. These tasks offer discrete behavior and have a well defined start and end point.
You should also enumerate other types of tasks your application may perform without user interaction, such
as timer-based tasks.
After you have your list of high-level tasks, start breaking each task down further into the set of steps that must
be taken to complete the task successfully. At this level, you should be primarily concerned with the modifications
you need to make to any data structures and objects and how those modifications affect your application’s
overall state. You should also note any dependencies between objects and data structures. For example,
if a task involves making the same change to an array of objects, it is worth noting whether the changes to
one object affect any other objects. If the objects can be modified independently of each other, that might be
a place where you could make those modifications concurrently.
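The array scenario above can be sketched with GCD’s dispatch_apply, which submits a block once per index and returns only after every iteration has finished. The buffer and the doubling step here are hypothetical stand-ins for your own data and per-object modification:

```objectivec
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Hypothetical data: each element can be modified independently,
        // so every iteration is a candidate for concurrent execution.
        double buffer[4] = {1.0, 2.0, 3.0, 4.0};
        double *values = buffer;  // a block can capture a pointer, not a local array

        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        // dispatch_apply runs the block once per index, potentially in
        // parallel, and returns only after all iterations have finished.
        dispatch_apply(4, queue, ^(size_t i) {
            values[i] *= 2.0;  // no element depends on any other
        });

        NSLog(@"first element is now %.1f", values[0]);  // 2.0
    }
    return 0;
}
```

If one element’s change depended on another’s, this per-index dispatch would not be safe and the steps would have to remain serial.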
Factor Out Executable Units of Work
From your understanding of your application’s tasks, you should already be able to identify places where your
code might benefit from concurrency. If changing the order of one or more steps in a task changes the results,
you probably need to continue performing those steps serially. If changing the order has no effect on the
output, though, you should consider performing those steps concurrently. In both cases, you define the
executable unit of work that represents the step or steps to be performed. This unit of work then becomes
what you encapsulate using either a block or an operation object and dispatch to the appropriate queue.
For each executable unit of work you identify, do not worry too much about the amount of work being
performed, at least initially. Although there is always a cost to spinning up a thread, one of the advantages of
dispatch queues and operation queues is that in many cases those costs are much smaller than they are for
traditional threads. Thus, it is possible for you to execute smaller units of work more efficiently using queues
than you could using threads. Of course, you should always measure your actual performance and adjust the
size of your tasks as needed, but initially, no task should be considered too small.
Identify the Queues You Need
Now that your tasks are broken up into distinct units of work and encapsulated using block objects or operation
objects, you need to define the queues you are going to use to execute that code. For a given task, examine
the blocks or operation objects you created and the order in which they must be executed to perform the task
correctly.
If you implemented your tasks using blocks, you can add your blocks to either a serial or concurrent dispatch
queue. If a specific order is required, you would always add your blocks to a serial dispatch queue. If a specific
order is not required, you can add the blocks to a concurrent dispatch queue or add them to several different
dispatch queues, depending on your needs.
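As a minimal sketch (the queue label and the steps are hypothetical), a serial queue preserves submission order while a concurrent queue imposes none:

```objectivec
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Steps whose order matters go on a serial dispatch queue.
        dispatch_queue_t serialQueue =
            dispatch_queue_create("com.example.task", DISPATCH_QUEUE_SERIAL);

        __block int lastStep = 0;
        dispatch_async(serialQueue, ^{ lastStep = 1; });
        dispatch_async(serialQueue, ^{ lastStep = 2; });  // always runs second

        // Draining the serial queue synchronously demonstrates the FIFO
        // guarantee: both earlier blocks have run by the time this returns.
        dispatch_sync(serialQueue, ^{ });
        NSLog(@"lastStep = %d", lastStep);  // 2

        // Steps with no required order can go on a concurrent queue instead.
        dispatch_queue_t concurrentQueue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_async(concurrentQueue, ^{ /* independent step A */ });
        dispatch_async(concurrentQueue, ^{ /* independent step B */ });
    }
    return 0;
}
```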
If you implemented your tasks using operation objects, the choice of queue is often less interesting than the
configuration of your objects. To execute operation objects serially, you must configure dependencies between
the related objects. Dependencies prevent one operation from executing until the objects on which it depends
have finished their work.
Tips for Improving Efficiency
In addition to simply factoring your code into smaller tasks and adding them to a queue, there are other ways
to improve the overall efficiency of your code using queues:
● Consider computing values directly within your task if memory usage is a factor. If your application is
already memory bound, computing values directly now may be faster than loading cached values from
main memory. Computing values directly uses the registers and caches of the given processor core, which
are much faster than main memory. Of course, you should only do this if testing indicates this is a
performance win.
● Identify serial tasks early and do what you can to make them more concurrent. If a task must be
executed serially because it relies on some shared resource, consider changing your architecture to remove
that shared resource. You might consider making copies of the resource for each client that needs one or
eliminating the resource altogether.
● Avoid using locks. The support provided by dispatch queues and operation queues makes locks unnecessary
in most situations. Instead of using locks to protect some shared resource, designate a serial queue (or
use operation object dependencies) to execute tasks in the correct order.
● Rely on the system frameworks whenever possible. The best way to achieve concurrency is to take
advantage of the built-in concurrency provided by the system frameworks. Many frameworks use threads
and other technologies internally to implement concurrent behaviors. When defining your tasks, look to
see if an existing framework defines a function or method that does exactly what you want and does so
concurrently. Using that API may save you effort and is more likely to give you the maximum concurrency
possible.
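The “avoid locks” tip can be sketched with a serial queue that guards a shared value; the queue label and the counter are hypothetical stand-ins for any shared resource:

```objectivec
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // All access to the shared counter is funneled through one serial
        // queue, so no lock is needed.
        dispatch_queue_t counterQueue =
            dispatch_queue_create("com.example.counter", DISPATCH_QUEUE_SERIAL);
        __block NSInteger counter = 0;

        // Simulate many clients updating the resource concurrently; each
        // update is serialized by the queue rather than by a lock.
        dispatch_apply(100,
                       dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                       ^(size_t i) {
            dispatch_async(counterQueue, ^{ counter += 1; });
        });

        // A synchronous read on the same queue runs after the queued
        // increments and therefore sees a consistent value.
        __block NSInteger result = 0;
        dispatch_sync(counterQueue, ^{ result = counter; });
        NSLog(@"counter = %ld", (long)result);  // 100
    }
    return 0;
}
```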
Performance Implications
Operation queues, dispatch queues, and dispatch sources are provided to make it easier for you to execute
more code concurrently. However, these technologies do not guarantee improvements to the efficiency or
responsiveness in your application. It is still your responsibility to use queues in a manner that is both effective
for your needs and does not impose an undue burden on your application’s other resources. For example,
although you could create 10,000 operation objects and submit them to an operation queue, doing so would
cause your application to allocate a potentially nontrivial amount of memory, which could lead to paging and
decreased performance.
Before introducing any amount of concurrency to your code—whether using queues or threads—you should
always gather a set of baseline metrics that reflect your application’s current performance. After introducing
your changes, you should then gather additional metrics and compare them to your baseline to see if your
application’s overall efficiency has improved. If the introduction of concurrency makes your application less
efficient or responsive, you should use the available performance tools to check for the potential causes.
For an introduction to performance and the available performance tools, and for links to more advanced
performance-related topics, see Performance Overview.
Concurrency and Other Technologies
Factoring your code into modular tasks is the best way to improve the amount of concurrency in your
application. However, this design approach may not satisfy the needs of every application in every case.
Depending on your tasks, there might be other options that can offer additional improvements in your
application’s overall concurrency. This section outlines some of the other technologies to consider using as
part of your design.
OpenCL and Concurrency
In OS X, the Open Computing Language (OpenCL) is a standards-based technology for performing
general-purpose computations on a computer’s graphics processor. OpenCL is a good technology to use if
you have a well-defined set of computations that you want to apply to large data sets. For example, you might
use OpenCL to perform filter computations on the pixels of an image or use it to perform complex math
calculations on several values at once. In other words, OpenCL is geared more toward problem sets whose
data can be operated on in parallel.
Although OpenCL is good for performing massively data-parallel operations, it is not suitable for more
general-purpose calculations. There is a nontrivial amount of effort required to prepare and transfer both the
data and the required work kernel to a graphics card so that it can be operated on by a GPU. Similarly, there
is a nontrivial amount of effort required to retrieve any results generated by OpenCL. As a result, any tasks that
interact with the system are generally not recommended for use with OpenCL. For example, you would not
use OpenCL to process data from files or network streams. Instead, the work you perform using OpenCL must
be much more self-contained so that it can be transferred to the graphics processor and computed
independently.
For more information about OpenCL and how you use it, see OpenCL Programming Guide for Mac.
When to Use Threads
Although operation queues and dispatch queues are the preferred way to perform tasks concurrently, they
are not a panacea. Depending on your application, there may still be times when you need to create custom
threads. If you do create custom threads, you should strive to create as few threads as possible yourself and
you should use those threads only for specific tasks that cannot be implemented any other way.
Threads are still a good way to implement code that must run in real time. Dispatch queues make every attempt
to run their tasks as quickly as possible, but they do not address real-time constraints. If you need more predictable
behavior from code running in the background, threads may still offer a better alternative.
As with any threaded programming, you should always use threads judiciously and only when absolutely
necessary. For more information about thread packages and how you use them, see Threading Programming
Guide.
Operation Queues

Cocoa operations are an object-oriented way to encapsulate work that you want to perform asynchronously.
Operations are designed to be used either in conjunction with an operation queue or by themselves. Because
they are Objective-C based, operations are most commonly used in Cocoa-based applications in OS X and iOS.
This chapter shows you how to define and use operations.
About Operation Objects
An operation object is an instance of the NSOperation class (in the Foundation framework) that you use to
encapsulate work you want your application to perform. The NSOperation class itself is an abstract base class
that must be subclassed in order to do any useful work. Despite being abstract, this class does provide a
significant amount of infrastructure to minimize the amount of work you have to do in your own subclasses.
In addition, the Foundation framework provides two concrete subclasses that you can use as-is with your
existing code. Table 2-1 lists these classes, along with a summary of how you use each one.
Table 2-1 Operation classes of the Foundation framework

NSInvocationOperation
A class you use as-is to create an operation object based on an object and selector from your application. You can use this class in cases where you have an existing method that already performs the needed task. Because it does not require subclassing, you can also use this class to create operation objects in a more dynamic fashion. For information about how to use this class, see “Creating an NSInvocationOperation Object” (page 19).

NSBlockOperation
A class you use as-is to execute one or more block objects concurrently. Because it can execute more than one block, a block operation object operates using a group semantic; only when all of the associated blocks have finished executing is the operation itself considered finished. For information about how to use this class, see “Creating an NSBlockOperation Object” (page 20). This class is available in OS X v10.6 and later. For more information about blocks, see Blocks Programming Topics.

NSOperation
The base class for defining custom operation objects. Subclassing NSOperation gives you complete control over the implementation of your own operations, including the ability to alter the default way in which your operation executes and reports its status. For information about how to define custom operation objects, see “Defining a Custom Operation Object” (page 21).
All operation objects support the following key features:
● Support for the establishment of graph-based dependencies between operation objects. These
dependencies prevent a given operation from running until all of the operations on which it depends
have finished running. For information about how to configure dependencies, see “Configuring
Interoperation Dependencies” (page 29).
● Support for an optional completion block, which is executed after the operation’s main task finishes. (OS
X v10.6 and later only.) For information about how to set a completion block,see “Setting Up a Completion
Block” (page 30).
● Support for monitoring changes to the execution state of your operations using KVO notifications. For
information about how to observe KVO notifications, see Key-Value Observing Programming Guide.
● Support for prioritizing operations and thereby affecting their relative execution order. For more information,
see “Changing an Operation’s Execution Priority” (page 29).
● Support for canceling semantics that allow you to halt an operation while it is executing. For information
about how to cancel operations, see “Canceling Operations” (page 36). For information about how to
support cancellation in your own operations, see “Responding to Cancellation Events” (page 22).
Operations are designed to help you improve the level of concurrency in your application. Operations are also
a good way to organize and encapsulate your application’s behavior into simple discrete chunks. Instead of
running some bit of code on your application’s main thread, you can submit one or more operation objects
to a queue and let the corresponding work be performed asynchronously on one or more separate threads.
Concurrent Versus Non-concurrent Operations
Although you typically execute operations by adding them to an operation queue, doing so is not required.
It is also possible to execute an operation object manually by calling its start method, but doing so does not
guarantee that the operation runs concurrently with the rest of your code. The isConcurrent method of the
NSOperation class tells you whether an operation runs synchronously or asynchronously with respect to the
thread in which its start method was called. By default, this method returns NO, which means the operation
runs synchronously in the calling thread.
If you want to implement a concurrent operation—that is, one that runs asynchronously with respect to the
calling thread—you must write additional code to start the operation asynchronously. For example, you might
spawn a separate thread, call an asynchronous system function, or do anything else to ensure that the start
method starts the task and returns immediately and, in all likelihood, before the task is finished.
Most developers should never need to implement concurrent operation objects. If you always add your
operations to an operation queue, you do not need to implement concurrent operations. When you submit a
nonconcurrent operation to an operation queue, the queue itself creates a thread on which to run your
operation. Thus, adding a nonconcurrent operation to an operation queue still results in the asynchronous
execution of your operation object code. The ability to define concurrent operations is only necessary in cases
where you need to execute the operation asynchronously without adding it to an operation queue.
For information about how to create a concurrent operation, see “Configuring Operations for Concurrent
Execution” (page 24) and NSOperation Class Reference.
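The default, nonconcurrent behavior can be sketched with the concrete NSBlockOperation subclass, which keeps the example self-contained:

```objectivec
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        __block BOOL didRun = NO;
        NSOperation *op = [NSBlockOperation blockOperationWithBlock:^{
            didRun = YES;
        }];

        // isConcurrent returns NO by default, so start runs the task
        // synchronously; the work is done by the time start returns.
        [op start];

        NSLog(@"didRun = %d, isFinished = %d", didRun, [op isFinished]);
    }
    return 0;
}
```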
Creating an NSInvocationOperation Object
The NSInvocationOperation class is a concrete subclass of NSOperation that, when run, invokes the
selector you specify on the object you specify. Use this class to avoid defining large numbers of custom operation
objects for each task in your application, especially if you are modifying an existing application and already
have the objects and methods needed to perform the necessary tasks. You can also use it when the method
you want to call can change depending on the circumstances. For example, you could use an invocation
operation to perform a selector that is chosen dynamically based on user input.
The process for creating an invocation operation is straightforward. You create and initialize a new instance of
the class, passing the desired object and selector to execute to the initialization method. Listing 2-1 shows
two methods from a custom class that demonstrate the creation process. The taskWithData: method creates
a new invocation object and supplies it with the name of another method, which contains the task
implementation.
Listing 2-1 Creating an NSInvocationOperation object
@implementation MyCustomClass
- (NSOperation*)taskWithData:(id)data {
NSInvocationOperation* theOp = [[NSInvocationOperation alloc] initWithTarget:self
selector:@selector(myTaskMethod:) object:data];
return theOp;
}
// This is the method that does the actual work of the task.
- (void)myTaskMethod:(id)data {
// Perform the task.
}
@end
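As a hedged sketch of how the operation returned by taskWithData: might then be executed (the class is re-declared and the data value is hypothetical so that the example is self-contained; operation queues are described in detail later in this chapter):

```objectivec
#import <Foundation/Foundation.h>

@interface MyCustomClass : NSObject
- (NSOperation *)taskWithData:(id)data;
@end

@implementation MyCustomClass
- (NSOperation *)taskWithData:(id)data {
    // Same pattern as Listing 2-1: wrap an existing method in an operation.
    return [[NSInvocationOperation alloc] initWithTarget:self
                                                selector:@selector(myTaskMethod:)
                                                  object:data];
}
- (void)myTaskMethod:(id)data {
    NSLog(@"working on %@", data);
}
@end

int main(void) {
    @autoreleasepool {
        MyCustomClass *obj = [[MyCustomClass alloc] init];
        NSOperationQueue *queue = [[NSOperationQueue alloc] init];

        // The queue runs the invocation operation asynchronously on a
        // worker thread; waiting here is only for the example's benefit.
        NSOperation *op = [obj taskWithData:@"example data"];
        [queue addOperation:op];
        [queue waitUntilAllOperationsAreFinished];

        NSLog(@"finished = %d", [op isFinished]);
    }
    return 0;
}
```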
Creating an NSBlockOperation Object
The NSBlockOperation class is a concrete subclass of NSOperation that acts as a wrapper for one or more
block objects. This class provides an object-oriented wrapper for applications that are already using operation
queues and do not want to create dispatch queues as well. You can also use block operations to take advantage
of operation dependencies, KVO notifications, and other features that might not be available with dispatch
queues.
When you create a block operation, you typically add at least one block at initialization time; you can add more
blocks as needed later. When it comes time to execute an NSBlockOperation object, the object submits all
of its blocks to the default-priority, concurrent dispatch queue. The object then waits until all of the blocks
finish executing. When the last block finishes executing, the operation object marks itself as finished. Thus,
you can use a block operation to track a group of executing blocks, much like you would use a thread join to
merge the results from multiple threads. The difference is that because the block operation itself runs on a
separate thread, your application’s other threads can continue doing work while waiting for the block operation
to complete.
Listing 2-2 shows a simple example of how to create an NSBlockOperation object. The block itself has no
parameters and no significant return result.
Listing 2-2 Creating an NSBlockOperation object
NSBlockOperation* theOp = [NSBlockOperation blockOperationWithBlock: ^{
NSLog(@"Beginning operation.\n");
// Do some work.
}];
After creating a block operation object, you can add more blocks to it using the addExecutionBlock:
method. If you need to execute blocks serially, you must submit them directly to the desired dispatch queue.
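The group semantic described above can be sketched as follows; the two flags are hypothetical, and each block writes only its own flag so that the blocks remain independent of each other:

```objectivec
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        __block BOOL firstDone = NO;
        __block BOOL secondDone = NO;

        NSBlockOperation *theOp = [NSBlockOperation blockOperationWithBlock:^{
            firstDone = YES;
        }];
        // Additional blocks may run concurrently with the first one.
        [theOp addExecutionBlock:^{
            secondDone = YES;
        }];

        // start returns only after every block has finished, because the
        // operation is not considered finished until all of its blocks are.
        [theOp start];

        NSLog(@"firstDone=%d secondDone=%d", firstDone, secondDone);
    }
    return 0;
}
```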
Defining a Custom Operation Object
If the block operation and invocation operation objects do not quite meet the needs of your application, you
can subclass NSOperation directly and add whatever behavior you need. The NSOperation class provides
a general subclassing point for all operation objects. The class also provides a significant amount of infrastructure
to handle most of the work needed for dependencies and KVO notifications. However, there may still be times
when you need to supplement the existing infrastructure to ensure that your operations behave correctly. The
amount of extra work you have to do depends on whether you are implementing a nonconcurrent or a
concurrent operation.
Defining a nonconcurrent operation is much simpler than defining a concurrent operation. For a nonconcurrent
operation, all you have to do is perform your main task and respond appropriately to cancellation events; the
existing class infrastructure does all of the other work for you. For a concurrent operation, you must replace
some of the existing infrastructure with your custom code. The following sections show you how to implement
both types of object.
Performing the Main Task
At a minimum, every operation object should implement at least the following methods:
● A custom initialization method
● main
You need a custom initialization method to put your operation object into a known state and a custom main
method to perform your task. You can implement additional methods as needed, of course, such as the
following:
● Custom methods that you plan to call from the implementation of your main method
● Accessor methods for setting data values and accessing the results of the operation
● Methods of the NSCoding protocol to allow you to archive and unarchive the operation object
Listing 2-3 shows a starting template for a custom NSOperation subclass. (This listing does not show how to
handle cancellation but does show the methods you would typically have. For information about handling
cancellation, see “Responding to Cancellation Events” (page 22).) The initialization method for this class takes
a single object as a data parameter and stores a reference to it inside the operation object. The main method
would ostensibly work on that data object before returning the results back to your application.
Listing 2-3 Defining a simple operation object
@interface MyNonConcurrentOperation : NSOperation
@property (strong) id myData;
-(id)initWithData:(id)data;
@end
@implementation MyNonConcurrentOperation
- (id)initWithData:(id)data {
if (self = [super init])
_myData = data;
return self;
}
-(void)main {
@try {
// Do some work on myData and report the results.
}
@catch(...) {
// Do not rethrow exceptions.
}
}
@end
For a detailed example of how to implement an NSOperation subclass, see NSOperationSample.
Responding to Cancellation Events
After an operation begins executing, it continues performing its task until it is finished or until your code
explicitly cancels the operation. Cancellation can occur at any time, even before an operation begins executing.
Although the NSOperation class provides a way for clients to cancel an operation, recognizing the cancellation
event is voluntary by necessity. If an operation were terminated outright, there might not be a way to reclaim
resources that had been allocated. As a result, operation objects are expected to check for cancellation events
and to exit gracefully when they occur in the middle of the operation.
To support cancellation in an operation object, all you have to do is call the object’s isCancelled method
periodically from your custom code and return immediately if it ever returns YES. Supporting cancellation is
important regardless of the duration of your operation or whether you subclass NSOperation directly or use
one of its concrete subclasses. The isCancelled method itself is very lightweight and can be called frequently
without any significant performance penalty. When designing your operation objects, you should consider
calling the isCancelled method at the following places in your code:
● Immediately before you perform any actual work
● At least once during each iteration of a loop, or more frequently if each iteration is relatively long
● At any points in your code where it would be relatively easy to abort the operation
Listing 2-4 provides a very simple example of how to respond to cancellation events in the main method of
an operation object. In this case, the isCancelled method is called each time through a while loop, allowing
for a quick exit before work begins and again at regular intervals.
Listing 2-4 Responding to a cancellation request
- (void)main {
@try {
BOOL isDone = NO;
while (![self isCancelled] && !isDone) {
// Do some work and set isDone to YES when finished
}
}
@catch(...) {
// Do not rethrow exceptions.
}
}
Although the preceding example contains no cleanup code, your own code should be sure to free up any
resources that were allocated by your custom code.
Configuring Operations for Concurrent Execution
Operation objects execute in a synchronous manner by default—that is, they perform their task in the thread
that calls their start method. Because operation queues provide threads for nonconcurrent operations,
though, most operations still run asynchronously. However, if you plan to execute operations manually and
still want them to run asynchronously, you must take the appropriate actions to ensure that they do. You do
this by defining your operation object as a concurrent operation.
Table 2-2 lists the methods you typically override to implement a concurrent operation.
Table 2-2 Methods to override for concurrent operations

start
(Required) All concurrent operations must override this method and replace the default behavior with their own custom implementation. To execute an operation manually, you call its start method. Therefore, your implementation of this method is the starting point for your operation and is where you set up the thread or other execution environment in which to execute your task. Your implementation must not call super at any time.

main
(Optional) This method is typically used to implement the task associated with the operation object. Although you could perform the task in the start method, implementing the task using this method can result in a cleaner separation of your setup and task code.

isExecuting and isFinished
(Required) Concurrent operations are responsible for setting up their execution environment and reporting the status of that environment to outside clients. Therefore, a concurrent operation must maintain some state information to know when it is executing its task and when it has finished that task. It must then report that state using these methods. Your implementations of these methods must be safe to call from other threads simultaneously. You must also generate the appropriate KVO notifications for the expected key paths when changing the values reported by these methods.

isConcurrent
(Required) To identify an operation as a concurrent operation, override this method and return YES.
The rest of this section shows a sample implementation of the MyOperation class, which demonstrates the
fundamental code needed to implement a concurrent operation. The MyOperation class simply executes its
own main method on a separate thread that it creates. The actual work that the main method performs is
irrelevant. The point of the sample is to demonstrate the infrastructure you need to provide when defining a
concurrent operation.
Listing 2-5 shows the interface and part of the implementation of the MyOperation class. The implementations
of the isConcurrent, isExecuting, and isFinished methods for the MyOperation class are relatively
straightforward. The isConcurrent method should simply return YES to indicate that this is a concurrent
operation. The isExecuting and isFinished methods simply return values stored in instance variables of
the class itself.
Listing 2-5 Defining a concurrent operation
@interface MyOperation : NSOperation {
BOOL executing;
BOOL finished;
}
- (void)completeOperation;
@end
@implementation MyOperation
- (id)init {
self = [super init];
if (self) {
executing = NO;
finished = NO;
}
return self;
}
- (BOOL)isConcurrent {
return YES;
}
- (BOOL)isExecuting {
return executing;
}
- (BOOL)isFinished {
return finished;
}
@end
Listing 2-6 shows the start method of MyOperation. The implementation of this method is minimal so as
to demonstrate the tasks you absolutely must perform. In this case, the method simply starts up a new thread
and configures it to call the main method. The method also updates the executing member variable and
generates KVO notifications for the isExecuting key path to reflect the change in that value. With its work
done, this method then simply returns, leaving the newly detached thread to perform the actual task.
Listing 2-6 The start method
- (void)start {
// Always check for cancellation before launching the task.
if ([self isCancelled])
{
// Must move the operation to the finished state if it is canceled.
[self willChangeValueForKey:@"isFinished"];
finished = YES;
[self didChangeValueForKey:@"isFinished"];
return;
}
// If the operation is not canceled, begin executing the task.
[self willChangeValueForKey:@"isExecuting"];
[NSThread detachNewThreadSelector:@selector(main) toTarget:self withObject:nil];
executing = YES;
[self didChangeValueForKey:@"isExecuting"];
}
Listing 2-7 shows the remaining implementation for the MyOperation class. As was seen in Listing 2-6 (page
26), the main method is the entry point for a new thread. It performs the work associated with the operation
object and calls the custom completeOperation method when that work is finally done. The
completeOperation method then generates the needed KVO notifications for both the isExecuting and
isFinished key paths to reflect the change in state of the operation.
Listing 2-7 Updating an operation at completion time
- (void)main {
@try {
// Do the main work of the operation here.
[self completeOperation];
}
@catch(...) {
// Do not rethrow exceptions.
}
}
- (void)completeOperation {
[self willChangeValueForKey:@"isFinished"];
[self willChangeValueForKey:@"isExecuting"];
executing = NO;
finished = YES;
[self didChangeValueForKey:@"isExecuting"];
[self didChangeValueForKey:@"isFinished"];
}
Even if an operation is canceled, you should always notify KVO observers that your operation is now finished
with its work. When an operation object is dependent on the completion of other operation objects, it monitors
the isFinished key path for those objects. Only when all objects report that they are finished does the
dependent operation signal that it is ready to run. Failing to generate a finish notification can therefore prevent
the execution of other operations in your application.
Maintaining KVO Compliance
The NSOperation class is key-value observing (KVO) compliant for the following key paths:
● isCancelled
● isConcurrent
● isExecuting
● isFinished
● isReady
● dependencies
● queuePriority
● completionBlock
If you override the start method or do any significant customization of an NSOperation object other than
overriding main, you must ensure that your custom object remains KVO compliant for these key paths. When
overriding the start method, the key paths you should be most concerned with are isExecuting and
isFinished. These are the key paths most commonly affected by reimplementing that method.
If you want to implement support for dependencies on something besides other operation objects, you can
also override the isReady method and force it to return NO until your custom dependencies are satisfied.
(If you implement custom dependencies, be sure to call super from your isReady method if you still support
the default dependency management system provided by the NSOperation class.) When the readiness status
of your operation object changes, generate KVO notifications for the isReady key path to report those changes.
Unless you override the addDependency: or removeDependency: methods, you should not need to worry
about generating KVO notifications for the dependencies key path.
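A custom readiness gate might be sketched as follows; the class name and the resourceAvailable property are hypothetical assumptions, not part of the NSOperation API:

```objectivec
#import <Foundation/Foundation.h>

// Hypothetical operation that is not ready until an external resource
// becomes available, in addition to the default dependency checks.
@interface MyGatedOperation : NSOperation
@property (nonatomic, assign) BOOL resourceAvailable;
@end

@implementation MyGatedOperation
- (BOOL)isReady {
    // Combine the superclass's dependency-based readiness with our flag.
    return [super isReady] && self.resourceAvailable;
}
- (void)setResourceAvailable:(BOOL)available {
    // Generate the KVO notifications for isReady, as described above.
    [self willChangeValueForKey:@"isReady"];
    _resourceAvailable = available;
    [self didChangeValueForKey:@"isReady"];
}
@end

int main(void) {
    @autoreleasepool {
        MyGatedOperation *op = [[MyGatedOperation alloc] init];
        NSLog(@"ready before: %d", [op isReady]);  // NO until the gate opens
        op.resourceAvailable = YES;
        NSLog(@"ready after: %d", [op isReady]);
    }
    return 0;
}
```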
Although you could generate KVO notifications for other key paths of NSOperation, it is unlikely you would
ever need to do so. If you need to cancel an operation, you can simply call the existing cancel method to do
so. Similarly, there should be little need for you to modify the queue priority information in an operation object.
Finally, unless your operation is capable of changing its concurrency status dynamically, you do not need to
provide KVO notifications for the isConcurrent key path.
For more information on key-value observing and how to support it in your custom objects, see Key-Value
Observing Programming Guide .
Customizing the Execution Behavior of an Operation Object
The configuration of operation objects occurs after you have created them but before you add them to a
queue. The types of configurations described in this section can be applied to all operation objects, regardless
of whether you subclassed NSOperation yourself or used an existing subclass.
Configuring Interoperation Dependencies
Dependencies are a way for you to serialize the execution of distinct operation objects. An operation that is
dependent on other operations cannot begin executing until all of the operations on which it depends have
finished executing. Thus, you can use dependencies to create simple one-to-one dependencies between two
operation objects or to build complex object dependency graphs.
To establish dependencies between two operation objects, you use the addDependency: method of
NSOperation. This method creates a one-way dependency from the current operation object to the target
operation you specify as a parameter. This dependency means that the current object cannot begin executing
until the target object finishes executing. Dependencies are also not limited to operations in the same queue.
Operation objects manage their own dependencies and so it is perfectly acceptable to create dependencies
between operations and add them all to different queues. One thing that is not acceptable, however, is to
create circular dependencies between operations. Doing so is a programmer error that will prevent the affected
operations from ever running.
When all of an operation’s dependencies have themselves finished executing, an operation object normally
becomes ready to execute. (If you customize the behavior of the isReady method, the readiness of the
operation is determined by the criteria you set.) If the operation object is in a queue, the queue may start
executing that operation at any time. If you plan to execute the operation manually, it is up to you to call the
operation’s start method.
Important: You should always configure dependencies before running your operations or adding them
to an operation queue. Dependencies added afterward may not prevent a given operation object from
running.
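A minimal sketch of this setup, assuming two hypothetical block operations and two existing NSOperationQueue instances (networkQueue and parsingQueue are illustrative names):

```objc
NSBlockOperation *download = [NSBlockOperation blockOperationWithBlock:^{
    /* Fetch the data. */
}];
NSBlockOperation *parse = [NSBlockOperation blockOperationWithBlock:^{
    /* Process the fetched data. */
}];

// parse cannot begin executing until download finishes executing.
// Configure the dependency before adding either operation to a queue.
[parse addDependency:download];

// Dependencies are not limited to operations in the same queue.
[networkQueue addOperation:download];
[parsingQueue addOperation:parse];
```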
Dependencies rely on each operation object sending out appropriate KVO notifications whenever the status
of the object changes. If you customize the behavior of your operation objects, you may need to generate
appropriate KVO notifications from your custom code in order to avoid causing issues with dependencies. For
more information on KVO notifications and operation objects, see “Maintaining KVO Compliance” (page 27).
For additional information on configuring dependencies, see NSOperation Class Reference .
Changing an Operation’s Execution Priority
For operations added to a queue, execution order is determined first by the readiness of the queued operations
and then by their relative priority. Readiness is determined by an operation’s dependencies on other operations,
but the priority level is an attribute of the operation object itself. By default, all new operation objects have a
“normal” priority, but you can increase or decrease that priority as needed by calling the object’s
setQueuePriority: method.
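For example, assuming anOperation is an operation object you have already created, you might raise its priority before queuing it:

```objc
// Priorities range from NSOperationQueuePriorityVeryLow
// to NSOperationQueuePriorityVeryHigh.
[anOperation setQueuePriority:NSOperationQueuePriorityHigh];
[aQueue addOperation:anOperation];
```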
Priority levels apply only to operations in the same operation queue. If your application has multiple operation
queues, each prioritizes its own operations independently of any other queues. Thus, it is still possible for
low-priority operations to execute before high-priority operations in a different queue.
Priority levels are not a substitute for dependencies. Priorities determine the order in which an operation queue
starts executing only those operations that are currently ready. For example, if a queue contains both a
high-priority and low-priority operation and both operations are ready, the queue executes the high-priority
operation first. However, if the high-priority operation is not ready but the low-priority operation is, the queue
executes the low-priority operation first. If you want to prevent one operation from starting until another
operation has finished, you must use dependencies (as described in “Configuring Interoperation
Dependencies” (page 29)) instead.
Changing the Underlying Thread Priority
In OS X v10.6 and later, it is possible to configure the execution priority of an operation’s underlying thread.
Thread policies in the system are themselves managed by the kernel, but in general higher-priority threads
are given more opportunities to run than lower-priority threads. In an operation object, you specify the thread
priority as a floating-point value in the range 0.0 to 1.0, with 0.0 being the lowest priority and 1.0 being the
highest priority. If you do not specify an explicit thread priority, the operation runs with the default thread
priority of 0.5.
To set an operation’s thread priority, you must call the setThreadPriority: method of your operation
object before adding it to a queue (or executing it manually). When it comes time to execute the operation,
the default start method uses the value you specified to modify the priority of the current thread. This new
priority remains in effect for the duration of your operation’s main method only. All other code (including your
operation’s completion block) is run with the default thread priority. If you create a concurrent operation, and
therefore override the start method, you must configure the thread priority yourself.
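A brief sketch, again assuming an existing operation object and queue:

```objc
// Must be set before the operation is queued or started manually.
// The value applies only while the operation's main method runs.
[anOperation setThreadPriority:0.8];
[aQueue addOperation:anOperation];
```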
Setting Up a Completion Block
In OS X v10.6 and later, an operation can execute a completion block when its main task finishes executing.
You can use a completion block to perform any work that you do not consider part of the main task. For
example, you might use this block to notify interested clients that the operation itself has completed. A
concurrent operation object might use this block to generate its final KVO notifications.
To set a completion block, use the setCompletionBlock: method of NSOperation. The block you pass to
this method should have no arguments and no return value.
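For example (the notification name here is illustrative, not from the text):

```objc
[anOperation setCompletionBlock:^{
    // Notify interested clients that the operation has completed.
    [[NSNotificationCenter defaultCenter]
        postNotificationName:@"MyOperationDidFinish" object:nil];
}];
```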
Tips for Implementing Operation Objects
Although operation objects are fairly easy to implement, there are several things you should be aware of as
you are writing your code. The following sections describe factors that you should take into account when
writing the code for your operation objects.
Managing Memory in Operation Objects
The following sections describe key elements of good memory management in an operation object. For general
information about memory management in Objective-C programs, see Advanced Memory Management
Programming Guide .
Avoid Per-Thread Storage
Although most operations execute on a thread, in the case of nonconcurrent operations, that thread is usually
provided by an operation queue. If an operation queue provides a thread for you, you should consider that
thread to be owned by the queue and not to be touched by your operation. Specifically, you should never
associate any data with a thread that you do not create yourself or manage. The threads managed by an
operation queue come and go depending on the needs of the system and your application. Therefore, passing
data between operations using per-thread storage is unreliable and likely to fail.
In the case of operation objects, there should be no reason for you to use per-thread storage in any case. When
you initialize an operation object, you should provide the object with everything it needs to do its job. Therefore,
the operation object itself provides the contextual storage you need. All incoming and outgoing data should
be stored there until it can be integrated back into your application or is no longer required.
Keep References to Your Operation Object As Needed
Although operation objects run asynchronously, you should not assume that you can create them and
forget about them. They are still just objects and it is up to you to manage any references to them that your
code needs. This is especially important if you need to retrieve result data from an operation after it is finished.
The reason you should always keep your own references to operations is that you may not get the chance to
ask a queue for the object later. Queues make every effort to dispatch and execute operations as quickly as
possible. In many cases, queues start executing operations almost immediately after they are added. By the
time your own code goes back to the queue to get a reference to the operation, that operation could already
be finished and removed from the queue.
Handling Errors and Exceptions
Because operations are essentially discrete entities inside your application, they are responsible for handling
any errors or exceptions that arise. In OS X v10.6 and later, the default start method provided by the
NSOperation class does not catch exceptions. (In OS X v10.5, the start method does catch and suppress
exceptions.) Your own code should always catch and suppress exceptions directly. It should also check error
codes and notify the appropriate parts of your application as needed. And if you replace the start method,
you must similarly catch any exceptions in your custom implementation to prevent them from leaving the
scope of the underlying thread.
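A sketch of a main method that keeps exceptions from escaping; the doWork and recordError: helpers are hypothetical, not part of NSOperation:

```objc
- (void)main
{
    @try {
        // Perform the operation's actual task here.
        [self doWork]; // hypothetical helper
    }
    @catch (NSException *exception) {
        // Suppress the exception so it never leaves the thread's scope,
        // but record it so the application can react to the failure.
        [self recordError:exception]; // hypothetical helper
    }
}
```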
Among the types of error situations you should be prepared to handle are the following:
● Check and handle UNIX errno-style error codes.
● Check explicit error codes returned by methods and functions.
● Catch exceptions thrown by your own code or by other system frameworks.
● Catch exceptions thrown by the NSOperation class itself, which throws exceptions in the following
situations:
● When the operation is not ready to execute but its start method is called
● When the operation is executing or finished (possibly because it was canceled) and its start method
is called again
● When you try to add a completion block to an operation that is already executing or finished
● When you try to retrieve the result of an NSInvocationOperation object that was canceled
If your custom code does encounter an exception or error, you should take whatever steps are needed to
propagate that error to the rest of your application. The NSOperation class does not provide explicit methods
for passing along error result codes or exceptions to other parts of your application. Therefore, if such information
is important to your application, you must provide the necessary code.
Determining an Appropriate Scope for Operation Objects
Although it is possible to add an arbitrarily large number of operations to an operation queue, doing so is
often impractical. Like any object, instances of the NSOperation class consume memory and have real costs
associated with their execution. If each of your operation objects does only a small amount of work, and you
create tens of thousands of them, you may find that you are spending more time dispatching operations than
doing real work. And if your application is already memory-constrained, you might find that just having tens
of thousands of operation objects in memory might degrade performance even further.
The key to using operations efficiently is to find an appropriate balance between the amount of work you need
to do and the need to keep the computer busy. Try to make sure that your operations do a reasonable amount of work.
For example, if your application creates 100 operation objectsto perform the same task on 100 different values,
consider creating 10 operation objects to process 10 values each instead.
You should also avoid adding large numbers of operations to a queue all at once, or avoid continuously adding
operation objects to a queue faster than they can be processed. Rather than flood a queue with operation
objects, create those objects in batches. As one batch finishes executing, use a completion block to tell your
application to create a new batch. When you have a lot of work to do, you want to keep the queues filled with
enough operations so that the computer stays busy, but you do not want to create so many operations at
once that your application runs out of memory.
Of course, the number of operation objects you create, and the amount of work you perform in each, is variable
and entirely dependent on your application. You should always use tools such as Instruments and Shark to
help you find an appropriate balance between efficiency and speed. For an overview of Instruments, Shark,
and the other performance tools you can use to gather metrics for your code, see Performance Overview.
Executing Operations
Ultimately, your application needs to execute operations in order to do the associated work. In this section,
you learn several ways to execute operations as well as how you can manipulate the execution of your operations
at runtime.
Adding Operations to an Operation Queue
By far, the easiest way to execute operations is to use an operation queue, which is an instance of the
NSOperationQueue class. Your application is responsible for creating and maintaining any operation queues
it intends to use. An application can have any number of queues, but there are practical limits to how many
operations may be executing at a given point in time. Operation queues work with the system to restrict the
number of concurrent operations to a value that is appropriate for the available cores and system load. Therefore,
creating additional queues does not mean that you can execute additional operations.
To create a queue, you allocate it in your application as you would any other object:
NSOperationQueue* aQueue = [[NSOperationQueue alloc] init];
To add operations to a queue, you use the addOperation: method. In OS X v10.6 and later, you can add
groups of operations using the addOperations:waitUntilFinished: method, or you can add block objects
directly to a queue (without a corresponding operation object) using the addOperationWithBlock: method.
Each of these methods queues up an operation (or operations) and notifies the queue that it should begin
processing them. In most cases, operations are executed shortly after being added to a queue, but the operation
queue may delay execution of queued operations for any of several reasons. Specifically, execution may be
delayed if queued operations are dependent on other operations that have not yet completed. Execution may
also be delayed if the operation queue itself is suspended or is already executing its maximum number of
concurrent operations. The following examples show the basic syntax for adding operations to a queue.
[aQueue addOperation:anOp]; // Add a single operation
[aQueue addOperations:anArrayOfOps waitUntilFinished:NO]; // Add multiple operations
[aQueue addOperationWithBlock:^{
/* Do something. */
}];
Important: Never modify an operation object after it has been added to a queue. While waiting in a queue,
the operation could start executing at any time, so changing its dependencies or the data it contains could
have adverse effects. If you want to know the status of an operation, you can use the methods of the
NSOperation class to determine if the operation is running, waiting to run, or already finished.
Although the NSOperationQueue class is designed for the concurrent execution of operations, it is possible
to force a single queue to run only one operation at a time. The setMaxConcurrentOperationCount:
method lets you configure the maximum number of concurrent operations for an operation queue object.
Passing a value of 1 to this method causes the queue to execute only one operation at a time. Although only
one operation at a time may execute, the order of execution is still based on other factors, such as the readiness
of each operation and its assigned priority. Thus, a serialized operation queue does not offer quite the same
behavior as a serial dispatch queue in Grand Central Dispatch does. If the execution order of your operation
objects is important to you, you should use dependencies to establish that order before adding your operations
to a queue. For information about configuring dependencies, see “Configuring Interoperation
Dependencies” (page 29).
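A serialized queue can be sketched as follows:

```objc
NSOperationQueue *serialOpQueue = [[NSOperationQueue alloc] init];
// Execute only one operation at a time; execution order still
// depends on readiness and priority, not strictly insertion order.
[serialOpQueue setMaxConcurrentOperationCount:1];
```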
For information about using operation queues, see NSOperationQueue Class Reference . For more information
about serial dispatch queues, see “Creating Serial Dispatch Queues” (page 44).
Executing Operations Manually
Although operation queues are the most convenient way to run operation objects, it is also possible to execute
operations without a queue. If you choose to execute operations manually, however, there are some precautions
you should take in your code. In particular, the operation must be ready to run and you must always start it
using its start method.
An operation is not considered able to run until its isReady method returns YES. The isReady method is
integrated into the dependency management system of the NSOperation class to provide the status of the
operation’s dependencies. Only when its dependencies are cleared is an operation free to begin executing.
When executing an operation manually, you should always use the start method to begin execution. You
use this method, instead of main or some other method, because the start method performs several safety
checks before it actually runs your custom code. In particular, the default start method generates the KVO
notifications that operations require to process their dependencies correctly. This method also correctly avoids
executing your operation if it has already been canceled and throws an exception if your operation is not actually
ready to run.
If your application defines concurrent operation objects, you should also consider calling the isConcurrent
method of operations prior to launching them. In cases where this method returns NO, your local code can
decide whether to execute the operation synchronously in the current thread or create a separate thread first.
However, implementing this kind of checking is entirely up to you.
Listing 2-8 shows a simple example of the kind of checks you should perform before executing operations
manually. If the method returns NO, you could schedule a timer and call the method again later. You would
then keep rescheduling the timer until the method returns YES, which could occur because the operation was
canceled.
Listing 2-8 Executing an operation object manually
- (BOOL)performOperation:(NSOperation*)anOp
{
    BOOL ranIt = NO;
    if ([anOp isReady] && ![anOp isCancelled])
    {
        if (![anOp isConcurrent])
            [anOp start];
        else
            [NSThread detachNewThreadSelector:@selector(start)
                      toTarget:anOp withObject:nil];
        ranIt = YES;
    }
    else if ([anOp isCancelled])
    {
        // If it was canceled before it was started,
        // move the operation to the finished state.
        [self willChangeValueForKey:@"isFinished"];
        [self willChangeValueForKey:@"isExecuting"];
        executing = NO;
        finished = YES;
        [self didChangeValueForKey:@"isExecuting"];
        [self didChangeValueForKey:@"isFinished"];

        // Set ranIt to YES to prevent the operation from
        // being passed to this method again in the future.
        ranIt = YES;
    }
    return ranIt;
}
Canceling Operations
Once added to an operation queue, an operation object is effectively owned by the queue and cannot be
removed. The only way to dequeue an operation is to cancel it. You can cancel a single individual operation
object by calling its cancel method or you can cancel all of the operation objects in a queue by calling the
cancelAllOperations method of the queue object.
You should cancel operations only when you are sure you no longer need them. Issuing a cancel command
puts the operation object into the “canceled” state, which prevents it from ever being run. Because a canceled
operation is still considered to be “finished”, objects that are dependent on it receive the appropriate KVO
notifications to clear that dependency. Thus, it is more common to cancel all queued operations in response
to some significant event, like the application quitting or the user specifically requesting the cancellation,
rather than cancel operations selectively.
Waiting for Operations to Finish
For the best performance, you should design your operations to be as asynchronous as possible, leaving your
application free to do additional work while the operation executes. If the code that creates an operation also
processes the results of that operation, you can use the waitUntilFinished method of NSOperation to
block that code until the operation finishes. In general, though, it is best to avoid calling this method if you
can help it. Blocking the current thread may be a convenient solution, but it does introduce more serialization
into your code and limits the overall amount of concurrency.
Important: You should never wait for an operation from your application’s main thread. You should only
do so from a secondary thread or from another operation. Blocking your main thread prevents your
application from responding to user events and could make your application appear unresponsive.
In addition to waiting for a single operation to finish, you can also wait on all of the operations in a queue by
calling the waitUntilAllOperationsAreFinished method of NSOperationQueue. When waiting for an
entire queue to finish, be aware that your application’s other threads can still add operations to the queue,
thus prolonging the wait.
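Both waits can be sketched as follows; remember to call them only from a secondary thread or from another operation:

```objc
// Block until one specific operation finishes.
[anOperation waitUntilFinished];

// Block until every operation currently in the queue finishes.
// Other threads can still add operations, prolonging the wait.
[aQueue waitUntilAllOperationsAreFinished];
```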
Suspending and Resuming Queues
If you want to issue a temporary halt to the execution of operations, you can suspend the corresponding
operation queue using the setSuspended: method. Suspending a queue does not cause already executing
operations to pause in the middle of their tasks. It simply prevents new operations from being scheduled for
execution. You might suspend a queue in response to a user request to pause any ongoing work, because the
expectation is that the user might eventually want to resume that work.
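For example:

```objc
[aQueue setSuspended:YES];   // stop scheduling new operations
// Already executing operations continue to completion.
[aQueue setSuspended:NO];    // resume scheduling
```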
Dispatch Queues
Grand Central Dispatch (GCD) dispatch queues are a powerful tool for performing tasks. Dispatch queues let
you execute arbitrary blocks of code either asynchronously or synchronously with respect to the caller. You
can use dispatch queues to perform nearly all of the tasks that you used to perform on separate threads. The
advantage of dispatch queues is that they are simpler to use and much more efficient at executing those tasks
than the corresponding threaded code.
This chapter provides an introduction to dispatch queues, along with information about how to use them to
execute general tasks in your application. If you want to replace existing threaded code with dispatch queues,
you can find some additional tips for how to do that in “Migrating Away from Threads” (page 74).
About Dispatch Queues
Dispatch queues are an easy way to perform tasks asynchronously and concurrently in your application. A task
is simply some work that your application needs to perform. For example, you could define a task to perform
some calculations, create or modify a data structure, process some data read from a file, or any number of
things. You define tasks by placing the corresponding code inside either a function or a block object and adding
it to a dispatch queue.
A dispatch queue is an object-like structure that manages the tasks you submit to it. All dispatch queues are
first-in, first-out data structures. Thus, the tasks you add to a queue are always started in the same order that
they were added. GCD provides some dispatch queues for you automatically, but others you can create for
specific purposes. Table 3-1 lists the types of dispatch queues available to your application and how you use
them.
Table 3-1 Types of dispatch queues
Serial: Serial queues (also known as private dispatch queues) execute one task at a time in
the order in which they are added to the queue. The currently executing task runs on
a distinct thread (which can vary from task to task) that is managed by the dispatch
queue. Serial queues are often used to synchronize access to a specific resource.
You can create as many serial queues as you need, and each queue operates concurrently
with respect to all other queues. In other words, if you create four serial queues, each
queue executes only one task at a time but up to four tasks could still execute
concurrently, one from each queue. For information on how to create serial queues, see
“Creating Serial Dispatch Queues” (page 44).

Concurrent: Concurrent queues (also known as a type of global dispatch queue) execute one or
more tasks concurrently, but tasks are still started in the order in which they were added
to the queue. The currently executing tasks run on distinct threads that are managed
by the dispatch queue. The exact number of tasks executing at any given point is variable
and depends on system conditions.
You cannot create concurrent dispatch queues yourself. Instead, there are three global
concurrent queues for your application to use. For more information on how to get the
global concurrent queues, see “Getting the Global Concurrent Dispatch Queues” (page 43).

Main dispatch queue: The main dispatch queue is a globally available serial queue that executes tasks on the
application’s main thread. This queue works with the application’s run loop (if one is
present) to interleave the execution of queued tasks with the execution of other event
sources attached to the run loop. Because it runs on your application’s main thread, the
main queue is often used as a key synchronization point for an application.
Although you do not need to create the main dispatch queue, you do need to make
sure your application drains it appropriately. For more information on how this queue
is managed, see “Performing Tasks on the Main Thread” (page 51).
When it comes to adding concurrency to an application, dispatch queues provide several advantages over
threads. The most direct advantage is the simplicity of the work-queue programming model. With threads,
you have to write code both for the work you want to perform and for the creation and management of the
threads themselves. Dispatch queues let you focus on the work you actually want to perform without having
to worry about the thread creation and management. Instead, the system handles all of the thread creation
and management for you. The advantage is that the system is able to manage threads much more efficiently
than any single application ever could. The system can scale the number of threads dynamically based on the
available resources and current system conditions. In addition, the system is usually able to start running your
task more quickly than you could if you created the thread yourself.
Although you might think rewriting your code for dispatch queues would be difficult, it is often easier to write
code for dispatch queues than it is to write code for threads. The key to writing your code is to design tasks
that are self-contained and able to run asynchronously. (This is actually true for both threads and dispatch
queues.) However, where dispatch queues have an advantage is in predictability. If you have two tasks that
access the same shared resource but run on different threads, either thread could modify the resource first
and you would need to use a lock to ensure that both tasks did not modify that resource at the same time.
With dispatch queues, you could add both tasks to a serial dispatch queue to ensure that only one task modified
the resource at any given time. This type of queue-based synchronization is more efficient than locks because
locks always require an expensive kernel trap in both the contested and uncontested cases, whereas a dispatch
queue works primarily in your application’s process space and only calls down to the kernel when absolutely
necessary.
Although you would be right to point out that two tasks running in a serial queue do not run concurrently,
you have to remember that if two threads take a lock at the same time, any concurrency offered by the threads
is lost or significantly reduced. More importantly, the threaded model requires the creation of two threads,
which take up both kernel and user-space memory. Dispatch queues do not pay the same memory penalty
for their threads, and the threads they do use are kept busy and not blocked.
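The queue-based alternative to a lock can be sketched as follows (the queue label and the shared counter are illustrative):

```objc
dispatch_queue_t guardQueue =
    dispatch_queue_create("com.example.resource-guard", NULL);
__block NSUInteger sharedCounter = 0;

// Both tasks touch the shared resource, but the serial queue
// guarantees that only one of them runs at a time.
dispatch_async(guardQueue, ^{ sharedCounter += 1; });
dispatch_async(guardQueue, ^{ sharedCounter += 1; });
```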
Some other key points to remember about dispatch queues include the following:
● Dispatch queues execute their tasks concurrently with respect to other dispatch queues. The serialization
of tasks is limited to the tasks in a single dispatch queue.
● The system determines the total number of tasks executing at any one time. Thus, an application with 100
tasks in 100 different queues may not execute all of those tasks concurrently (unless it has 100 or more
effective cores).
● The system takes queue priority levels into account when choosing which new tasks to start. For information
about how to set the priority of a serial queue, see “Providing a Clean Up Function For a Queue” (page
46).
● Tasks in a queue must be ready to execute at the time they are added to the queue. (If you have used
Cocoa operation objects before, notice that this behavior differs from the model operations use.)
● Private dispatch queues are reference-counted objects. In addition to retaining the queue in your own
code, be aware that dispatch sources can also be attached to a queue and also increment its retain count.
Thus, you must make sure that all dispatch sources are canceled and all retain calls are balanced with an
appropriate release call. For more information about retaining and releasing queues, see “Memory
Management for Dispatch Queues” (page 45). For more information about dispatch sources, see “About
Dispatch Sources” (page 56).
For more information about interfaces you use to manipulate dispatch queues, see Grand Central Dispatch
(GCD) Reference .
Queue-Related Technologies
In addition to dispatch queues, Grand Central Dispatch provides several technologies that use queues to help
manage your code. Table 3-2 lists these technologies and provides links to where you can find out more
information about them.
Table 3-2 Technologies that use dispatch queues

Dispatch groups: A dispatch group is a way to monitor a set of block objects for completion. (You can
monitor the blocks synchronously or asynchronously depending on your needs.)
Groups provide a useful synchronization mechanism for code that depends on the
completion of other tasks. For more information about using groups, see “Waiting
on Groups of Queued Tasks” (page 53).

Dispatch semaphores: A dispatch semaphore is similar to a traditional semaphore but is generally more
efficient. Dispatch semaphores call down to the kernel only when the calling thread
needs to be blocked because the semaphore is unavailable. If the semaphore is
available, no kernel call is made. For an example of how to use dispatch semaphores,
see “Using Dispatch Semaphores to Regulate the Use of Finite Resources” (page 52).

Dispatch sources: A dispatch source generates notifications in response to specific types of system
events. You can use dispatch sources to monitor events such as process notifications,
signals, and descriptor events, among others. When an event occurs, the dispatch
source submits your task code asynchronously to the specified dispatch queue for
processing. For more information about creating and using dispatch sources, see
“Dispatch Sources” (page 56).
Implementing Tasks Using Blocks
Block objects are a C-based language feature that you can use in your C, Objective-C, and C++ code. Blocks make
it easy to define a self-contained unit of work. Although they might seem akin to function pointers, a block is
actually represented by an underlying data structure that resembles an object and is created and managed
for you by the compiler. The compiler packages up the code you provide (along with any related data) and
encapsulates it in a form that can live in the heap and be passed around your application.
One of the key advantages of blocks is their ability to use variables from outside their own lexical scope. When
you define a block inside a function or method, the block acts as a traditional code block would in some ways.
For example, a block can read the values of variables defined in the parent scope. Variables accessed by the
block are copied to the block data structure on the heap so that the block can access them later. When blocks
are added to a dispatch queue, these values must typically be left in a read-only format. However, blocks that
are executed synchronously can also use variables that have the __block keyword prepended to return data
back to the parent’s calling scope.
You declare blocks inline with your code using a syntax that is similar to the syntax used for function pointers.
The main difference between a block and a function pointer is that the block name is preceded with a caret
(^) instead of an asterisk (*). Like a function pointer, you can pass arguments to a block and receive a return
value from it. Listing 3-1 shows you how to declare and execute blocks synchronously in your code. The variable
aBlock is declared to be a block that takes a single integer parameter and returns no value. An actual block
matching that prototype is then assigned to aBlock and declared inline. The last line executes the block
immediately, printing the specified integers to standard out.
Listing 3-1 A simple block example
int x = 123;
int y = 456;
// Block declaration and assignment
void (^aBlock)(int) = ^(int z) {
printf("%d %d %d\n", x, y, z);
};
// Execute the block
aBlock(789); // prints: 123 456 789
The following is a summary of some of the key guidelines you should consider when designing your blocks:
● For blocks that you plan to perform asynchronously using a dispatch queue, it is safe to capture scalar
variables from the parent function or method and use them in the block. However, you should not try to
capture large structures or other pointer-based variables that are allocated and deleted by the calling
context. By the time your block is executed, the memory referenced by that pointer may be gone. Of
course, it is safe to allocate memory (or an object) yourself and explicitly hand off ownership of that memory
to the block.
● Dispatch queues copy blocks that are added to them, and they release blocks when they finish executing.
In other words, you do not need to explicitly copy blocks before adding them to a queue.
● Although queues are more efficient than raw threads at executing small tasks, there is still overhead to
creating blocks and executing them on a queue. If a block does too little work, it may be cheaper to execute
it inline than to dispatch it to a queue. The way to tell if a block is doing too little work is to gather metrics
for each path using the performance tools and compare them.
● Do not cache data relative to the underlying thread and expect that data to be accessible from a different
block. If tasks in the same queue need to share data, use the context pointer of the dispatch queue to
store the data instead. For more information on how to access the context data of a dispatch queue, see
“Storing Custom Context Information with a Queue” (page 46).
● If your block creates more than a few Objective-C objects, you might want to enclose parts of your block’s
code in an @autoreleasepool block to handle the memory management for those objects. Although GCD
dispatch queues have their own autorelease pools, they make no guarantees as to when those pools are
drained. If your application is memory constrained, creating your own autorelease pool allows you to free
up the memory for autoreleased objects at more regular intervals.
For more information about blocks, including how to declare and use them, see Blocks Programming Topics.
For information about how you add blocks to a dispatch queue, see “Adding Tasks to a Queue” (page 47).
Creating and Managing Dispatch Queues
Before you add your tasks to a queue, you have to decide what type of queue to use and how you intend to
use it. Dispatch queues can execute tasks either serially or concurrently. In addition, if you have a specific use
for the queue in mind, you can configure the queue attributes accordingly. The following sections show you
how to create dispatch queues and configure them for use.
Getting the Global Concurrent Dispatch Queues
A concurrent dispatch queue is useful when you have multiple tasks that can run in parallel. A concurrent
queue is still a queue in that it dequeues tasks in a first-in, first-out order; however, a concurrent queue may
dequeue additional tasks before any previous tasks finish. The actual number of tasks executed by a concurrent
queue at any given moment is variable and can change dynamically as conditions in your application change.
Many factors affect the number of tasks executed by the concurrent queues, including the number of available
cores, the amount of work being done by other processes, and the number and priority of tasks in other serial
dispatch queues.
The system provides each application with three concurrent dispatch queues. These queues are global to the
application and are differentiated only by their priority level. Because they are global, you do not create them
explicitly. Instead, you ask for one of the queues using the dispatch_get_global_queue function, as shown
in the following example:
dispatch_queue_t aQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
In addition to getting the default concurrent queue, you can also get queues with high- and low-priority levels
by passing in the DISPATCH_QUEUE_PRIORITY_HIGH and DISPATCH_QUEUE_PRIORITY_LOW constants to
the function instead. As you might expect, tasks in the high-priority concurrent queue execute before those
in the default and low-priority queues. Similarly, tasks in the default queue execute before those in the
low-priority queue.
Note: The second argument to the dispatch_get_global_queue function is reserved for future
expansion. For now, you should always pass 0 for this argument.
Although dispatch queues are reference-counted objects, you do not need to retain and release the global
concurrent queues. Because they are global to your application, retain and release calls for these queues are
ignored. Therefore, you do not need to store references to these queues. You can just call the
dispatch_get_global_queue function whenever you need a reference to one of them.
Creating Serial Dispatch Queues
Serial queues are useful when you want your tasks to execute in a specific order. A serial queue executes only
one task at a time and always pulls tasks from the head of the queue. You might use a serial queue instead of
a lock to protect a shared resource or mutable data structure. Unlike a lock, a serial queue ensures that tasks
are executed in a predictable order. And as long as you submit your tasks to a serial queue asynchronously,
the queue can never deadlock.
Unlike concurrent queues, which are created for you, you must explicitly create and manage any serial queues
you want to use. You can create any number of serial queues for your application but should avoid creating
large numbers of serial queues solely as a means to execute as many tasks simultaneously as you can. If you
want to execute large numbers of tasks concurrently, submit them to one of the global concurrent queues.
When creating serial queues, try to identify a purpose for each queue, such as protecting a resource or
synchronizing some key behavior of your application.
Listing 3-2 shows the steps required to create a custom serial queue. The dispatch_queue_create function
takes two parameters: the queue name and a set of queue attributes. The debugger and performance tools
display the queue name to help you track how your tasks are being executed. The queue attributes are reserved
for future use and should be NULL.
Listing 3-2 Creating a new serial queue
dispatch_queue_t queue;
queue = dispatch_queue_create("com.example.MyQueue", NULL);
In addition to any custom queues you create, the system automatically creates a serial queue and binds it to
your application’s main thread. For more information about getting the queue for the main thread, see “Getting
Common Queues at Runtime” (page 45).
Getting Common Queues at Runtime
Grand Central Dispatch provides functions to let you access several common dispatch queues from your
application:
● Use the dispatch_get_current_queue function for debugging purposes or to test the identity of the
current queue. Calling this function from inside a block object returns the queue to which the block was
submitted (and on which it is now presumably running). Calling this function from outside of a block
returns the default concurrent queue for your application.
● Use the dispatch_get_main_queue function to get the serial dispatch queue associated with your
application’s main thread. This queue is created automatically for Cocoa applications and for applications
that either call the dispatch_main function or configure a run loop (using either the CFRunLoopRef
type or an NSRunLoop object) on the main thread.
● Use the dispatch_get_global_queue function to get any of the shared concurrent queues. For more
information, see “Getting the Global Concurrent Dispatch Queues” (page 43).
Memory Management for Dispatch Queues
Dispatch queues and other dispatch objects are reference-counted data types. When you create a serial dispatch
queue, it has an initial reference count of 1. You can use the dispatch_retain and dispatch_release
functions to increment and decrement that reference count as needed. When the reference count of a queue
reaches zero, the system asynchronously deallocates the queue.
It is important to retain and release dispatch objects, such as queues, to ensure that they remain in memory
while they are being used. As with memory-managed Cocoa objects, the general rule is that if you plan to use
a queue that was passed to your code, you should retain the queue before you use it and release it when you
no longer need it. This basic pattern ensures that the queue remains in memory for as long as you are using
it.
Note: You do not need to retain or release any of the global dispatch queues, including the
concurrent dispatch queues or the main dispatch queue. Any attempts to retain or release the queues
are ignored.
Even if you implement a garbage-collected application, you must still retain and release your dispatch queues
and other dispatch objects. Grand Central Dispatch does not support the garbage collection model for reclaiming
memory.
Storing Custom Context Information with a Queue
All dispatch objects (including dispatch queues) allow you to associate custom context data with the object.
To set and get this data on a given object, you use the dispatch_set_context and dispatch_get_context
functions. The system does not use your custom data in any way, and it is up to you to both allocate and
deallocate the data at the appropriate times.
For queues, you can use context data to store a pointer to an Objective-C object or other data structure that
helps identify the queue or its intended usage to your code. You can use the queue’s finalizer function to
deallocate (or disassociate) your context data from the queue before it is deallocated. An example of how to
write a finalizer function that clears a queue’s context data is shown in Listing 3-3 (page 46).
Providing a Clean Up Function For a Queue
After you create a serial dispatch queue, you can attach a finalizer function to perform any custom clean up
when the queue is deallocated. Dispatch queues are reference-counted objects, and you can use the
dispatch_set_finalizer_f function to specify a function to be executed when the reference count of
your queue reaches zero. You can use this function to clean up the context data associated with a queue;
the finalizer is called only if the context pointer is not NULL.
Listing 3-3 shows a custom finalizer function and a function that creates a queue and installs that finalizer. The
queue uses the finalizer function to release the data stored in the queue’s context pointer. (The
myInitializeDataContextFunction and myCleanUpDataContextFunction functions referenced from
the code are custom functions that you would provide to initialize and clean up the contents of the data
structure itself.) The context pointer passed to the finalizer function contains the data object associated with
the queue.
Listing 3-3 Installing a queue clean up function
void myFinalizerFunction(void *context)
{
MyDataContext* theData = (MyDataContext*)context;
// Clean up the contents of the structure
myCleanUpDataContextFunction(theData);
// Now release the structure itself.
free(theData);
}
dispatch_queue_t createMyQueue()
{
MyDataContext* data = (MyDataContext*) malloc(sizeof(MyDataContext));
myInitializeDataContextFunction(data);
// Create the queue and set the context data.
dispatch_queue_t serialQueue =
dispatch_queue_create("com.example.CriticalTaskQueue", NULL);
if (serialQueue)
{
dispatch_set_context(serialQueue, data);
dispatch_set_finalizer_f(serialQueue, &myFinalizerFunction);
}
return serialQueue;
}
Adding Tasks to a Queue
To execute a task, you must dispatch it to an appropriate dispatch queue. You can dispatch tasks synchronously
or asynchronously, and you can dispatch them singly or in groups. Once a task is in a queue, the queue becomes
responsible for executing it as soon as possible, given its constraints and the tasks already in
the queue. This section shows you some of the techniques for dispatching tasks to a queue and describes the
advantages of each.
Adding a Single Task to a Queue
There are two ways to add a task to a queue: asynchronously or synchronously. When possible, asynchronous
execution using the dispatch_async and dispatch_async_f functions is preferred over the synchronous
alternative. When you add a block object or function to a queue, there is no way to know when that code will
execute. As a result, adding blocks or functions asynchronously lets you schedule the execution of the code
and continue to do other work from the calling thread. This is especially important if you are scheduling the
task from your application’s main thread—perhaps in response to some user event.
Although you should add tasks asynchronously whenever possible, there may still be times when you need
to add a task synchronously to prevent race conditions or other synchronization errors. In these instances, you
can use the dispatch_sync and dispatch_sync_f functions to add the task to the queue. These functions
block the current thread of execution until the specified task finishes executing.
Important: You should never call the dispatch_sync or dispatch_sync_f function from a task that
is executing in the same queue that you are planning to pass to the function. Doing so is guaranteed
to deadlock a serial queue, and it should be avoided for concurrent queues as well.
The following example shows how to use the block-based variants for dispatching tasks asynchronously and
synchronously:
dispatch_queue_t myCustomQueue;
myCustomQueue = dispatch_queue_create("com.example.MyCustomQueue", NULL);
dispatch_async(myCustomQueue, ^{
printf("Do some work here.\n");
});
printf("The first block may or may not have run.\n");
dispatch_sync(myCustomQueue, ^{
printf("Do some more work here.\n");
});
printf("Both blocks have completed.\n");
Performing a Completion Block When a Task Is Done
By their nature, tasks dispatched to a queue run independently of the code that created them. However, when
the task is done, your application might still want to be notified of that fact so that it can incorporate the
results. With traditional asynchronous programming, you might do this using a callback mechanism, but with
dispatch queues you can use a completion block.
A completion block is just another piece of code that you dispatch to a queue at the end of your original task.
The calling code typically provides the completion block as a parameter when it starts the task. All the task
code has to do is submit the specified block or function to the specified queue when it finishes its work.
Listing 3-4 shows an averaging function implemented using blocks. The last two parameters to the averaging
function allow the caller to specify a queue and block to use when reporting the results. After the averaging
function computes its value, it passes the results to the specified block and dispatches the block to the queue. To
prevent the queue from being released prematurely, it is critical to retain that queue initially and release it
once the completion block has been dispatched.
Listing 3-4 Executing a completion callback after a task
void average_async(int *data, size_t len,
dispatch_queue_t queue, void (^block)(int))
{
// Retain the queue provided by the user to make
// sure it does not disappear before the completion
// block can be called.
dispatch_retain(queue);
// Do the work on the default concurrent queue and then
// call the user-provided block with the results.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
^{
int avg = average(data, len);
dispatch_async(queue, ^{ block(avg);});
// Release the user-provided queue when done
dispatch_release(queue);
});
}
Performing Loop Iterations Concurrently
Concurrent dispatch queues can improve performance when you have a loop
that performs a fixed number of iterations. For example, suppose you have a for loop that does some work
through each loop iteration:
for (size_t i = 0; i < count; i++) {
    printf("%zu\n", i);
}
If the work performed during each iteration is distinct from the work performed during all other iterations,
and the order in which each successive loop finishes is unimportant, you can replace the loop with a call to
the dispatch_apply or dispatch_apply_f function. These functions submit the specified block or function
to a queue once for each loop iteration. When dispatched to a concurrent queue, it is therefore possible to
perform multiple loop iterations at the same time.
You can specify either a serial queue or a concurrent queue when calling dispatch_apply or
dispatch_apply_f. Passing in a concurrent queue allows you to perform multiple loop iterations
simultaneously and is the most common way to use these functions. Although using a serial queue is permissible
and does the right thing for your code, using such a queue has no real performance advantages over leaving
the loop in place.
Important: Like a regular for loop, the dispatch_apply and dispatch_apply_f functions do not
return until all loop iterations are complete. You should therefore be careful when calling them from code
that is already executing from the context of a queue. If the queue you pass as a parameter to the function
is a serial queue and is the same one executing the current code, calling these functions will deadlock the
queue.
Because they effectively block the current thread, you should also be careful when calling these functions
from your main thread, where they could prevent your event handling loop from responding to events in
a timely manner. If your loop code requires a noticeable amount of processing time, you might want to
call these functions from a different thread.
Listing 3-5 shows how to replace the preceding for loop with the dispatch_apply syntax. The block you
pass in to the dispatch_apply function must take a single parameter that identifies the current loop
iteration. When the block is executed, the value of this parameter is 0 for the first iteration, 1 for the second,
and so on. The value of the parameter for the last iteration is count - 1, where count is the total number
of iterations.
Listing 3-5 Performing the iterations of a for loop concurrently
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(count, queue, ^(size_t i) {
    printf("%zu\n", i);
});
You should make sure that your task code does a reasonable amount of work through each iteration. As with
any block or function you dispatch to a queue, there is overhead to scheduling that code for execution. If each
iteration of your loop performs only a small amount of work, the overhead of scheduling the code may outweigh
the performance benefits you might achieve from dispatching it to a queue. If you find this is true during your
testing, you can use striding to increase the amount of work performed during each loop iteration. With striding,
you group together multiple iterations of your original loop into a single block and reduce the iteration count
proportionately. For example, if you perform 100 iterations initially but decide to use a stride of 4, you now
perform 4 loop iterations from each block and your iteration count is 25. For an example of how to implement
striding, see “Improving on Loop Code” (page 77).
Performing Tasks on the Main Thread
Grand Central Dispatch provides a special dispatch queue that you can use to execute tasks on your application’s
main thread. This queue is provided automatically for all applications and is drained automatically by any
application that sets up a run loop (managed by either a CFRunLoopRef type or an NSRunLoop object) on its
main thread. If you are not creating a Cocoa application and do not want to set up a run loop explicitly, you
must call the dispatch_main function to drain the main dispatch queue explicitly. You can still add tasks to
the queue, but if you do not call this function those tasks are never executed.
You can get the dispatch queue for your application’s main thread by calling the dispatch_get_main_queue
function. Tasks added to this queue are performed serially on the main thread itself. Therefore, you can use
this queue as a synchronization point for work being done in other parts of your application.
Using Objective-C Objects in Your Tasks
GCD provides built-in support for Cocoa memory management techniques so you may freely use Objective-C
objects in the blocks you submit to dispatch queues. Each dispatch queue maintains its own autorelease pool
to ensure that autoreleased objects are released at some point; queues make no guarantee about when they
actually release those objects.
If your application is memory constrained and your block creates more than a few autoreleased objects, creating
your own autorelease pool is the only way to ensure that your objects are released in a timely manner. If your
block creates hundreds of objects, you might want to create more than one autorelease pool or drain your
pool at regular intervals.
For more information about autorelease pools and Objective-C memory management, see Advanced Memory
Management Programming Guide .
Suspending and Resuming Queues
You can prevent a queue from executing block objects temporarily by suspending it. You suspend a dispatch
queue using the dispatch_suspend function and resume it using the dispatch_resume function. Calling
dispatch_suspend increments the queue’s suspension reference count, and calling dispatch_resume
decrements the reference count. While the reference count is greater than zero, the queue remains suspended.
Therefore, you must balance all suspend calls with a matching resume call in order to resume processing blocks.
Important: Suspend and resume calls are asynchronous and take effect only between the execution of
blocks. Suspending a queue does not cause an already executing block to stop.
Using Dispatch Semaphores to Regulate the Use of Finite Resources
If the tasks you are submitting to dispatch queues access some finite resource, you may want to use a dispatch
semaphore to regulate the number of tasks simultaneously accessing that resource. A dispatch semaphore
works like a regular semaphore with one exception. When resources are available, it takes less time to acquire
a dispatch semaphore than it does to acquire a traditional system semaphore. This is because Grand Central
Dispatch does not call down into the kernel for this particular case. The only time it calls down into the kernel
is when the resource is not available and the system needs to park your thread until the semaphore is signaled.
The semantics for using a dispatch semaphore are as follows:
1. When you create the semaphore (using the dispatch_semaphore_create function), you can specify
a positive integer indicating the number of resources available.
2. In each task, call dispatch_semaphore_wait to wait on the semaphore.
3. When the wait call returns, acquire the resource and do your work.
4. When you are done with the resource, release it and signal the semaphore by calling the
dispatch_semaphore_signal function.
For an example of how these steps work, consider the use of file descriptors on the system. Each application
is given a limited number of file descriptors to use. If you have a task that processes large numbers of files, you
do not want to open so many files at one time that you run out of file descriptors. Instead, you can use a
semaphore to limit the number of file descriptors in use at any one time by your file-processing code. The
basic pieces of code you would incorporate into your tasks are as follows:
// Create the semaphore, specifying the initial pool size
dispatch_semaphore_t fd_sema = dispatch_semaphore_create(getdtablesize() / 2);
// Wait for a free file descriptor
dispatch_semaphore_wait(fd_sema, DISPATCH_TIME_FOREVER);
int fd = open("/etc/services", O_RDONLY);
// Release the file descriptor when done
close(fd);
dispatch_semaphore_signal(fd_sema);
When you create the semaphore, you specify the number of available resources. This value becomes the initial
count variable for the semaphore. Each time you wait on the semaphore, the dispatch_semaphore_wait
function decrements that count variable by 1. If the resulting value is negative, the function tells the kernel to
block your thread. On the other end, the dispatch_semaphore_signal function increments the count
variable by 1 to indicate that a resource has been freed up. If there are tasks blocked and waiting for a resource,
one of them is subsequently unblocked and allowed to do its work.
Waiting on Groups of Queued Tasks
Dispatch groups are a way to block a thread until one or more tasks finish executing. You can use this behavior
in places where you cannot make progress until all of the specified tasks are complete. For example, after
dispatching several tasks to compute some data, you might use a group to wait on those tasks and then process
the results when they are done. Another way to use dispatch groups is as an alternative to thread joins. Instead
of starting several child threads and then joining with each of them, you could add the corresponding tasks
to a dispatch group and wait on the entire group.
Listing 3-6 shows the basic process for setting up a group, dispatching tasks to it, and waiting on the results.
Instead of dispatching tasks to a queue using the dispatch_async function, you use the
dispatch_group_async function. This function associates the task with the group and queues it for
execution. To wait on a group of tasks to finish, you then use the dispatch_group_wait function, passing
in the appropriate group.
Listing 3-6 Waiting on asynchronous tasks
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();
// Add a task to the group
dispatch_group_async(group, queue, ^{
// Some asynchronous work
});
// Do some other work while the tasks execute.
// When you cannot make any more forward progress,
// wait on the group to block the current thread.
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
// Release the group when it is no longer needed.
dispatch_release(group);
Dispatch Queues and Thread Safety
It might seem odd to talk about thread safety in the context of dispatch queues, but thread safety is still a
relevant topic. Any time you are implementing concurrency in your application, there are a few things you
should know:
● Dispatch queues themselves are thread safe. In other words, you can submit tasks to a dispatch queue
from any thread on the system without first taking a lock or synchronizing access to the queue.
● Do not call the dispatch_sync function from a task that is executing on the same queue that you pass
to your function call. Doing so will deadlock the queue. If you need to dispatch to the current queue, do
so asynchronously using the dispatch_async function.
● Avoid taking locks from the tasks you submit to a dispatch queue. Although it is safe to use locks from
your tasks, when you acquire the lock, you risk blocking a serial queue entirely if that lock is unavailable.
Similarly, for concurrent queues, waiting on a lock might prevent other tasks from executing instead. If
you need to synchronize parts of your code, use a serial dispatch queue instead of a lock.
● Although you can obtain information about the underlying thread running a task, it is better to avoid
doing so. For more information about the compatibility of dispatch queues with threads, see “Compatibility
with POSIX Threads” (page 82).
For additional tips on how to change your existing threaded code to use dispatch queues, see “Migrating Away
from Threads” (page 74).
Dispatch Sources

Whenever you interact with the underlying system, you must be prepared for that task to take a nontrivial
amount of time. Calling down to the kernel or other system layers involves a change in context that is reasonably
expensive compared to calls that occur within your own process. As a result, many system libraries provide
asynchronous interfaces to allow your code to submit a request to the system and continue to do other work
while that request is processed. Grand Central Dispatch builds on this general behavior by allowing you to
submit your request and have the results reported back to your code using blocks and dispatch queues.
About Dispatch Sources
A dispatch source is a fundamental data type that coordinates the processing of specific low-level system
events. Grand Central Dispatch supports the following types of dispatch sources:
● Timer dispatch sources generate periodic notifications.
● Signal dispatch sources notify you when a UNIX signal arrives.
● Descriptor sources notify you of various file- and socket-based operations, such as:
● When data is available for reading
● When it is possible to write data
● When files are deleted, moved, or renamed in the file system
● When file meta information changes
● Process dispatch sources notify you of process-related events, such as:
● When a process exits
● When a process issues a fork or exec type of call
● When a signal is delivered to the process
● Mach port dispatch sources notify you of Mach-related events.
● Custom dispatch sources are ones you define and trigger yourself.
Dispatch sources replace the asynchronous callback functions that are typically used to process system-related
events. When you configure a dispatch source, you specify the events you want to monitor and the dispatch
queue and code to use to process those events. You can specify your code using block objects or functions.
When an event of interest arrives, the dispatch source submits your block or function to the specified dispatch
queue for execution.
Unlike tasks that you submit to a queue manually, dispatch sources provide a continuous source of events for
your application. A dispatch source remains attached to its dispatch queue until you cancel it explicitly. While
attached, it submits its associated task code to the dispatch queue whenever the corresponding event occurs.
Some events, such as timer events, occur at regular intervals but most occur only sporadically as specific
conditions arise. For this reason, dispatch sources retain their associated dispatch queue to prevent it from
being released prematurely while events may still be pending.
To prevent events from becoming backlogged in a dispatch queue, dispatch sources implement an event
coalescing scheme. If a new event arrives before the event handler for a previous event has been dequeued
and executed, the dispatch source coalesces the data from the new event with data from the old event.
Depending on the type of event, coalescing may replace the old event or update the information it holds. For
example, a signal-based dispatch source provides information about only the most recent signal but also
reports how many total signals have been delivered since the last invocation of the event handler.
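To make the coalescing behavior concrete, the following sketch models it in plain, portable C. The FakeSource type and fake_* functions are invented for illustration and are not part of the GCD API; they mimic how data arriving before the handler runs is merged into a single pending value (using addition as the merge rule, as a custom ADD-style source would) that one handler invocation then drains.

```c
#include <assert.h>

/* Invented model of event coalescing: data arriving before the handler
   runs is merged into one pending value. Not a GCD type. */
typedef struct {
    unsigned long pending;     /* coalesced data awaiting the handler */
    unsigned long delivered;   /* value seen by the last handler run */
    int invocations;           /* how many times the handler ran */
} FakeSource;

/* Merge new event data into the pending value (addition as the merge
   rule, mirroring what dispatch_source_merge_data does for ADD sources). */
static void fake_merge(FakeSource *s, unsigned long value) {
    s->pending += value;
}

/* Run the handler once if anything is pending; one run drains everything. */
static void fake_drain(FakeSource *s) {
    if (s->pending == 0)
        return;
    s->delivered = s->pending;  /* the handler sees the coalesced total */
    s->pending = 0;
    s->invocations++;
}
```

Three events merged before the handler runs produce one invocation that sees the total, which matches the signal-source behavior described above: one handler call, carrying the count of everything delivered since the last call.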
Creating Dispatch Sources
Creating a dispatch source involves creating both the source of the events and the dispatch source itself. The
source of the events is whatever native data structures are required to process the events. For example, for a
descriptor-based dispatch source you would need to open the descriptor and for a process-based source you
would need to obtain the process ID of the target program. When you have your event source, you can then
create the corresponding dispatch source as follows:
1. Create the dispatch source using the dispatch_source_create function.
2. Configure the dispatch source:
● Assign an event handler to the dispatch source; see “Writing and Installing an Event Handler” (page 58).
● For timer sources, set the timer information using the dispatch_source_set_timer function; see
“Creating a Timer” (page 62).
3. Optionally assign a cancellation handler to the dispatch source; see “Installing a Cancellation Handler” (page 60).
4. Call the dispatch_resume function to start processing events; see “Suspending and Resuming Dispatch
Sources” (page 73).
Because dispatch sources require some additional configuration before they can be used, the
dispatch_source_create function returns dispatch sources in a suspended state. While suspended, a
dispatch source receives events but does not process them. This gives you time to install an event handler and
perform any additional configuration needed to process the actual events.
The following sections show you how to configure various aspects of a dispatch source. For detailed examples
showing you how to configure specific types of dispatch sources, see “Dispatch Source Examples” (page 62).
For additional information about the functions you use to create and configure dispatch sources, see Grand
Central Dispatch (GCD) Reference.
Writing and Installing an Event Handler
To handle the events generated by a dispatch source, you must define an event handler to process those
events. An event handler is a function or block object that you install on your dispatch source using the
dispatch_source_set_event_handler or dispatch_source_set_event_handler_f function. When
an event arrives, the dispatch source submits your event handler to the designated dispatch queue for
processing.
The body of your event handler is responsible for processing any events that arrive. If your event handler is
already queued and waiting to process an event when a new event arrives, the dispatch source coalesces the
two events. An event handler generally sees information only for the most recent event, but depending on
the type of the dispatch source it may also be able to get information about other events that occurred and
were coalesced. If one or more new events arrive after the event handler has begun executing, the dispatch
source holds onto those events until the current event handler has finished executing. At that point, it submits
the event handler to the queue again with the new events.
Function-based event handlers take a single context pointer, containing the dispatch source object, and return
no value. Block-based event handlers take no parameters and have no return value.
// Block-based event handler
void (^dispatch_block_t)(void)
// Function-based event handler
void (*dispatch_function_t)(void *)
Inside your event handler, you can get information about the given event from the dispatch source itself.
Although function-based event handlers are passed a pointer to the dispatch source as a parameter, block-based
event handlers must capture that pointer themselves. You can do this for your blocks by referencing the variable
containing the dispatch source normally. For example, the following code snippet captures the source variable,
which is declared outside the scope of the block.
dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ,
myDescriptor, 0, myQueue);
dispatch_source_set_event_handler(source, ^{
// Get some data from the source variable, which is captured
// from the parent context.
size_t estimated = dispatch_source_get_data(source);
// Continue reading the descriptor...
});
dispatch_resume(source);
Capturing variables inside of a block is commonly done to allow for greater flexibility and dynamism. Of course,
captured variables are read-only within the block by default. Although the blocks feature provides support for
modifying captured variables under specific circumstances, you should not attempt to do so in the event
handlers associated with a dispatch source. Dispatch sources always execute their event handlers asynchronously,
so the defining scope of any variables you captured is likely gone by the time your event handler executes.
For more information about how to capture and use variables inside of blocks, see Blocks Programming Topics.
Table 4-1 lists the functions you can call from your event handler code to obtain information about an event.
Table 4-1 Getting data from a dispatch source

dispatch_source_get_handle
This function returns the underlying system data type that the dispatch source manages.
For a descriptor dispatch source, this function returns an int type containing the descriptor associated with the dispatch source.
For a signal dispatch source, this function returns an int type containing the signal number for the most recent event.
For a process dispatch source, this function returns a pid_t data structure for the process being monitored.
For a Mach port dispatch source, this function returns a mach_port_t data structure.
For other dispatch sources, the value returned by this function is undefined.

dispatch_source_get_data
This function returns any pending data associated with the event.
For a descriptor dispatch source that reads data from a file, this function returns the number of bytes available for reading.
For a descriptor dispatch source that writes data to a file, this function returns a positive integer if space is available for writing.
For a descriptor dispatch source that monitors file system activity, this function returns a constant indicating the type of event that occurred. For a list of constants, see the dispatch_source_vnode_flags_t enumerated type.
For a process dispatch source, this function returns a constant indicating the type of event that occurred. For a list of constants, see the dispatch_source_proc_flags_t enumerated type.
For a Mach port dispatch source, this function returns a constant indicating the type of event that occurred. For a list of constants, see the dispatch_source_machport_flags_t enumerated type.
For a custom dispatch source, this function returns the new data value created from the existing data and the new data passed to the dispatch_source_merge_data function.

dispatch_source_get_mask
This function returns the event flags that were used to create the dispatch source.
For a process dispatch source, this function returns a mask of the events that the dispatch source receives. For a list of constants, see the dispatch_source_proc_flags_t enumerated type.
For a Mach port dispatch source with send rights, this function returns a mask of the desired events. For a list of constants, see the dispatch_source_mach_send_flags_t enumerated type.
For a custom OR dispatch source, this function returns the mask used to merge the data values.
For some examples of how to write and install event handlers for specific types of dispatch sources, see “Dispatch
Source Examples” (page 62).
Installing a Cancellation Handler
Cancellation handlers are used to clean up a dispatch source before it is released. For most types of dispatch
sources, cancellation handlers are optional and only necessary if you have some custom behaviors tied to the
dispatch source that also need to be updated. For dispatch sources that use a descriptor or Mach port, however,
you must provide a cancellation handler to close the descriptor or release the Mach port. Failure to do so can
lead to subtle bugs in your code resulting from those structures being reused unintentionally by your code or
other parts of the system.
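The reuse hazard is easy to reproduce without GCD: POSIX requires open to return the lowest-numbered free descriptor, so a number you close is handed right back by the next open. A dispatch source still referring to the old number would then silently operate on an unrelated file. The helper name below is invented for this sketch:

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Demonstrate descriptor-number reuse: POSIX open() returns the lowest
   unused descriptor, so a closed number is immediately recycled. This is
   why descriptor sources need a cancellation handler that closes the
   descriptor only after the source is canceled. */
static int fd_is_recycled(void) {
    int a = open("/dev/null", O_RDONLY);
    if (a < 0)
        return -1;
    close(a);                             /* number 'a' is now free */
    int b = open("/dev/null", O_RDONLY);  /* gets the same number back */
    int same = (a == b);
    close(b);
    return same;
}
```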
You can install a cancellation handler at any time but usually you would do so when creating the dispatch
source. You install a cancellation handler using the dispatch_source_set_cancel_handler or
dispatch_source_set_cancel_handler_f function, depending on whether you want to use a block object
or a function in your implementation. The following example shows a simple cancellation handler that closes
a descriptor that was opened for a dispatch source. The fd variable is a captured variable containing the
descriptor.
dispatch_source_set_cancel_handler(mySource, ^{
close(fd); // Close a file descriptor opened earlier.
});
To see a complete code example for a dispatch source that uses a cancellation handler, see “Reading Data
from a Descriptor” (page 64).
Changing the Target Queue
Although you specify the queue on which to run your event and cancellation handlers when you create a
dispatch source, you can change that queue at any time using the dispatch_set_target_queue function.
You might do this to change the priority at which the dispatch source’s events are processed.
Changing a dispatch source’s queue is an asynchronous operation and the dispatch source does its best to
make the change as quickly as possible. If an event handler is already queued and waiting to be processed, it
executes on the previous queue. However, other events arriving around the time you make the change could
be processed on either queue.
Associating Custom Data with a Dispatch Source
Like many other data types in Grand Central Dispatch, you can use the dispatch_set_context function to
associate custom data with a dispatch source. You can use the context pointer to store any data your event
handler needs to process events. If you do store any custom data in the context pointer, you should also install
a cancellation handler (as described in “Installing a Cancellation Handler” (page 60)) to release that data when
the dispatch source is no longer needed.
If you implement your event handler using blocks, you can also capture local variables and use them within
your block-based code. Although this might alleviate the need to store data in the context pointer of the
dispatch source, you should always use this feature judiciously. Because dispatch sources may be long-lived
in your application, you should be careful when capturing variables containing pointers. If the data pointed
to by a pointer could be deallocated at any time, you should either copy the data or retain it to prevent that
from happening. In either case, you would then need to provide a cancellation handler to release the data
later.
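The pattern of pairing a context pointer with a cleanup callback can be modeled generically. The types and functions below are invented stand-ins, not GCD API; they show the ownership rule described above: whoever attaches heap-allocated context also registers the cleanup that releases it, and cancellation runs that cleanup exactly once.

```c
#include <assert.h>
#include <stdlib.h>

/* Invented stand-in for a long-lived object carrying attached context. */
typedef struct {
    void *context;
    void (*cleanup)(void *context);
} FakeLongLivedSource;

static int g_cleanup_calls;  /* counts cleanup runs for the demo */

/* Demo cleanup: releases heap context, as a cancellation handler would. */
static void demo_cleanup(void *context) {
    free(context);
    g_cleanup_calls++;
}

/* Attach context plus the cleanup responsible for releasing it. */
static void fake_set_context(FakeLongLivedSource *s, void *context,
                             void (*cleanup)(void *)) {
    s->context = context;
    s->cleanup = cleanup;
}

/* The analogue of cancellation: run the registered cleanup exactly once. */
static void fake_cancel(FakeLongLivedSource *s) {
    if (s->cleanup) {
        s->cleanup(s->context);
        s->cleanup = NULL;
        s->context = NULL;
    }
}
```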
Memory Management for Dispatch Sources
Like other dispatch objects, dispatch sources are reference counted data types. A dispatch source has an initial
reference count of 1 and can be retained and released using the dispatch_retain and dispatch_release
functions. When the reference count of a dispatch source reaches zero, the system automatically deallocates the dispatch
source data structures.
Because of the way they are used, the ownership of dispatch sources can be managed either internally or
externally to the dispatch source itself. With external ownership, another object or piece of code takes ownership
of the dispatch source and is responsible for releasing it when it is no longer needed. With internal ownership,
the dispatch source owns itself and is responsible for releasing itself at the appropriate time. Although external
ownership is very common, you might use internal ownership in cases where you want to create an autonomous
dispatch source and let it manage some behavior of your code without any further interactions. For example,
if a dispatch source is designed to respond to a single global event, you might have it handle that event and
then exit immediately.
Dispatch Source Examples
The following sections show you how to create and configure some of the more commonly used dispatch
sources. For more information about configuring specific types of dispatch sources, see Grand Central Dispatch
(GCD) Reference.
Creating a Timer
Timer dispatch sources generate events at regular, time-based intervals. You can use timers to initiate specific
tasks that need to be performed regularly. For example, games and other graphics-intensive applications might
use timers to initiate screen or animation updates. You could also set up a timer and use the resulting events
to check for new information on a frequently updated server.
All timer dispatch sources are interval timers—that is, once created, they deliver regular events at the interval
you specify. When you create a timer dispatch source, one of the values you must specify is a leeway value to
give the system some idea of the desired accuracy for timer events. Leeway values give the system some
flexibility in how it manages power and wakes up cores. For example, the system might use the leeway value
to advance or delay the fire time and align it better with other system events. You should therefore specify a
leeway value whenever possible for your own timers.
Note: Even if you specify a leeway value of 0, you should never expect a timer to fire at the exact
nanosecond you requested. The system does its best to accommodate your needs but cannot
guarantee exact firing times.
When a computer goes to sleep, all timer dispatch sources are suspended. When the computer wakes up,
those timer dispatch sources are automatically woken up as well. Depending on the configuration of the timer,
pauses of this nature may affect when your timer fires next. If you set up your timer dispatch source using the
dispatch_time function or the DISPATCH_TIME_NOW constant, the timer dispatch source uses the default
system clock to determine when to fire. However, the default clock does not advance while the computer is
asleep. By contrast, when you set up your timer dispatch source using the dispatch_walltime function, the
timer dispatch source tracks its firing time to the wall clock time. This latter option is typically appropriate for
timers whose firing interval is relatively large because it prevents there from being too much drift between
event times.
Listing 4-1 shows an example of a timer that fires once every 30 seconds and has a leeway value of 1 second.
Because the timer interval is relatively large, the dispatch source is created using the dispatch_walltime
function. The first firing of the timer occurs immediately and subsequent events arrive every 30 seconds. The
MyPeriodicTask and MyStoreTimer symbols represent custom functions that you would write to implement
the timer behavior and to store the timer somewhere in your application’s data structures.
Listing 4-1 Creating a timer dispatch source
dispatch_source_t CreateDispatchTimer(uint64_t interval,
uint64_t leeway,
dispatch_queue_t queue,
dispatch_block_t block)
{
dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER,
0, 0, queue);
if (timer)
{
dispatch_source_set_timer(timer, dispatch_walltime(NULL, 0), interval,
leeway);
dispatch_source_set_event_handler(timer, block);
dispatch_resume(timer);
}
return timer;
}
void MyCreateTimer()
{
dispatch_source_t aTimer = CreateDispatchTimer(30ull * NSEC_PER_SEC,
1ull * NSEC_PER_SEC,
dispatch_get_main_queue(),
^{ MyPeriodicTask(); });
// Store it somewhere for later use.
if (aTimer)
{
MyStoreTimer(aTimer);
}
}
Although creating a timer dispatch source is the main way to receive time-based events, there are other options
available as well. If you want to perform a block once after a specified time interval, you can use the
dispatch_after or dispatch_after_f function. This function behaves much like the dispatch_async
function except that it allows you to specify a time value at which to submit the block to a queue. The time
value can be specified as a relative or absolute time value depending on your needs.
Reading Data from a Descriptor
To read data from a file or socket, you must open the file or socket and create a dispatch source of type
DISPATCH_SOURCE_TYPE_READ. The event handler you specify should be capable of reading and processing
the contents of the file descriptor. In the case of a file, this amounts to reading the file data (or a subset of that
data) and creating the appropriate data structures for your application. For a network socket, this involves
processing newly received network data.
Whenever reading data, you should always configure your descriptor to use non-blocking operations. Although
you can use the dispatch_source_get_data function to see how much data is available for reading, the
number returned by that function could change between the time you make the call and the time you actually
read the data. If the underlying file is truncated or a network error occurs, reading from a descriptor that blocks
the current thread could stall your event handler in mid execution and prevent the dispatch queue from
dispatching other tasks. For a serial queue, this could deadlock your queue, and even for a concurrent queue
this reduces the number of new tasks that can be started.
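The nonblocking behavior recommended here can be demonstrated with a plain POSIX pipe, independent of GCD. With O_NONBLOCK set, a read on an empty descriptor returns immediately with EAGAIN rather than stalling the thread; the helper name is invented for this sketch:

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Show why descriptors handed to a read dispatch source should be
   nonblocking: with O_NONBLOCK set, read() on an empty pipe fails
   immediately with EAGAIN instead of blocking the calling thread. */
static int read_would_block(void) {
    int fds[2];
    if (pipe(fds) != 0)
        return -1;
    fcntl(fds[0], F_SETFL, O_NONBLOCK);  /* make the read end nonblocking */
    char byte;
    ssize_t n = read(fds[0], &byte, 1);  /* pipe is empty: returns at once */
    int blocked = (n == -1 && errno == EAGAIN);
    close(fds[0]);
    close(fds[1]);
    return blocked;
}
```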
Listing 4-2 shows an example that configures a dispatch source to read data from a file. In this example, the
event handler reads the entire contents of the specified file into a buffer and calls a custom function (that you
would define in your own code) to process the data. (The caller of this function would use the returned dispatch
source to cancel it once the read operation was completed.) To ensure that the dispatch queue does not block
unnecessarily when there is no data to read, this example uses the fcntl function to configure the file descriptor
to perform nonblocking operations. The cancellation handler installed on the dispatch source ensures that the
file descriptor is closed after the data is read.
Listing 4-2 Reading data from a file
dispatch_source_t ProcessContentsOfFile(const char* filename)
{
// Prepare the file for reading.
int fd = open(filename, O_RDONLY);
if (fd == -1)
return NULL;
fcntl(fd, F_SETFL, O_NONBLOCK); // Avoid blocking the read operation
dispatch_queue_t queue =
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_source_t readSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ,
fd, 0, queue);
if (!readSource)
{
close(fd);
return NULL;
}
// Install the event handler
dispatch_source_set_event_handler(readSource, ^{
size_t estimated = dispatch_source_get_data(readSource) + 1;
// Read the data into a text buffer.
char* buffer = (char*)malloc(estimated);
if (buffer)
{
ssize_t actual = read(fd, buffer, (estimated));
Boolean done = MyProcessFileData(buffer, actual); // Process the data.
// Release the buffer when done.
free(buffer);
// If there is no more data, cancel the source.
if (done)
dispatch_source_cancel(readSource);
}
});
// Install the cancellation handler
dispatch_source_set_cancel_handler(readSource, ^{close(fd);});
// Start reading the file.
dispatch_resume(readSource);
return readSource;
}
In the preceding example, the custom MyProcessFileData function determines when enough file data has
been read and the dispatch source can be canceled. By default, a dispatch source configured for reading from
a descriptor schedules its event handler repeatedly while there is still data to read. If the socket connection
closes or you reach the end of a file, the dispatch source automatically stops scheduling the event handler. If
you know you do not need a dispatch source though, you can cancel it directly yourself.
Writing Data to a Descriptor
The process for writing data to a file or socket is very similar to the process for reading data. After configuring
a descriptor for write operations, you create a dispatch source of type DISPATCH_SOURCE_TYPE_WRITE. Once
that dispatch source is created, the system calls your event handler to give it a chance to start writing data to
the file or socket. When you are finished writing data, use the dispatch_source_cancel function to cancel
the dispatch source.
Whenever writing data, you should always configure your file descriptor to use non-blocking operations.
Although you can use the dispatch_source_get_data function to see how much space is available for
writing, the value returned by that function is advisory only and could change between the time you make
the call and the time you actually write the data. If an error occurs, writing data to a blocking file descriptor
could stall your event handler in mid execution and prevent the dispatch queue from dispatching other tasks.
For a serial queue, this could deadlock your queue, and even for a concurrent queue this reduces the number
of new tasks that can be started.
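The write-side hazard can be demonstrated the same way as the read side, without GCD: once a pipe's kernel buffer is full, a nonblocking write fails with EAGAIN instead of stalling the thread until a reader drains it. The helper name is invented for this sketch:

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Show why descriptors handed to a write dispatch source should be
   nonblocking: when the pipe buffer fills, write() fails with EAGAIN
   rather than blocking until space becomes available. */
static int write_would_block(void) {
    int fds[2];
    if (pipe(fds) != 0)
        return -1;
    fcntl(fds[1], F_SETFL, O_NONBLOCK);  /* make the write end nonblocking */
    char chunk[4096];
    memset(chunk, 'x', sizeof(chunk));
    ssize_t n;
    do {                                 /* fill the kernel pipe buffer */
        n = write(fds[1], chunk, sizeof(chunk));
    } while (n > 0);
    int blocked = (n == -1 && errno == EAGAIN);
    close(fds[0]);
    close(fds[1]);
    return blocked;
}
```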
Listing 4-3 shows the basic approach for writing data to a file using a dispatch source. After creating the new
file, this function passes the resulting file descriptor to its event handler. The data being put into the file is
provided by the MyGetData function, which you would replace with whatever code you needed to generate
the data for the file. After writing the data to the file, the event handler cancels the dispatch source to prevent
it from being called again. The owner of the dispatch source would then be responsible for releasing it.
Listing 4-3 Writing data to a file
dispatch_source_t WriteDataToFile(const char* filename)
{
int fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC,
(S_IRUSR | S_IWUSR | S_ISUID | S_ISGID));
if (fd == -1)
return NULL;
fcntl(fd, F_SETFL, O_NONBLOCK); // Avoid blocking the write operation
dispatch_queue_t queue =
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_source_t writeSource =
dispatch_source_create(DISPATCH_SOURCE_TYPE_WRITE,
fd, 0, queue);
if (!writeSource)
{
close(fd);
return NULL;
}
dispatch_source_set_event_handler(writeSource, ^{
size_t bufferSize = MyGetDataSize();
void* buffer = malloc(bufferSize);
size_t actual = MyGetData(buffer, bufferSize);
write(fd, buffer, actual);
free(buffer);
// Cancel and release the dispatch source when done.
dispatch_source_cancel(writeSource);
});
dispatch_source_set_cancel_handler(writeSource, ^{close(fd);});
dispatch_resume(writeSource);
return (writeSource);
}
Monitoring a File-System Object
If you want to monitor a file system object for changes, you can set up a dispatch source of type
DISPATCH_SOURCE_TYPE_VNODE. You can use this type of dispatch source to receive notifications when a
file is deleted, written to, or renamed. You can also use it to be alerted when specific types of meta information
for a file (such as its size and link count) change.
Note: The file descriptor you specify for your dispatch source must remain open while the source
itself is processing events.
Listing 4-4 shows an example that monitors a file for name changes and performs some custom behavior when
it does. (You would provide the actual behavior in place of the MyUpdateFileName function called in the
example.) Because a descriptor is opened specifically for the dispatch source, the dispatch source includes a
cancellation handler that closes the descriptor. Because the file descriptor created by the example is associated
with the underlying file-system object, this same dispatch source can be used to detect any number of filename
changes.
Listing 4-4 Watching for filename changes
dispatch_source_t MonitorNameChangesToFile(const char* filename)
{
int fd = open(filename, O_EVTONLY);
if (fd == -1)
return NULL;
dispatch_queue_t queue =
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_VNODE,
fd, DISPATCH_VNODE_RENAME, queue);
if (source)
{
// Copy the filename for later use.
int length = strlen(filename);
char* newString = (char*)malloc(length + 1);
newString = strcpy(newString, filename);
dispatch_set_context(source, newString);
// Install the event handler to process the name change
dispatch_source_set_event_handler(source, ^{
const char* oldFilename = (char*)dispatch_get_context(source);
MyUpdateFileName(oldFilename, fd);
});
// Install a cancellation handler to free the descriptor
// and the stored string.
dispatch_source_set_cancel_handler(source, ^{
char* fileStr = (char*)dispatch_get_context(source);
free(fileStr);
close(fd);
});
// Start processing events.
dispatch_resume(source);
}
else
close(fd);
return source;
}
Monitoring Signals
UNIX signals allow the manipulation of an application from outside of its domain. An application can receive
many different types of signals ranging from unrecoverable errors (such as illegal instructions) to notifications
about important information (such as when a child process exits). Traditionally, applications use the sigaction
function to install a signal handler function, which processes signals synchronously as soon as they arrive. If
you just want to be notified of a signal’s arrival and do not actually want to handle the signal, you can use a
signal dispatch source to process the signals asynchronously.
Signal dispatch sources are not a replacement for the synchronous signal handlers you install using the
sigaction function. Synchronous signal handlers can actually catch a signal and prevent it from terminating
your application. Signal dispatch sources allow you to monitor only the arrival of the signal. In addition, you
cannot use signal dispatch sources to retrieve all types of signals. Specifically, you cannot use them to monitor
the SIGILL, SIGBUS, and SIGSEGV signals.
Because signal dispatch sources are executed asynchronously on a dispatch queue, they do not suffer from
some of the same limitations as synchronous signal handlers. For example, there are no restrictions on the
functions you can call from your signal dispatch source’s event handler. The tradeoff for this increased flexibility
is the fact that there may be some increased latency between the time a signal arrives and the time your
dispatch source’s event handler is called.
Listing 4-5 shows how you configure a signal dispatch source to handle the SIGHUP signal. The event handler
for the dispatch source calls the MyProcessSIGHUP function, which you would replace in your application
with code to process the signal.
Listing 4-5 Installing a block to monitor signals
void InstallSignalHandler()
{
// Make sure the signal does not terminate the application.
signal(SIGHUP, SIG_IGN);
dispatch_queue_t queue =
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_SIGNAL,
SIGHUP, 0, queue);
if (source)
{
dispatch_source_set_event_handler(source, ^{
MyProcessSIGHUP();
});
// Start processing signals
dispatch_resume(source);
}
}
If you are developing code for a custom framework, an advantage of using signal dispatch sources is that your
code can monitor signals independent of any applications linked to it. Signal dispatch sources do not interfere
with other dispatch sources or any synchronous signal handlers the application might have installed.
For more information about implementing synchronous signal handlers, and for a list of signal names, see
the signal man page.
Monitoring a Process
A process dispatch source lets you monitor the behavior of a specific process and respond appropriately. A
parent process might use this type of dispatch source to monitor any child processes it creates. For example,
the parent process could use it to watch for the death of a child process. Similarly, a child process could use it
to monitor its parent process and exit if the parent process exits.
Listing 4-6 shows the steps for installing a dispatch source to monitor for the termination of a parent process.
When the parent process dies, the dispatch source sets some internal state information to let the child process
know it should exit. (Your own application would need to implement the MySetAppExitFlag function to set
an appropriate flag for termination.) Because the dispatch source runs autonomously, and therefore owns
itself, it also cancels and releases itself in anticipation of the program shutting down.
Listing 4-6 Monitoring the death of a parent process
void MonitorParentProcess()
{
    pid_t parentPID = getppid();
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_PROC,
                                   parentPID, DISPATCH_PROC_EXIT, queue);
    if (source)
    {
        dispatch_source_set_event_handler(source, ^{
            MySetAppExitFlag();
            dispatch_source_cancel(source);
            dispatch_release(source);
        });
        dispatch_resume(source);
    }
}
Canceling a Dispatch Source
Dispatch sources remain active until you cancel them explicitly using the dispatch_source_cancel function.
Canceling a dispatch source stops the delivery of new events and cannot be undone. Therefore, you typically
cancel a dispatch source and then immediately release it, as shown here:
void RemoveDispatchSource(dispatch_source_t mySource)
{
dispatch_source_cancel(mySource);
dispatch_release(mySource);
}
Cancellation of a dispatch source is an asynchronous operation. Although no new events are processed after
you call the dispatch_source_cancel function, events that are already being processed by the dispatch
source continue to be processed. After it finishes processing any final events, the dispatch source executes its
cancellation handler if one is present.
The cancellation handler is your chance to deallocate memory or clean up any resources that were acquired
on behalf of the dispatch source. If your dispatch source uses a descriptor or mach port, you must provide a
cancellation handler to close the descriptor or destroy the port when cancellation occurs. Other types of
dispatch sources do not require cancellation handlers, although you still should provide one if you associate
any memory or data with the dispatch source. For example, you should provide one if you store data in the
dispatch source’s context pointer. For more information about cancellation handlers, see “Installing a Cancellation
Handler” (page 60).
2012-07-17 | © 2012 Apple Inc. All Rights Reserved.

Suspending and Resuming Dispatch Sources
You can suspend and resume the delivery of dispatch source events temporarily using the dispatch_suspend
and dispatch_resume functions. These functions increment and decrement the suspend count for your
dispatch object. As a result, you must balance each call to dispatch_suspend with a matching call to
dispatch_resume before event delivery resumes.
When you suspend a dispatch source, any events that occur while that dispatch source is suspended are
accumulated until the queue is resumed. When the queue resumes, the events are coalesced into a single
event before delivery rather than being delivered individually. For example, if you were monitoring a file for
name changes, the delivered event would include only the last name change. Coalescing events in this way prevents
them from building up in the queue and overwhelming your application when work resumes.
Migrating Away from Threads

There are many ways to adapt existing threaded code to take advantage of Grand Central Dispatch and
operation objects. Although moving away from threads may not be possible in all cases, performance (and
the simplicity of your code) can improve dramatically in places where you do make the switch. Specifically,
using dispatch queues and operation queues instead of threads has several advantages:
● It reduces the memory penalty your application pays for storing thread stacks in the application’s memory space.
● It eliminates the code needed to create and configure your threads.
● It eliminates the code needed to manage and schedule work on threads.
● It simplifies the code you have to write.
This chapter provides some tips and guidelines on how to replace your existing thread-based code and instead
use dispatch queues and operation queues to achieve the same types of behaviors.
Replacing Threads with Dispatch Queues
To understand how you might replace threads with dispatch queues, first consider some of the ways you might
be using threads in your application today:
● Single task threads. Create a thread to perform a single task and release the thread when the task is done.
● Worker threads. Create one or more worker threads with specific tasks in mind for each. Dispatch tasks
to each thread periodically.
● Thread pools. Create a pool of generic threads and set up run loops for each one. When you have a task
to perform, grab a thread from the pool and dispatch the task to it. If there are no free threads, queue the
task and wait for a thread to become available.
Although these might seem like dramatically different techniques, they are really just variants on the same
principle. In each case, a thread is being used to run some task that the application has to perform. The only
difference between them is the code used to manage the threads and the queueing of tasks. With dispatch
queues and operation queues, you can eliminate all of your thread and thread-communication code and
instead focus on just the tasks you want to perform.
If you are using one of the above threading models, you should already have a pretty good idea of the type
of tasks your application performs. Instead of submitting a task to one of your custom threads, try encapsulating
that task in an operation object or a block object and dispatching it to the appropriate queue. For tasks that
are not particularly contentious—that is, tasks that do not take locks—you should be able to make the following
direct replacements:
● For a single task thread, encapsulate the task in a block or operation object and submit it to a concurrent
queue.
● For worker threads, you need to decide whether to use a serial queue or a concurrent queue. If you use
worker threads to synchronize the execution of specific sets of tasks, use a serial queue. If you use worker
threads to execute arbitrary tasks with no interdependencies, use a concurrent queue.
● For thread pools, encapsulate your tasks in a block or operation object and dispatch them to a concurrent
queue for execution.
Of course, simple replacements like this may not work in all cases. If the tasks you are executing contend for
shared resources, the ideal solution is to try to remove or minimize that contention first. If there are ways that
you can refactor or rearchitect your code to eliminate mutual dependencies on shared resources, that is certainly
preferable. However, if doing so is not possible or might be less efficient, there are still ways to take advantage
of queues. A big advantage of queues is that they offer a more predictable way to execute your code. This
predictability means that there are still ways to synchronize the execution of your code without using locks or
other heavyweight synchronization mechanisms. Instead of using locks, you can use queues to perform many
of the same tasks:
● If you have tasks that must execute in a specific order, submit them to a serial dispatch queue. If you prefer to use operation queues, use operation object dependencies to ensure that those objects execute in a specific order.
● If you are currently using locks to protect a shared resource, create a serial queue to execute any tasks that modify that resource. The serial queue then replaces your existing locks as the synchronization mechanism. For more information about techniques for getting rid of locks, see “Eliminating Lock-Based Code” (page 76).
● If you use thread joins to wait for background tasks to complete, consider using dispatch groups instead. You can also use an NSBlockOperation object or operation object dependencies to achieve similar group-completion behaviors. For more information on how to track groups of executing tasks, see “Replacing Thread Joins” (page 79).
● If you use a producer-consumer algorithm to manage a pool of finite resources, consider changing your implementation to the one shown in “Changing Producer-Consumer Implementations” (page 80).
● If you are using threads to read and write from descriptors, or to monitor file operations, use the dispatch sources described in “Dispatch Sources” (page 56).
It is important to remember that queues are not a panacea for replacing threads. The asynchronous programming
model offered by queues is appropriate in situations where latency is not an issue. Even though queues offer
ways to configure the execution priority of tasks in the queue, higher execution priorities do not guarantee
the execution of tasks at specific times. Therefore, threads are still a more appropriate choice in cases where
you need minimal latency, such as in audio and video playback.
Eliminating Lock-Based Code
For threaded code, locks are one of the traditional ways to synchronize access to resources that are shared
between threads. However, the use of locks comes at a cost. Even in the non-contested case, there is always
a performance penalty associated with taking a lock. And in the contested case, there is the potential for one
or more threads to block for an indeterminate amount of time while waiting for the lock to be released.
Replacing your lock-based code with queues eliminates many of the penalties associated with locks and also
simplifies your remaining code. Instead of using a lock to protect a shared resource, you can instead create a
queue to serialize the tasks that access that resource. Queues do not impose the same penalties as locks. For
example, queueing a task does not require trapping into the kernel to acquire a mutex.
When queueing tasks, the main decision you have to make is whether to do so synchronously or asynchronously.
Submitting a task asynchronously lets the current thread continue to run while the task is performed. Submitting
a task synchronously blocks the current thread until the task is completed. Both options have appropriate uses,
although it is certainly advantageous to submit tasks asynchronously whenever you can.
The following sections show you how to replace your existing lock-based code with the equivalent queue-based
code.
Implementing an Asynchronous Lock
An asynchronous lock is a way for you to protect a shared resource without blocking any code that modifies
that resource. You might use an asynchronous lock when you need to modify a data structure as a side effect
of some other work your code is doing. Using traditional threads, the way you would normally implement this
code would be to take a lock for the shared resource, make the necessary changes, release the lock, and
continue with the main part of your task. However, using dispatch queues, the calling code can make the
modifications asynchronously without waiting for those changes to be completed.
Listing 5-1 shows an example of an asynchronous lock implementation. In this example, the protected resource
defines its own serial dispatch queue. The calling code submits a block object to this queue that contains the
modifications that need to be made to the resource. Because the queue itself executes blocks serially, changes
to the resource are guaranteed to be made in the order in which they were received; however, because the
task was executed asynchronously, the calling thread does not block.
Listing 5-1 Modifying protected resources asynchronously
dispatch_async(obj->serial_queue, ^{
// Critical section
});
Executing Critical Sections Synchronously
If the current code cannot continue until a given task is complete, you can submit the task synchronously using
the dispatch_sync function. This function adds the task to a dispatch queue and then blocks the current
thread until the task finishes executing. The dispatch queue itself can be a serial or concurrent queue depending
on your needs. Because this function blocks the current thread, though, you should use it only when necessary.
Listing 5-2 shows the technique for wrapping a critical section of your code using dispatch_sync.
Listing 5-2 Running critical sections synchronously
dispatch_sync(my_queue, ^{
// Critical section
});
If you are already using a serial queue to protect a shared resource, dispatching to that queue synchronously
does not protect the shared resource any more than if you dispatched asynchronously. The only reason to
dispatch synchronously is to prevent the current code from continuing until the critical section finishes. For
example, if you wanted to get some value from the shared resource and use it right away, you would need to
dispatch synchronously. If the current code does not need to wait for the critical section to complete, or if it
can simply submit additional follow-up tasks to the same serial queue, submitting asynchronously is generally
preferred.
Improving on Loop Code
If your code has loops, and the work being done each time through the loop is independent of the work being
done in the other iterations, you might consider reimplementing that loop code using the dispatch_apply
or dispatch_apply_f function. These functions submit each iteration of a loop separately to a dispatch
queue for processing. When used in conjunction with a concurrent queue, this feature lets you perform multiple
iterations of the loop concurrently.
The dispatch_apply and dispatch_apply_f functions are synchronous function calls that block the
current thread of execution until all of the loop iterations are complete. When submitted to a concurrent queue,
the execution order of the loop iterations is not guaranteed. The threads running each iteration could block
and cause a given iteration to finish before or after the others around it. Therefore, the block object or function
you use for each loop iteration must be reentrant.
Listing 5-3 shows how to replace a for loop with its dispatch-based equivalent. The block or function you pass
to dispatch_apply or dispatch_apply_f must take an integer value indicating the current loop iteration.
In this example, the code simply prints the current loop number to the console.
Listing 5-3 Replacing a for loop without striding
dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(count, queue, ^(size_t i) {
    printf("%zu\n", i);   // i is a size_t, so use the %zu conversion
});
Although the preceding example is a simplistic one, it demonstrates the basic techniques for replacing a loop
using dispatch queues. And although this can be a good way to improve performance in loop-based code,
you must still use this technique discerningly. Although dispatch queues have very low overhead, there are
still costs to scheduling each loop iteration on a thread. Therefore, you should make sure your loop code does
enough work to warrant the costs. Exactly how much work you need to do is something you have to measure
using the performance tools.
A simple way to increase the amount of work in each loop iteration is to use striding. With striding, you rewrite
your block code to perform more than one iteration of the original loop. You then reduce the count value you
specify to the dispatch_apply function by a proportional amount. Listing 5-4 shows how you might implement
striding for the loop code shown in Listing 5-3 (page 78). In Listing 5-4, the block calls the printf statement
the same number of times as the stride value, which in this case is 137. (The actual stride value is something
you should configure based on the work being done by your code.) Because there is a remainder left over
when dividing the total number of iterations by a stride value, any remaining iterations are performed inline.
Listing 5-4 Adding a stride to a dispatched for loop
int stride = 137;
dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(count / stride, queue, ^(size_t idx){
    size_t j = idx * stride;
    size_t j_stop = j + stride;
    do {
        printf("%u\n", (unsigned int)j++);
    } while (j < j_stop);
});

size_t i;
for (i = count - (count % stride); i < count; i++)
    printf("%u\n", (unsigned int)i);
There are some definite performance advantages to using strides. In particular, strides offer benefits when the
original number of loop iterations is high, relative to the stride. Dispatching fewer blocks concurrently means
that more time is spent executing the code of those blocks than dispatching them. As with any performance
metric though, you may have to play with the striding value to find the most efficient value for your code.
Replacing Thread Joins
Thread joins allow you to spawn one or more threads and then have the current thread wait until those threads
are finished. To implement a thread join, a parent creates a child thread as a joinable thread. When the parent
can no longer make progress without the results from a child thread, it joins with the child. This process blocks
the parent thread until the child finishes its task and exits, at which point the parent can gather the results
from the child and continue with its own work. If the parent needs to join with multiple child threads, it does
so one at a time.
Dispatch groups offer semantics that are similar to those of thread joins but that have some additional
advantages. Like thread joins, dispatch groups are a way for a thread to block until one or more child tasks
finishes executing. Unlike thread joins, a dispatch group waits on all of its child tasks simultaneously. And
because dispatch groups use dispatch queues to perform the work, they are very efficient.
To use a dispatch group to perform the same work performed by joinable threads, you would do the following:
1. Create a new dispatch group using the dispatch_group_create function.
2. Add tasks to the group using the dispatch_group_async or dispatch_group_async_f function.
Each task you submit to the group represents work you would normally perform on a joinable thread.
3. When the current thread cannot make any more forward progress, call the dispatch_group_wait
function to wait on the group. This function blocks the current thread until all of the tasks in the group
finish executing.
If you are using operation objects to implement your tasks, you can also implement thread joins using
dependencies. Instead of having a parent thread wait for one or more tasks to complete, you would move the
parent code to an operation object. You would then set up dependencies between the parent operation object
and any number of child operation objects set up to do the work normally performed by the joinable threads.
Having dependencies on other operation objects prevents the parent operation object from executing until
all of the operations have finished.
For an example of how to use dispatch groups, see “Waiting on Groups of Queued Tasks” (page 53). For
information about setting up dependencies between operation objects, see “Configuring Interoperation
Dependencies” (page 29).
Changing Producer-Consumer Implementations
A producer-consumer model lets you manage a finite number of dynamically produced resources. While the
producer creates new resources (or work), one or more consumers wait for those resources (or work) to be
ready and consume them when they are. The typical mechanisms for implementing a producer-consumer
model are conditions or semaphores.
Using conditions, the producer thread typically does the following:
1. Lock the mutex associated with the condition (using pthread_mutex_lock).
2. Produce the resource or work to be consumed.
3. Signal the condition variable that there is something to consume (using pthread_cond_signal).
4. Unlock the mutex (using pthread_mutex_unlock).
In turn, the corresponding consumer thread does the following:
1. Lock the mutex associated with the condition (using pthread_mutex_lock).
2. Set up a while loop to do the following:
a. Check to see whether there is really work to be done.
b. If there is no work to do (or no resource available), call pthread_cond_wait to block the current
thread until a corresponding signal occurs.
3. Get the work (or resource) provided by the producer.
4. Unlock the mutex (using pthread_mutex_unlock).
5. Process the work.
With dispatch queues, you can simplify the producer and consumer implementations into a single call:
dispatch_async(queue, ^{
// Process a work item.
});
When your producer has work to be done, all it has to do is add that work to a queue and let the queue process
the item. The only part of the preceding code that changes is the queue type. If the tasks generated by the
producer need to be performed in a specific order, you use a serial queue. If the tasks generated by the producer
can be performed concurrently, you add them to a concurrent queue and let the system execute as many of
them as possible simultaneously.
Replacing Semaphore Code
If you are currently using semaphores to restrict access to a shared resource, you should consider using dispatch
semaphores instead. Traditional semaphores always require calling down to the kernel to test the semaphore.
In contrast, dispatch semaphores test the semaphore state quickly in user space and trap into the kernel only
when the test fails and the calling thread needs to be blocked. This behavior results in dispatch semaphores
being much faster than traditional semaphores in the uncontested case. In all other aspects, though, dispatch
semaphores offer the same behavior as traditional semaphores.
For an example of how to use dispatch semaphores, see “Using Dispatch Semaphores to Regulate the Use of
Finite Resources” (page 52).
Replacing Run-Loop Code
If you are using run loops to manage the work being performed on one or more threads, you may find that
queues are much simpler to implement and maintain going forward. Setting up a custom run loop involves
setting up both the underlying thread and the run loop itself. The run-loop code consists of setting up one or
more run loop sources and writing callbacks to handle events arriving on those sources. Instead of all that
work, you can simply create a serial queue and dispatch tasks to it. Thus, you can replace all of your thread
and run-loop creation code with one line of code:
dispatch_queue_t myNewRunLoop = dispatch_queue_create("com.apple.MyQueue", NULL);
Because the queue automatically executes any tasks added to it, there is no extra code required to manage
the queue. You do not have to create or configure a thread, and you do not have to create or attach any
run-loop sources. In addition, you can perform new types of work on the queue by simply adding the tasks to
it. To do the same thing with a run loop, you would need to modify your existing run loop source or create a
new one to handle the new data.
One common configuration for run loops is to process data arriving asynchronously on a network socket.
Instead of configuring a run loop for this type of behavior, you can attach a dispatch source to the desired
queue. Dispatch sources also offer more options for processing data than traditional run loop sources. In
addition to processing timer and network port events, you can use dispatch sources to read and write to files,
monitor file system objects, monitor processes, and monitor signals. You can even define custom dispatch
sources and trigger them from other parts of your code asynchronously. For more information on setting up
dispatch sources, see “Dispatch Sources” (page 56).
Compatibility with POSIX Threads
Because Grand Central Dispatch manages the relationship between the tasks you provide and the threads on
which those tasks run, you should generally avoid calling POSIX thread routines from your task code. If you do
need to call them for some reason, you should be very careful about which routines you call. This section
indicates which routines are safe, and which are not safe, to call from your queued tasks. The list is not
exhaustive, but it should give you a good idea of which calls to avoid.
In general, your application must not delete or mutate objects or data structures that it did not create.
Consequently, block objects that are executed using a dispatch queue must not call the following functions:
pthread_detach
pthread_cancel
pthread_join
pthread_kill
pthread_exit
Although it is acceptable to modify the state of a thread while your task is running, you must return the thread to
its original state before your task returns. Therefore, it is safe to call the following functions as long as you
return the thread to its original state:
pthread_setcancelstate
pthread_setcanceltype
pthread_setschedparam
pthread_sigmask
pthread_setspecific
The underlying thread used to execute a given block can change from invocation to invocation. As a result,
your application should not rely on the following functions returning predictable results between invocations
of your block:
pthread_self
pthread_getschedparam
pthread_get_stacksize_np
pthread_get_stackaddr_np
pthread_mach_thread_np
pthread_from_mach_thread_np
pthread_getspecific
Important: Blocks must catch and suppress any language-level exceptions thrown within them. Other errors
that occur during the execution of your block should similarly be handled by the block or used to notify
other parts of your application.
For more information about POSIX threads and the functions mentioned in this section, see the pthread man
pages.
Glossary

application A specific style of program that displays
a graphical interface to the user.
asynchronous design approach The principle of
organizing an application around blocks of code that
can be run concurrently with an application’s main
thread or other threads of execution. Asynchronous
tasks are started by one thread but actually run on
a different thread, taking advantage of additional
processor resources to finish their work more quickly.
block object A C construct for encapsulating inline
code and data so that it can be performed later. You
use blocks to encapsulate tasks you want to perform,
either inline in the current thread or on a separate
thread using a dispatch queue. For more information,
see Blocks Programming Topics.
concurrent operation An operation object that
does not perform its task in the thread from which
its start method was called. A concurrent operation
typically sets up its own thread or calls an interface
that sets up a separate thread on which to perform
the work.
condition A construct used to synchronize access
to a resource. A thread waiting on a condition is not
allowed to proceed until another thread explicitly
signals the condition.
critical section A portion of code that must be
executed by only one thread at a time.
custom source A dispatch source used to process
application-defined events. A custom source calls
your custom event handler in response to events
that your application generates.
descriptor An abstract identifier used to access a
file, socket, or other system resource.
dispatch queue A Grand Central Dispatch (GCD)
structure that you use to execute your application’s
tasks. GCD defines dispatch queues for executing
tasks either serially or concurrently.
dispatch source A Grand Central Dispatch (GCD)
data structure that you create to process
system-related events.
descriptor dispatch source A dispatch source used
to process file-related events. A file descriptor source
calls your custom event handler either when file data
is available for reading or writing or in response to
file system changes.
dynamic shared library A binary executable that
is loaded dynamically into an application’s process
space rather than linked statically as part of the
application binary.
framework A type of bundle that packages a
dynamic shared library with the resources and
header files that support that library. For more
information, see Framework Programming Guide.
global dispatch queue A dispatch queue provided
to your application automatically by Grand Central
Dispatch (GCD). You do not have to create global
queues yourself or retain or release them. Instead,
you retrieve them using the system-provided
functions.
Grand Central Dispatch (GCD) A technology for
executing asynchronous tasks concurrently. GCD is
available in OS X v10.6 and later and iOS 4.0 and
later.
input source A source of asynchronous events for
a thread. Input sources can be port based or
manually triggered and must be attached to the
thread’s run loop.
joinable thread A thread whose resources are not
reclaimed immediately upon termination. Joinable
threads must be explicitly detached or be joined by
another thread before the resources can be
reclaimed. Joinable threads provide a return value
to the thread that joins with them.
kqueue A UNIX feature for monitoring low-level
system events. For more information, see the kqueue
man page.
Mach port dispatch source A dispatch source used
to process events arriving on a Mach port.
main thread A special type of thread created when
its owning process is created. When the main thread
of a program exits, the process ends.
mutex A lock that provides mutually exclusive
access to a shared resource. A mutex lock can be
held by only one thread at a time. Attempting to
acquire a mutex held by a different thread puts the
current thread to sleep until the lock is finally
acquired.
Open Computing Language (OpenCL) A
standards-based technology for performing
general-purpose computations on a computer’s
graphics processor. For more information, see
OpenCL Programming Guide for Mac.
operation object An instance of the NSOperation
class. Operation objects wrap the code and data
associated with a task into an executable unit.
operation queue An instance of the
NSOperationQueue class. Operation queues
manage the execution of operation objects.
private dispatch queue A dispatch queue that you
create, retain, and release explicitly.
process The runtime instance of an application or
program. A process has its own virtual memory space
and system resources (including port rights) that are
independent of those assigned to other programs.
A process always contains at least one thread (the
main thread) and may contain any number of
additional threads.
process dispatch source A dispatch source used
to handle process-related events. A process source
calls your custom event handler in response to
changes to the process you specify.
program A combination of code and resources that
can be run to perform some task. Programs need
not have a graphical user interface, although
graphical applications are also considered programs.
reentrant Code that can be started on a new thread
safely while it is already running on another thread.
run loop An event-processing loop, during which
events are received and dispatched to appropriate
handlers.
run loop mode A collection of input sources, timer
sources, and run loop observers associated with a
particular name. When run in a specific “mode,” a
run loop monitors only the sources and observers
associated with that mode.
run loop object An instance of the NSRunLoop
class or CFRunLoopRef opaque type. These objects
provide the interface for implementing an
event-processing loop in a thread.
run loop observer A recipient of notifications
during different phases of a run loop’s execution.
semaphore A protected variable that restricts access
to a shared resource. Mutexes and conditions are
both different types of semaphore.
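A counting semaphore generalizes a mutex by admitting up to N concurrent holders. The POSIX sketch below (our illustration; `sem_trywait` returns 0 on success and -1 when the count is exhausted) shows a two-slot semaphore being drained and refilled.

```c
#include <semaphore.h>

/* Initializes a semaphore with two slots, exhausts it, then frees a
   slot with sem_post and acquires again. Returns the number of
   successful acquisitions, which is 3 for a correctly counting
   semaphore. */
int semaphore_demo(void) {
    sem_t slots;
    int acquired = 0;
    sem_init(&slots, 0, 2);                     /* count starts at 2 */
    if (sem_trywait(&slots) == 0) acquired++;   /* count 2 -> 1 */
    if (sem_trywait(&slots) == 0) acquired++;   /* count 1 -> 0 */
    if (sem_trywait(&slots) == 0) acquired++;   /* fails: count is 0 */
    sem_post(&slots);                           /* release one slot */
    if (sem_trywait(&slots) == 0) acquired++;   /* succeeds after post */
    sem_destroy(&slots);
    return acquired;
}
```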
signal A UNIX mechanism for manipulating a
process from outside its domain. The system uses
signals to deliver important messages to an
application, such as whether the application
executed an illegal instruction. For more information,
see the signal man page.
signal dispatch source A dispatch source used to
process UNIX signals. A signal source calls your
custom event handler whenever the process receives
a UNIX signal.
task A quantity of work to be performed. Although
some technologies (most notably Carbon
Multiprocessing Services) use this term differently,
the preferred usage is as an abstract concept
indicating some quantity of work to be performed.
thread A flow of execution in a process. Each thread
has its own stack space but otherwise shares
memory with other threads in the same process.
timer dispatch source A dispatch source used to
process periodic events. A timer source calls your
custom event handler at regular, time-based
intervals.
Document Revision History

This table describes the changes to Concurrency Programming Guide.

Date        Notes
2012-07-17  Removed obsolete information about autorelease pool usage with operations.
2011-01-19  Updated the code for manually executing operations to handle cancellation correctly.
2010-04-13  Added information about using Objective-C objects in conjunction with dispatch queues. Updated to reflect support for iOS.
2009-08-07  Corrected the start method for the nonconcurrent operation object example.
2009-05-22  New document that describes technologies for executing multiple code paths in a concurrent manner.

Apple Inc.
© 2012 Apple Inc.
All rights reserved.
No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any
form or by any means, mechanical, electronic,
photocopying, recording, or otherwise, without
prior written permission of Apple Inc., with the
following exceptions: Any person is hereby
authorized to store documentation on a single
computer for personal use only and to print
copies of documentation for personal use
provided that the documentation contains
Apple’s copyright notice.
No licenses, express or implied, are granted with
respect to any of the technology described in this
document. Apple retains all intellectual property
rights associated with the technology described
in this document. This document is intended to
assist application developers to develop
applications only for Apple-labeled computers.
Apple Inc.
1 Infinite Loop
Cupertino, CA 95014
408-996-1010
Apple, the Apple logo, Carbon, Cocoa,
Instruments, Mac, Objective-C, and OS X are
trademarks of Apple Inc., registered in the U.S.
and other countries.
OpenCL is a trademark of Apple Inc.
UNIX is a registered trademark of The Open
Group.
iOS is a trademark or registered trademark of
Cisco in the U.S. and other countries and is used
under license.
Even though Apple has reviewed this document,
APPLE MAKES NO WARRANTY OR REPRESENTATION,
EITHER EXPRESS OR IMPLIED, WITH RESPECT TO THIS
DOCUMENT, ITS QUALITY, ACCURACY,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR
PURPOSE. AS A RESULT, THIS DOCUMENT IS PROVIDED
“AS IS,” AND YOU, THE READER, ARE ASSUMING THE
ENTIRE RISK AS TO ITS QUALITY AND ACCURACY.
IN NO EVENT WILL APPLE BE LIABLE FOR DIRECT,
INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES RESULTING FROM ANY DEFECT OR
INACCURACY IN THIS DOCUMENT, even if advised of
the possibility of such damages.
THE WARRANTY AND REMEDIES SET FORTH ABOVE
ARE EXCLUSIVE AND IN LIEU OF ALL OTHERS, ORAL
OR WRITTEN, EXPRESS OR IMPLIED. No Apple dealer,
agent, or employee is authorized to make any
modification, extension, or addition to this warranty.
Some states do not allow the exclusion or limitation
of implied warranties or liability for incidental or
consequential damages, so the above limitation or
exclusion may not apply to you. This warranty gives
you specific legal rights, and you may also have other
rights which vary from state to state.
MainStage 2 User Manual

Copyright © 2009 Apple Inc. All rights reserved.
Your rights to the software are governed by the
accompanying software license agreement. The owner or
authorized user of a valid copy of Logic Studio software
may reproduce this publication for the purpose of learning
to use such software. No part of this publication may be
reproduced or transmitted for commercial purposes, such
as selling copies of this publication or for providing
paid-for support services.
The Apple logo is a trademark of Apple Inc., registered in
the U.S. and other countries. Use of the “keyboard” Apple
logo (Shift-Option-K) for commercial purposes without
the prior written consent of Apple may constitute
trademark infringement and unfair competition in violation
of federal and state laws.
Every effort has been made to ensure that the information
in this manual is accurate. Apple is not responsible for
printing or clerical errors.
Note: Because Apple frequently releases new versions
and updates to its system software, applications, and
Internet sites, images shown in this manual may be slightly
different from what you see on your screen.
Apple
1 Infinite Loop
Cupertino, CA 95014–2084
408-996-1010
www.apple.com
Apple, the Apple logo, FireWire, GarageBand, Logic,
Logic Studio, Mac, MainStage, Ultrabeat, and WaveBurner
are trademarks of Apple Inc., registered in the U.S. and
other countries.
Finder is a trademark of Apple Inc.
Intel, Intel Core, and Xeon are trademarks of Intel Corp. in
the U.S. and other countries.
Other company and product names mentioned herein
are trademarks of their respective companies. Mention of
third-party products is for informational purposes only
and constitutes neither an endorsement nor a
recommendation. Apple assumes no responsibility with
regard to the performance or use of these products.

Contents

Preface 7 Welcome to MainStage
7 About MainStage
8 About the MainStage Documentation
8 Additional Resources
Chapter 1 11 Introducing MainStage
11 What Is MainStage?
13 Using MainStage with Keyboard Controllers
13 Using MainStage with Electric Guitars
13 Using MainStage with Vocals, Drums, and Other Instruments
13 How to Use MainStage in Your Music Setup
16 Using MainStage in Live Performance
Chapter 2 17 Setting Up Your System
17 Using MIDI Devices with MainStage
19 Using Audio Devices with MainStage
19 Using Effects Plug-ins with MainStage
19 Using MainStage with Time Machine
Chapter 3 21 The MainStage Interface
22 The MainStage Window
23 Layout Mode
24 Edit Mode
25 Perform Mode
26 Full Screen Mode
26 Customizing the MainStage Window
Chapter 4 29 Getting Started with MainStage
29 Before You Begin
30 Opening MainStage
30 Choosing a Concert Template
31 Selecting Patch Settings in the Patch Library
32 Adding a Patch
33 Naming a Patch
33 Selecting and Playing Patches
34 Adding a Channel Strip
36 Changing a Channel Strip Setting
37 Learning a Controller Assignment
39 Mapping a Screen Control
39 Trying Out Full Screen and Perform Modes
Chapter 5 41 Working in Edit Mode
41 Working with Patches in Edit Mode
48 Working with Channel Strips in Edit Mode
69 Mapping Screen Controls
77 Editing Screen Control Parameters in Edit Mode
81 Working with Sets in Edit Mode
83 Working at the Set Level
84 Sharing Patches and Sets Between Concerts
85 Recording the Audio Output of a Concert
Chapter 6 87 Working with Concerts
88 Opening and Closing Concerts
89 Saving Concerts
89 How Saving Affects Parameter Values
90 Setting the Time Signature for a Concert
91 Using Tempo in a MainStage Concert
92 Defining the Source for Program Change Messages for a Concert
93 Setting the Pan Law for a Concert
93 Changing the Tuning for a Concert
93 Silencing MIDI Notes
94 Muting Audio Output
95 Working at the Concert Level
101 Controlling the Metronome
Chapter 7 103 Working in Layout Mode
104 Modifying the Layout of a Concert
104 Working with Screen Controls
114 Assigning Hardware Controls to Screen Controls
116 Editing Screen Control Parameters
121 How MainStage Passes Through MIDI Messages
122 Exporting a Layout
122 Importing a Layout
123 Changing the Aspect Ratio of a Layout
Chapter 8 125 Playing Back Audio in MainStage
125 Adding a Playback Plug-in
130 Using the Playback Plug-in
Chapter 9 133 Performing Live with MainStage
133 Before the Performance Starts
134 Using Full Screen Mode and Perform Mode
135 Selecting Patches in Performance
136 Using Screen Controls in Performance
137 Handling Tempo Changes in Performance
137 Tips for Performing with Keyboard Controllers
137 Tips for Performing with Guitars and Other Instruments
138 Using the Tuner
139 Using the Playback Plug-in in Performance
140 Recording Your Performances
141 After the Performance
141 Tips for Complex Hardware Setups
Chapter 10 143 Key Commands
143 Using the Command Editor
143 MainStage Default Key Commands
Appendix A 147 The Playback Plug-in
148 Getting to Know the Playback Interface
149 Using the Playback Waveform Display
150 Using the Playback Transport and Function Buttons
151 Using the Playback Information Display
152 Using the Playback Sync, Snap To, and Play From Parameters
153 Using the Playback Group Functions
154 Using the Playback Action Menu and File Field
155 Using the Playback Shortcut Menu
Appendix B 157 The Loopback Plug-in
158 Getting to Know the Loopback Interface
159 Using the Loopback Waveform Display
159 Using the Loopback Transport and Function Controls
160 Using the Loopback Information Display
161 Using the Loopback Sync, Snap To, and Play From Parameters
162 Using the Loopback Group Functions
163 Using the Loopback Action Menu
164 Adding Loopback to a Channel Strip
Appendix C 165 Setting MainStage Preferences
165 General Preferences
166 Audio Preferences
168 MIDI Preferences
168 Display Preferences
Appendix D 169 Using MainStage Actions
169 Table of Actions
Preface: Welcome to MainStage

MainStage turns your computer into a powerful and customizable musical instrument
and effects processor that you can use with your music gear (your instruments,
microphones, controllers, and other equipment) in live performance.
This preface covers the following:
• About MainStage (p. 7)
• About the MainStage Documentation (p. 8)
• Additional Resources (p. 8)
About MainStage
For performing musicians, MainStage gives you the power and flexibility of Logic Pro in
an application optimized for live performance. Whether you are a keyboard player, guitarist,
vocalist, drummer, or play another instrument, you can use MainStage in your live
performance setup.
Some of the things you can do with MainStage include:
• Create custom sounds using a wide variety of software instruments and effects included
in Logic Studio. You can also use third-party plug-ins, ReWire applications, and external
sound modules.
• Organize your sounds for easy access when you perform.
• Create a visual layout that matches your hardware devices, putting the controls you
need at your fingertips.
• Connect MIDI devices to your MainStage concert so you can control parameters of your
sounds in real time.
• Trigger backing tracks and other audio files while you play.
• Loop your performances to create multitextured, dynamic sound environments.
• Record your performances in real time.
This is only a brief list of what you can do with MainStage. For a more detailed introduction,
see Introducing MainStage.
About the MainStage Documentation
Logic Studio includes several documents that will introduce you to MainStage, help you
get started working, and provide detailed information about the features and controls
of MainStage.
• MainStage User Manual: This onscreen manual (the MainStage User Manual) describes
the MainStage interface, commands, and menus, and gives step-by-step instructions
for creating MainStage concerts and for accomplishing specific tasks. It also includes
information on setting up your system. It is designed to provide the information you
need to get up to speed quickly so you can make use of the intuitive interface and
powerful features of MainStage.
If you want to start by learning how to set up audio and MIDI hardware to use with
MainStage, read Setting Up Your System. If you want to learn about the features and
controls in the MainStage interface, read The MainStage Interface. If you want to jump
right in and start using the application, skip ahead to Getting Started with MainStage,
then read the chapters on Edit mode, working with concerts, and Layout mode. If you
want to read about using MainStage in live performance, turn to Performing Live with
MainStage.
• Exploring MainStage: This booklet introduces the basics of MainStage in an easy,
approachable way. It aims to get new users up and running with MainStage quickly so
you can have confidence and continue learning at your own pace. Each chapter presents
major features and guides you in trying things out. This document is a PDF version of
the printed Exploring MainStage document included in the Logic Studio package.
• Logic Studio Instruments: This onscreen manual provides comprehensive instructions
for using the powerful collection of instruments included with Logic Pro and MainStage.
• Logic Studio Effects: This onscreen manual provides comprehensive instructions for
using the powerful collection of effects included with Logic Pro, MainStage, and
WaveBurner.
• Logic Studio Working with Apogee Hardware: This onscreen manual describes the use
of Apogee hardware with Logic Pro.
Additional Resources
Along with the documentation that comes with Logic Studio, there are a variety of other
resources you can use to find out more.
Release Notes and New Features Documents
Each application offers detailed documentation that covers new or changed features and
functions. This documentation can be accessed in the following location:
• Click the Release Notes and New Features links in the application Help menu.
MainStage Website
For general information and updates, as well as the latest news on MainStage, go to:
• http://www.apple.com/logicstudio/mainstage
Apple Service and Support Websites
For software updates and answers to the most frequently asked questions for all Apple
products, go to the general Apple Support webpage. You’ll also have access to product
specifications, reference documentation, and Apple and third-party product technical
articles.
• http://www.apple.com/support
For software updates, documentation, discussion forums, and answers to the most
frequently asked questions for MainStage, go to:
• http://www.apple.com/support/mainstage
For discussion forums for all Apple products from around the world, where you can search
for an answer, post your question, or answer other users’ questions, go to:
• http://discussions.apple.com
Chapter 1: Introducing MainStage

This chapter gives you a conceptual overview of MainStage and describes how you can
use it together with your instruments and other musical equipment when you perform
live.
This chapter covers the following:
• What Is MainStage? (p. 11)
• Using MainStage with Keyboard Controllers (p. 13)
• Using MainStage with Electric Guitars (p. 13)
• Using MainStage with Vocals, Drums, and Other Instruments (p. 13)
• How to Use MainStage in Your Music Setup (p. 13)
• Using MainStage in Live Performance (p. 16)
What Is MainStage?
MainStage is a music application designed for you to use in live performance. MainStage
turns your computer into a powerful multi-instrument and effects processor that you can
use on stage when you perform. Whether you play a keyboard, guitar, another instrument,
or sing, you can use MainStage with your instruments, microphones, and MIDI hardware
when you perform live.
MainStage lets you use the professional-quality instruments and effects included in
Logic Studio in your live performances. You access and modify the instruments and effects
in MainStage using the familiar Logic channel strip interface. If you play a USB or MIDI
keyboard controller, you can play and control a wide variety of software instruments,
including pianos and other keyboards, synthesizers, strings, horns, percussion, and more.
If you play electric guitar, you can perform using Logic Studio effects setups, including
amp simulation, overdrive, reverb, compression, and more. You can create your own
effects setups and switch between them easily. Vocalists and acoustic musicians can also
use effects setups with sound input through a microphone.
MainStage provides a flexible interface for organizing and accessing your sounds in
concerts. Concerts are MainStage documents that hold your sounds—a concert can store
all the sounds you’ll use in an entire performance or a series of performances. In a
MainStage concert, individual sounds are stored as patches, and each patch can contain
one or more channel strips, each with its own instruments and effects. You can add
channel strips, choose channel strip settings, add instruments and effects, and edit their
parameters to customize your sounds. You can even mix channel strips of different types
in a single patch.
You can organize patches in a concert by ordering them in the Patch List and also by
grouping them into sets. Sets are folders where you can store patches you want to keep
together.
Each concert also includes a visual interface, called a layout, with controls that you can
use to modify your patches in live performance. Layouts contain screen controls, which
are onscreen representations of keyboards, faders, knobs, buttons, pedals, drum pads,
and other hardware controls and displays. You make connections between your MIDI
devices and your MainStage concert by assigning hardware controls to the screen controls
in the concert. After you make these controller assignments, you map the screen controls
to channel strip and plug-in parameters, completing the connection so that you can easily
access and manipulate the parameters you want for each patch in the concert. You can
also map screen controls to actions, which provide the ability to select patches, control
the Tuner or metronome, provide visual feedback, and perform other functions.
[Figure: a hardware control connects to a MainStage screen control through a controller assignment; the screen control connects to a channel strip or plug-in parameter through a parameter mapping.]
MainStage lets you quickly and easily make controller assignments and parameter
mappings to speed your workflow. You can customize your layout to match the controls
on your MIDI hardware, to optimize the use of available screen space, or in other ways
that suit your needs.
Using MainStage with Keyboard Controllers
If you perform using a USB or MIDI keyboard controller, you can play and control MainStage
patches with software instruments using your controller. You can assign faders, knobs,
buttons, and other controls on the keyboard controller to screen controls in your concert,
and then map those screen controls to parameters in your patches. You can choose
exactly the parameters you want to have at your fingertips for each patch and access
them from your controller as you perform.
You can use MainStage with other MIDI controllers, including sustain pedals, expression
pedals, foot switches, MIDI guitars, and wind controllers that send standard MIDI messages.
You can also control external hardware synthesizers, ReWire applications, and other virtual
instruments using external instrument channel strips.
Using MainStage with Electric Guitars
If you play an electric guitar, you can use MainStage as a powerful, customizable
multi-effects processor. After you connect your instrument to your computer using an
audio interface, you send your guitar’s audio signal to audio channel strips in your patches,
where you can add effects including the Amp Designer and Pedalboard plug-ins designed
specifically for use with electric guitar. You can also use EQ, compression, reverb, overdrive,
and other Logic Studio effects in your guitar patches. You can control volume, effect
blend, or expression with an expression pedal, and use a foot switch to select patches
hands-free when you perform.
Using MainStage with Vocals, Drums, and Other Instruments
Vocalists and acoustic musicians can use MainStage by sending the audio output from
a microphone connected to their computer to audio channel strips in their patches. You
can use MainStage with Core Audio-compatible audio devices, such as audio interfaces
and digital mixers, for input from instruments and microphones, and for audio output to
speakers, monitors, a mixing board, or a public address (PA) system. In MainStage, you
can access a wide range of effects in your patches.
Drummers can also use MainStage by sending the audio output from microphones to
audio channel strips in their patches or by using drum pads or a virtual drum kit to control
the EXS24 mkII sampler, Ultrabeat, and percussion-oriented plug-ins.
How to Use MainStage in Your Music Setup
You can add MainStage to your music equipment setup by following these steps:
Stage 1: Creating a Concert from a Template
You begin working in MainStage by creating a new concert from a template. MainStage
includes concert templates for keyboard, guitar, and other instruments, making it easy
to choose a template suited to your needs. MainStage recognizes many popular MIDI
controllers and automatically assigns hardware controls on the controller to corresponding
screen controls in the workspace, simplifying hardware setup.
For information about choosing a template to create a concert, see Choosing a Concert
Template.
Stage 2: Adding and Editing Patches to Customize Your Sounds
After you create a concert, you add patches for the sounds you want to play, and edit
the patches by adding channel strips, instruments, and effects, and adjusting their
parameters to “dial in” your custom sounds. You edit and organize patches in Edit mode.
In Edit mode, your patches are “live” so you can hear the results of your edits instantly.
You can select and play patches, choose channel strip settings, and edit channel strip
and plug-in parameters. You can quickly define key ranges for channel strips to create
keyboard layers and splits, scale expression and other parameters using transforms, and
filter incoming MIDI messages.
For information about editing patches, see Working with Patches in Edit Mode.
Stage 3: Organizing Your Patches for Easy Access
When you open a concert in Edit mode, the patches in the concert appear in the Patch List,
where you can select them and start playing. You can edit patch parameters, add channel
strips to existing patches or create new ones, and reorder patches to build your custom
collection of sounds to use when you perform.
You can also organize patches in sets for added flexibility. Sets are like folders that can
store groups of patches you want to keep together, which can be useful in several ways.
For example, you can store all your favorite lead synth patches in a set or store multiple
patches you intend to use in a single song, and quickly select the patches you want while
performing. You can also add channel strips at the set level, and have them available
with every patch in the set.
For information about organizing patches, see Working with Patches in Edit Mode. For
information about creating and editing sets, see Working with Sets in Edit Mode.
Stage 4: Customizing the Visual Layout of Your Concert to Match Your Hardware Devices
In Layout mode, you arrange screen controls in the workspace to create the visual layout
corresponding to your hardware controls. MainStage features a variety of screen controls,
including keyboards, knobs, faders, pitch bend and modulation wheels, foot pedals, drum
pads, and more. Also included are screen controls to display parameter and system
information, text and images, and a selector that you can use to view and select patches
or markers while performing.
You can quickly add screen controls to the workspace, and move, resize, and copy them
to create your layout. Alignment guides and other tools make it easy to visually arrange
screen controls, and you can customize display color, text labels, and other parameters
in the Screen Control Inspector. You can also group controls and arrange the grouped
control as a single unit.
For information about working with screen controls in Layout mode, see Working with
Screen Controls.
Stage 5: Making Connections Between MainStage and Your Music Hardware
In Layout mode, you connect physical controls on your MIDI hardware to the screen
controls in your concert by assigning the physical controls to the corresponding screen
controls in the workspace. You can move and resize screen controls in the workspace
and customize the display of visual feedback for parameter values and other information.
You only need to make hardware controller assignments once for an entire concert,
greatly reducing the amount of work required to connect your hardware with your
computer.
For information about making hardware assignments, see Assigning Hardware Controls
to Screen Controls.
Stage 6: Mapping Screen Controls to the Parameters You Want to Control in
Performance
Edit mode is where you map screen controls to channel strip parameters. You can map
whichever parameters you want to modify for each patch to screen controls so they can
be easily manipulated from your hardware when you perform live. You can also map
screen controls to MainStage actions, such as selecting the next patch you want to play.
For information about mapping screen controls, see Mapping Screen Controls.
You need not follow these steps in a strict order; however, in most cases you will find
working easier if you create your layout before making hardware assignments and make
hardware assignments before you map screen controls. If you plan to use one of the
existing concert templates without modifying its layout significantly, you can concentrate
on stages 1 to 3 and stage 6.
To make setup easier, MainStage divides these tasks into two groups, with separate modes
for each group of tasks. You customize and organize your patches in Edit mode and
customize your layout and make connections with your hardware in Layout mode. The
advantage of this division is that it separates tasks you normally perform only once, such
as setting up your layout (the Layout mode tasks), from those you are likely to repeat
more often, such as editing your sounds (the Edit mode tasks).
Using MainStage in Live Performance
After you have created a concert with your custom patches following the steps described
above, you’re ready to play. When you perform live, you can use your computer as the
final sound module and effects box in your rig. You can select a patch and start playing
it instantly. MainStage switches seamlessly between patches and can sustain notes from
the previous patch while you start playing the newly selected one. You can view feedback
about your patches, including names, parameter values, and audio output levels, in real
time. You can also adjust concert-wide effects using auxiliary channels and control other
concert-wide settings.
MainStage provides two modes optimized for performing live: Perform mode and Full
Screen mode. In Perform mode, the workspace fills the MainStage window but lets you
retain access to the Finder and to other applications. In Full Screen mode, the workspace
fills your entire screen, optimizing available screen space for your onscreen layout. You
can use whichever mode you prefer.
You can use MainStage with multiple MIDI controllers, microphones, musical instruments,
and other music equipment. For time-based effects such as reverb and delay, you can
set a pre-defined tempo, use MIDI input for tempo changes, or tap the tempo as you
perform.
For tips and other information about using MainStage when you perform live, see
Performing Live with MainStage.
Chapter 2: Setting Up Your System

You can use MainStage with a wide variety of MIDI controllers and Core Audio-compliant
audio devices. For basic information about designing and configuring your system,
including information about computer requirements, connecting audio and MIDI devices,
and configuring your audio hardware, see the “Setting Up Your System” chapter in the
Logic Pro User Manual.
Real-time generation and processing of digital audio requires intensive processing by
your computer. If you plan to work on large or complex projects, using a computer with
a faster processor and extra random-access memory (RAM) installed can facilitate your
productivity. Additional RAM is useful particularly when using a large number of effects
plug-ins and when playing sample-based software instruments. It is recommended that
you do not run other processor- or RAM-intensive applications simultaneously with
MainStage, particularly when performing live.
This chapter covers the following:
• Using MIDI Devices with MainStage (p. 17)
• Using Audio Devices with MainStage (p. 19)
• Using Effects Plug-ins with MainStage (p. 19)
• Using MainStage with Time Machine (p. 19)
Using MIDI Devices with MainStage
MainStage works with many USB and MIDI keyboard controllers and with other MIDI
devices such as foot pedals and switches. To work with MainStage, MIDI devices must
send standard MIDI control messages. MainStage receives standard MIDI messages and
can be used to control external MIDI devices using external MIDI instrument channel
strips. For more information about using keyboard controllers and other MIDI devices,
see the “Setting Up Your System” chapter in the Logic Pro User Manual.
Using MIDI Devices That Send Special MIDI Message Types
Certain types of hardware controls such as knobs (rotary controls) and buttons are capable
of sending several types of MIDI messages. When you assign these controls to MainStage
screen controls using the Learn process, MainStage analyzes the incoming MIDI data to
determine which type of message the hardware control is sending. In order for MainStage
to learn these controls correctly, be sure to turn knobs through their full range of motion
and to press buttons exactly three times during the Learn process.
Some MIDI controllers can send nonstandard or proprietary MIDI messages. MainStage
cannot process or respond to nonstandard MIDI messages, to “registered” or
“non-registered” parameter messages, or to system exclusive (SysEx) messages. MainStage
can process some system real-time messages and MIDI Machine Control (MMC) messages
when you assign a hardware control that sends these messages to a screen control.
Some devices feature buttons that send program change messages. You can use these
buttons to send program change messages to MainStage, but you cannot assign them
to control other parameters using MainStage screen controls.
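As background, the message types discussed above are distinguished by the status byte at the start of each MIDI message. The following sketch is illustrative only, and is not MainStage's actual parsing logic; it shows how a raw message could be classified into the categories this section names (standard control change, program change, RPN/NRPN, SysEx, system real-time):

```python
def classify_midi_message(data: bytes) -> str:
    """Classify a raw MIDI message by its status byte.

    Illustrative sketch only; not MainStage's actual implementation.
    """
    status = data[0]
    if status == 0xF0:
        return "system exclusive (SysEx)"   # MainStage cannot process these
    if status >= 0xF8:
        return "system real-time"           # e.g. MIDI clock, start/stop
    kind = status & 0xF0                    # upper nibble = message type
    channel = status & 0x0F                 # lower nibble = MIDI channel
    if kind == 0xB0:
        controller = data[1]
        if controller in (98, 99, 100, 101):
            # CC 98-101 carry "registered"/"non-registered" parameters
            return "registered/non-registered parameter (RPN/NRPN)"
        return f"control change (CC {controller}, channel {channel + 1})"
    if kind == 0xC0:
        return f"program change (channel {channel + 1})"
    if kind in (0x80, 0x90):
        return "note message"
    return "other"

# A knob sending standard CC 7 (volume) on channel 1:
print(classify_midi_message(bytes([0xB0, 7, 100])))
# -> control change (CC 7, channel 1)
```

A control that produces "control change" here is the kind MainStage can learn and assign; SysEx and RPN/NRPN messages are the kinds it cannot.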
Choosing a Controller Preset
Some keyboard controllers allow you to choose different presets or “scenes” that
reconfigure the messages sent by the controls on the device. In most cases, you should
choose a generic preset that sends standard MIDI messages rather than system exclusive
messages or messages intended for a particular application. After you have assigned
hardware controls to screen controls in MainStage, do not change the preset on the MIDI
device, or your assignments might be lost.
In some cases, you can change the message type the controller sends by choosing a
different preset or by reprogramming the device. Some devices may include software
that you can use to reprogram knobs, buttons, and other controls. For information about
reprogramming a MIDI device, see the documentation that came with the device.
Using MIDI Devices That Support Automatic Configuration
MainStage can automatically configure the screen controls in a concert to support many
popular MIDI controllers. If you are using a device that supports automatic configuration,
MainStage alerts you to select the appropriate preset on your device when you open a
new concert. After you select the preset on your MIDI device, the screen controls in the
concert are assigned to the corresponding controls on your hardware device so you can
use them in MainStage with no further configuration.
Using Audio Devices with MainStage
MainStage works with Core Audio-compliant audio devices, including FireWire, USB,
ExpressCard, and PCI audio interfaces. You can connect microphones, electronic musical
instruments, and other musical equipment to your computer, or to an audio interface or
other audio device, and use them with MainStage. For detailed information about using
audio devices, see the “Setting Up Your System” chapter in the Logic Pro User Manual.
MainStage can require a large amount of available RAM, particularly when playing
sample-based software instruments. It is recommended that you test your system and
the concerts you plan to use before you perform, to make sure there is enough
available memory to select and play the patches you want without audio dropouts or
distortion. Unlike in Logic Pro, you can choose different audio input and output
drivers in MainStage. For more information about choosing audio drivers, see
Setting MainStage Preferences.
Using Effects Plug-ins with MainStage
You can use all of the Logic Studio effects plug-ins, except for surround plug-ins, in
MainStage channel strips. For more information about the included effects plug-ins, refer
to the Logic Studio Instruments and Logic Studio Effects manuals. You can also use Apple
and third-party Audio Units effects in MainStage channel strips in the same way you use
them in Logic Pro channel strips.
Some Logic Studio effects, including Space Designer, require intensive real-time
processing of the audio signal. Using Space Designer in individual patches can affect
the performance of your concert, and in some cases can result in audio dropouts or
glitches, particularly if you set the audio buffer to a smaller size. For this reason,
it is recommended that you use Space Designer sparingly in your concerts, placing a
few instances on auxiliary channel strips shared between multiple patches rather than
in individual patches.
Some Audio Units plug-ins can introduce latency. Using effects that introduce latency,
such as compressors and limiters, can produce undesirable or unpredictable results
during live performance. Other Audio Units plug-ins, particularly instrument and amp
modeling plug-ins, require high levels of real-time processing and can affect the
performance of your concert.
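The trade-off behind the audio buffer size mentioned above is easy to quantify: a buffer must fill before it can be processed, so each buffer contributes buffer_size / sample_rate seconds of delay, while smaller buffers force the CPU to service audio more often. A quick sketch (the figures are illustrative, not MainStage defaults):

```python
def buffer_latency_ms(buffer_size: int, sample_rate: int = 44100) -> float:
    """One-way latency contributed by a single audio buffer, in milliseconds."""
    return buffer_size / sample_rate * 1000

# Smaller buffers mean lower latency but more frequent processing
# deadlines, which is why CPU-heavy plug-ins such as convolution
# reverbs are more likely to glitch at small buffer sizes.
for size in (64, 128, 256, 512, 1024):
    print(f"{size:5d} samples -> {buffer_latency_ms(size):5.1f} ms")
```

This is why testing your concert at the buffer size you intend to perform with matters: a patch that plays cleanly at 1024 samples may drop out at 128.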
Using MainStage with Time Machine
If you use Time Machine to back up the computer you use to perform with MainStage,
be aware that if Time Machine runs while you are performing in Perform or Full Screen
mode, the performance of your MainStage concert could be affected. To avoid any impact
on performance, it is recommended that you disconnect your Time Machine backup drive
when you perform with MainStage.
The MainStage Interface

You do all your work in MainStage in a single window, the MainStage window.
The MainStage window is organized to make it easy to work with your patches and the
layout of your concert. When you open MainStage, the workspace fills the center of the
window, with Inspectors and other editing areas on the sides and below. When you are
ready to perform, you can use one of two performance-oriented modes to maximize your
computer performance and also maximize your display space for easy viewing on stage.
The first time you open MainStage, the Choose Template dialog appears so that you can
choose a concert template to create a new concert. To learn how to open MainStage, see
Opening MainStage. For information about choosing a template, see Choosing a Concert
Template.
This chapter covers the following:
• The MainStage Window
• Layout Mode
• Edit Mode
• Perform Mode
• Full Screen Mode
• Customizing the MainStage Window
The MainStage Window
Some features of the MainStage interface are common to all modes, while others are
exclusive to certain modes.
The main features of the MainStage window include:
• Toolbar: Includes buttons for quick access to common commands and tools. You can
customize the toolbar so that the commands you use most frequently are readily
available.
• Activity Monitor: Shows your computer’s processor and memory usage, and shows the
input from your MIDI devices as you edit and perform.
• Workspace: The “canvas” where you customize your onscreen layout, assign hardware
controls to screen controls, and view your concerts while you perform.
• Screen controls: The onscreen objects that correspond to the controls on your hardware
devices. You can add and arrange screen controls in the workspace, assign hardware
controls to screen controls, and then map them to parameters you want to control for
each patch in your concert. There are three types of screen controls: panel controls,
shelf controls, and grouped controls.
• Channel strips: Channel strips are where you build and customize your sounds.
MainStage channel strips are similar to channel strips in Logic Pro, with Insert, Sends,
and I/O menus as well as level meters, faders, pan knobs, and other controls.
• Inspectors: Inspectors appear below (in Edit mode) or along the left side of the MainStage
window (in Layout mode) when you select different items onscreen. The Inspectors
allow you to edit parameters and attributes for patches, sets, screen controls, channel
strips, and the concert. Most Inspectors feature tabs that make it easy to quickly access
the parameters you want to edit.
To make working easier, MainStage features four different modes, each suited to a different
task. You audition, edit, and organize your sounds and map screen controls in Edit mode.
You customize the visual arrangement of controls onscreen and make controller
assignments in Layout mode. You use either Perform mode or Full Screen mode when
you perform live.
Layout Mode
Layout mode is where you customize your onscreen layout and make connections between
your MIDI hardware and the screen controls in your concert. You drag screen controls
into the workspace and arrange them onscreen to customize your layout, then create
connections (called controller assignments) between your MIDI hardware and the screen
controls.
In the Screen Control Inspector, you can edit layout parameters to customize hardware
assignments and modify the visual look of the screen controls in your concert.
• Screen Control Inspector: View and edit parameters for screen controls in the workspace,
including hardware input, appearance, and certain types of MIDI output parameters.
• Screen Controls Palette: Drag screen controls from the palette into the workspace to
add them to your onscreen layout. The palette has four tabs so that you can quickly
view all screen controls or only one type of screen control. Panel controls appear as
two-dimensional objects in the workspace, while shelf controls appear on an adjustable
three-dimensional shelf.
• Layout buttons: Along the left side of the workspace is a series of buttons that you can
use to quickly position selected screen controls in the workspace. You can align,
distribute, and group selected screen controls.
In Layout mode, unlike the other modes in MainStage, you can’t select or edit individual
patches. To learn what you can do in Layout mode, see Working in Layout Mode.
Edit Mode
Edit mode is where you create, customize, and organize your sounds. You can add patches,
add and edit channel strips, create keyboard layers and splits, and edit channel strip and
plug-in parameters. Edit mode is also where you map screen controls to channel strip
parameters and actions, and edit patch, set, and concert-level parameters.
• Patch List: Shows the patches and sets in the concert. You can add patches and sets
to the Patch List, name them, and organize them. The Patch List includes an Action
menu with commands to create patches and sets, reset program change numbers, skip
items, and import and export patches and sets to use in other concerts.
• Inspector (varies depending on the type of item selected): View and edit parameters for
the currently selected patch, channel strip, screen control, set, or for the concert. The
name of the Inspector changes to identify the type of item you are currently inspecting.
• Channel Strips area: View and edit the channel strips in your patches or at the concert
or set level. Channel strips appear in a vertical format similar to Logic Pro channel strips,
with many of the same controls. You can also add channel strips and save channel strip
settings.
To learn what you can do in Edit mode, see Working in Edit Mode and Working with
Concerts.
The remaining two modes, Perform mode and Full Screen mode, are both optimized for
performing live. You can use either one when you perform.
Perform Mode
In Perform mode, the workspace fills the entire MainStage window. The toolbar is visible
so that you can switch modes using the Mode buttons, use the Panic or Master Mute
buttons and the Tuner, and view CPU and memory levels and MIDI input in the Activity
Monitor. The browsers and Inspectors are hidden to maximize the size of the workspace,
making screen controls larger and easier to read in onstage situations. You can still access
the Finder and switch to other applications in Perform mode but cannot open plug-in
windows.
Full Screen Mode
In Full Screen mode, the workspace fills your entire computer display so that your screen
controls are as large as possible for maximum readability. Full Screen mode optimizes
your display for live performance when you want to use MainStage exclusively while you
play. Plug-in windows cannot be open in Full Screen mode.
To learn about using Perform mode and Full Screen mode when you perform live, see
Performing Live with MainStage.
Customizing the MainStage Window
You can customize the MainStage window to suit your way of working. In Edit mode, you
can adjust the width of the Patch List, show or hide the Inspectors and the Channel Strips
area, and customize the buttons on the toolbar.
Resizing the Workspace
You can adjust both the horizontal and vertical size of the workspace to give more room
to the Patch List, the Inspector, and the Channel Strips area.
To resize the workspace vertically
1 Move the pointer to the space between the workspace and the Inspector.
The pointer becomes a resize pointer.
2 Drag up or down to resize the workspace.
To resize the workspace horizontally
1 Move the pointer to the space between the workspace and the Channel Strips area.
The pointer becomes a resize pointer.
2 Drag left or right to resize the workspace.
Hiding and Showing the Inspector
You can hide the Inspector or show it if it is hidden.
To hide or show the Inspector
Do one of the following:
µ Choose View > Inspectors (or press Command-5).
µ In the toolbar, click the Inspectors button.
Hiding and Showing the Channel Strips Area
You can hide the Channel Strips area or show it if it is hidden. Hiding the Channel Strips
area gives you more room for the workspace.
To hide or show the Channel Strips area
Do one of the following:
µ Choose View > Channel Strips (or press Command-6).
µ In the toolbar, click the Channel Strips button.
Customizing the Toolbar
The toolbar at the top of the MainStage window contains buttons for frequently used
commands. You can customize the toolbar by adding buttons for the functions you use
most often, and you can return to the default set later.
The default set of toolbar buttons includes buttons for selecting the different window
modes, hiding the Inspector and the Channel Strips area, activating Master Mute, and
other common commands. You can customize the toolbar with additional buttons for
other commands and adjust the position and spacing of items. You can also hide the
toolbar to maximize available screen space. You customize the toolbar by dragging items
from the Customize Toolbar dialog to the toolbar.
To show the Customize dialog
Do one of the following:
µ Choose View > Customize Toolbar.
µ Control-click the toolbar, then choose Customize Toolbar from the shortcut menu.
The Customize Toolbar dialog appears, and spaces between buttons in the toolbar are
outlined in gray.
To add a button to the toolbar
µ Drag a button from the Customize dialog to the toolbar.
If you drag a button between two existing buttons, the buttons move to make room for
the new button.
To move a button in the toolbar
Do one of the following:
µ If the Customize Toolbar dialog is visible, drag the button to move it.
µ If the Customize Toolbar dialog is not visible, Command-drag the button to move it.
You can also rearrange the toolbar using set-width spaces, flexible spaces, and separators.
To add a space or a separator to the toolbar
µ Drag a space, flexible space, or separator from the Customize Toolbar dialog to the toolbar.
To return the toolbar to the default set of buttons
µ Drag the default button set, located at the bottom of the Customize Toolbar dialog, to
the toolbar.
You can also change the toolbar so that it shows only icons or only text by Control-clicking
the toolbar, then choosing Icon Only or Text Only from the shortcut menu.
To show only icons in the toolbar
Do one of the following:
µ Control-click the toolbar, then choose Icon Only from the shortcut menu.
µ In the Customize Toolbar dialog, choose Icon Only from the Show pop-up menu.
To show only text in the toolbar
Do one of the following:
µ Control-click the toolbar, then choose Text Only from the shortcut menu.
µ In the Customize Toolbar dialog, choose Text Only from the Show pop-up menu.
To show both icons and text in the toolbar
Do one of the following:
µ Control-click the toolbar, then choose Icon & Text from the shortcut menu.
µ In the Customize Toolbar dialog, choose Icon & Text from the Show pop-up menu.
To close the Customize dialog
µ When you are finished customizing the toolbar, click Done.
To hide the toolbar
µ Choose View > Hide Toolbar.
When the toolbar is hidden, the menu item becomes Show Toolbar.
Getting Started with MainStage

You can quickly start working in MainStage by choosing a concert template and trying
out the patch settings in the concert. This chapter provides a brief guided “walkthrough”
you can follow the first time you open MainStage. If you wish to continue learning the
major features of the application in a hands-on manner, consult the Exploring MainStage
guide included in the Logic Studio package.
This chapter covers the following:
• Before You Begin
• Opening MainStage
• Choosing a Concert Template
• Selecting Patch Settings in the Patch Library
• Adding a Patch
• Naming a Patch
• Selecting and Playing Patches
• Adding a Channel Strip
• Changing a Channel Strip Setting
• Learning a Controller Assignment
• Mapping a Screen Control
• Trying Out Full Screen and Perform Modes
Before You Begin
Before you start working in MainStage, you should connect the hardware equipment that
you plan to use, such as your keyboard controller, audio interface, instruments, or
microphones, to your computer. To use keyboard controllers and other MIDI devices with
MainStage, the devices should be capable of sending standard MIDI messages. If you’re
not sure whether this is the case for a particular device, consult the owner’s manual or
the product website. For more information, see Setting Up Your System.
Opening MainStage
You start by opening MainStage and creating a new concert from a template.
To open MainStage
µ Double-click the MainStage icon in your Applications folder or in the Dock.
Choosing a Concert Template
MainStage includes templates for different musical instruments, including Keyboards,
Guitar Rigs, Drums, Vocals, and more. You can choose a concert template in the Choose
Template dialog, which appears the first time you open MainStage and when you create
a new concert or close a concert.
To choose a concert template
1 Choose File > New Concert (or press Command-N).
2 In the Choose Template dialog, click the instrument category on the left you want to
view templates for.
A brief description below each template summarizes its features and intended use.
3 Scroll through the available templates to find the one you want to use.
4 Click Choose, or double-click the template.
A new concert created from the template opens in Edit mode. The workspace appears
in the center of the MainStage window, showing the screen controls in the concert. To
the left of the workspace is the Patch List, which shows the patches and sets in the concert.
The channel strips for the selected patch appear in the Channel Strips area to the right
of the workspace. The new concert may contain a single patch, or several patches. Below
the workspace, the Patch Library is open, so you can easily audition different patch settings
to find the one you want to use.
In the Choose Template dialog, you can view templates in either a grid or a Cover Flow
view. You can choose a different view using the view buttons, located in the lower-left
part of the dialog.
To choose a different view for the Choose Template dialog
µ To view templates in a grid, click the Grid button.
µ To view templates in Cover Flow, click the Cover Flow button.
For more information about opening, editing, and saving concerts, see Working with
Concerts.
Selecting Patch Settings in the Patch Library
When you open a concert or select a patch, the Patch Library opens in the Patch Inspector
below the workspace. The Patch Library contains a variety of patches optimized for the
instrument the concert is designed for. You can quickly audition patch settings in the
Patch Library and choose a setting for the selected patch.
To select a patch setting
1 Look through the settings in the Patch Library to find the one you want to use.
2 Click the patch setting.
You can start playing the patch immediately using the selected patch setting. You can
also search for patch settings by name.
To search for patch settings by name
1 Choose Find in Library from the Action menu in the upper-right corner of the Patch
Inspector.
2 Enter the name of the patch setting you want to find.
3 Click Find.
The first patch setting with the text you entered appears selected in the Patch Library.
4 To find subsequent patch settings with the same name, choose Find Again in Library
from the Action menu.
Note: If you have saved multiple patches to a .patch file using the Save as Set command
(or the Export as Set command in MainStage 1.0) in the Action menu, the saved file
appears as a patch in the Patch Library unless you have selected a different location for
saving the file. Clicking the saved file in the Patch Library causes an alert to appear while
the individual patches are opened from the .patch file.
Adding a Patch
You can add patches to the concert and organize them in the Patch List. The number of
patches is limited only by the amount of available memory in your system. When you
add a patch to a concert, the patch is selected so you can easily audition and select a
patch setting from the Patch Library.
To add a new patch
1 Click the Add Patch button (+), located in the upper-right corner of the Patch List.
The new patch appears in the Patch List, and the Patch Library is open in the Patch
Inspector.
2 Select the patch setting you want to use from the Patch Library.
If you want to play the patch using your keyboard controller, select a Keyboard patch. If
you want to play the patch using your electric guitar, select a Guitar Rig patch. For other
instruments or vocals, you can choose a template from the appropriate category or modify
a keyboard or guitar template to suit your needs.
3 If the patch uses an audio channel strip, make sure the channel strip is set to use the
correct audio input, then gradually raise the volume fader on the channel strip until you
hear sound on the channel.
Naming a Patch
When you add a patch, by default it takes the name of the channel strip added with it.
You can give each patch a custom name to make it easier to identify and distinguish
between them.
To name a patch
1 Double-click the patch in the Patch List.
A field appears around the patch name, which is selected.
2 Type a new name in the patch name field.
For more information about editing and organizing patches, see Working with Patches
in Edit Mode.
Selecting and Playing Patches
The patches in the concert appear in the Patch List along the left side of the
MainStage window. You can quickly select a patch by clicking it in the Patch List.
If you are using a MIDI controller, you can play patches that have a software instrument
channel strip using your controller. If you are playing an electric guitar or another
instrument or are using a microphone connected to an audio interface, you can play or
sing using patches that have an audio channel strip. Before playing through an audio
channel strip, first make sure that the channel strip is set to receive input on the channel
(or stereo pair of channels) to which your instrument or microphone is connected.
With the patch selected, try moving some controls on your MIDI controller and check to
see if the screen controls in the workspace respond. Some screen controls, including the
keyboard, modulation and pitch bend wheels, and sustain pedal screen controls, respond
to appropriate MIDI messages without needing to be assigned or mapped.
You can continue selecting and playing patches in the concert to find sounds you want
to perform with or to use as a starting point for creating your own custom patches. You
can also add new patches and edit their channel strip settings to create your own unique
sounds.
For more information about organizing and selecting patches in the Patch List, see Working
with Patches in Edit Mode.
Adding a Channel Strip
You can add channel strips to a patch to create layered sounds and keyboard splits.
When you add a channel strip to a patch, you choose the type of channel strip, the
output, and other settings. You can mix channel strips of different types in a single
patch.
To add a channel strip to a patch
1 Make sure the patch is selected in the Patch List.
2 Click the Add Channel Strip button (+) in the upper-right corner of the Channel Strips
area.
The New Channel Strip dialog appears. You choose settings in the New Channel Strip
dialog in the same way as when you add a patch.
3 In the New Channel Strip dialog, select the type of channel strip you want to create.
4 Choose the audio output for the channel strip from the Output pop-up menu.
5 For audio channel strips, choose mono or stereo format from the Format pop-up menu
and choose the audio input from the Input pop-up menu. For external instrument channel
strips, also choose the MIDI input, MIDI output, and MIDI channel from their respective
pop-up menus.
Important: Audio channel strips can produce feedback, particularly if you are using a
microphone for audio input. When you add an audio channel strip, the volume of the
channel strip is set to silence, and Feedback Protection is turned on to alert you when
feedback occurs on the channel strip. When you add an external instrument channel
strip, the volume of the channel strip is set to silence, but Feedback Protection is turned
off.
6 Optionally, you can add multiple channel strips to a patch by typing a number in the
Number field. You can add up to the maximum number for a channel strip type.
7 Click Create.
A new channel strip appears in the Channel Strips area, highlighted in white to indicate
that it is selected. The Channel Strip Inspector appears below the workspace, showing
different parameters for the new channel strip.
8 For audio and external instrument channel strips, gradually raise the volume fader until
you hear sound on the channel.
Most channel strip controls function in MainStage in the same way that they do in
Logic Pro. You can adjust channel strip output using the Volume fader, adjust pan position
using the Pan knob, and mute or solo the channel strip using the Mute and Solo buttons.
You can choose new channel strip settings, add and edit effects, add sends to busses,
and change the output in the same way as in Logic Pro. For audio channel strips, you can
switch between mono and stereo format using the Format button. For software instrument
channel strips, you can choose a different instrument from the Input pop-up menu.
You can also define the key range for a channel strip, create transform and velocity graphs,
and filter various MIDI messages to a channel strip in the Channel Strip Inspector. For
general information about working with channel strips, see the “Working with Instruments
and Effects” chapter of the Logic Pro User Manual. For more information about using
channel strips in MainStage, see Working with Channel Strips in Edit Mode.
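The keyboard layers and splits mentioned above amount to routing each incoming note only to the channel strips whose key range contains it. A hypothetical sketch of the idea (the names and data structure are illustrative, not MainStage internals):

```python
from dataclasses import dataclass

@dataclass
class ChannelStrip:
    name: str
    low_key: int   # lowest MIDI note number the strip responds to
    high_key: int  # highest MIDI note number

def strips_for_note(note: int, strips: list) -> list:
    """Return the names of channel strips whose key range contains the note.

    Overlapping ranges produce a layer; adjacent, non-overlapping
    ranges produce a split.
    """
    return [s.name for s in strips if s.low_key <= note <= s.high_key]

# A classic split: bass below middle C (MIDI note 60), piano from
# middle C up, with a pad layered across the whole keyboard.
patch = [
    ChannelStrip("Bass", 0, 59),
    ChannelStrip("Piano", 60, 127),
    ChannelStrip("Pad", 0, 127),
]
print(strips_for_note(48, patch))  # ['Bass', 'Pad']
print(strips_for_note(72, patch))  # ['Piano', 'Pad']
```

In MainStage itself you set these ranges in the Channel Strip Inspector rather than in code; the sketch only shows why a single key press can sound on several channel strips at once.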
Changing a Channel Strip Setting
You can quickly change the instrument, effects, and other parameters for a channel strip
by selecting a new setting from the Channel Strip Library. The browser shows available
settings for the currently selected channel strip.
To select a new channel strip setting
1 Make sure that the channel strip you want to change is selected.
The selected channel strip is highlighted.
2 In the Channel Strip Inspector, click the Channel Strip Library tab.
Available channel strip settings appear in the Channel Strip Library. Logic Studio content
appears as a series of folders with different instrument and usage categories. If you have
GarageBand or have one or more Jam Pack collections installed on your computer, those
settings appear below the Logic Studio settings.
3 Click a category from the column on the left, then click subcategories from the columns
on the right until you see the settings you want.
You can also search for channel strip settings by name and perform other functions using
the Channel Strip Library. For more information about the Channel Strip Inspector, see
Choosing Channel Strip Settings.
Learning a Controller Assignment
When you select a patch or a channel strip setting, some channel strip parameters
respond to the controls on your MIDI device instantly. MainStage responds to notes
played on a keyboard controller, volume, pan, and expression messages, modulation and
pitch bend wheel messages, and sustain pedal messages without your having to configure
any screen controls to receive these messages. Other controls, such as faders, knobs,
and buttons, must be assigned to MainStage screen controls before you can use them in
your concert.
In MainStage, you assign hardware controls to screen controls in Layout mode using
the Learn process, which is similar to learning controller assignments for a control
surface in Logic Pro. Learning controller assignments is a quick and easy way to
assign hardware controls to screen controls.
Note: To be able to assign a hardware control to a screen control, the hardware control
must send standard MIDI messages. For more information, see Using MIDI Devices with
MainStage.
To learn a controller assignment
1 In the toolbar, click the Layout button.
MainStage switches to Layout mode.
2 In the workspace, select the screen control you want to learn.
The selected control appears highlighted in blue.
3 Click the Learn button in the Screen Control Inspector (or press Command-L).
The Learn button glows red to indicate that the Learn process is active, and the selected
screen control is highlighted in red.
4 On your MIDI device, move the control you want to assign. Move faders and knobs through
their full range of motion, and press buttons exactly three times (not too quickly) to
enable MainStage to correctly learn the MIDI message types sent by these controls.
The values in the Hardware Assignment pop-up menus change to reflect the type of
hardware control learned by the screen control. While the assignment is being learned,
incoming MIDI messages appear in the Activity Monitor above the workspace.
After the assignment is learned, the screen control responds when you move the
corresponding hardware control. This shows that the screen control is receiving MIDI
input and is correctly assigned.
5 While the Learn process is active, you can learn additional controller assignments by
selecting another screen control and moving the hardware control you want to assign
to it. You can learn as many assignments as you wish while the Learn button remains red.
6 When you are finished assigning controls, click the Learn button (or press Command-L)
again to turn off the Learn process.
For more information about making controller assignments, see Assigning Hardware
Controls to Screen Controls.
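The advice above (turn knobs through their full range, press buttons exactly three times) exists because a learn routine has to infer the control type from the stream of values it receives. The following is a simplified, hypothetical heuristic, not MainStage's actual analysis:

```python
def infer_control_type(cc_values: list) -> str:
    """Guess whether a learned control is a button or a continuous
    control from the CC values it sent during Learn.

    Simplified heuristic for illustration only; MainStage's real
    analysis is more involved.
    """
    distinct = sorted(set(cc_values))
    if distinct == [0, 127] or distinct == [127]:
        # Three presses of a momentary button arrive as 127/0 pairs
        # (or only 127s, if the button sends nothing on release).
        return "button"
    if len(distinct) > 2:
        # A knob or fader swept through its full range sends many
        # intermediate values, unlike a button.
        return "continuous control (knob or fader)"
    return "unknown"

print(infer_control_type([127, 0, 127, 0, 127, 0]))  # button
print(infer_control_type([0, 14, 37, 64, 92, 127]))  # continuous control
```

A single press or a half-turned knob gives the routine too little evidence to decide, which is why partial gestures during Learn can produce wrong assignments.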
Mapping a Screen Control
After you have learned controller assignments for the screen controls you want to use,
you can map the screen controls to the parameters you want to control while you are
performing. You will likely want to map screen controls to parameters in each patch in
a concert, so that you can easily access and modify those parameters when you are
performing live. You can also map parameters at the concert level to control master
volume, view master levels, or modify concert-wide effects.
There are two ways to map screen controls to parameters: by visually selecting parameters
on channel strips or plug-in windows, or by choosing parameters in the Parameter
Mapping browser. To learn how to map a screen control to a channel strip or plug-in
parameter, see Mapping Screen Controls to Channel Strip and Plug-In Parameters. To
learn how to map a screen control to an action, see Mapping Screen Controls to Actions.
Trying Out Full Screen and Perform Modes
Now you can try playing your patches as you would in a performance. MainStage provides
two modes, Full Screen mode and Perform mode, that optimize the display of the
workspace for live performance. In Perform mode, you see the workspace and the toolbar,
so you can use the toolbar buttons and access other applications. In Full Screen mode,
the workspace occupies the entire screen, presenting the screen controls as large as
possible for easy viewing in concert environments.
To switch to Full Screen mode
Do one of the following:
µ Choose View > Full Screen (or press Command-4).
µ Click the Full Screen button.
To switch to Perform mode
Do one of the following:
µ Choose View > Perform (or press Command-3).
µ Click the Perform button.
You can try both of these modes, playing the patches you added or modified, and using
the controls on your MIDI controller to modify the parameters you have mapped to screen
controls.
Chapter 5: Working in Edit Mode

In Edit mode, you add and edit patches to create your custom sounds, choose patch
settings in the Patch Library, organize and select patches in the Patch List, edit patch
parameters in the Inspector, and map screen controls to parameters and actions. You can
create custom patches in Edit mode and organize them in the Patch List so that you can
easily access them when you perform.
This chapter covers the following:
• Working with Patches in Edit Mode
• Working with Channel Strips in Edit Mode
• Mapping Screen Controls
• Editing Screen Control Parameters in Edit Mode
• Working with Sets in Edit Mode
• Working at the Set Level
• Sharing Patches and Sets Between Concerts
• Recording the Audio Output of a Concert
Working with Patches in Edit Mode
Patches are the individual sounds you play using your keyboard controller (for MIDI
keyboardists) and the effects setups you use with your guitar, microphone, or other
instrument (for guitarists, vocalists, and other instrumentalists). MainStage patches can
contain multiple channel strips, each with a different instrument or effects setup.
Some basic patch operations, including adding and naming patches, selecting and naming
patches, and adding channel strips to patches, are described in Getting Started with
MainStage.
If MainStage is currently in Layout, Perform, or Full Screen mode, click the Edit button in
the top-left corner of the MainStage window to begin working in Edit mode.
Selecting Items in the Patch List
All of the patches and sets in a concert appear in the Patch List, located to the left of the
workspace. To select an item in the Patch List in Edit mode, you can click the item or use
key commands.
To select a patch in the Patch List
1 In the Patch List, located to the left of the workspace, click the patch.
Click a patch in the Patch List to select it, and start playing.
2 With the patch selected, you can start playing instantly.
You can also select patches in the Patch List using your computer keyboard.
To select a patch using your computer keyboard
µ Press the Down Arrow key to select the next (lower) patch in the Patch List.
µ Press the Up Arrow key to select the previous (higher) patch in the Patch List.
There are additional key commands you can use to select items in the Patch List.
Default key command Selects
Up Arrow Previous item (patch or set) in the Patch List
Down Arrow Next item (patch or set) in the Patch List
Command-Up Arrow Previous patch in the Patch List
Command-Down Arrow Next patch in the Patch List
Command-Left Arrow First patch of the previous set
Command-Right Arrow First patch of the next set
Note: When you use the Command-Arrow key commands listed above to select different
patches, the selected screen control remains selected in the workspace. This makes it
easy to see how a screen control is configured in different patches.
In addition to using key commands, you can select a patch (or set) in the Patch List by
typing the first few letters of its name.
To select a patch or set by typing its name
1 Click the border of the Patch List to select it.
2 With the Patch List selected, start typing the name of the patch. Once you type enough
letters to uniquely identify its name, the patch or set is selected.
You can also select a patch by typing its name in Perform or Full Screen mode. For
information, see Selecting Patches by Typing.
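The type-to-select behavior described above can be sketched as follows. This is a hypothetical illustration of the matching logic, not Apple's code; the function name `select_by_typing` is invented for this example:

```python
# Illustrative sketch (assumed behavior): typing accumulates a prefix,
# and the first item whose name starts with that prefix (ignoring case)
# is selected.

def select_by_typing(names, typed):
    """Return the index of the first name matching the typed prefix, or None."""
    typed = typed.lower()
    for i, name in enumerate(names):
        if name.lower().startswith(typed):
            return i
    return None

names = ["Grand Piano", "Guitar Lead", "Strings"]
print(select_by_typing(names, "gu"))  # 1 ("Guitar Lead")
```

Typing "g" alone would match "Grand Piano" first; typing the second letter narrows the match, which is why you may need a few letters to uniquely identify a patch.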
You can also select a patch using your computer keyboard by typing its patch number.
Patch numbers appear to the left of the patch names in the Patch List.
To select a patch by typing its patch number
1 Click the border of the Patch List to select it.
2 With the Patch List selected, type the patch number using your computer keyboard.
Skipping Items in the Patch List
You can skip patches or sets in the Patch List. When a patch or set is skipped, you can
still select the item by clicking it. However, when you use the arrow keys together with
the Command key to select items in the Patch List, skipped items are passed over and
the next non-skipped item is selected. Skipped items are also skipped when you use the
patch selector in Full Screen or Perform mode.
To skip a patch or set
1 Select the patch or set in the Patch List.
2 Choose Skip from the Action menu for the Patch List.
The item appears as a thin line in the Patch List.
To set a skipped patch or set to no longer be skipped
1 Select the item (patch or set) in the Patch List.
2 Choose Don’t Skip from the Action menu for the Patch List.
The item returns to full size in the Patch List.
Patches and sets are skipped only when you use the arrow keys together with the
Command key. Items set to be skipped are still selected when you use the arrow keys
alone or when you click them.
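The difference between plain arrow-key selection and Command-arrow selection can be sketched as follows. This is a hypothetical Python model of the behavior described above, not MainStage's implementation; `next_item` is an invented helper:

```python
# Illustrative sketch (assumed behavior): plain arrow selection moves one
# item at a time, while Command-arrow selection passes over skipped items.

def next_item(items, index, honor_skips):
    """Return the index of the next selectable item, or None at the end."""
    i = index + 1
    while i < len(items):
        if not (honor_skips and items[i]["skipped"]):
            return i
        i += 1
    return None

items = [{"skipped": False}, {"skipped": True}, {"skipped": False}]
print(next_item(items, 0, honor_skips=False))  # 1 (plain arrow key)
print(next_item(items, 0, honor_skips=True))   # 2 (Command-arrow)
```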
Collapsing Sets in the Patch List
You can collapse sets in the Patch List. When you collapse a set, you can select the set
and use any channel strips or busses at the set level but cannot select or play patches in
the set while in Edit mode.
To collapse a set
µ In the Patch List, click the disclosure triangle for the set.
You can uncollapse the set by clicking its disclosure triangle again. Collapsing a set has
no effect on whether you can select patches in the set in Full Screen or Perform mode.
For information about creating and using sets, see Working with Sets in Edit Mode.
Copying and Pasting Patches
You can copy, paste, and duplicate patches in the Patch List using the standard Mac OS X
menu and key commands or by Option-dragging. When you paste or duplicate a patch,
it includes any mappings made to parameters in the original patch.
Reordering Patches in the Patch List
When you add a patch to a concert, the new patch appears below the currently selected
patch in the Patch List. You can reorder patches in the Patch List.
To reorder patches in the Patch List
µ Drag patches up or down in the Patch List until they appear in the order you want.
Moving Patches in the Patch List Repeatedly
The MainStage command set includes a Move Again command that lets you easily move
selected patches multiple times. You can use Move Again when you drag, paste, create,
or delete patches in the Patch List. By default, the Move Again command is not assigned
to a key command. To use it, you should first assign it to a key command in the Command
Editor. For information about using the Command Editor, see Using the Command Editor.
Creating a Patch from Several Patches
You can create a patch by combining several existing patches. The new patch contains
all of the channel strips of the selected patches.
To create a patch from several existing patches
1 In the Patch List, select the patches you want to use to create the new patch.
2 Choose “Create Patch from Selected Patches” from the Action menu at the upper-right
corner of the Patch List.
The new “combined” patch appears in the Patch List, labeled “Untitled Patch.”
Note: Creating a patch with more than three channel strips can affect performance,
particularly if the channel strips are audio channel strips, or if they use a large number of
plug-ins or processor-intensive plug-ins.
Setting the Time Signature for a Patch
You can set the time signature for a patch. Time signatures can be used with the Playback
plug-in and also control the beats for the metronome. When you set the time signature
for a patch, it overrides any concert- or set-level time signature.
To set the time signature for a patch
1 In the Patch Inspector, select the Attributes tab.
2 In the Attributes tab, select the Has Time Signature checkbox.
3 Double-click the number in the field to the right, and enter the number of beats for one
measure of the time signature.
4 Choose the beat value from the pop-up menu to the right.
Changing the Tempo When You Select a Patch
You can give a patch its own tempo setting so that when you select the patch, the tempo
changes to the patch tempo setting. MainStage uses the new tempo until you select
another patch or set with its own tempo setting, tap a new tempo, or until MainStage
receives tempo information from incoming MIDI messages. For more information about
using and changing tempo in MainStage, see Using Tempo in a MainStage Concert.
To change the tempo using a patch
1 In the Attributes tab of the Patch Inspector, set the patch tempo using the Change Tempo
To value slider.
2 Select the Change Tempo To checkbox to activate the patch tempo when the patch is
selected.
Select the checkbox and set the tempo using the slider.
Setting Patch Program Change Numbers
When you add a patch to a concert, the patch is given a MIDI program change number
(the lowest available number between 0 and 127) until all available program change
numbers are taken. You can select patches using program change numbers when
performing by assigning buttons on a MIDI device to send program change messages.
You can change the program change number in the Patch Inspector.
To change the program change number for a patch
1 In the Patch List, select the patch.
When you select a patch, the Patch Inspector appears below the workspace.
2 In the Attributes tab of the Patch Inspector, select the Program Change checkbox.
3 Using the value slider, set the program change number.
The MIDI standard allows program change numbers with values from 0 to 127. If all
available program change numbers in a concert are already in use, any new patches
added to the concert will be given program change number zero (0), but the number is
inactive (the checkbox is not selected). Bank changes are not supported.
If you set a program change number so that it duplicates an existing program change
number, the word “Duplicate” appears in red next to the Program Change value slider.
If two or more patches have the same program change number, and the numbers are
active, the patch that appears first (highest) in the Patch List or patch selector is selected
when you send the program change message with the corresponding value.
You can reset program change numbers for all active (non-skipped) patches in a concert.
When you reset program change numbers, patches are assigned program change numbers
based on their order in the Patch List, starting from the top. The program change numbers
for skipped (inactive) patches are not reset.
To reset program change numbers for active patches in a concert
µ Choose Reset Program Change Numbers from the Action menu for the Patch List (or press
Command-Shift-Option-R).
You can assign buttons and other controls to send program change messages and use
them to select patches in the concert. For information about assigning buttons, see
Assigning Buttons.
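The numbering rules described above can be modeled in a short Python sketch. This is an illustration of the documented behavior, not MainStage's implementation; `lowest_available` and `reset_program_changes` are invented names:

```python
# Illustrative model (assumed from the documented rules): a new patch
# takes the lowest free program change number from 0-127, and a reset
# renumbers active (non-skipped) patches in Patch List order while
# leaving skipped patches untouched.

PC_RANGE = range(128)  # the MIDI standard allows values 0-127

def lowest_available(used):
    """Return the lowest unused program change number, or None if all are taken."""
    for n in PC_RANGE:
        if n not in used:
            return n
    return None

def reset_program_changes(patches):
    """Renumber active patches by Patch List order; skipped patches keep theirs."""
    next_n = 0
    for patch in patches:
        if patch.get("skipped"):
            continue
        patch["program_change"] = next_n
        next_n += 1
    return patches

patches = [
    {"name": "Grand Piano", "skipped": False},
    {"name": "Old Organ", "skipped": True, "program_change": 9},
    {"name": "Strings", "skipped": False},
]
reset_program_changes(patches)
print([p.get("program_change") for p in patches])  # [0, 9, 1]
```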
Deferring Patch Changes
By default, when you switch patches, the new patch is ready to play immediately. You
can “defer” a patch change so that the patch change occurs after the last note of the
previous patch has been released or sustained.
To defer a patch change
µ In the Attributes tab of the Patch Inspector, select the Defer Patch Change checkbox.
Note: Deferring patch change works in Perform mode and Full Screen mode but does
not work when you are editing patches in Edit mode.
Instantly Silencing the Previous Patch
Sometimes you may want the sound of the previous patch to continue after you select
a new patch, as when you want to sustain a chord pad while soloing over it. At other
times, you may want to silence the sound of the previous patch instantly when you select
a new patch.
To instantly silence the previous patch when you select a patch
µ In the Attributes tab of the Patch Inspector, select the Instantly Silence Previous Patch
checkbox.
Changing the Patch Icon
Each patch has an icon that appears in the Patch List next to the patch name. By default,
the patch icon shows the type of channel strip created when the patch was added. You
can choose a new icon for a patch and use icons to visually distinguish patches in the
Patch List.
To change the icon for a patch
µ In the Attributes tab of the Patch Inspector, choose an icon from the Icon pop-up menu.
Changing the Tuning for a Patch
By default, patches use the same tuning method as the concert (or the set, if they are in
a set with its own tuning method). You can change the tuning for a patch so that it uses
a different tuning. When you change the tuning for a patch, it overrides any concert- or
set-level tuning method.
To change the tuning for a patch
1 In the Patch Inspector, select the Tuning tab.
2 Choose the tuning you want the patch to use from the Method pop-up menu.
Deleting Patches
You can delete a patch if you decide you no longer want it in the concert.
To delete a patch
1 Select the patch in the Patch List.
2 Choose Edit > Delete (or press the Delete key).
Working with Channel Strips in Edit Mode
Channel strips are the building blocks of your patches. They contain the instruments and
effects for the sounds you use in performance. MainStage channel strips use the channel
strip interface familiar from Logic Pro. MainStage channel strips have the same structure
and many of the same functions as Logic Pro channel strips. The main features of
MainStage channel strips are shown below:
• Settings menu
• Icon
• Name
• Pan knob
• Mute and Solo buttons
• Insert slots
• Send slots
• Volume fader and level meter
In MainStage, you can use audio, software instrument, and auxiliary (aux) channel strips
in your patches and sets, and also at the concert level. You can also use external instrument
patches to “play” external hardware devices and ReWire applications. You can use channel
strips in MainStage just as you can in Logic Pro. You can adjust the volume level using
the Volume fader, adjust the pan position using the Pan knob, and mute or solo the
channel strip using the Mute and Solo buttons.
A MainStage concert can have a maximum of 1023 software instrument channel strips,
512 audio channel strips, 256 external instrument channel strips, and 256 auxiliary (aux)
channel strips.
As in Logic Pro, you can add effects using the Insert slots, send the signal to an auxiliary
channel (aux) using the Sends slots, and choose a different output from the Output slot.
For audio channel strips, you can change the format between mono and stereo using
the Format button. For software instrument channel strips, you can change the instrument
using the Input slot. You can also choose, copy, and save channel strip settings, choose
a different channel strip type, or reset the channel strip from the channel strip menu.
Because MainStage is designed for live performance rather than recording and arranging,
there are a few differences between MainStage channel strips and Logic Pro channel
strips:
• MainStage channel strips include an Expression dial so that you can easily see the
current MIDI Expression being received by the channel strip.
• MainStage channel strips do not have a Record Enable or Bounce button.
• MainStage audio channel strips can use automatic Feedback Protection to alert you
when feedback occurs on the channel. For information about using Feedback Protection,
see Using Feedback Protection with Channel Strips.
• MainStage audio channel strips do not have an input monitoring (i) button. You can
use the Mute button to silence the channel strip.
• In MainStage, you can use the Format button to select mono or stereo format. MainStage
does not support surround input or surround processing.
• MainStage channel strips do not have Group or Automation Mode pop-up menus.
• MainStage channel strips include a Change All option in both Input and Output pop-up
menus that you can use to change either the input or output for all channel strips in
a patch, a set, or for the overall concert.
• In MainStage, the selected channel strip is highlighted in white.
• Only one channel strip in each patch (the first audio channel strip) sends audio to the
Tuner. The channel strip that sends audio to the Tuner is indicated by a tuning fork
icon at the top of the channel strip.
• In MainStage, the name of the channel strip changes when you select a new channel
strip setting, unless you have renamed it.
• In MainStage, the channel strip number (at the bottom of the channel strip) reflects its
order in the patch, not the concert.
• Surround plug-ins are not available in MainStage.
• You can choose the information displayed on the channel strip, including latency
information, by Control-clicking the channel strip and choosing the information you
want to display from the shortcut menu.
• The Playback plug-in is available only in MainStage, not in Logic Pro.
For more information about working with channel strips, see the “Working with
Instruments and Effects” and “Mixing” chapters in the Logic Pro User Manual. For complete
information about the instruments and effects available in Logic Studio, see the Logic Studio
Instruments and Logic Studio Effects guides.
To learn how to add a channel strip, see Adding a Channel Strip. To learn how to change
a channel strip setting, see Changing a Channel Strip Setting.
Selecting Channel Strips
When you add a channel strip to a patch (or add a channel strip at the set or concert
level), the channel strip is selected in the Channel Strips area, and available settings appear
in the Channel Strip Settings browser. You can select a channel strip directly by clicking
it in the Channel Strips area and also select an adjacent channel strip by using key
commands:
Key command Selection
Left Arrow The channel strip to the left
Right Arrow The channel strip to the right
Showing Signal Flow Channel Strips
In addition to the channel strips in a patch, you can view and edit signal flow channel
strips in the Channel Strips area. Signal flow channel strips include the Output and Master
channel strips for the concert, auxes that are receiving signal from a channel strip in the
patch, and any set- or concert-level channel strips that are available when the patch is
selected. You can also view signal flow channel strips at the set level.
When you show signal flow channel strips, channel strips at the concert level, including
Output and Aux channel strips, include a small concert icon near the top of the channel
strip to make it easy to distinguish them from patch-level channel strips. Channel strips
at the set level include a small folder icon so they can also be easily distinguished.
You can edit signal flow channel strips in the Channel Strips area. For example, you can
adjust the volume fader or pan slider of a signal flow channel strip, or add effects to an
aux channel strip.
To show signal flow channel strips
µ Choose Show Signal Flow Channel Strips from the Action menu in the upper-right corner
of the Channel Strips area.
Creating an Alias of a Channel Strip
You can create an alias of a channel strip and use the alias in different patches or sets.
Aliases allow you to share highly memory-intensive plug-ins, such as third-party
multi-channel instruments and samplers, between different patches, rather than creating
multiple instances of these plug-ins. In some cases, creating an alias can be more efficient
(use fewer resources) than adding a concert- or set-level channel strip.
To create an alias of a channel strip
1 In the Channel Strips area, select the channel strip.
2 Choose Edit > Copy, or press Command-C (default).
3 In the Patch List, select the patch in which you want to use the alias.
4 Choose Edit > Paste as Alias, or press Command-Option-V (default).
The alias is pasted after the last channel strip in the patch (but before any signal flow
channel strips, if they are visible). An alias icon appears near the top of the alias to
distinguish it from the channel strips in the patch.
You can use an alias in multiple patches or sets. When you change any setting on the
original channel strip, those changes are reflected in the aliases of the channel strip. You
may want to audition each patch that uses an alias after changing the settings of the
original channel strip, to make sure it sounds the way you want.
Note: You can’t import a patch or set containing an alias, because the aliased channel
strip may not be available.
You can create an alias of a multi-output instrument, such as the EXS24 mkII, to use in
another patch or set in the concert. When you copy a multi-output instrument to create
an alias, be sure to select all of the aux channel strips for the instrument so that the
complete multi-output instrument is pasted as an alias. For information about using
multi-output instruments in MainStage, see Using Multiple Instrument Outputs in
MainStage.
Editing Channel Strips in MainStage
You can add instruments to software instrument channel strips and add effects to any
channel strip in the Channel Strips area. Adding instruments and effects to a channel
strip is the same in MainStage as it is in Logic Pro.
You edit channel strip parameters in the Channel Strip Inspector, which appears below
the workspace when the channel strip is selected in the Channel Strips area. You can set
the key range and velocity offset, create a controller transform, and filter MIDI control
messages to the channel strip. You can also rename the channel strip and change the
channel strip color and icon. The Channel Strip Inspector has four tabs, which provide
the following functions:
• Channel Strip Library and Plug-In Library: With a channel strip selected, you can choose
channel strip settings from the Channel Strip Library. With an Insert slot selected, you
can choose settings for the plug-in from the Plug-In Library.
• Attributes: You can rename the channel strip and choose a different channel strip color
and icon.
• MIDI Input: You can create controller transforms in the MIDI Input tab. For software
instrument and external instrument channel strips, you can also choose the MIDI input
device, filter MIDI input, transpose the instrument, and create velocity scaling graphs.
• Layer Editor: For software instrument and external instrument channel strips, you can
define the key range, set floating split points, and set the minimum and maximum
velocity for the channel strip.
Using the Channel Strip Library, you can access any Logic Studio channel strip. However,
some channel strips include plug-ins (particularly Space Designer) that are not suited to
live performance because of their intensive CPU usage. Using these channel strips can
affect the performance of your concert, resulting in audio dropouts and other issues.
Logic Studio surround effect plug-ins cannot be used with MainStage. If you choose a
channel strip setting containing one of these effects, the unused effects are shown disabled
(gray, with a diagonal line running through the effect name).
Choosing Channel Strip Settings
You can quickly change the instrument, effects, and other parameters for a channel strip
by choosing a new channel strip setting. You can choose a new channel strip setting in
one of two ways: by using the Channel Strip Library or by using the Settings button at
the top of the channel strip.
To choose a channel strip setting from the Channel Strip Library
1 In the Channel Strips area, select the channel strip you want to change.
The selected channel strip is highlighted with a blue outline.
2 In the Channel Strip Inspector, click the Channel Strip Library tab.
Available settings for the channel strip appear in the Channel Strip Library. Logic Studio
content appears in a series of folders with different instrument categories. If you have
GarageBand installed, or have one or more Jam Packs installed on your computer, those
settings appear below the Logic Studio settings.
3 Click a category from the column on the left, then click subcategories from the columns
on the right until you see the settings you want.
You can select a recent channel strip setting by clicking Recent in the column on the left,
and then selecting a recent setting from the second column. As in Logic Pro, you can also
choose a new channel strip setting from the Settings menu at the top of the channel
strip.
To choose a channel strip setting from the Settings menu
µ Click the Settings button at the top of the channel strip, then choose a new setting from
the menu that appears.
When you choose new channel strip settings from the Settings menu, the selected channel
strip setting does not appear selected in the Channel Strip Library.
You can also search for channel strip settings by name.
To search for channel strip settings in the Channel Strip Library
1 In the Channel Strip Inspector, select the Channel Strip Library tab.
2 Choose Find in Library from the Action menu in the upper-right corner of the Channel
Strip Inspector.
3 In the dialog that appears, type the text you want to search for.
The channel strip with the text in its name appears selected in the library.
4 If more than one channel strip includes the search text, choose “Find Next in Library”
from the Action menu to cycle through the channel strips with names containing the
text.
5 To change the channel strip setting, click the name of the new setting in the Channel
Strip Inspector.
The Channel Strip Library shows all channel strip settings available to Logic Studio
applications, including settings that may not be useful in MainStage, such as mastering
settings. If you choose a channel strip setting containing plug-ins not usable in MainStage,
the plug-ins appear with a bold diagonal line in the Channel Strips area.
Renaming a Channel Strip
When you add a channel strip to a patch, the channel strip has a default name. You can
rename channel strips to distinguish your custom settings from the default ones.
To rename a channel strip
µ In the Attributes tab of the Channel Strip Inspector, select the name in the Name field
and type a new name.
Type a new name in the field.
Changing the Channel Strip Color
Each channel strip has a color, which appears at the bottom of the channel strip and as
a layer above the keyboard screen control in the workspace and the Layer Editor. You
can change the color of a channel strip to make it easier to visually distinguish channel
strips.
To change the color of a Software Instrument channel strip
µ In the Attributes tab of the Channel Strip Inspector, choose a color from the Color pop-up
menu.
Choose a color from the pop-up menu.
Changing the Channel Strip Icon
When you add a channel strip, the channel strip has a default icon, which appears above
the Settings menu. You can change the icon to help visually distinguish channel strips
with different instrument types or uses.
To change the icon for a channel strip
µ In the Attributes tab of the Channel Strip Inspector, choose an icon from the Icon well.
Using Feedback Protection with Channel Strips
You can use “Feedback Protection” on audio and external instrument channel strips in
MainStage. When Feedback Protection is turned on for a channel strip, MainStage alerts
you when it detects feedback on the channel. When the feedback alert appears, the
channel is temporarily silenced. You can then choose to mute the channel while you find
and eliminate the source of the feedback, allow feedback on the channel, or continue to
use the channel and receive alerts when feedback occurs.
Feedback protection is turned on by default for audio channel strips and turned off by
default for external instrument channel strips. You can turn Feedback Protection on or
off for a channel strip in the Channel Strip Inspector.
To turn Feedback Protection on or off
µ In the Attributes tab of the Channel Strip Inspector, select the Feedback Protection
checkbox to turn Feedback Protection on. If it is on, deselect the checkbox to turn it off.
Setting Keyboard Input for a Software Instrument Channel Strip
In the Channel Strip Inspector, you can choose the keyboard controller from which the
channel strip receives MIDI input. If you are using a multitimbral instrument, you can also
choose the input for each MIDI channel. For example, you can use the EVB3 instrument
as a multitimbral instrument, and send input to the upper and lower register and the
foot pedal using three separate MIDI channels.
To set the keyboard input for a software instrument channel strip
1 In the Channel Strip Inspector, click the MIDI Input tab.
2 Choose the MIDI input device from the Keyboard pop-up menu in the Input section.
The names in the Keyboard pop-up menu correspond to keyboard screen controls in the
workspace.
To set multitimbral input for different MIDI channels
1 In the Channel Strip Inspector, click the MIDI Input tab.
2 Choose Multitimbral from the Keyboard pop-up menu in the Input section.
3 In the Multitimbral Settings dialog, choose the input device for each MIDI channel you
want to receive MIDI input.
Transposing Software Instrument Channel Strips
You can transpose (change the pitch of) a software instrument channel strip. When you
transpose a channel strip, every MIDI note received by the channel strip is transposed by
the number of semitones set in the Transpose value slider.
To transpose the MIDI input of a software instrument channel strip
1 Select the channel strip in the Channel Strips area.
2 In the MIDI Input tab of the Channel Strip Inspector, set the value using the Transpose
value slider. You can click the value and drag up or down to set the value, click the up
arrow or down arrow, or double-click the value and type a new value.
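The effect of the Transpose value can be sketched as follows. This is an illustration of the behavior described above (each incoming note shifted by a number of semitones), not MainStage's code; `transpose_note` is an invented name, and clamping to the MIDI note range is an assumption:

```python
# Illustrative sketch (assumed behavior): transposing a channel strip
# shifts every incoming MIDI note by a number of semitones, kept within
# the valid MIDI note range of 0-127.

def transpose_note(note, semitones):
    """Shift a MIDI note number, clamped to 0-127."""
    return max(0, min(127, note + semitones))

# Middle C (60) transposed up a perfect fifth (7 semitones):
print(transpose_note(60, 7))    # 67
# Clamping at the top of the range:
print(transpose_note(125, 12))  # 127
```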
Filtering MIDI Messages
You can filter some MIDI messages for a channel strip in the Channel Strip Inspector.
When you select one or more MIDI message types in the Filter section of the Channel
Strip Inspector, the corresponding MIDI message types are filtered out of any incoming
MIDI data and are not sent to the channel strip.
You can filter the following types of MIDI messages:
• Pitch Bend
• Sustain (control message 64)
• Modulation (control message 1)
• Expression (control message 11)
• Aftertouch
To filter incoming MIDI messages
1 In the Channel Strip Inspector, click the MIDI Input tab.
2 In the Filter section of the MIDI Input tab, select the checkbox for the MIDI messages you
want to filter.
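Conceptually, filtering drops matching messages before they reach the channel strip. A hypothetical model, using the fact that sustain, modulation, and expression are control change messages while pitch bend and aftertouch have their own message types:

```python
# Controller numbers for the filterable control change messages.
FILTERABLE_CCS = {"sustain": 64, "modulation": 1, "expression": 11}

def passes_filter(message, filtered):
    """Return True if a message should reach the channel strip.

    `message` is a dict with a 'type' key (plus 'controller' for control
    changes); `filtered` is the set of message-type names checked in the
    Filter section. A hypothetical model, not MainStage code.
    """
    if message["type"] in filtered:          # pitch_bend, aftertouch
        return False
    if message["type"] == "control_change":
        for name in filtered:
            if FILTERABLE_CCS.get(name) == message.get("controller"):
                return False
    return True
```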
If you have created a controller transform, you can filter the input message type, and the
controller transform will still send its output message type. It is also possible to filter the
output message type, but in this case the output of the controller transform will be filtered.
Setting a Channel Strip to Ignore Hermode Tuning
If a patch (or the concert or set containing the patch) is set to use Hermode tuning, but
the patch contains a channel strip (for example, one with a drum or percussion instrument)
that you do not want to use Hermode tuning, you can set the individual channel strip to
ignore Hermode tuning.
To set a channel strip to ignore Hermode tuning
µ In the MIDI Input tab of the Channel Strip Inspector, select the Ignore Hermode Tuning
checkbox.
For information about using Hermode tuning, see the Logic Pro User Manual.
Working with Graphs
Using graphs, you can graphically remap the values for some MIDI control messages so
that input values from your controller produce different output values for the channel
strip or plug-in parameter. Graphs make it easier to see and modify a range of values for
a parameter, such as velocity or filter cutoff.
You can use graphs for the following types of parameters:
• Controller transforms
• Velocity scaling (both input velocity and note input)
• Parameters to which a screen control is mapped
You open a graph window by clicking the button for that type of graph in the appropriate
Inspector. The Transform and Velocity Scaling graphs for the selected channel strip are
available in the MIDI Input tab of the Channel Strip Inspector. The Parameter graph for
the selected screen control is available in the tab for the individual mapping as well as
in the Mappings tab in the (Edit mode) Screen Control Inspector.
The graph shows the range of input values on the horizontal (x) axis, moving from left
to right, and shows the range of output values on the vertical (y) axis, moving from bottom
to top.
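Conceptually, a graph is a set of nodes mapping input values to output values, with the curve interpolating between them. A minimal sketch in Python, assuming straight segments between nodes (this model omits the nonlinear bending of segments):

```python
def graph_output(nodes, x):
    """Interpolate an output value from graph nodes.

    `nodes` is a list of (input, output) points sorted by input value,
    like the nodes placed on the curve. Hypothetical model only.
    """
    if x <= nodes[0][0]:
        return nodes[0][1]
    for (x0, y0), (x1, y1) in zip(nodes, nodes[1:]):
        if x <= x1:
            t = (x - x0) / (x1 - x0)
            return round(y0 + t * (y1 - y0))
    return nodes[-1][1]
```

An identity curve `[(0, 0), (127, 127)]` passes values through unchanged; an inverted curve `[(0, 127), (127, 0)]` reverses them, as the Invert button does.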
In the graph window, you have several ways of working. You can edit the graph curve
directly, edit values numerically using the Precision Editor, or use the Curve buttons to
set the graph to one of the predefined curves.
Most of the ways you edit graphs are the same, regardless of the type of graph, although
there are a few features specific to one or another type. For Parameter graphs, you can
change the minimum and maximum range values for the graph using the Range Min
and Range Max value sliders. For information about controller transforms, see Creating
Controller Transforms. For information about velocity scaling, see Scaling Channel Strip
Velocity. For information about parameter mapping graphs, see Using Parameter Mapping
Graphs.
To edit a graph
1 Select the channel strip or screen control you want the graph to apply to.
2 Select the MIDI Input tab (for transform and velocity scaling graphs) or the Mapping tab
(for parameter mapping graphs).
3 Click the graph button for the type of graph you want to edit.
The graph window opens.
4 Do one of the following:
• Click one of the Curve buttons to set the graph to one of the preset curves.
• Click the curve at the point where you want to add a node, then drag the node to the
desired value. Drag horizontally to change the input value, or vertically to change the
output value.
As you drag, the current values of the node appear next to the pointer.
• Double-click the curve at the point where you want to add a node, then edit the values
for the node in the Precision Editor.
• Option-click any part of the curve (except a node), then drag the dotted part of the
curve to make the curve nonlinear.
5 Continue adding and adjusting points on the curve until you achieve the result you want.
6 When you are finished, click the close button at the upper-left corner of the graph window
to close it.
To invert the values of the graph
Do one of the following:
µ In the graph window, click the Invert button.
µ In the tab for the mapping, select the Invert Parameter Range checkbox.
To reset the graph to its default values
µ Click the Revert to Default button at the top of the graph window.
After you have edited a graph, the button for the graph in the Inspector shows the edited
shape of the graph in a dark blue color to make it easier to identify which graphs you
have edited and how.
To close the graph window
µ Press Escape (Esc).
Creating Controller Transforms
Using a transform graph, you can remap the values for some MIDI control messages so
that input values from your controller produce different output values for the channel
strip. A common use of the transform is for expression scaling, where input MIDI expression
values are mapped to different output values on a graphic curve.
In addition, you can transform input values for one message type to output values for
another message type. For example, you can transform MIDI volume values from your
controller to send expression values to the channel strip, or transform input breath values
to send modulation values. The transform graph provides a very flexible way of remapping
both the values and the output destination for these MIDI control messages. In MainStage,
you can transform values for expression, modulation, MIDI volume, and breath control
messages.
You choose the input and output message types and graphically create transform curves
in the MIDI Input tab of the Channel Strip Inspector. In a transform graph, the horizontal
axis represents input values from your controller, and the vertical axis represents output
values sent to the channel strip.
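For example, transforming MIDI volume (controller 7) into expression (controller 11) amounts to rewriting the controller number and passing the value through the graph. A hypothetical sketch, with a plain function standing in for the transform curve:

```python
# Controller numbers for the four transformable message types.
CC = {"expression": 11, "modulation": 1, "volume": 7, "breath": 2}

def transform(cc_number, value, input_type, output_type, curve):
    """Remap one control message per a transform graph.

    Returns a (controller, value) pair. Messages of other types pass
    through untouched. Hypothetical model, not MainStage code.
    """
    if cc_number != CC[input_type]:
        return (cc_number, value)
    return (CC[output_type], curve(value))

# Turn incoming MIDI volume into expression, halving the values.
out = transform(7, 100, "volume", "expression", lambda v: v // 2)
```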
To set the input and output message types for a controller transform
1 In the Channel Strips area, select the channel strip for which you want to create a controller
transform.
2 In the Channel Strip Inspector, select the MIDI Input tab.
3 In the Controllers section, choose the input message type from the Input pop-up menu.
4 Choose the output message type from the Output pop-up menu.
To open the Transform graph
µ In the MIDI Input tab of the Channel Strip Inspector, click the Transform button.
The Transform graph opens.
If a patch contains more than one channel strip with a transform graph, the transform
curves for the other channel strips in the patch appear in the controller Transform graph
window behind the current curve. Each channel strip in the patch can have its own
controller transform.
For information about editing the graph, see Working with Graphs.
Scaling Channel Strip Velocity
You can scale the output velocity of a channel strip using the Velocity Scaling graphs.
You can scale output velocity based on note input or input velocity.
When you perform velocity scaling, each input velocity (regardless of the note being
played) is scaled to the output velocity.
When you perform note scaling, output velocity is scaled depending on the note in the
key range. This is useful when you want to have a parameter change in different parts of
the key range; for example, when a filter or attack parameter opens for higher note values
to give a brighter, sharper sound.
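The two scaling modes can be sketched as follows, with hypothetical curve functions standing in for the graphs (an illustrative model, not MainStage code):

```python
def scale_by_velocity(velocity, curve):
    """Velocity scaling: every input velocity is remapped by the curve,
    regardless of which note is played. Output clamped to 1-127."""
    return max(1, min(127, curve(velocity)))

def scale_by_note(note, curve):
    """Note scaling: the note number, not the input velocity, determines
    the output velocity, so different parts of the key range respond
    differently."""
    return max(1, min(127, curve(note)))

# Hypothetical curve: output velocity grows with note number, giving
# higher notes a brighter, harder attack.
brighter = lambda n: 40 + n // 2
```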
To open a velocity scaling graph
1 In the Channel Strips area, select the channel strip on which you want to perform velocity
scaling.
2 In the Channel Strip Inspector, select the MIDI Input tab.
3 In the MIDI Input tab, do one of the following:
• To open the velocity input graph, select the Velocity Input button.
• To open the note input graph, select the Note Input button.
The selected velocity scaling graph opens.
For information about editing the graph, see Working with Graphs.
Creating Keyboard Layers and Splits
If you play a keyboard controller, you can easily create keyboard layers and splits in your
MainStage patches. You create layers and splits by adding two or more channel strips to
a patch and setting the Low Key and High Key for each channel strip to define its key
range. The key range defines the range of notes on a keyboard controller that trigger
sound from a software instrument or external instrument in the channel strip. You can
define key ranges so that they overlap (for layered sounds) or are contiguous (for splits).
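The overlap-versus-contiguous idea can be modeled as simple range checks, with each strip sounding only for notes inside its Low Key–High Key span (hypothetical strip names and ranges for illustration):

```python
# Hypothetical patch: a bass/piano split plus a pad layered across both.
strips = {
    "bass":  (0, 59),     # (Low Key, High Key) as MIDI note numbers
    "piano": (60, 127),
    "pad":   (48, 72),    # overlaps both ranges: a layer
}

def sounding_strips(note):
    """Return the channel strips whose key range contains the note."""
    return sorted(name for name, (low, high) in strips.items()
                  if low <= note <= high)
```

Playing note 50 triggers both the bass and the pad; playing note 100 triggers only the piano.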
The Layer Editor tab in the Channel Strip Inspector shows the key range for each channel
strip in a patch and in the concert or set containing the patch (if either includes a channel
strip with a key range). You can define the key range for a channel strip in one of several
ways: you can drag the edges of the layer, use the Learn buttons to define the Low and
High keys, or use the Low Key and High Key value sliders.
To open the Layer Editor
µ In the Channel Strip Inspector, click the Layer Editor tab.
To define a key range using the layers
1 In the Layer Editor, move the pointer over the left edge of the layer whose key range
you want to define.
The pointer changes to a resize pointer.
2 Drag the left edge of the layer to the note you want to use as the low key (the lowest
note in the key range).
3 Move the pointer over the right edge of the layer.
4 Drag the right edge of the layer to the note you want to use as the high key (the highest
note in the key range).
To define a key range using the Learn buttons
1 In the Channel Strips area, select the channel strip.
2 In the Channel Strip Inspector, click the Layer Editor tab.
3 Click the Learn button next to the Low Key value slider.
4 On your keyboard controller, press the key you want to set as the lowest key in the key
range.
5 Click the Learn button again to turn off Learn mode for the Low Key.
6 Click the Learn button next to the High Key value slider.
7 On your keyboard controller, press the key you want to set as the highest key in the key
range.
8 Click the Learn button again to turn off Learn mode for the High Key.
When you play the patch, you hear the channel strip when you play notes inside the key
range. When you play notes outside the key range, no sound is generated from the
channel strip.
To define a key range using the value sliders
1 In the Channel Strips area, select the channel strip.
2 In the Channel Strip Inspector, click the Layer Editor tab.
3 Change the value in the Low Key value slider.
You can click the value and drag vertically, click the up arrow or down arrow, or
double-click the value and type a new value.
4 Change the value in the High Key value slider.
You can click the value and drag vertically, click the up arrow or down arrow, or
double-click the value and type a new value.
Setting Floating Split Points
When a key range has a floating split point, the notes that define the boundaries of the
key range change depending on the keys you play as you approach the boundary
of the key range. You set floating split points in the Layer Editor tab of the Channel Strip
Inspector.
Floating split points can be explained using an example. If you set the Low Key of a key
range to C1, set a floating split point value of 3, then play notes immediately above C1
(for example, the notes F1-Eb1-D1), and continue playing downward past C1 (for example,
the notes C1-Bb0-A0), the split point moves down to include those notes, up to the
floating split point value (3 semitones). If, however, you start by playing notes immediately
below the Low Key (for example, the notes G0-A0-B0) and continue playing upward past
C1 (for example, the notes C1-D1-E1), the split point moves up to include those notes,
up to the floating split point value. (In this example, C1 and D1 would be included, but
not E1, which is four semitones above the Low Key.)
To set floating split points for a layer/key range
1 In the Layer Editor tab, click the Low Key Floating value slider and drag vertically to change
the value, or double-click the current value and type a new value (the value is the number
of semitones used for the split).
2 Click the High Key Floating value slider and drag vertically to change the value, or
double-click the current value and type a new value.
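A loose model of the example above, assuming the boundary simply shifts by the floating value toward the direction of play (the actual logic likely tracks more context than one previous note):

```python
def effective_low(low, float_semitones, prev_note, note):
    """Shift a Low Key boundary toward the direction of play.

    Heading downward across the boundary lowers it by up to
    `float_semitones`; heading upward from below raises it by the same
    amount. A simplified, hypothetical model of floating split points.
    """
    if prev_note is None:
        return low
    if note < prev_note:                 # playing downward
        return low - float_semitones
    if note > prev_note:                 # playing upward
        return low + float_semitones
    return low
```

With a Low Key of C1 (note 36) and a floating value of 3, playing downward lets notes as low as A0 (note 33) still trigger the layer.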
You can also create a keyboard split by adding a channel strip at the set level and adjusting
the key range of the channel strips in the patches in the set. The channel strip at the set
level takes precedence over any channel strips in patches in the set for the notes in its
key range. For information about adding a channel strip at the set level, see Working at
the Set Level.
Setting the Velocity Range
By default, the velocity range of a channel strip extends from 1 to 127. You can limit the
range so that the channel strip responds only when the notes you play on your controller
fall between the Min and Max values of the velocity range.
To set the velocity range for a channel strip
1 In the Channel Strips area, select the channel strip.
2 In the Channel Strip Inspector, click the Layer Editor tab.
3 In the Layer Editor, set the minimum velocity that triggers the channel strip using the
Velocity Min value slider. (Click the value and drag vertically to change the value, or
double-click the value and type a new value.)
4 Set the maximum velocity that triggers the channel strip using the Velocity Max value
slider.
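Conceptually this is a simple gate on each note's velocity (a hypothetical check, not MainStage code):

```python
def in_velocity_range(velocity, vmin=1, vmax=127):
    """True if a note's velocity falls inside the strip's Min/Max range;
    notes outside the range do not trigger the channel strip."""
    return vmin <= velocity <= vmax

# A strip limited to soft playing: triggers at velocity 30, not 90.
soft_hit = in_velocity_range(30, vmin=1, vmax=60)
```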
Overriding Concert- and Set-Level Key Ranges
If a software instrument channel strip exists at the concert level, the concert-level channel
strip takes precedence over any patch-level software instrument channel strips within its
key range. This means that when you play any notes in the key range of the concert-level
channel strip on a keyboard controller, you hear only the concert-level channel strip, even
when a patch is selected.
Similarly, if a software instrument channel strip exists at the set level, the same condition
applies for all patches in the set. That is, the set-level channel strip takes precedence over
any patch-level channel strips within its key range.
You can override concert- or set-level channel strips for a channel strip on an individual
patch, so that the patch-level channel strip takes precedence over the concert-level or
set-level channel strips.
To override concert- or set-level key ranges
1 In the Patch List, select the patch containing the channel strip that you want to have
override the concert- or set-level channel strip.
2 In the Channel Strips area, select the channel strip with the key range that you want to
override the concert- or set-level key range.
3 In the Channel Strip Inspector, click the Layer Editor tab.
4 Select the “Override parent ranges” checkbox.
The “Override parent ranges” checkbox is available only if there is a concert- or set-level
channel strip.
Using the EXS24 mkII Instrument Editor in MainStage
For channel strips using the EXS24 mkII sampler instrument, you can edit sampler
instrument zones and groups in the EXS Instrument Editor. The EXS24 mkII Instrument
Editor works exactly the same in MainStage as it does in Logic Pro, with one exception: in
MainStage, you cannot open the Sample Editor to edit individual audio samples.
In an EXS24 mkII instrument, a zone is a location into which a single sample (an audio
file) is loaded from a hard disk. Zones can be assigned to groups, which provide
parameters that let you edit all zones in the group simultaneously. You can define as
many groups as desired. The Instrument Editor has two view modes: Zones view, in which
you edit zone parameters, and Groups view, in which you edit group parameters.
To open the EXS24 mkII Instrument Editor
1 In a channel strip using the EXS24 mkII, double-click the EXS24 slot in the I/O section.
2 In the upper-right area of the EXS24 mkII plug-in window, click the Edit button.
The Instrument Editor opens. When you play notes on the keyboard of the EXS24 mkII
Instrument Editor, the notes are played on the selected channel strip. You can switch
between Zones view and Groups view, click individual zones to view their parameters,
click notes on the keyboard to hear the samples assigned to them, create zones and
groups, and edit zone and group parameters just as you can in Logic Pro.
For in-depth information about using the EXS24 mkII Instrument Editor, refer to the
Logic Studio Instruments Help.
Using Multiple Instrument Outputs in MainStage
MainStage supports the multiple output versions of the EXS24 mkII, Ultrabeat, and some
Audio Units instruments. You can insert a multi output instrument and use it to route
individual outputs to different physical outputs, or to apply different plug-ins or
processing to each output.
If an instrument supports multiple outputs, one or more multi output versions are available
in the Instrument Plug-In menu for the instrument.
The Plug-In menu shows specific information about output configurations, for
example: EXS24: Multi Output (5xStereo, 6xMono).
Note: Not all instruments support multiple outputs. If no multi output version is available
in the Plug-In menu, the instrument does not support multiple outputs.
To insert a multi output instrument
1 On the channel strip in which you want to use the multi output instrument, click the
Instrument slot.
2 Choose the instrument from the Plug-In menu, and choose the multi output version from
the submenu.
The instrument name appears in the Instrument slot, and a small Add (+) button appears
below the Solo button on the channel strip. The Output for the instrument is set to Output
1-2.
3 Double-click the Instrument slot to open the instrument (plug-in) window.
You need to set up the output routing for individual sounds or samples in the instrument
(plug-in) window. You set up output routing for the EXS24 mkII in the Instrument Editor,
and set up output routing for Ultrabeat in the Output menu of the Assignment section
of the Ultrabeat window.
4 On the channel strip, click the Add button to add additional outputs.
Each time you add an output, a new section of the channel strip is added, with the next
available pair of outputs.
Each output uses the same instrument, but each can have its own inserts, volume, pan,
and expression settings and its own effect sends, as well as its own outputs.
For more information about using multiple instrument outputs, see the Logic Pro
User Manual and the Logic Studio Instruments manual. Information about specific
instruments (for example, Ultrabeat) can be found in the chapters covering those
instruments.
Using External MIDI Instruments in MainStage
You can add an external MIDI instrument channel strip to a patch and use it to play an
external instrument, such as a hardware synthesizer. You can also use an external
instrument to “play” a ReWire application.
When you use an external MIDI instrument channel strip, you choose the MIDI channel
to send MIDI output from MainStage to the instrument, and choose the audio inputs to
receive audio from the instrument. The audio output from the instrument is routed to
the input of the channel strip, where you can process it using MainStage effects.
To add an external instrument channel strip
1 Click the Add Channel Strip (+) button in the upper-right corner of the Channel Strips
area.
2 In the New Channel Strip dialog, select External Instrument.
You can also choose the MIDI input and output, the format, and the audio input and
output for the channel strip. You can choose an audio channel or a ReWire application
for the input, but cannot choose a bus. The MIDI input pop-up menu shows the Keyboard
or MIDI Activity screen controls (which receive MIDI note input) currently in the workspace.
Note: When using an external instrument to send MIDI to a ReWire slave application
(such as Reason or Live), you should disable any MIDI input the slave application receives
directly from the hardware controller. For information about disabling MIDI input from
a hardware device, consult the documentation for the application.
For ReWire applications, when you add an external channel strip, set the MIDI port to the
ReWire slave. The Channel list also updates based on the port. Some ReWire slaves set
up multiple ports. To use a ReWire application with MainStage, open the ReWire application
after opening MainStage.
When you play your keyboard controller with the patch containing the external MIDI
instrument selected, MainStage sends note and other MIDI messages to the chosen MIDI
Output and MIDI Channel, receives audio from the chosen Input, and sends the audio
output to the chosen Output. You can also send a program change message to the
external instrument when you select the patch to control which program the external
instrument uses.
To send a program change to an external instrument when you select a patch
1 In the Channel Strip Inspector, click the MIDI Out tab.
2 In the MIDI Out tab, select the Send Program Change checkbox.
The Program Change value is set to –1 by default, so that no program change is sent
when you select the Send Program Change checkbox, until you change the value.
3 Set the program change number you want to send using the Send Program Change value
slider.
4 If you want to send a Bank Change message, select the Send Bank Change checkbox,
then set the most-significant byte (MSB) and least-significant byte (LSB) of the bank
change number using the Bank MSB and Bank LSB value sliders.
When you select the patch, the program change and bank change messages are sent to
the external instrument. Also note that program and bank changes are sent when you
edit the program change and bank change value sliders in the Channel Strip Inspector
(so you can be sure that the values you enter send the correct program and bank change
messages). For more information about using external MIDI instruments, see the Logic Pro
User Manual.
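At the MIDI level, a bank change is sent as controller 0 (Bank MSB) and controller 32 (Bank LSB), followed by the program change itself. A sketch of the raw messages as (status byte, data bytes) tuples, with parameter names mirroring the value sliders described above:

```python
def bank_and_program_messages(channel, program, bank_msb, bank_lsb):
    """Build the standard MIDI messages for a bank + program change.

    Bank Select is CC 0 (MSB) and CC 32 (LSB); the Program Change status
    byte is 0xC0 plus the MIDI channel (0-15). Illustrative sketch only.
    """
    return [
        (0xB0 | channel, 0, bank_msb),    # CC 0: Bank Select MSB
        (0xB0 | channel, 32, bank_lsb),   # CC 32: Bank Select LSB
        (0xC0 | channel, program),        # Program Change
    ]
```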
If you want the external instrument to respond to the program change, but do not want
it to receive note or other MIDI information from your controller, click the MIDI Input tab
and choose None from the Keyboard pop-up menu.
You can also use a knob or fader mapped to the Program Change action to send program
changes to an external instrument.
To send program changes to an external instrument using a screen control
1 In the workspace, click the screen control you want to use to send program change
messages.
2 In the Screen Control Inspector, click the Unmapped tab.
3 In the Mapping browser, select the external instrument, then select the MIDI Controller
folder from the submenu.
4 In the third column from the left, select Program Change.
The screen control is mapped to the Program Change parameter. By moving the hardware
control assigned to the screen control, you can send program changes to the external
instrument.
Note: If the MIDI Out parameter of the external instrument channel strip is set to the
external instrument when you map the screen control to the Program Change parameter,
a program change (Program 0) is sent when you create the mapping. If you are editing
the program on the external instrument, your changes may be lost. To map the screen
control without sending an immediate program change to the external instrument, choose
None from the MIDI Out slot of the external instrument before you create the mapping,
then choose the external instrument in the MIDI Out slot. No program change is sent
until you move the knob or fader.
Using the Activity Monitor
As you work on your concert in Edit mode, the Activity Monitor in the toolbar shows the
current CPU and memory information as well as received MIDI messages. The CPU section
of the Activity Monitor glows red to indicate a CPU overload condition.
The Memory section of the Activity Monitor glows yellow to indicate a low-memory
condition. If an extreme low-memory condition occurs, an alert appears, warning you to
save the concert before MainStage quits. Low-memory conditions can be caused by
having too many memory-intensive channel strips or plug-ins in a concert or by using
other memory-intensive applications (including ReWire applications) together with the
concert. If a low-memory condition occurs, try reopening the concert and consolidating
some memory-intensive plug-ins or channel strips.
Deleting Channel Strips
You can delete a channel strip if you decide you no longer want it in a patch.
To delete a channel strip
1 Select the channel strip in the Channel Strips area.
2 Choose Edit > Delete (or press the Delete key).
Mapping Screen Controls
After you have created your patches and learned controller assignments for the screen
controls you want to use, you can map MainStage screen controls to channel strip and
plug-in parameters to modify the sound of your patches while you perform, or map them
to MainStage actions to control other functions.
You map screen controls to parameters in Edit mode. After you learn controller
assignments (in Layout mode), the screen controls in the workspace do not respond to
movements of physical controls on your MIDI hardware until you map them to channel
strip parameters (in Edit mode). There are two ways to map screen controls to
parameters: by visually selecting parameters on channel strips or plug-in windows or by
choosing parameters in the Parameter Mapping browser.
Mapping Screen Controls to Channel Strip and Plug-In Parameters
After you have made your controller assignments, you can begin mapping screen controls
to the parameters in your patches that you want to control while you are performing. You
will likely want to map screen controls to parameters in each patch in a concert, so that
you can easily access and modify the parameters you want for each patch when you are
performing live. You can also map parameters at the concert level to control master
volume, view master levels, or modify concert-wide effects.
You can map screen controls to channel strip and plug-in parameters in one of two
ways: by mapping screen controls visually to parameters on the channel strip or in a
plug-in window or by using the Parameter Mapping browser.
You map screen controls to parameters in Edit mode. The screen controls in the workspace
do not respond to movements of physical controls on your MIDI hardware until you map
them to channel strip parameters.
To map a screen control to a channel strip or plug-in parameter
1 In the workspace, click the screen control you want to map.
The screen control is highlighted in blue. The Screen Control Inspector appears below
the workspace, showing the parameters for the selected screen control. The Screen Control
Inspector includes General and Mapping tabs as well as a tab labeled Unmapped.
2 Press Command-L.
The Screen Control Inspector opens to the Unmapped tab, showing the Parameter
Mapping browser. The Map Parameter button lights red to indicate that mapping is active.
3 To map the screen control to a channel strip parameter, click the control for the parameter
on the channel strip in the Channel Strips area.
4 To map the screen control to a plug-in parameter, double-click the plug-in in the Inserts
section of the channel strip to open the plug-in window, then click the parameter in the
plug-in window.
The screen control is mapped to the selected parameter, and the Unmapped tab takes
the name of the parameter. You can continue mapping additional screen controls by
clicking them in the workspace and then clicking the corresponding parameters in a
channel strip or plug-in window.
5 When you are finished, press Command-L again (or click the Map Parameter button) to