
# The Best Way to Teach Folding

## Nicholas Ray, Final Report

AP Statistics, Kiker B6

## Date: May 15th, 2017

Is there evidence that teachers giving lessons at school are not as effective as virtual lessons?

This experiment will explore this topic, as the current education system is modeled around the

classroom, and not just the projector. Will the next wave of education be from behind the screen?

Introduction

Body

Appendices

F: R Code

I: Photo Evidence of Sample Subjects' Signatures

Introduction:

The public school system is built upon teachers giving lecture-style lessons, here called physical lessons, to their students. But more and more virtual lessons are being implemented, from Khan Academy video lessons that review topics with great success to online tutorials that fail to teach individuals the basic skill of tying a bowtie. So the question that follows is: are video lessons actually effective? To test this, the lesson was set to teaching origami, the art of paper folding - specifically, the paper crane. The population is of a single year, the senior class of 2017, which gives the highest response rate compared to the other classes. Between the two lessons, physical and virtual, the number of steps completed toward creating a paper crane was recorded for each and the difference computed. The null hypothesis is that there is no difference in the number of steps between the two lessons; the alternative hypothesis is that more steps are completed with the virtual lesson than with the physical lesson.

Body:

The population of interest is the senior class of 2017, one class year only so as not to confound grade level with teaching practices that have changed over the years. Although this may not be representative of all students, it is the best option due to the high response rate in previous experiments. The sample was obtained via a simple random sampling function in RStudio applied to the whole senior class of 2017, and contains 25 students. This stays under the 10% condition, which preserves the approximate randomness of the sample when picking without replacement and keeps it representative of the class of 2017. Randomness was also used to pick the order of treatments, physical lesson first or virtual lesson first, via coin flip; once one order had been assigned to 13 people, the remaining subjects received the other order, so that both lessons had the same .50 chance of going first, because order is not the variable under investigation. This design accounts for the practice that the first lesson provides, and for whether a lesson going first or last makes a difference, as opposed to the quality of the lesson itself. There is a set number of steps it takes to fold a paper crane, 18 in the method listed in the appendix. To test the virtual lesson, a one-minute recording was created for the seniors to watch and follow along with, pausing if need be. After five minutes were up, the number of steps completed was recorded. Then the same student received a physical lesson with the same verbal instructions, but with a teacher walking through the process in person. Again five minutes were given and the steps were recorded.
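The sampling and order-assignment scheme described above can be sketched in code. This is an illustrative Python stand-in for the RStudio workflow (the roster names are placeholders, not the real Alpha list):

```python
import random

# Placeholder roster standing in for the 251-senior Alpha list;
# the real student names appear in the appendix.
roster = [f"Senior {i:03d}" for i in range(1, 252)]

random.seed(2017)  # fixed seed so the illustration is reproducible

# Simple random sample of 25 seniors, staying under 10% of the class,
# mirroring R's sample(lr$Student.Name, size = 25)
subjects = random.sample(roster, k=25)

# Assign treatment order by coin flip, capping either order at 13 people
# so the 25 subjects split 13/12 between the two orders.
counts = {"physical-first": 0, "virtual-first": 0}
orders = {}
for senior in subjects:
    order = random.choice(list(counts))
    if counts[order] == 13:  # this order already has its 13 people
        order = "virtual-first" if order == "physical-first" else "physical-first"
    counts[order] += 1
    orders[senior] = order
```

Because the cap triggers once one order fills up, the split always ends 13/12 no matter how the coin flips fall.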

The data was then uploaded into RStudio and the difference in steps, virtual minus physical, was computed to run a paired t test (a one-sample t test on the differences), where the null hypothesis is that the true mean difference is zero and the alternative is that the mean difference is positive, meaning the virtual lesson taught more steps. After running the test, the p-value came back 0.452. This means the probability of obtaining the sample mean difference seen, or one more extreme (higher), would be 0.452 given that the null hypothesis of the true mean difference being zero is true. This is very high, so the test fails to reject the null hypothesis that the true mean difference in virtual minus physical lesson steps is zero. Therefore there is no evidence for the alternative hypothesis that the true mean difference is positive, that is, that the true mean number of steps after the virtual lesson is higher than after the physical lesson. The appendix contains the R code used to perform the test.
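The test is easy to reproduce from the recorded data. The sketch below recomputes the statistic in plain Python (standard library only) from the step counts listed in the R code appendix; the one-sided p-value is obtained by numerically integrating the t density rather than calling R's t.test, so it matches the reported values only to a few decimal places:

```python
import math

def t_pdf(x, df):
    # Density of Student's t distribution with df degrees of freedom
    c = math.exp(math.lgamma((df + 1) / 2) - math.lgamma(df / 2)) / math.sqrt(df * math.pi)
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def one_sided_p(t, df, steps=4000):
    # P(T > t) for t >= 0, via Simpson's rule on [0, t]
    if t == 0:
        return 0.5
    h = t / steps
    s = t_pdf(0, df) + t_pdf(t, df)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * t_pdf(i * h, df)
    return 0.5 - s * h / 3

# Steps completed after each lesson, from the report's appendix
Pdata = [13, 14, 18, 18, 18, 12, 8, 3, 10, 12, 18, 10, 13,
         13, 18, 12, 8, 4, 6, 12, 16, 18, 18, 5, 7]
Vdata = [13, 11, 18, 13, 13, 3, 3, 6, 10, 18, 14, 9, 10,
         8, 15, 12, 8, 12, 18, 14, 18, 18, 18, 9, 16]
diff = [v - p for v, p in zip(Vdata, Pdata)]  # virtual minus physical

n = len(diff)
mean = sum(diff) / n                                  # sample mean difference, 0.12
sd = math.sqrt(sum((d - mean) ** 2 for d in diff) / (n - 1))
t_stat = mean / (sd / math.sqrt(n))                   # R reports t = 0.12177
p_value = one_sided_p(t_stat, n - 1)                  # R reports p-value = 0.452
```

The same three numbers (mean difference 0.12, t = 0.12177 on 24 degrees of freedom, p = 0.452) appear in the R console output in the appendix.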

To use the t test, conditions had to be met. The ten percent rule says that the sample must not be more than ten percent of the total population. In this case the class of 2017 is 251 in size, so the largest sample size that would not be considered oversampling is 25. This rule acknowledges that sampling without replacement does not keep the probability of selection exactly the same, but the probabilities are not dramatically changed because the sample is small enough. The data was then plotted on a normal quantile plot and was approximately linear, so an approximately normal model is acceptable for the sample distribution given. This shows mastery of describing patterns in the sample distribution of the experiment data. Along with a simple random sample being obtained, the conditions for an approximately normal model are met, which shows mastery of describing overall behaviors in the sample distribution of the experiment data. There are no deviations from the pattern, such as outliers, and the overall behavior of the sample distribution is approximately normal; this shows mastery of observing departures from patterns in the sample distribution of the experiment data. The inference test performed was a paired t test, because the number of steps from the physical lesson and the number of steps from the virtual lesson come from the same subjects. The data is paired, not independent, because the same person took both lessons, in a random order. Planning of the experiment started with getting the random sample and then scheduling the subjects at times that worked for them. This shows mastery of planning the experiment. The test then had to be repeated with each senior, keeping the same treatments for every subject: the virtual lesson is simply a video with headphones, while the physical lesson teaches the subject through dialogue while going through the steps. This shows mastery of conducting the experiment. After conducting the test with each senior, the recorded numbers of steps performed after the virtual and physical lessons were uploaded into RStudio, and the difference was taken (virtual minus physical). The paired t test was then run on the difference vector, yielding the p-value of 0.452. This means the probability of obtaining the sample mean difference seen, or one more extreme (higher), would be 0.452 given that the null hypothesis of the true mean difference being zero is true. Because of this high probability, there is no evidence that the treatment made a significant difference in the steps completed toward a paper crane between the two types of lessons. This shows mastery of analyzing the experiment and exploring random phenomena using probability and simulation. By assuming the null hypothesis of no difference in number of steps between lessons and looking for evidence for the alternative hypothesis of more steps in favor of virtual lessons, this shows mastery of estimating parameters, testing hypotheses, and drawing valid conclusions from the p-value.

Concluding this experiment, the paired t test gave an insignificant p-value. This means there is no evidence to reject the null hypothesis that the mean difference in virtual minus physical steps is zero, so the alternative hypothesis, the intended goal of seeing whether virtual lessons are more effective than physical lessons as measured by a higher number of steps performed, had no supporting evidence. In other words, this gives no evidence that physical lessons given by teachers are less effective than virtual lessons when it comes to the number of steps of a paper crane a senior can fold. The goal was to see how effective virtual lessons are for teaching, so another version of this experiment might not use origami but other educational lessons, such as math tests or a memorization test, and observe the quantity and quality of the lesson received by the subjects. The population was focused on the senior class of 2017, but this experiment could be expanded to other populations to see how the effects of virtual and physical lessons change. Overall, this experiment leaves the current teaching method unchallenged and puts no evidence against this style of teaching.
Appendix A

Appendix B

## Revised Inquiry Pitch:

***
Is the future of teaching
behind a computer screen?
***
Difference of virtual and physical lessons on
learning origami Inquiry Pitch
Nicholas Ray
AP Statistics
B6
Wednesday, April 12, 2017
The past twelve years of my education were full of physical lessons, with a teacher, at school. There has been plenty of supplemental education, from the occasional RStudio video on YouTube to Khan Academy review lessons, but the main bulk of knowledge was shared looking at another human face-to-face rather than at an LCD screen. However, the increasing amount of schoolwork given over a screen rather than in class puts the effectiveness of the virtual lesson into question. If teachers could upload videos, lessons could be given anywhere, rather than forcing a learner to sit in a school chair. Origami is the Japanese art of folding paper, and I learned it myself in 7th grade at Kealing Middle School. After not totally understanding the lesson at school, I looked up a video to learn the art completely. Virtual lessons helped me in this case, and later in life as well, but is there a significant difference in the effectiveness of the lesson, virtual versus physical? This inquiry will attempt to find an answer. A redditor by the name of SoundTheNote asked "what is the best way to learn Origami?", and the answers included conferences on Origami. This brought up the virtual versus physical lesson question in the context of Origami, and made it a perfect place to start my inquiry on the subject of virtual versus physical lessons as a whole.

This inquiry is focused on my own class, so I will be selecting from the senior class of 251 students. To meet the 10% condition, I will select only 25 students, which is also a reasonably large sample size for the CLT to support approximate normality of the sampling distribution. I will use the Alpha list subsetted to seniors to select the 25 seniors for the experiment. I will use a paired t test: each senior takes both the physical lesson taught by me and the virtual lesson created by me, and each person takes the lessons in random order. The time it takes each student to finish a paper crane will be recorded for both lessons, and the difference will be taken (virtual minus physical). I will then run a paired t test (a one-sample t test on the differences) on the mean difference in time taken to complete a crane, the null hypothesis being no difference in time and the alternative being a difference. Control is handled by the paired design: having each student take both lessons makes sure each individual student's intelligence or past knowledge of origami is taken into account. Randomness comes in through the assignment of treatment order and the selection of seniors. Repetition comes in through selecting 25 seniors to take part in the experiment. The one-sample t test on the differences respects the paired nature of the experimental data and tests whether there is a significant difference in the mean time it takes to create a crane after a physical lesson versus a virtual lesson, while accounting for each senior's variability in past knowledge and learning skills.

The sampling distribution should look approximately normal, because the sample taken from the population of seniors is reasonably large, and departures from this may limit the usefulness of the t test performed. If the pattern in my experiment's results is that virtual lessons take less time on average than physical ones, I will find the p-value of this outcome to see whether this variation from no difference is so large that it would be very unlikely to occur by chance; if so, I will have evidence that there actually is a difference in virtual and physical lesson effectiveness. The plan above lists out the steps of my experiment and covers the three criteria of a good experiment: control, randomness, and repetition. If my p-value is low (less than 0.05), I will reject the null hypothesis that there is no difference and claim evidence that there truly is a difference in the mean time taken to create a crane via virtual lesson or physical lesson. The p-value gives the probability of the observed outcome under the null hypothesis, and if it is too low we have to question the assumption that there was no difference to begin with. Once we have a significant p-value, we can reject the null hypothesis that there is no difference in mean time to make a crane with a physical or virtual lesson, and therefore also have evidence that there is a difference.

After learning how to fold a crane the first time, regardless of lesson type, the next attempt will be quicker on the basis of practice; but by having each person randomly get one lesson first over the other, the bias of practice evens out with randomization. Non-response bias should not be a problem because I am only analyzing seniors in my grade, who know me and can trust me when I ask them to participate in my experiment. Even if I get significant evidence that virtual lessons have a significantly different mean time to create a crane than physical lessons, learning how to fold a crane is only 3D manipulation, unlike other academics such as mathematical thinking and rhetorical thought. Teachers and physical lessons are still an integral part of schooling, but assessments of virtual lessons like this experiment can help improve the education system rather than abolish school teachers entirely.

Works Cited

SoundTheNote. "Best Way to Learn Origami? r/origami." Reddit, 2015. Web. 11 Apr. 2017.
Appendix C

## Revised Inquiry Pitch

***
Is the future of teaching
behind a computer screen?
***
Difference of virtual and physical lessons on
learning origami Inquiry Pitch
Nicholas Ray
AP Statistics
B6
Wednesday, April 12, 2017
The past twelve years of my education were full of physical lessons, with a teacher, at school. There has been plenty of supplemental education, from the occasional RStudio video on YouTube to Khan Academy review lessons, but the main bulk of knowledge was shared looking at another human face-to-face rather than at an LCD screen. However, the increasing amount of schoolwork given over a screen rather than in class puts the effectiveness of the virtual lesson into question. If teachers could upload videos, lessons could be given anywhere, rather than forcing a learner to sit in a school chair. Origami is the Japanese art of folding paper, and I learned it myself in 7th grade at Kealing Middle School. After not totally understanding the lesson at school, I looked up a video to learn the art completely. Virtual lessons helped me in this case, and later in life as well, but is there a significant difference in the effectiveness of the lesson, virtual versus physical? This inquiry will attempt to find an answer. A redditor by the name of SoundTheNote asked "what is the best way to learn Origami?", and the answers included conferences on Origami. This brought up the virtual versus physical lesson question in the context of Origami, and made it a perfect place to start my inquiry on the subject of virtual versus physical lessons as a whole.

This inquiry is focused on my own class, so I will be selecting from the senior class of 251 students. To meet the 10% condition, I will select only 25 students, which is also a reasonably large sample size for the CLT to support approximate normality of the sampling distribution. I will use the Alpha list subsetted to seniors to select the 25 seniors for the experiment. I will use a paired t test: each senior takes both the physical lesson taught by me and the virtual lesson created by me, and each person takes the lessons in random order. The time it takes each student to finish a paper crane will be recorded for both lessons, and the difference will be taken (virtual minus physical). I will then run a paired t test (a one-sample t test on the differences) on the mean difference in time taken to complete a crane, the null hypothesis being no difference in time and the alternative being a difference. Control is handled by the paired design: having each student take both lessons makes sure each individual student's intelligence or past knowledge of origami is taken into account, since a measurement is recorded for the first and second lesson from each subject and the difference is tested. Randomness comes in through the assignment of treatment order and the selection of seniors. Repetition comes in through selecting 25 seniors to take part in the experiment. The one-sample t test on the differences respects the paired nature of the experimental data and tests whether there is a significant difference in the mean time it takes to create a crane after a physical lesson versus a virtual lesson, while accounting for each senior's variability in past knowledge and learning skills.

The sampling distribution should look approximately normal, because the sample taken from the population of seniors is reasonably large, and departures from this may limit the usefulness of the t test performed. If the pattern in my experiment's results is that virtual lessons take less time on average than physical ones, I will find the p-value of this outcome to see whether this variation from no difference is so large that it would be very unlikely to occur by chance; if so, I will have evidence that there actually is a difference in virtual and physical lesson effectiveness. The plan above lists out the steps of my experiment and covers the three criteria of a good experiment: control, randomness, and repetition. If my p-value is low (less than 0.05), I will reject the null hypothesis that there is no difference and claim evidence that there truly is a difference in the mean time taken to create a crane via virtual lesson or physical lesson. The p-value gives the probability of the observed outcome under the null hypothesis, and if it is too low we have to question the assumption that there was no difference to begin with. Once we have a significant p-value, we can reject the null hypothesis that there is no difference in mean time to make a crane with a physical or virtual lesson, and therefore also have evidence that there is a difference.

After learning how to fold a crane the first time, regardless of lesson type, the next attempt will be quicker on the basis of practice; but by having each person randomly get one lesson first over the other, the bias of practice evens out with randomization. Non-response bias should not be a problem because I am only analyzing seniors in my grade, who know me and can trust me when I ask them to participate in my experiment. Focusing on only one class at one school may add bias into the experiment, and this must be addressed in the report. Even if I get significant evidence that virtual lessons have a significantly different mean time to create a crane than physical lessons, learning how to fold a crane is only 3D manipulation, unlike other academics such as mathematical thinking and rhetorical thought. Teachers and physical lessons are still an integral part of schooling, but assessments of virtual lessons like this experiment can help improve the education system rather than abolish school teachers entirely.

Works Cited

SoundTheNote. "Best Way to Learn Origami? r/origami." Reddit, 2015. Web. 11 Apr. 2017.
Appendix D

## The Difference Between Video Lessons or Physical Lessons: Which Is Better to Teach Origami?

## Nicholas Ray, Final Report

AP Statistics, Kiker B6
Date: May 15th, 2017

Is there evidence that teachers giving lessons at school are not as effective as virtual lessons?
This experiment will explore this topic, as the current education system is modeled around the
classroom, and not just the projector. Will the next wave of education be from behind the screen?
Introduction:
The public school system is built upon teachers giving physical lessons to their students. But more and more virtual lessons are being implemented, from Khan Academy video lessons that review topics with great success to online tutorials that teach how to tie a tie yet fail for some individuals. So the question that follows is: are video lessons actually effective? To test this, the lesson was set to teaching origami: the art of paper folding, specifically the paper crane. After running the experiment, the test came back without evidence to reject the null hypothesis that both lessons are equally effective, so there is no evidence to support that virtual lessons are more effective than physical lessons. This means that the next education model can still be focused around the teacher, who can keep their job. Although the test came back with this result, it does not stop the future exploration of digital lessons.
The population of interest is the senior class of 2017. The sample was obtained via a simple random sampling function in RStudio applied to the whole senior class of 2017, and contains 25 students so as not to go over the 10% condition, which preserves the approximate randomness of the sample when picking without replacement and keeps it representative of the class of 2017. Randomness was also used to pick the order of treatments, physical lesson first or virtual lesson first, via coin flip; once one order had been assigned to 13 people, the rest received the other order, so both lessons had the same chance of going first, because order is not the variable under investigation. This accounts for the practice the first lesson provides, and for whether a lesson going first or last makes a difference, as opposed to the quality of the lesson itself. There is a set number of steps it takes to fold a paper crane, 18 to be exact. To test the virtual lesson, a one-minute recording was created for the seniors to watch and follow along with, pausing if need be. After five minutes were up, the number of steps completed was recorded. Then the same student received a physical lesson with the same verbal instructions, but with a teacher walking through the process in person. Again, after five minutes the steps were recorded.
The data was then uploaded into RStudio and the difference in steps, virtual minus physical, was computed to run a paired t test (a one-sample t test on the differences), where the null hypothesis is that the true mean difference is zero and the alternative is that the mean difference is positive, meaning the virtual lesson taught more steps. After running the test, the p-value came back 0.452. This means the probability of obtaining the sample mean difference seen, or one more extreme (higher), would be 0.452 given that the null hypothesis of the true mean difference being zero is true. This is very high, so the test fails to reject the null hypothesis that the true mean difference in virtual minus physical lesson steps is zero. Therefore there is no evidence for the alternative hypothesis that the true mean difference is positive, that is, that the true mean number of steps after the virtual lesson is higher than after the physical lesson. The appendix contains the R code used to perform the test.
To use the t test, conditions had to be met. The ten percent rule says that the sample must not be more than ten percent of the total population. In this case the class of 2017 is 251 in size, so the largest sample size that would not be considered oversampling is 25. This rule acknowledges that sampling without replacement does not keep the probability of selection exactly the same, but the probabilities are not dramatically changed because the sample is small enough. The data was then plotted on a normal quantile plot, and approximate linearity was seen. Because of this, an approximately normal model is acceptable for the sample distribution given. Along with a simple random sample being obtained, the conditions for a normal model are met. There are no deviations from the pattern, such as outliers, and the overall behavior of the sample distribution is approximately normal. The inference test performed was a paired t test, because the number of steps from the physical lesson and the number of steps from the virtual lesson come from the same subjects. The data is paired, not independent, because the same person took both lessons, in a random order. Planning of the experiment started with getting the random sample and then scheduling the subjects at times that worked for them. The test then had to be repeated with each senior, keeping the same treatments for every subject: the virtual lesson is simply a video with headphones, while the physical lesson teaches the subject through dialogue while going through the steps. After conducting the test with each senior, the recorded numbers of steps performed after the virtual and physical lessons were uploaded into RStudio, and the difference was taken (virtual minus physical). The t test was then run on the difference vector. The 95% one-sided confidence interval is -1.56595 to positive infinity: there is 95% confidence that the true mean difference in virtual minus physical lesson steps is between -1.56595 and positive infinity. If this experiment were repeated many times with samples of the same size, 25, about 95% of the intervals produced would contain the true mean difference between virtual and physical lesson steps performed. The p-value from the t test was 0.452, meaning the probability of obtaining the sample mean difference seen, or one more extreme (higher), would be 0.452 given that the null hypothesis of the true mean difference being zero is true.
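As a check on the interval above, the lower bound can be recomputed directly from the data. This standard-library Python sketch approximates the one-sided 95% critical value for 24 degrees of freedom by bisection on a numerically integrated t CDF (an illustration, not the R implementation):

```python
import math

def t_pdf(x, df):
    # Density of Student's t with df degrees of freedom
    c = math.exp(math.lgamma((df + 1) / 2) - math.lgamma(df / 2)) / math.sqrt(df * math.pi)
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_cdf(x, df, steps=4000):
    # P(T <= x) for x >= 0: 0.5 plus Simpson's rule on [0, x]
    h = x / steps
    s = t_pdf(0, df) + t_pdf(x, df)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * t_pdf(i * h, df)
    return 0.5 + s * h / 3

def t_quantile(p, df):
    # Invert the CDF by bisection, for p in (0.5, 1)
    lo, hi = 0.0, 50.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if t_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Step counts from the report and their paired differences
Pdata = [13, 14, 18, 18, 18, 12, 8, 3, 10, 12, 18, 10, 13,
         13, 18, 12, 8, 4, 6, 12, 16, 18, 18, 5, 7]
Vdata = [13, 11, 18, 13, 13, 3, 3, 6, 10, 18, 14, 9, 10,
         8, 15, 12, 8, 12, 18, 14, 18, 18, 18, 9, 16]
diff = [v - p for v, p in zip(Vdata, Pdata)]

n = len(diff)
mean = sum(diff) / n
se = math.sqrt(sum((d - mean) ** 2 for d in diff) / (n - 1)) / math.sqrt(n)

t_star = t_quantile(0.95, n - 1)   # about 1.711 for df = 24
lower = mean - t_star * se         # R reports -1.56595
```

The lower bound comes out at about -1.566, agreeing with R's reported interval of (-1.56595, Inf).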
Concluding this experiment, the paired t test gave an insignificant p-value. This means there is no evidence to reject the null hypothesis that the mean difference in virtual minus physical steps is zero, so the alternative hypothesis, the intended goal of seeing whether virtual lessons are more effective than physical lessons as measured by a higher number of steps performed, had no supporting evidence. In other words, this gives no evidence that physical lessons given by teachers are less effective than virtual lessons when it comes to the number of steps of a paper crane a senior can fold. The goal was to see how effective virtual lessons are for teaching, so another version of this experiment might not use origami but other educational lessons, such as math tests or a memorization test, and observe the quantity and quality of the lesson received by the subjects. The population was focused on the senior class of 2017, but this experiment could be expanded to other populations to see how the effects of virtual and physical lessons change. Overall, this experiment leaves the current teaching method unchallenged and puts no evidence against this style of teaching.
Works Cited

SoundTheNote. "Best Way to Learn Origami? r/origami." Reddit, 2015. Web. 11 Apr. 2017.
R Code

> #Random Sample

> sample(lr$Student.Name, size = 25)
[1] Benitez Flores, Erik Alexander Myers, Isabella K
[3] Praderas, Leonardo K Rodriguez, Robert Christopher
[5] Huerta Olmos, Abraham Henderson, Aidan Janette
[7] Venancio, Humberto Silva Thomas, Ryan Gregory
[9] Smith, Aislinn Elizabeth Thompson, William Maclise
[11] Lopez Garcia, Antonio Hughes, Jackson Harold
[13] Kim, Andrew Hyunjoong Switek, Jacob Keola
[15] McGee, Kyle William Hamilton, Nathan Andrew
[17] Yi, Emily Y Xu, Michael
[19] Brinker, Jeremy Ray Mace Banda, Iliana Nicole
[21] Key, Gavin Sebastian Evans - Strong, Aidan Elena
[23] Laware, Jacob Harris Andersen, Lillian Renee
[25] Garza, Alec William
1111 Levels: Aardema, Gabriel Lucas Abdalla, Yaseen Ahmed ... Zuckerman, Emma Taylor
>
> #Data
> Pdata<-c(13,14,18,18,18,12,8,3,10,12,18,10,13,13,18,12,8,4,6,12,16,18,18,5,7)
> Vdata<-c(13,11,18,13,13,3,3,6,10,18,14,9,10,8,15,12,8,12,18,14,18,18,18,9,16)
> diff<-Vdata-Pdata
>
> #Inference Test - Paired T Test
> qqnorm(diff, main = "Normal Quantile Plot of Virtual - Physical Steps Performed", xlab =
"Number of steps Virtual - Physical lesson")
> t.test(diff,alternative="greater")

One Sample t-test

data: diff
t = 0.12177, df = 24, p-value = 0.452
alternative hypothesis: true mean is greater than 0
95 percent confidence interval:
-1.56595 Inf
sample estimates:
mean of x
0.12
Graph of NQP:
Signature:

UPLOAD PICTURE OF PAPER HERE LATER (Not home...)

Steps of the Physical Lesson:

2 Then turn the paper over and make an x.

3 Then, to perform the squash fold, pick up the paper by the + and fold the x crease down.

Now for the coffin:

5, 6, 7, 8 First fold the right flap to line up with the midline. Repeat for the left side. Flip the paper over and repeat. Stop the video if you need time.

9, 10, 11, 12 Now fold the tip down and back to create a hard crease. Then, for the tricky part, open the new folds on the left and right you just created, but lift the top layer up. Then repeat for the other sides. Stop the video if you need more time.

13, 14, 15 Now for the head and tail, lift the right side up to a point, and the same for the left. Pick a side, and fold the point down once to make a head.

16, 17 Now for the final steps, fold the wings down.

18 Now pick up the crane by the tail and the front legs, and pull to fly.
Photo-clip of Virtual Lesson
Works Cited

SoundTheNote. "Best Way to Learn Origami? R/origami." Reddit. Reddit, 2015. Web.

11 Apr. 2017.
Appendix E

Appendix F

R Code

1111 Levels: Aardema, Gabriel Lucas ... Zuckerman, Emma Taylor

> #Inputting Data into R for Physical and Virtual lesson steps taken

> Pdata<-c(13,14,18,18,18,12,8,3,10,12,18,10,13,13,18,12,8,4,6,12,16,18,18,5,7)

> Vdata<-c(13,11,18,13,13,3,3,6,10,18,14,9,10,8,15,12,8,12,18,14,18,18,18,9,16)

> #Calculating the paired difference between the Virtual and Physical lesson steps
> diff<-Vdata-Pdata

> #Inference Test - Paired T Test

> t.test(diff,alternative="greater")

One Sample t-test

data: diff
t = 0.12177, df = 24, p-value = 0.452
alternative hypothesis: true mean is greater than 0
95 percent confidence interval:
-1.56595 Inf
sample estimates:
mean of x
0.12
Appendix G

Appendix H

2 Then turn the paper over and make an x.

3 Then, to perform the squash fold, pick up the paper by the + and fold the x crease down.

Now for the coffin:

5, 6, 7, 8 First fold the right flap to line up with the midline. Repeat for the left side. Flip the paper over and repeat. Stop the video if you need time.

9, 10, 11, 12 Now fold the tip down and back to create a hard crease. Then, for the tricky part, open the new folds on the left and right you just created, but lift the top layer up. Then repeat for the other sides. Stop the video if you need more time.

13, 14, 15 Now for the head and tail, lift the right side up to a point, and the same for the left. Pick a side, and fold the point down once to make a head.

16, 17 Now for the final steps, fold the wings down.

18 Now pick up the crane by the tail and the front legs, and pull to fly.
Appendix I

Signatures