What Is a Theme || Shopify Help Center 2018

In this video, we’ll discuss what a theme
is and why customizing your theme is beneficial for your store. You can access the theme section in your Shopify
admin by clicking “online store”. The default theme added to every Shopify
store is called Debut. Before we venture any further, let’s discuss
what a theme is. A theme is a layout that determines the way
your online store looks and feels. Different themes have different styles and
layouts, and offer a different experience for your customers. The theme is essentially your website builder
which you can customize as you see fit. You can see which theme is published at
the top of the page in the “current theme” area. The published theme is the live one: it is what the public sees after you
launch your store. The area under the title gives a quick preview
of what the published theme looks like on a smartphone or laptop in its current state. By clicking “customize”, you will be brought
to the theme editor page. It is in this area where you will make changes
like font style, colour scheme, and layout of your website. We will discuss how to use each feature section
in further detail in individual videos. Customizing your theme is how you create your
brand identity; it will set you apart from your competition. Before you decide on a theme and start customizing,
start thinking of features and designs you’d like to see on your online store. There are a wide variety of themes to choose
from, depending on your business needs. Stay tuned for more tutorials on themes in
Shopify.

Sign up to Shopify for a Free 14 Day Trial Store || Shopify Help Center 2018

To start your free 14-day trial Shopify store,
head over to Shopify.com. Here you see material available like pricing,
blogs, and help resources for getting started. Start by clicking “get started”. Enter your email address, password, and what
you’d like to name your store. Your store name is the domain attached to
the account you’re creating, and will end in .myshopify.com. For example, by naming a store AlyAttire,
the domain will be alyattire.myshopify.com. Once your store is created, you are not able
to change the .myshopify.com domain; it’s always attached to this account moving
forward and is used as your login details. However, you are able to purchase a custom
domain to replace this one if you wish. Check out the link below for more details
on custom domains. Once ready, click “create your store”.
Now enter some details about yourself. The first question reads “are you already
selling?” – in this example, the option “I’m not selling products yet” is
selected. Choose the option that best represents your
business at this point in time. Next, enter your approximate revenue for this
business. If you are creating a store for yourself then
leave the bottom box unchecked. This area is specifically for developers or
contractors creating stores on behalf of clients. These people are referred to as “Experts”
who specialize in different areas of the Shopify platform. If you’re interested in having an expert
set up your store for you, see the link in the description below for more details. Then, click next. Now, fill out your personal information. These details will be used as the default
business address, but you can always adjust these details at a later time in your admin
settings. After verifying the information is correct,
click “enter my store”. Congratulations! You took the first steps to starting your
eCommerce business. You are now in the Shopify admin of your personal
online store. You’ll receive an email shortly after confirming the store registration. For complete setup, be sure to check out the
general checklist from our Help Center listed below. This checklist walks you through the necessary
steps of getting started beyond the details in this video. For more tutorials on navigating the Shopify
admin, be sure to subscribe to the Shopify Help Center YouTube channel for weekly updates. Feel free to comment below; we’d love to
hear from you!
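As a side note, the store-name-to-domain mapping described in this video (AlyAttire becomes alyattire.myshopify.com) can be sketched in a few lines. The helper name and the exact normalization rules here are illustrative assumptions, not Shopify’s actual implementation:

```python
def myshopify_domain(store_name: str) -> str:
    """Illustrative sketch (NOT Shopify's real logic): lowercase the store
    name, keep only characters a subdomain can contain, and append the
    fixed .myshopify.com suffix that every new store receives."""
    slug = "".join(ch for ch in store_name.lower() if ch.isalnum() or ch == "-")
    return f"{slug}.myshopify.com"

# The example from the video:
print(myshopify_domain("AlyAttire"))  # alyattire.myshopify.com
```

As the video notes, this .myshopify.com domain is fixed once the store is created; only a purchased custom domain can replace it publicly.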

How Close Are We to a Self-Driving World?

IMAGINE A WORLD WHERE YOU WAKE UP, GRAB YOUR
CUP OF COFFEE, AND HOP IN YOUR CAR TO DRIVE TO WORK… EXCEPT YOU’RE NOT DOING THE DRIVING. YOU HAVE MORE TIME TO SLEEP, READ A BOOK,
OR EVEN GET A PHYSICAL. THIS IS A WORLD WE ALL WANT TO LIVE IN. AND
ALTHOUGH WE’RE NOT QUITE THERE YET, PEOPLE ALL OVER THE WORLD ARE WORKING ON DEVELOPING,
TESTING, AND PLANNING FOR A FUTURE WITH AUTONOMOUS VEHICLES. BECAUSE, WHO DOESN’T WANT THAT EXTRA HOUR
OF SLEEP? SO, HOW CLOSE ARE WE TO A SELF-DRIVING WORLD? YOU MAY HAVE SEEN SELF-DRIVING CARS ON THE
NEWS, SPLASHED ACROSS THE INTERNET, OR EVEN TESTING AROUND YOUR CITY. BUT MOST OF THOSE CARS STILL HAVE A HUMAN
IN THE DRIVER’S SEAT. AND THAT MEANS IT’S PROBABLY A LEVEL 2 OR 3
CAR, WHICH IS DEFINITELY MORE INDEPENDENT THAN THE CAR YOU MIGHT DRIVE, WHICH IS PROBABLY A LEVEL 0 OR 1, BUT IT’S STILL A FAR CRY FROM OUR DREAM RIDE, WHICH WOULD BE A LEVEL 5. OR 4. LET ME EXPLAIN. SAE INTERNATIONAL HAS DIVIDED AUTONOMY INTO
SIX LEVELS, FROM 0 TO 5. LEVEL ONE IS “DRIVER ASSISTANCE,” AND
LEVEL TWO IS “PARTIAL AUTOMATION,” WHICH YOU CAN ALREADY FIND
IN CARS WE DRIVE TODAY. HERE, THE CAR CAN DO SOME OF THE STEERING,
BRAKING, AND ACCELERATING, BUT STILL NEEDS A DRIVER WITH HANDS ON THE WHEEL, BECAUSE
LEVELS 1 AND 2 ARE STILL JUST “DRIVER SUPPORT.” So lane keeping, collision warning, even active
interventions that will swerve the vehicle if you’re about to get into an accident. LEVEL THREE IS “CONDITIONAL AUTOMATION,”
WHICH MEANS THAT THE CAR IS PRETTY MUCH IN CONTROL, BUT REQUIRES HUMAN INTERVENTION IN AN EMERGENCY, OR WHEN PROMPTED BY THE SYSTEM. REMEMBER LEVEL THREE, BECAUSE THIS IS WHERE
IT CAN GET STICKY. BUT THE ULTIMATE SELF-DRIVING CAR WOULD BE
OPERATING AT LEVEL 4 OR 5, WHERE IT CAN STEER, BRAKE, ACCELERATE, MONITOR THE ROAD, RESPOND
TO RANDOM EVENTS, CHOOSE TO CHANGE LANES, TURN, AND OF COURSE… USE ITS BLINKER LIKE ANY DECENT CITIZEN. THE YELLOW BRICK ROAD TOWARD SELF-DRIVING
TECHNOLOGY HAS BEEN A WINDING ONE. DR. DEAN POMERLEAU HAS BEEN NAVIGATING IT FOR
A LONG TIME. YOU COULD CALL HIM THE GRANDFATHER, OR AT
LEAST THE COOL UNCLE, OF AUTONOMOUS VEHICLES. BACK IN 1995, DEAN AND HIS GRADUATE STUDENT
MADE A PILGRIMAGE ACROSS THE COUNTRY “LOOK MA, NO HANDS”-STYLE, AFTER THEY TRICKED
OUT A STYLISH MINIVAN WITH CAMERAS AND COMPUTER VISION ALGORITHMS. About 98.2% of the trip, as I recall, was
hands-off, feet-off, with the system controlling the vehicle all on its own. It was a proof of concept, basically, for
some of the technologies that we’re seeing finally being deployed today. IN THE YEARS THAT FOLLOWED, RESEARCH TEAMS
COMPETED TO DEVELOP THAT TECHNOLOGY FURTHER. IT WASN’T UNTIL 2005, AFTER SOME… CATASTROPHIC FAILURES, THAT DARPA’S GRAND
CHALLENGE TO BUILD A SELF-DRIVING CAR FINALLY AWARDED FIRST PLACE TO A STANFORD TEAM, LED
BY SEBASTIAN THRUN. YEAH, YOU MIGHT’VE SEEN HIM AROUND. FAST FORWARD TO 2009, WHEN HE STARTS A LITTLE
PROJECT THAT WOULD BECOME WAYMO. IN SECRET. IN 2016, WAYMO SPINS OFF FROM GOOGLE, AND
IN A FEW SHORT YEARS, THE INDUSTRY’S ERUPTED, WITH ESTABLISHED TECH AND CAR COMPANIES JUST
AS EAGER AS STARTUPS TO GET IN ON THE ACTION. Waymo is probably the recognized leader. GM bought Cruise Automation; Argo AI here
in Pittsburgh is one leading player; BMW and Mercedes are working on their own projects
for self-driving cars. It remains to be seen whether it’s a good
investment or not. FOR THAT INVESTMENT TO PAY OFF, DRIVERLESS
TECHNOLOGY MUST BE REFINED TO THE POINT WHERE IT’S BOTH RELIABLE AND FLEXIBLE ENOUGH TO HANDLE A COMPLEX JOURNEY. THAT MEANS SOPHISTICATED SENSORS, ROBUST COMPUTER HARDWARE, AND INTELLIGENT DECISION-MAKING SOFTWARE. TO START WITH, AUTONOMOUS VEHICLES RELY ON SOMETHING NOT ALL HUMAN DRIVERS ARE EQUIPPED WITH: A SENSE OF DIRECTION. The companies that are building these self-driving
cars build their own maps. Very much like Google has its street-view
cars that drive through neighborhoods and collect map data, they have another fleet
with many additional sensors to drive through a city and map it in great detail – static
obstacles, like telephone poles or the curbs around the road, that it should be aware of
and avoid. BUT TO BE TRULY ADAPTIVE, THE CAR NEEDS TO
BE ABLE TO GATHER REAL-TIME INFORMATION ABOUT A DYNAMIC, UNPREDICTABLE ENVIRONMENT. ELON MUSK THINKS WE CAN ACCOMPLISH THIS WITH
CAMERAS ALONE. BUT IF YOU’VE EVER TAKEN A SELFIE IN THE
CLUB, YOU KNOW THAT CAMERAS PROBABLY AREN’T GOING TO CUT IT, BECAUSE THEY STILL STRUGGLE
WITH DARKNESS, DEPTH, AND REFLECTIONS. So self-driving-car companies are investigating
many different sensors, for example, millimeter wave radars for long-range sensing, and short-range,
often ultrasound, sensors that see things that are very close to the vehicle. LiDAR is probably the most common and most
impressive technology currently being used. LiDAR is a laser-based technology that shoots
a laser beam out into the environment, scans it very quickly, and detects the range to
objects and other vehicles. LiDAR is both a great sensor and a
weak link; it’s very expensive and breaks down fairly often. AND THIS HAS BEEN A MAJOR ROADBLOCK TO FULLY
AUTONOMOUS ROADS. LIDAR HAS HUGE POTENTIAL, BUT IT’S JUST
TOO DELICATE AT THE MOMENT, BECAUSE IT’S MADE UP OF FRAGILE MOVING PARTS. BUT SOMETHING CALLED SOLID-STATE LIDAR, WHICH SCANS THE ENVIRONMENT USING NO MOVING PARTS, COULD CHANGE ALL THAT. AND THESE SENSORS, WHILE IN THEIR INFANCY,
ARE IN SUCH DEMAND THAT MANUFACTURERS LITERALLY CAN’T MAKE THEM FAST ENOUGH TO SUPPLY THE
DEMANDS OF COMPANIES LIKE FORD AND BAIDU. It’s much more reliable and also much cheaper
to manufacture, which is very important if you’re going to do this at scale on thousands
of vehicles. OKAY, SO SAY WE CAN BUILD A DRIVING ROBOT. THAT IS, AN ENTITY THAT CAN PERCEIVE ITS ENVIRONMENT, JUDGE, AND ACT ON THE ROAD BASED ON A COMPLEX NETWORK OF REAL-TIME DATA ANALYSIS. IN A WAY, IT’S STILL ONLY PREPARED TO DRIVE
ON A MAP. TO BE ABLE TO NAVIGATE IN THE REAL WORLD,
AND SHARE THE ROAD, AND THE STEERING WHEEL, WITH HUMAN DRIVERS, WE’LL NEED TO TAKE IT
TO DRIVER’S ED. Some of the biggest safety concerns are involved
with perception and behavior of drivers or pedestrians or cyclists. Slushy roads covered with ice and snow are
very hard to cope with, and there’s really been very little effort or progress in self-driving
cars in these very challenging environments. TO MAKE THAT PROGRESS, WE HAVE TO STUDY HOW
HUMAN DRIVERS ACTUALLY RESPOND – BOTH TO RISKY ROAD CONDITIONS, AND TO AUTONOMY ITSELF. SO TO FIND OUT MORE ABOUT THE HUMAN IN THE
WHOLE EQUATION, WE HEADED TO STANFORD’S AUTOMOTIVE INNOVATION LAB. We are working together to really get a detailed
understanding of the human as we move forward in designing active safety systems
and automated vehicles. So, we’re gonna set up this NIRS cap on her right now. It will be shining a little bit of infrared
light onto her motor cortex. We’ll be able to see as she’s turning left,
turning right, using the gas pedal and the brake pedal, all in our data streams back
there. The majority of accidents that we do see do
come down to human error in either recognition, decision, or performance. So when we can get to the point where the
system does a better job at those three things than humans, then I think it’s clear that
our roads will be safer. This is X-1, our experimental test vehicle. The flexibility in steering allows us to set
up all sorts of experiments. We can emulate driving on
an unexpected change of friction. Going from snow to ice, for example. There are studies going on in the dynamic
design lab, measuring the inputs that professional drivers make, so that we can try and understand
what they’re doing differently to drive right at the limits of the vehicle. We can use that
to inform the way that the autonomous vehicle control algorithms are designed, so that hopefully,
your autonomous vehicle will drive as well as the very best human driver. LENE’S MOST RECENT PROJECT INVESTIGATED
A SCENARIO THAT MIGHT POP UP IN SOMETHING LIKE LEVEL 3 AUTONOMY, WHERE THE CAR’S BEEN
ROLLING SOLO WHEN SUDDENLY, IT ENCOUNTERS SOME SCENARIO IT CAN’T MAKE SENSE OF, AND
THE HUMAN DRIVER IS ASKED TO INTERVENE. What our studies of brain and behavior tell
us is that it’s important to consider a period of time when people’s driving behavior may
be significantly different if they’ve taken control of a vehicle after a certain amount
of time out of the loop. We can see almost in real time the cognitive
resources being deployed. They may have more limited cognitive resources to deal with an
emergency situation under those conditions. It’s potentially a quite dangerous situation
if you’re handing off control back and forth with the system. THOUGH IT MAY SEEM EXTREME FOR CONSUMERS TO
MAKE THE JUMP FROM CRUISING AROUND IN A LEVEL 1 CAR TO HOPPING IN A FULLY AUTONOMOUS ONE, MANY RESEARCHERS AGREE
THAT PARTIAL AUTONOMY SHOULD BE RESERVED FOR TESTING PURPOSES ONLY. AND UNFORTUNATELY, MOST OF THE
ACCIDENTS THAT HAVE ALREADY OCCURRED HAVE PROVEN THEM RIGHT. I think the next five years or so of autonomous
vehicle design is actually going to focus more on the ways in which we can implement
full autonomy in a much smaller, more controlled environment, and sort of do it that way rather
than necessarily going through this partial autonomy stage to get there. People are easily distractible, and that’s
the underlying problem that autonomous vehicles are setting out to solve. So NACTO cities believe that it needs to be
really full automation to achieve the safety benefits that are the major promise behind
autonomous vehicles. THE NATIONAL ASSOCIATION OF CITY TRANSPORTATION OFFICIALS REPRESENTS 68 CITIES AND 11 TRANSIT AGENCIES ACROSS NORTH AMERICA. NACTO RECENTLY CONVENED TO DISCUSS HOW THE
WORLD WILL PREPARE FOR FULLY AUTONOMOUS CARS TO BECOME A REALITY. It’s unrealistic to expect city governments
to redesign streets to accommodate autonomous vehicles. THIS MEANS, WHEN SELF-DRIVING CARS DO HIT
THE ROAD, THEY’LL NEED TO TRAVEL AT LOW SPEEDS, AND MAKE USE OF OUR EXISTING INFRASTRUCTURE. BECAUSE OVER THE PAST CENTURY, WE’VE MADE
COUNTLESS COMPROMISES TO ACCOMMODATE THE SHINY NEW TECHNOLOGY OF THE TIME… THE AUTOMOBILE. BUT URBAN PLANNERS THINK WE CAN DO BETTER
THIS TIME AROUND. We’ve seen neighborhoods cut off from opportunities,
we’ve seen congestion, greenhouse gas emissions, pollution, decreases in public health… we
risk repeating a lot of those same mistakes with autonomous vehicles. The Blueprint for Autonomous Urbanism came
about because we were seeing too many visions for driverless cars in a people-less city. The Blueprint is imagining how cities can
structure their streets to prioritize walking and biking and transit and public space, to
really maximize those benefits of living and being in a city, while using autonomous vehicles
to help achieve those goals. THE IMAGES YOU SEE HERE ARE JUST SKETCHES
AND SUGGESTIONS FOR THE FUTURE. THE REALITY IS, REGULATION FOR THESE KINDS
OF VEHICLES IS PRETTY NEW – AND IT’S DIFFERENT IN EVERY STATE. BUT PLANNERS AND TRANSPORTATION OFFICIALS
LARGELY AGREE ON THE NEED FOR EQUAL ACCESS, SAFETY, AND SUSTAINABILITY. It requires really thoughtful and intentional
policies to make that vision and that promise a reality. SO, HUMANS ARE ADAPTABLE LEARNERS. AND WE’RE DESIGNING SYSTEMS THAT CAN WORK
THAT WAY TOO. BUT ON A GRANDER SCALE, WE AS A SOCIETY HAVE
TO BE WILLING TO ASSESS RISK, AND STEER IN THE RIGHT DIRECTION BEFORE WE CHANGE LANES
AND CHARGE FULL SPEED AHEAD. SO… HOW CLOSE ARE WE TO A DRIVERLESS WORLD? If you look at particular geographical locations,
it’s already happening. Over time, I believe that those islands will
grow in number and expand, and that’s how you will see the expansion of fully automated
vehicles on the road. I think in the next year or two we will see
companies like Waymo and GM Cruise deploying maybe a few hundred of these vehicles for
the general public to ride in. Probably by early 2020s, we’ll see cars without
drivers giving rides and then driving empty to pick up the next passenger. It’ll be probably at least a decade, I would
say, before you can walk into a showroom and buy a car at an affordable price that can
do, say, level four or five autonomy, which means you don’t have to do anything. The fact that Waymo’s CEO said we’re still
quite a ways off makes me think that that’s probably true. But in the near term, I think there are some
applications, especially for transit, to use autonomous technology to achieve some of our goals. Access to affordable, convenient transportation
is really important. We all have a grandparent or a friend of our
grandparents who had to give up driving and lost a lot of their independence. I think it can change lives and save lives
across the board as long as we take into consideration everyone across the spectrum as we as a society
move forward with automated vehicles. SOUNDS LIKE AS LONG AS WE TAKE CARE OF SOME
POTHOLES FIRST… WE’VE GOT A GREEN LIGHT. TO MAKE SURE YOU NEVER MISS AN EPISODE, DON’T
FORGET TO SUBSCRIBE. AND FOR MORE HOW CLOSE ARE WE? CHECK OUT THIS PLAYLIST HERE. THANKS FOR WATCHING AND I’LL SEE YOU NEXT
TIME ON SEEKER.
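As a recap of the levels discussed in this video, the SAE J3016 driving-automation scale can be summarized as a small lookup table. The wording of each entry is a paraphrase of the video, not the standard’s exact text:

```python
# SAE J3016 levels of driving automation, paraphrased from the video.
SAE_LEVELS = {
    0: "No automation - the human does all the driving",
    1: "Driver assistance - the car helps with steering OR speed",
    2: "Partial automation - car steers, brakes, accelerates; hands stay on the wheel",
    3: "Conditional automation - car drives, but a human must take over when prompted",
    4: "High automation - fully self-driving within a limited domain",
    5: "Full automation - self-driving everywhere, no human needed",
}

def is_driver_support(level: int) -> bool:
    # Levels 0-2 are "driver support": a human is still responsible for driving.
    return 0 <= level <= 2

print(is_driver_support(2))  # True
print(is_driver_support(3))  # False
```

The `is_driver_support` split mirrors the video’s point that levels 1 and 2 are still “driver support,” while the handoff problems it describes begin at level 3.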

Business Blueprint Success Story – Kody Thompson

I’m Kody Thompson and I run a graphic design and marketing business. We had so many people ringing for work, and I was like, I don’t have any idea how to scale my business. I don’t know how to hire staff. I’d never run a business before. A friend of mine had been to one of Dale’s New Rules of Business events and said, hey, you should check out this guy Dale Beaumont. I just really loved his presentation, and the thing that set Business Blueprint apart from the other groups I was looking at was that he was able to give me help right now that was exactly what I needed. I needed some help systemising my business. I was ready to rock and roll, so I went to the Ontraport conference, and we use that now for our CRM system. We’re now integrating all of those forms into our website.

We’ve now flowcharted all our major business systems, so now we have a workflow to work through. With that workflow I’m able to put timer markers down, so, for example, if one designer takes three days to do a job that another designer can do in one, I can start to see the value of my staff and how productive they are. That’s probably one of the major things; our Google Sites has probably got 150 pages in it now, with all the different systems.

My value to my customers has improved so much. When I started, I was selling WordPress websites and branding packages and stuff, but now I’m able to put a more comprehensive marketing strategy package together. I’m starting to be able to sell my products for a lot more, because my clients don’t just get a website; they get a website that actually delivers. I’m able to set their CRM systems up, and I’m able to set up their autoresponders.

I hired my first VA after the New Rules of Business conference, and then a friend of mine had to let go of one of his VAs, an admin assistant, Beth, and he’s like, mate, you should hire this chick, she’s really good. They essentially just run the back end of my business. We hired our first web developer, who builds custom WordPress plugins, and then three weeks ago we hired our first graphic designer. I was getting better work out of him than most of the Australian subcontractors I was using, and he’s about a quarter of the price.

Our business has only been operating two years. We’ve got some clients like, for example, Bendigo Marketplace, a shopping centre we work for, and they were with a large Melbourne agency; we were able to compete with them within two years. I started thinking that it would have been impossible for us to compete that quickly if I didn’t have the help of Business Blueprint. I just feel released, with my staff being able to run stuff without me. Our sales have started increasing. Our profit margin on our projects is increasing; it’s starting to pick up.
I know that it’s really just going to be exponential from here on.
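The “timer markers” idea in the story above (one designer takes three days for a job another finishes in one) amounts to a simple productivity comparison. The names and numbers below are purely illustrative, not from Kody’s actual workflow:

```python
# Hypothetical timer-marker data: days each designer spent on the same kind of job.
days_per_job = {"designer_a": 3.0, "designer_b": 1.0}

# Normalize against the fastest turnaround to see relative productivity.
fastest = min(days_per_job.values())
relative_productivity = {name: fastest / days for name, days in days_per_job.items()}

for name, score in sorted(relative_productivity.items()):
    print(f"{name}: {score:.2f}")
```

With the sample numbers, designer_b scores 1.00 and designer_a scores about 0.33, which is the “three days versus one” gap expressed as a ratio.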

86th Knowledge Seeker Workshop/KFSSI Blueprint Teaching Week Continued! AM November 5th 2015

Good morning, good day to you
whenever and wherever you are listening to these teachings.
As usual, Thursday is a public teaching, and in these teachings we usually make
some announcements, which is the pattern of our work: where we
develop, where we are and how we are doing. We don’t have many announcements
as such, but there are some pieces of information which we have to pass on to you. As the KF is an
international organisation, we are responsible for every aspect of the
KF around the world. Every one of you who works, uses our systems,
uses our technology, develops our tech, or spends time listening to these
teachings and others – we are responsible for you, to guide you or in a way to inform you what is
happening. Thursday morning has been the traditional
way for us to do this for a couple of years. One of the main announcements which I
have to make is the following: as you are aware, we are a space program
organization, so being in that environment we’re developing new technologies, new
concepts, new tools, new materials, and it’s our job to try to set standards;
it’s our job to try to set the limitations and understanding of the new technology. In the blueprint week, which today is part
of it, the 2nd part of the 2nd week: we explained in the first week how to use
these magrav systems, how to build them; we explained to you last Friday the
dangers, the possibilities, the cautiousness you have to have
in understanding the new technology and applying it. The point goes another step. Part of the development of this technology,
which is the coating for coated materials, is what we call GANS. It’s this material
which you create from the separation of the molecular to the atomic, and then the
atomic plasma – or the molecular plasma, as CO2, or as a copper on its own. From the beginning, when we established
the production of nanomaterials this way, which was unknown, and we explained
how we produce these nanomaterials on a copper plate or the way you do it
with caustic, these are very highly energy-compact atomic structures. We always advise you to use gloves.
Don’t touch them, don’t breathe on them or whatever. When we produced the separation of the
atomic structure to atomic plasma – what we call GANS, or gases in a nano state, or the molecular structure of the same thing – we asked you not to touch it, not to eat it, to be
cautious with it, because it can have side effects.
It’s a massive energy package, and a lot of you have listened and
followed the procedure. I see most of you wearing gloves when touching these things,
handling these things, which is the correct thing. In the KFSSI, we had one person who tried
to break the rules and he came with the name ormus.
There is something called ormus or whatever, and he tried to push this
to create separation and division within the KF Institute. And one of the reasons I let him go
was because of his arrogance. And I’ve seen nowadays this push
that “this is ormus, you can drink it”, and people are drinking GANS
or touching GANS as if “this is ormus,
nothing wrong”,
and we have seen somebody who put it on the internet:
after taking a small amount of GANS, he started bleeding when he went to the toilet.
He said, “I’m producing GANS in my urine” – red, containing blood – and
then he was afraid of whatever. I’ve seen it; the pictures are on the
internet. So, you have to understand: whoever talks about ormus, which is
something which has been researched for a long time and has nothing to do with
the GANS. Because, if all along you’ve been drinking ormus
and there’s been no side effect, and if you drink GANS and you bleed,
or you see different side effects, and you can create energy out of it…
these are two different things. You have to understand: you deal with a
new material called GANS the way we produce it: separation of
nanomaterials, separation of atomic structure…
DO NOT EAT IT. You can use its energy, but individually
itself it’s so powerful that it can harm you. If you do, it is of your own
accord. The people encouraging you to drink it
in the name of ormus – these are what I call ************ –
people who are ignorant, have no knowledge, and are trying to ride on the back
of the KF. When we inform them, it’s just what we wanted to tell people:
they’re playing with your lives. The ormus, whatever they call it,
is not GANS; there’s so much literature on it, yet nobody ever understood
it. And with people who touch GANS, you see the energy, you see the bleeding,
and you see everything else with it. DO NOT TOUCH. Gans is not ormus, Gans
is what it is: it’s an individual atomic structure,
of a plasma, and it can harm you. It can seriously harm you if you do not
handle it the correct way. It can give a lot of pleasure in energy,
in medicine and everything else. So my advice to you is be very careful
and keep away from touching the GANS, eating it or whatever,
and at the same time be aware that it can be a lethal weapon in the arsenal of the people
who want to hurt you, to bring danger to you and the foundation. This is very clear: Please do not touch,
do not drink, do not consume, do not give it to your dog, don’t let
the dog eat it, because we don’t know what’s gonna happen. Because we’ve
seen people whose dog had drunk it, and they say it’s not eating or
whatever. So it’s an energy pack.
It was a KF knowledge seeker whose dog drank it. So please understand: Gans is Gans, it’s
on its own; it’s an unknown material to the world of science, in the way that you
are used to using matter. And do not listen to the people who have
no knowledge of the science and claim to, because they used to be
students in the KFSSI. The reason they’re not here is because
they never understood and ********* The KF keeps people, scientists, people who
work around us, who are intelligent enough to understand the technology. The ones who come here for a short time
and claim that, because they’ve been here, they have a right – we don’t keep them, because
they have not understood the process. It’s no use keeping people who have no
intelligence or understanding of the new technology. But they attach themselves,
and we will see a lot of these in the future: people who’ve been here, have been to the
institute, have been a student. If you were somebody who understood,
we keep you. Marco is here, Armen is here. Other people who understand the totality
and the learning, we keep. The ones who do not have the capability
to make that transition and who start different things, we let go. But be aware, Gans is not
ormus; there is a lot of paper on ormus, but **********
has never shown any proof of anything. With Gans we show, we develop, we use
the system, we understand the way the nanomaterial is produced
by it, and you see the tools which are getting developed with it.
So please do not touch the nanomaterials you produce through the
nanolayers using caustic, because you create a high-energy
matter state, and do not drink them or use them for others’ fun, for you
will put your life in danger. The second point comes to the position
in which the KF stands in Belgium. We received emails which we have passed to
Interpol and the international police, who are looking after the case of the KF
with the … ECJ. We stand fair in our position.
There are four or five people in Belgium who are putting KF supporters in danger.
We have received emails, we’ve seen how the access to the old files and the
support of KF Holland and Belgium has been abused, and I do not see any
reason for having such huge support in Belgium, which has been our home,
our base, for a few terrorists – literally terrorists that are in the hands
of the police, the same situation as in the pedophile case in the US.
These are cases which are running. We’re getting informed on a regular basis
what’s happening with them. [sound chopped] We go back to our blueprint teaching.
We have managed to complete production and, as I said, if you were not in the
teachings of yesterday, you should start receiving your units –
magrav units, the car units – today, tomorrow, in the next week, at the maximum
by the end of next Friday. If you don’t receive your units by next
Friday or so, please get back in touch with us and we will give you a tracking number.
We do not release the tracking unless we hear that you haven’t received your
unit. This is for security reasons. And in turn, when you receive it, please
go back to the teaching on the Magrav which we’ve done, look at what you have
to do, and follow the procedure. Connect it the right way, load it
the right way understand what you can do with it
and how to apply it, and let us know what you see,
what we have to change, what we can add to improve the situation.
There are a number of steps to go through. First of all, when you receive
your box, you have a two-page instruction with it.
On the first page it tells you: go to Magrav at Keshefoundation.org,
listen to the teaching, listen to the connection, how you have to connect.
Understand, the first thing you have to do is establish your phase, or live line, and
non-live line. It’s very important that the first thing you do when
you get your system is to establish your phase and out of phase –
which one is positive, and which one is negative.
[background noise] So the next thing you need to do is this:
when you connect your system, connect it to a plug; don’t connect it
directly to the mains. Connect it to a plug at the furthest
point in the house from the line of connection. What this
means is that you do the following: if this is your system, and this is your mains,
then, as I said, in the last room in your house, upstairs or furthest
from the meter, plug your system in at this point. And leave it with a very
small load, maybe an LED light, maybe a small resistive load of 400 or 500 W,
and let it run for 3, 4 or 5 days. What happens: as you use the power, the system
starts nanocoating along the line around the house. So what it does is prepare the
wiring in a very soft way. So, if you have a room here, or
if you have a room here, as the energy goes through it, you start
nanocoating the lines. The lines get ready to be able
to accept plasma. After a week or so, bring it closer to
the meter, to your power supply, the main grid. After a couple of weeks,
bring it closer still. Here, try to use 500 W. Here, as you come closer,
put on one kW. [microphone check] So you start with 500 watts here, and
this is the first week. And by the third week, we go that way. Be careful, if you have a washing machine,
try to work on your plug circuit if you want. Do not work on your light circuit, unless
everything in the house is non-resistive light bulbs, what we call the saving
lights. If you have only saving lights, more or less
all your power consumption is zero. It means you take the energy from
the plasma of the unit. You are not stealing electricity from
nobody, because the same rogue elements
in Belgium started the same thing in Antwerp, that stealing power
from the government. None of the people who see who are
on the internet, who go against the Foundation, have ever built
a single GANS material yet. How can you guide somebody when
you’ve never done it yourself, even though you’ve been with the Foundation
for ten years? They have not produced a single
nanomaterial element themselves. Ignorant hypocrisy. So what you’ve got
to remember when you put your system on, is how to produce, how to
develop the system to be able to work. So, do not use high-power units
at the beginning. Use low power, start the system nanocoating, and the
system has a habit of nanocoating the systems which you are using, and
nanocoating the wiring backwards. And then, what you do, as I said,
you are not stealing energy from anywhere, you are using the power of the plasma.
You’re releasing the energy of the plasma which you use. So, in that bracket, directly, you start building the plasma power. If you remember, go back to the teaching in Desenzano, the teaching on how we built up to 129 T. We started with very few teslas, and then over the week we came to 10 T, we went to 25 T, and then we came to 129 T. The longer you activate, the more GANS
cells, the more energy you’re releasing. This is where your energy comes from.
This is the process of releasing power from nanomaterials. This is the way
you release the power in GANS materials. So the time you need, as you increase
the power, you activate more plasmas to release more energy, and that’s how
it works. You don’t tap into anybody’s line; in fact, what you do, from the point when you connect yourself to the… under the fuse box, after 3 or 5 weeks, which has got to be done by an electrician, by a man who knows what he’s doing, then you cover all your electrical needs, the power and the lighting. But remember, we keep on saying, and it’s in your instructions: the maximum limit is 2 kW total resistive of all the appliances you have in the house. So that does not mean you plug in a 2 kW unit just because 2 kW is the maximum; you have to consider that other units are creating resistance in the circuit too. So one and a half kW is good per unit. We have seen in our tests, and we’re
going to confirm it in the coming time, as some people have reported in their tests: if you use one unit… once you… this has to be done in sequence. Once you’ve gone to the back, you add a second auxiliary point back where you started (unit 2), and then there is a possibility that the total coverage can be increased to 5 to 7.5 kW of energy consumption. But this has to be continuous. Every time you switch on and off, it takes time to balance, and in that balancing, you see a meter reading. Because now, in that switching time, you use energy from the grid. So be very careful with what you do;
you can take a second unit, which we call the auxiliary unit; it is exactly the same as the first unit. With the first unit you already made your donation, so the auxiliary unit is priced for the cost of what it is, so that you can increase your power. It’s not that we’re putting you to… You understood the way the KF works. The first unit, what you pay, pays for the KF, for all the people around the world we can support, to receive units, and to create jobs in what we are exchanging for new technology. When you take a second unit, you can come and say, this is my purchase number for the 1st unit, can I have an auxiliary unit; you pay what we put at a cost of about 150-200 €, and then you get it.
None of you at the moment needs an auxiliary unit, because you need another 2-3 weeks. Work with it, learn with it, and when you’re happy with it, come and take the 2nd auxiliary and push your power up. And again, when you do this, go through the same process with your 1st and 2nd units. Slowly load it up, ignite, and bring the plasma into power. And then you build it up, and what happens is that here, at your point of connection to your main grid fuse box, you start putting energy onto the grid, something like 4 to 6 times
what you use. This is what I discussed on Monday in Rome with the Italian authorities: that we have no choice.

Subtitles by the Amara.org community

Adding a Free Theme || Shopify Help Center 2018


This video covers where you can access free
themes for your online store. Start by clicking “online store” in your
Shopify admin. You can view what theme is published at the
top of the page in the current theme area. While you are only able to publish one theme
at a time, you can browse more themes available by scrolling down. By clicking “Explore free themes”, you
will see the selection of themes developed by Shopify. When you choose a Shopify developed theme,
it means that it is both free and supported by Shopify should you have any questions. You can click on a theme to see more features
and styles available. For example, the “Minimal” theme has three different styles: “Modern”, “Vintage”, and “Fashion”. You can select each one individually to preview
what it looks like. In this example, click “Add Minimal” to
add this theme. The new theme will be added to your admin,
but it will not replace the current published theme. To read more about Shopify created themes,
click the “learn more about themes” link at the bottom of the page. This will open a new tab that directs you
to Shopify’s help center. From the newly opened page, you can read more
about how themes work, or you can type in the theme title you have chosen. For example, by typing “minimal” into
the search bar and opening the article, you will see more details about the theme’s
features, dimensions, and sections. You can view the same details for all free
Shopify themes on the left-hand side. If you have any questions or need assistance
editing your free Shopify theme, you can contact support directly. Explore the different themes available and
see what works best for the brand you’re creating.

Shopify Shopcodes Step by Step Full Tutorial || Shopify Help Center 2017


In this video, we will show you how to add Shopcodes to your Shopify store. From your Shopify admin, click “Apps”. Click “Shopcodes”. Click “Create shopcode”. Choose your product, or search to refine the list, and click “Select product”. Enter a title for the QR code and provide a link for where you would like the QR code to take your customers when they scan it. You can direct customers to a product description page or send them directly to your checkout page with the selected product in the cart. When you send your customers to the checkout, you can also add an automatic cart discount. Optional: select the checkbox under “Help text” to add explanatory copy to the QR code. Click “Create shopcode”. You can download the QR code as a .png file if you will be using it on the web, or as an .svg if you will be printing the QR code or editing it with graphic design software. Click “Download shopcodes”.

In this video, we will show you how to add Shopcodes to your Shopify store from the product page. From the admin, click “Products”. Select your product. Click the “More actions” drop-down. Select “Create a shopcode”. Enter a title for the QR code and choose a link that you want the QR code to take your customers to when they scan it. You can direct customers to a product description page or send them directly to your checkout page with the selected product in the cart. When you send your customers to the checkout, you can also add an automatic cart discount. Click “Create”. You can download the QR code as a .png file, or as an .svg to print or to edit with graphic design software. Click “Download”.

In this video, we will show you how to edit your Shopcodes from your Shopify admin. From the admin, click “Apps”. Click “Shopcodes”. Select the code you want to edit. Update the information you want to change; you can also choose a different product to associate with your QR code. Then click “Save”.

In this video, we will show you how to delete your Shopcodes from your Shopify admin. From the admin, click “Apps”. Click “Shopcodes”. Select the code and click “Delete shopcode”.

In this video, we will show you how to download your Shopcodes from your Shopify admin. From the admin, click “Apps”. Click “Shopcodes”. Select the code. You can download the QR code as a .png file, or as an .svg to print or to edit with graphic design software. Click “Download”.
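Each shopcode’s QR image simply encodes a link. As an illustrative sketch (not the app’s actual code), the two destinations described above could be built like this; the store domain, product handle, variant ID, and discount code are made-up examples, and the cart-permalink format is assumed from Shopify’s cart URL convention:

```python
# Illustrative sketch only: builds the two kinds of links a shopcode's QR
# image can encode. Store domain, handle, variant ID, and discount code
# are made-up; the /cart/{variant}:{qty} permalink format is an assumption.

def product_link(store, handle):
    """Destination 1: the product description page."""
    return f"https://{store}/products/{handle}"

def checkout_link(store, variant_id, quantity=1, discount=None):
    """Destination 2: a cart permalink that preloads the product at
    checkout, optionally applying an automatic cart discount."""
    url = f"https://{store}/cart/{variant_id}:{quantity}"
    if discount:
        url += f"?discount={discount}"
    return url

page = product_link("example.myshopify.com", "minimal-tee")
cart = checkout_link("example.myshopify.com", 123456, 1, discount="SCAN10")
```

Feeding either string to a QR generator would produce an image equivalent to the .png or .svg the app lets you download.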

Jeremy Bailenson, Infinite Reality: Revealing the Blueprints of our Virtual Lives | Talks at Google

Jeremy Bailenson, Infinite Reality: Revealing the Blueprints of our Virtual Lives | Talks at Google


>>Jeremy Bailenson: Thank you for hosting
and it’s an absolute pleasure to be here. I’ve lived in Mountain View for about eight
years now and have been walking my dogs in this neighborhood for quite some time. Obviously
you’re at the center of the technology universe and I’m just excited to be here and to share
my work with you. And so, I’m a professor and I’m part social scientist and part engineer.
For the last 15 years I have been studying the psychology of avatars, meaning I build
virtual environments. I put people in these representations I call avatars and I study
the psychological effects of spending time inside an avatar. I’ve been doing this work
for a long time, and it’s really exciting for me that the work you guys are doing, the work that others are doing in Silicon Valley, has taken the work that I do from this weird science fiction stuff (think about a guy in the late 90s who is studying avatars) to really a household name. It’s especially fun for me to talk about this work that I’ve
been doing for so long to you guys. Everything I’m going to talk about today — I’m going
to give you a very broad sense of what I’m doing. All the work I’m talking about is published.
This is the URL of my lab. You can go on the lab’s website and see all the PDFs of the
published work. I’m going to give you a really broad perspective. I’m going to cover 50 experiments
today in 35 minutes’ time. I’m not going to bore you with details about the control conditions,
about the measures that we used. Feel free to ask that in the question and answer session.
I’ve also been asked to tell you to hold your questions until I stop speaking at about 12:45.
At that point we’ll do a formal question and answer using the microphone. Okay. So the
big question we’re going to ask today is what happens when I am in an avatar in virtual space. There are certain rules in the physical world that govern all social interaction. When you enter a virtual
world, all of those rules are thrown out the window. When you interact with a virtual version
of someone, say you’re in a Hangout in Google Plus or you’re Skyping or in a place called
Second Life as a virtual world or a video game, you’re interacting with virtual representations
of other people. And the rules that govern face-to-face interaction (my identity, how attractive I am, how tall I am, my gender, even my species), all of these can be thrown out the window.
What I’m going to do today is describe to you a world where social interaction is changed
in a way that we’ve never really seen before as a human species. My book is called Infinite
Reality and it’s written with Meredith’s father, Jim Blascovich, who has been a great mentor
to me, and this book really covers the history of our work, the 15 years of our work. We
write it as scientists but we also write it as journalists. I have the luxury of being
in Silicon Valley, getting to go to Jaron Lanier’s attic and look through his materials.
I really interviewed the main players in the history of this. I really think you should view this book this way: if you’re curious about what avatars are, or what virtual worlds,
virtual reality is, and want to understand the psychology behind it, the social science
behind it, and how it’s really going to change the world, that’s what this book does. So
please, if you enjoy this talk do read the book. How many of you, show of hands, have
worn a helmet and walked around in a virtual reality before? About 20 percent of you, great.
So I’m going to show you a video of a gentleman in my lab, and what is he doing?
>>Male #1: A tightrope?
>>Jeremy: A tightrope is close. We’re forcing
him to walk the virtual plank. So what he’s seeing when he looks down is the Wile E. Coyote-type
chasm with a rickety plank. And how does virtual reality work? The way virtual reality works
is by tracking and rendering. So we’ve got sensors in the room that are tracking the
way that his head moves, where his legs are, where his body position is, where his head
is turned. Physical cameras in the room track those positions. About 100 times a second,
a computer senses his movement and redraws the virtual world to reflect that movement.
So if he walks one meter in the physical world, the virtual world is drawn one meter behind
him so that every action is accompanied by a virtual reaction. That’s all sent to some
audio-visual display. In this case, he’s wearing what we call the HMD, the head-mounted display,
and the cycle of tracking and rendering happens about a hundred times a second. So he walks,
the world is updated and it feels very realistic as you’re walking around. As you watch this guy, you can see he’s feeling a very high degree of what we call ‘presence’, meaning he forgets
that he’s in the physical room as he’s walking this virtual plank. To give you a sense, I’ve
probably run over 5,000 people over this virtual plank in the last 15 years through demonstrations,
through experiments. On average, when you look at adults, that is, people in their 20s, 30s, 40s, 50s, one in three won’t step within a meter of the edge of that pit. They’ll
see that plank and they just won’t go there. It’s really compelling. You’re all welcome
to come to my lab and do this experience. Stanford has rebuilt my lab. It’s really cutting
edge and we do tours once a week on Fridays for the informed public. What makes, I think,
this book very relevant and timely, and also relevant to the Google audience, is that VR is no longer only for people with expensive labs at big universities. So Jaron
Lanier, who is the pioneer in many ways of the field, who coined the term ‘virtual reality’,
has said for decades, as long as you’ve got to wear crap on your head and have a dedicated
room to run this thing, virtual reality is never going to be important. But what is virtual
reality? We just talked about VR as tracking your motions and then
rendering the world, drawing the world, and showing it to you. And for tracking your motions, a great team from Tel Aviv, a company called PrimeSense, using infrared cameras has, if not solved this problem, all of a sudden made tracking easier by leaps and bounds; we can now track very easily. So how many of you guys have used the Microsoft Kinect?
Okay, only three or four of you. If you have children or grandchildren, they’re all going to be clamoring for it this Christmas. Many kids have this in
their homes already. This uses a single camera to track all of your body motions. The hardest
part about virtual reality, knowing what your body is doing physically, which used to require a room with zillions of cameras, is now in every living room in the country, or half
the living rooms in the country right now. The Kinect is actually the fastest selling
consumer electronic product ever. Everybody has this half of VR. The other half of VR
is something we’re calling autostereo. This is something called a lenticular display which
gives you stereoscopic vision without you even having to wear glasses such as when you
go to the theater. How many of you have used the Nintendo 3DS? Okay. The Nintendo 3DS beams a hologram halfway between you and the game that you’re holding. It uses technology
called lenticular displays that refracts light in various directions. It’s very similar,
you know those cheap postcards that give you a sense of stereo? We’ve had those for decades. This is a very expensive version of that. Everything I’m going to talk
about has been done in a fancy lab. But this year, in particular, VR, tracking your movements,
and showing you an immersive perceptual experience, is in the living room. So everything that
we study as a lab and that we talk about in this book is now being done on the school
buses, in the classrooms, and in your homes. But I’m a social scientist and I’m not a technologist
as my primary field and what I study is social interactions. So when you’re a subject in
my lab we take two photographs of you. We then have a group of undergraduates in a room
networked down the hallway and they feverishly drag these points onto your face in something
called photogrammetric software, and after I get a front-on and a side shot of you, 25 minutes later, when you enter virtual reality, you’re wearing a 3D model of your own head.
So what does this look like? This is my colleague Jack Loomis. Those are two photographs of
him. And this is a 3D model of his head and face. It’s a digital model. It can be animated
so any animation I can fathom, we can make his head do. It lasts forever the same way
anything digital does. I can make a million copies at a whim. A little digression. How
many of you guys have seen the commercial with virtual Orville Redenbacher? When you
get done with this, go to Youtube and type in virtual Orville Redenbacher and you’ll
see that Orville Redenbacher passed away some years ago. His grandson gave permission to
an advertising agency to use the hundreds and hundreds of minutes of video that they
had from these television commercials. Hours and hours of video. They reconstructed a 3D
model of him from beyond the grave. Orville now acts in commercials without having given
permission to do so because once you built a virtual representation, it can do anything
an animator can dream of. So Orville from beyond the grave is dancing around with an
iPod and saying this iPod is light and fluffy just like my popcorn. It holds 30 gigs. It
really raises questions. When I build your avatar, it lasts for longer than you do and
I have complete control over its actions. So one of the themes of this talk today is
in a world where I’ve built a version of you, what is that world like and what happens when
I’ve got a version of you that can do things that you don’t necessarily do. And it’s really
fascinating space as a social scientist, as a technologist. But it’s not enough to make
an avatar look like you. The avatar also has to behave like you. And what I’m going to
show you here is software that we use which, without putting anything on your face, tracks your
facial expressions. So this is the model that I built in the previous slide. This is the
real-time video that we get from a crappy $30 webcam and this is me animating that model
based on a computer vision algorithm that tracks 22 points on your face. We’re deforming
this model based on knowing what the face is doing. So you walk into my lab. We build
this model of you from those photographs. 30 minutes later, when you enter virtual reality,
you are being seen by others not only with a model that looks like you but literally
expresses every facial expression that you’re doing and captures all the nuances of your
emotions. And it’s a really powerful experience. I want to bolster the point about who is using
avatars. Because again, it used to be an academic question and it’s no longer an academic question.
And what I really want to point you toward is this stat on the bottom. So a colleague,
my colleague Don Roberts from Stanford studies children’s media use. He did a study with
Kaiser Family Foundation that was published last year. He presented this to Congress.
And he surveyed kids’ media use. So kids between the ages of 8 and 18 are spending, on average,
outside the classroom almost eight hours a day consuming digital media. Of those 8 hours
a day, 1.25 hours on average are spent using avatars in video games. To put it in perspective,
the amount of time they spend looking at print is 38 minutes a day, okay? So they’re spending
over three times as much time in an avatar as they’re reading newspapers, books, everything
combined. So the question of what avatars do to society is no longer
one that we need to think about in terms of the future. Our kids are living, breathing,
and eating digital media all day, and a lot of those hours are spent wearing these avatars, which
can be very different from them. Why is this important? Sometime ago we developed this
theory of transformed social interaction which is not a new idea. A lot of these ideas have
been taken from science fiction novels. It’s as old as humans, this idea of behaving. But
in the physical world right now, what I do, you guys see. So, if I make a social blunder,
I make a bad facial expression, I curse by accident, I trip and fall, what I do you guys
see. When we’re interacting via avatars — let’s revisit how virtual reality works — The way
that it works is, I take a step a meter forward. In my system, it tracks my motion. My computer
automatically sends you that I’ve moved. And then on your computer you’ve got a version
of my digital model, you draw me in a different location. So the idea is I perform on my end.
My computer sends you high level information about the nature of that behavior. You draw
it locally on your end. And that happens hundreds of times a second and it happens with you
guys, too. So if one of you guys yawns, your computer senses your mouth opened, sends that
to my computer and I draw you yawning. That’s fundamentally different from a video conference
because in a video conference all you’re doing is sending pixels over the network. So whatever
I’m doing, you capture that image and that image is automatically sent in a non-smart
way over the network. Is that difference clear? Why is that important? Because it allows transformed
social interaction. Instead of seeing the actual Jeremy, you will see an optimized,
strategically-transformed version of me that has been chosen specifically to accomplish
some conversational goal. So everyone in this room can be seeing a different version of
me, one that I’ve chosen because it’s going to help me sell you guys books, okay? And
that sounds very abstract now. I’m going to now take it through lots and lots of empirical
examples that should convince you quite robustly that this is something we need to think about.
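The loop he has just walked through, track locally, send a small behavior packet, draw it remotely, is what makes the transformation cheap: the outgoing packet can be rewritten differently for every receiver. A minimal sketch, in which the packet fields and the crude “always look at the receiver” rule are illustrative assumptions, not the lab’s actual code:

```python
# Sketch of transformed social interaction: instead of pixels, each
# participant sends a small packet describing behavior, and the sender
# can rewrite that packet per receiver before it goes out.
# All field names and values here are illustrative assumptions.

def track():
    """Stand-in for the sensors: report the speaker's actual behavior."""
    return {"speaker": "jeremy", "position": (0.0, 0.0), "gaze": (0.3, 0.9)}

def transform_for(packet, receiver_position):
    """Per-receiver rewrite: everyone gets a version of the speaker
    whose gaze points straight at them."""
    out = dict(packet)          # copy, so the real behavior is untouched
    out["gaze"] = receiver_position
    return out

def render(packet):
    """Stand-in for each receiver's machine drawing the avatar locally."""
    return f"{packet['speaker']} looking toward {packet['gaze']}"

audience = {"alice": (1.0, 2.0), "bob": (-3.0, 1.0)}
behavior = track()
views = {name: render(transform_for(behavior, pos))
         for name, pos in audience.items()}
# every audience member sees a different version of the same speaker
```

Each receiver renders a speaker who appears to be looking straight at them, while the sender’s actual tracked behavior is left unchanged.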
So the first one I want to talk about is me transforming my avatar to influence other people. And the
first study that Jim Blascovich and I did in the late 90s deals with what we call augmented
gaze. As humans we’ve known for a long time eye contact is important. As psychologists
for the last hundred years or so, we’ve run experiments looking at this. And other than me physically poking you in the chest, looking you in the eye is the most important
nonverbal tool I have. It makes your heart beat faster. It makes you listen more to what
I say. It makes you remember what I say. It makes you more likely to look back at me and
to speak to me. We know from tons of research that eye gaze is important. But in a physical
room, I have a problem. So right now there’s about 65 of you in this room and I can only
look at each of you in the eye for less than three percent of the time, two percent of
the time. I can look at you all in the eye but only one at a time. In the virtual world,
it’s free for me to send all of you varying information about where my eyes are pointing.
It could be the case that, as I’m giving this talk, every single person in this audience can feel as if I’m just looking at them, and perceive that I’m giving the talk in that manner. So
to test this idea, we brought in three people. We put them all in virtual reality. And this
guy randomly gets assigned to be the teacher. These two guys randomly get assigned to be the learners. Unbeknownst to the teacher, who is trying his best to look at both of them, we’ve taken his head movements, and each one of these guys sees the teacher looking
at him for 100 percent of the social interaction. Okay? Why did we do that? The answer was,
it’s the late 90s. I’m not a fantastic programmer and this was the easiest thing we could think
of. I always like to say as a programmer of artificial intelligence, we start with something
really simple and we examine the social effects. So if an algorithm so dumb that it simply makes this guy stare at both people for the entire time works, then you
can imagine an algorithm that added a little bit of noise or only looked at you if you
weren’t talking to them or did it only for 50 percent of the time. So what does this
look like for learners? Imagine I’m giving this talk and the entire time I’m talking,
everything I say, I’m looking right at you for this whole time. You can imagine this
is a strange social interaction, right. So the way we run social experiments, we typically
bring in triads. We bring in 100 triads and we examine their behavior. We’ve now replicated
this study four separate times, hundreds and hundreds of people. And we get four things
consistently. The first thing that you get is people can’t stand this, okay? [audience
chuckles] They’re extremely uncomfortable. Humans rarely look at each other for more
than 7 seconds. Ever, unbroken. And that’s typically if you’re intimate with someone.
These are people that for a 20 minute lecture got stared at the entire time. And people
are not comfortable at all with this. However, not a single person has ever detected that
it’s not real gaze. So no one has said, “you know that guy stared at me the whole time
but I think it’s computer gaze not real gaze”. Not a single person has ever thought that.
You don’t like it but you think it’s real. The third thing that we get is attention.
So if I’m a teacher, if I’m a salesman, if I’m a politician, if I’m trying to get the
message to you, half the battle is getting you to listen and with virtual reality one
point I want to return to toward the end of the talk is whenever you’re in a virtual simulation,
everything you do is tracked. Meaning recorded. Everywhere your eye points; where your body
is standing; every little behavior you can ever imagine doing gets dumped to a file so
we can go through that file and actually look at where you’re looking. When I give you guys
‘super-gaze’ as a group, you guys return my gaze more often than when I do my normal gaze.
So, just with a simple change to where my head is pointed, all of you are getting different
versions of me. It took me literally eleven lines of code to do this, all right? This
is not hard to do. You take the packet saying where the eye direction is and you change
it a little bit. Not hard to do. We’re changing where you’re looking, where you’re pointing
your head. But the fourth thing and the most important thing is social influence. So in
this experiment, we designed a survey that Stanford students had to sign when they left
which was very unpopular. If you are caught on Stanford’s campus without your identification, you are going to get in trouble. So you can imagine nobody wants to sign this. But the
odds of you signing this are higher if you get ‘super-gaze’ than if you get a normal
type gaze where you can look at people one at a time. So with eleven lines of code, I’m
changing the amount of time you’re listening to me. And then I’m also increasing the probability
that you’re going to be influenced by what I say in a persuasive manner. Next line of
studies is called digital chameleons. This takes a powerful idea called mimicry. A
woman named Tanya Chartrand in the late 90s, ran a very famous psychology experiment. She
had subjects go into interviews, and the subjects either mimicked the gestures of the interviewers or did not. And so, if the interviewer crossed his legs, then the subject crossed her legs. If the interviewer leaned forward, then the subject would lean forward. So we just
told people face to face to go in and mimic the gestures of the interviewers. Nobody consciously
recognized the mimicry, but people who went in and mimicked the interviewer were more
likely to get the job than people who didn’t. One of the overriding themes of social psychology
is people love themselves. And if you can subtly manipulate that, you are more effective.
We said, wait a minute, how does virtual reality work? VR works because I’m sending you information
about my behavior so you can draw me locally on your machine doing it. It is trivial. It
is literally four lines of code for me to take that packed you sent me which is telling
you your behavior, hold on to it for a few seconds, put my name on it and send it right
back to you. So the technology is built for mimicry. And so what does this look like.
This is Nick Yee. He’s a brilliant PhD student who’s done much of this research and all we’re
doing here is that she’s trying to persuade him about this policy to have your ID card.
And what he’s doing, he’s moving and listening. Her head movements are a four second mirror
of his. Okay? So he looks up. One, two, three, four, she looks up. Everything he does is
a — everything she does is a slow mirror of him. Why do we do it this way? Again, we
start off real simple on the artificial intelligence. Let’s start easy. Whatever he does, she’s
going to do four seconds later. At this rate, a four second lag with exact mimicry, only
one in 20 people consciously recognize that the avatar was mimicking them. So even though
I’m stealing your exact head movements, barely anyone has the ability to explicitly realize
this is happening. And those instances tended to be if somebody spun around in their chair,
okay? [laughter] Or something very extreme. We didn’t build for The Exorcist. So what
happened? The exact same things as in the previous studies. Nobody detects this consciously,
but when I mimic you, when my avatar mimics you, you look at me more and you’re more likely
to sign this petition later on that you wouldn’t have signed otherwise. You also rate me as
more attractive, more credible, more trustworthy, and you like me more. So really simple tweaks
I can do to my avatar, automatically, that are having drastic changes on your nonverbal
behavior, on what you say, and ultimately how you decide to live certain aspects of
your life. Which brings me to this next study which doesn’t look at facial mimicry but — sorry
doesn’t look at nonverbal mimicry but looks at your face. So I want to show you a video
now and I want you to be extremely impressed by this video. The reason I want you to be
impressed is because this video we built in 2003 using $19 software in less than 20 minutes
by a freshman communication major, okay? It took no training. And it was cheap and it
took no time. And you can see it is a morph from Arnold Schwarzenegger to Gray Davis.
And all we’re doing is blending one face to another and it’s really simple to do this.
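The blending step itself is just a per-pixel weighted average of the two photographs. A toy sketch under that assumption (real morphing software also aligns facial landmarks before blending, and these four-pixel “images” are made-up values):

```python
# Toy sketch of the morph's blending step: a per-pixel weighted average
# of two same-sized grayscale images. Real morphing also warps facial
# landmarks into alignment first; this shows only the blend.

def blend(face_a, face_b, weight_a=0.6):
    """Blend two images, weighted toward face_a (a 60/40 blend by default)."""
    return [round(weight_a * a + (1 - weight_a) * b)
            for a, b in zip(face_a, face_b)]

candidate = [200, 180, 160, 140]    # tiny stand-in for the candidate's photo
voter     = [100, 120, 140, 160]    # tiny stand-in for the voter's photo
morphed   = blend(candidate, voter) # 60% candidate, 40% voter
```

A 60/40 weighting keeps the candidate’s face dominant while quietly mixing in the viewer’s own features.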
And we wanted to ask the question, if my avatar takes on your facial characteristics, is that
going to change how you interact with me? So think about this Brady Bunch grid of faces
that we have on a Hangout and what one could do with it. What do I know, if your face looks like mine, from the social psychology literature? If you look more like me, you’re
more willing to buy something from me. Studies by Dovidio have shown that if I’m lying on
the ground clutching my knee in agony, the odds of you stopping to help me increase if
you look like me. And Lisa DeBruine in England has demonstrated that if your face looks more
like mine, I’m more likely to hand my children over to you to babysit for them. As humans,
we like to think that we’re this very deep race where we look to the inner beauty of
people but it turns out subtle characteristics of your face and body have drastic consequences
to how you interact with others. I worked with my colleague Shanto Iyengar, who is a political
scientist and we wanted to look at the 2004 presidential election. Okay? And what we did
is we had three groups of subjects. One group of subjects had pictures of George Bush and
John Kerry that were just like the actual pictures of Bush and Kerry. So this middle
row here are the actual pictures of Bush and Kerry. We then used a focus group of about 600 people to passively acquire photographs of voters from across the country.
And we got a national random sample. Meaning it wasn’t just college kids. We paid money
to get variance in socioeconomic status, location, age, all these important demographic variables.
Then over the next six months, we had undergraduates morph the faces of the potential voters with
Bush and Kerry. That is a picture of me before I got tenure. [laughter] You can see I was
much more clean cut then. This is a picture of George Bush and on the top right is the
60/40 blend between me and Bush. We’ve now run this experiment five separate times. Over
1,000 people. We’ve asked them to write a paragraph about the picture of George Bush.
Write a paragraph about what his face looks like. To date, if you keep the blend at 40 percent,
not a single person has ever consciously detected that his own face has
been put in the candidate. Nobody has any idea consciously that this is occurring. This
is a woman in my lab, Julia. This is her with John Kerry. Three groups of subjects: a third
had the normal Bush versus Kerry, meaning just the actual photographs of them. For another third
of our subjects, his or her own face was morphed with George Bush. And for the final third of
our subjects, his or her face was morphed with John Kerry. And what we simply did is,
a week before the election, we asked which of them they were going to vote for. And what
we demonstrated is that in the control condition actual photos we exactly replicated the national
sample. It was 46/44 in favor of Bush. Bush wins by a landslide, by 20 points, when the subject’s
face was morphed with Bush, and Kerry actually wins by 7 among the people with whom
Kerry’s face was morphed. So think about this. A week before the election, who you say you’re
going to vote for is based on whether or not the candidate looks more like you. So we got
this data. And as citizens we were terrified but as social scientists we were very excited
[laughter] and we sent this to the best journal in political science and they rejected it
without even sending it out for review. They said the voter is rational; voters don’t make
decisions based on this. The whole field of
political science is based on that notion. In order to publish this work, we actually
had to replicate it in five separate studies over a period of five years with different
candidates: the 2008 elections, small elections. This effect doesn’t go away. One of the most
powerful effects we demonstrated in the lab is if my face steals portions of your face,
you’re going to like me better and choose me in ways of social influence. And what does
this mean for the world of Google, the world of online social interaction? It’s trivial
to grab someone’s face and to morph my face with it. What we demonstrated is very important
decisions can be based on a simple blend of my face with yours. Next line of studies we
did was, can I train a teacher to be a better teacher with a simple tracking algorithm?
So, in this room, I’m doing my best to spread my eye gaze as best as I can but it’s hard.
As I am talking, it’s very hard to hit everyone in the room. As we’ve demonstrated, that
top graph there is a U-shaped bird’s-eye view of a classroom where the teacher is the
blue T. The white cubes are pupils. This graph here is the percentage of time people get
ignored. And what you see is that you guys on the corner, you guys get ignored for about
one third of my lecture. What if you gave a teacher a tool which simply shows them when
they are not looking at you? And all we did is take the tracking data that you naturally
get from a virtual reality simulation and if I’m not looking at you, you start to disappear.
So you actually start to turn transparent. We simply gave the teacher this tool as they
gave their lessons. What you see there is this graph here where not a single person
in the room gets ignored, ever. And just using a simple tool of spreading awareness of eye
gaze, you’re demonstrating better learning in the classroom. So again, really cheap, easy
ways of using things you can glean from the virtual world to increase someone’s performance.
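A minimal sketch of how such a gaze-feedback rule might work. The geometry, cone angle, and fade rates below are my own illustrative assumptions, not the lab's actual implementation: pupils inside the teacher's gaze cone brighten back toward full opacity, and everyone else slowly fades.

```python
import math

def update_opacities(teacher_pos, gaze_dir, students, opacities,
                     cone_deg=15.0, fade=0.02, restore=0.2):
    """One update tick: students outside the teacher's gaze cone
    slowly turn transparent; students being looked at snap back
    toward full opacity. 2D positions; gaze_dir is a unit vector."""
    for i, (sx, sy) in enumerate(students):
        dx, dy = sx - teacher_pos[0], sy - teacher_pos[1]
        dist = math.hypot(dx, dy)
        # Angle between the gaze direction and the direction to the student
        cos_angle = (gaze_dir[0] * dx + gaze_dir[1] * dy) / dist
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        if angle <= cone_deg:
            opacities[i] = min(1.0, opacities[i] + restore)
        else:
            opacities[i] = max(0.0, opacities[i] - fade)
    return opacities

# Teacher at origin looking along +x; one student ahead, one to the side.
print(update_opacities((0, 0), (1, 0), [(3, 0), (0, 3)], [0.5, 0.5]))
# student in the gaze cone brightens; the ignored one fades
```

Run on every frame of head-tracking data, the fading pupils give the teacher exactly the at-a-glance signal the study describes.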
I’ve taken this exact paradigm and I’ve worked with Peter Mundy who’s at UC Davis, who works
with autistic children that have a hard time maintaining eye contact. And what we’ve done
is we’ve actually given them training simulations to help remind them to look people in the
eye. And we’ve run two pilot studies now with very encouraging results. So I think there’s
some great applications of using VR towards helping people that have issues with social
interaction. All right. I’ve talked a lot and hopefully I’ve convinced you that I can
use VR, my avatar, to influence you guys. That there’s things I can do that are going
to make you more likely to agree with me. Well, what happens to me? We now all understand
that I can influence you guys. But when I am changing my avatar, what are the implications
to me? How does that affect me? And that’s a separate line of studies I want to talk
about now. So the paradigm here that we use here is a stranger in the virtual mirror.
So a typical subject will walk in the room and we’re tracking his motions as he walks
around. This is kind of an old virtual world but it’s good to see it used again. This is
Nick Yee. He’s walking around. He walks up to a virtual mirror. And a typical subject
in our studies we’ll have them spend two minutes in front of the mirror gesturing back and
forth. So here you can see he’s moving his head back and forth. And he’s a white man.
He now is going to bend down and lose his mirror image for a second. We track all of
his movements. He pops up. And he’s now a woman of color. And the question we ask is,
if I change your identity in a virtual world, do you come to embody the social category
of your avatar compared to the social category of yourself. So can you use an avatar — or
does an avatar change your identity? A small point of diversion here. Have you guys heard
of World of Warcraft? It’s a video game where you have avatars. The average player of World
of Warcraft is in his late 20s. How many hours a week do you think he plays?
>>Male #2: 40.
>>Jeremy: 40 is on the high end. On average
it’s close to 20. There are many people who spend 40. I’m saying on average the player
is close to 20. So think about, I put somebody in this for about 90 seconds. A World of Warcraft
player, he’s spending about 20 hours a week, in an avatar that could be a different gender,
it could be different species. There’s all sorts of things that could be transformed
about this. So the idea of your virtual self being very different from your physical self,
is a common one. So in our typical experiments, you’ve got the subject. He walks up to the
virtual mirror and sees himself in the mirror as something. And I’ll talk about what that
is in a second. He turns around and there’s another person, whom we’ll call the confederate.
He’s part of the study and he has a set script that he talks to the subject about. So the
idea is I’m the subject, I look in the mirror, I see myself differently. I turn around and
then I talk to someone else and I think that other person sees me with a new feature. And
that sounds very abstract. Let me talk you through one of them. The first study that
Nick wanted to do was attractiveness. We like to think of humans as this enlightened species, but it
turns out if you’re good looking, you’re more extroverted. There’s lots of changes in your
behavior. Being good looking does change lots of psychological attributes that people have.
But in the virtual world, beauty is free. So everyone can be good looking. So Nick wanted
to ask if I make you attractive or unattractive in the virtual world, how does that change
your behavior? What he did is a study where subjects walk to the mirror. They saw themselves
in an avatar that had carefully been chosen to be either one standard deviation more attractive
than average or one standard deviation less attractive than average. Keep in mind this
is very subtle. Not a single person has any idea they’ve got the good looking
one or the bad-looking one. All this is working on a subconscious level. No
one’s told they’re in a good-looking or bad-looking avatar. The confederate then says to the subject,
“Come on, over here. Come over here, I want to talk to you.” And the first thing we measure
is how close the subject will stand to the confederate. When you have a good looking
avatar, you will step a meter closer to the confederate than when you have a bad looking
avatar. So a fundamental way about how you space yourself in public spaces is changed
by a very subtle tweak to your avatar. We then tape record everything that you say in
a 20 minute discussion with the confederate. We take those tape recordings and we mail
them off to nonverbal judges who are trained in detecting how personable
you are by your voice. When you have an attractive face, for only two minutes in a mirror, you
speak more personably, you reveal more information about yourself. So a tiny tweak to your avatar
is changing how you interact nonverbally and what you say. In the next study Nick ran,
he looked at height. So in the physical world, in the United States, height is correlated
with income. It’s also correlated with confidence in situations. A very depressing study out
of the University of Chicago got back-end data to online dating. And it turns out, for
men, for every inch you are under 5’10”, in order to get the same number of hits as a
guy who’s an inch taller, you have to make an extra $30,000. [laughter] So you can
actually put a price to height in the world of online dating. But again, in the virtual
world, height is free. And so what Nick then did is he had the subject either be ten centimeters
taller, very subtle, nobody notices this consciously. Or ten centimeters shorter than the confederate.
He then had them do a money-splitting game called the ‘ultimatum game’ which is a task
used by economists, a standard way of gauging economic performance. Again, I don’t consciously
notice this; no one is perceiving the height difference in an explicit
way. But when I am ten centimeters shorter than you,
I am three times as likely to lose the negotiation as when I am the same height or taller than
you. This also occurs independent of your physical height. So regardless of my physical
height, it’s my virtual height which is changing my behavior. So real economic ramifications
of a tiny tweak of your avatar’s height. All these studies, by the way, I know they sound
a little bit out there. They’re all published and just about every one of them has been
replicated more than once. Meaning, when we try to go publish it the reviewers say, “are
you sure?” And they make us go back and do it again. And yes, we’re sure. These are really
powerful effects. The big question everyone asks is, “okay, that’s fine, but I’m not spending
my whole life in avatars. What happens when I leave?” So with the attractiveness study,
Nick ran a study, a follow-up, where you were attractive in the virtual world. He then takes
you out, pays you for the study, walks you down the stairs and you’re about to leave
the building. Right as you’re about to leave, another graduate student runs up to you and
says “wait, wait, wait. I’ve got to get ten more subjects for my dissertation. If you
fill out this questionnaire for five minutes, I’ll pay you $20.” Or $10. Everybody does
it. No one has any idea it’s related to this study. The survey is an online dating form.
And part of it is choosing people you think will go out with you. So people with whom
you think you have a shot. Half an hour later, if I had a good looking avatar, the confidence
that I got from that actually changes who I’m going to pick. So this experience in the
virtual world doesn’t stay there; it changes at least for half an hour in this instance
who I am outside. With the height study, Nick did a very clever manipulation where you’re
taller or shorter than the avatar in the virtual world. He took the helmets off of you and
then you met each other on chairs designed to ensure that your height was exactly the
same and then repeated the negotiation and got the exact, identical results. So this
hierarchy that was formed from a height differential extends face to face and you still lose the
negotiation. Very powerful in that sense. I want to shift now and I want to talk about
a different phenomenon that deals with the self, and I call this Veja Du: I feel like
I’ve never been here before. And this takes the idea of Orville Redenbacher; what would
happen if someone had built that while he was alive? It’s weird for this to happen, actually. In
the history of media, avatars are giving us, as a human species,
something we’ve never, ever had before as a perceptual experience. To me, it’s flabbergasting.
I think it should be exciting to you as well. We’ve always had mirrors. We’ve been able
to see ourselves. We’ve always been able to go up to water and look at ourselves. In the
last 50 years, we’ve been able to look at ourselves asynchronously. Meaning, I can look
at something I did on video tape twenty years ago. For the first time ever, as people, we
have the ability to see ourselves doing something we’ve never physically done. Somebody builds
my avatar, any animator can make it do anything he or she wants. For the first time ever,
I can look at a screen and see something that looks just like me doing something I’ve never
physically done. I cannot overstate, as a social scientist, how powerful this is or
how — just how strange this is going to make the world. We can talk about applications
later. The first study we wanted to do is Jesse Fox’s dissertation; it deals with health
behaviors. So as Americans, we know we’re supposed to exercise more. We’re supposed to be healthy,
but that message doesn’t get through. Jesse wanted to make it visceral. And so, what she
did is this: you walk into the lab, we build your face, and then you
have to exercise in physical space. So here’s the video of that. Every three times that
your knees cross your chest, the avatar, which looks just like you (it’s got your exact face),
loses a pound. So after 20 minutes, you watch your avatar lose weight and get to what
Jesse defined as an ideal weight based on some government health standard. For ten
minutes you do that. We then force you to stand still and all the weight comes back.
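That feedback loop is simple enough to sketch. In this toy version, the function name, the per-second regain rate, and the cap are my own illustrative assumptions, not parameters from the study; only the "three knee raises sheds a pound" rule comes from the talk.

```python
def avatar_weight(start_weight, knee_raises, idle_seconds,
                  lbs_per_three_raises=1.0, regain_per_second=0.1):
    # Toy version of the study's feedback rule: every three knee
    # raises the avatar sheds a pound; standing idle puts the
    # weight back on, never exceeding the starting weight.
    weight = start_weight - (knee_raises // 3) * lbs_per_three_raises
    weight = min(start_weight, weight + idle_seconds * regain_per_second)
    return weight

print(avatar_weight(180, 60, 0))    # exercise only: 160.0
print(avatar_weight(180, 60, 300))  # then standing still: back to 180
```

The point of the design is the visceral coupling: the avatar's body responds, pound by pound, to what your physical body just did or failed to do.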
So we’re driving the message home in a very visceral way. From here on out, what we can talk about is
VRs as a really cool tool to use avatars to change behaviors that are difficult to change
and entrenched. So 24 hours later if you got this simulation, we asked you to keep a diary
and then turn in the diaries to us. You know that we’re reading them. There’s nothing
personal in there. If you got this simulation, over the next 24 hours, you would exercise
on average an hour more than people in control conditions; for example, people that got the
same simulation with someone else’s face instead of their own. There’s something really powerful about
watching yourself gain weight as a function of inaction and then losing it when you exercise,
because you’re connecting the physical behavior with what you want to be. So
she’s changed your behavior up to a day later. But that was self-report. We don’t always
trust it. So in a follow up study, she brought subjects in, gave them this type of simulation,
then took the helmet off, and paid the subject, saying, “You know what? The next guy’s not coming
in for 20 minutes. You’ve got some weights over there; if you want to work out, you can.” People
who got this simulation worked out for ten minutes longer on average than control
conditions. So people stayed in the lab in their school clothes working out voluntarily
because this idea of fitness had been driven home in such a powerful manner. We’ve replicated
this a number of times including eating. So Jesse ran a study where subjects sat down
and were forced — we took control of their virtual arm and they were forced to either
eat Reese’s Pieces and watch their thighs and gut grow, or eat carrots
and watch the weight come off. And we showed that this changed eating behaviors. A really nice
idea there. On the negative side Kathryn Segovia brought in preschoolers to the lab. And elementary
school children. Young children. The first time anyone has ever taken my very expensive
virtual reality equipment and subjected it to hundreds of children. Very fascinating
study. And she wanted to ask about false memories. So if a child sees herself swimming with a
whale, does that become an actual memory of something she thinks she did in the physical
world? In psychology there is a fairly large literature on false memory formation in the legal
context, and it’s something that’s very important to understand if you’re spending time in virtual
spaces: how is that changing your cognitive structure? And so children that came into
the lab and saw themselves swimming with whales compared to subjects who saw another child
swimming with a whale, in the treatment condition of the self, 50 percent of our children, a
week later when we brought them back, thought they had been to Sea World, described the
hot dog they ate beforehand, the smell of the fish tank, had encoded this as a physical
memory. One in two of our children believed they’d done this physically, much higher than in control
conditions. For children using or spending all this time with digital
media, it really raises some concerns. Grace Ahn ran a series of studies on what she called
self-endorsing. This takes a big idea from science fiction: if I’m wandering
down the street and a camera takes a picture of my face, then you encounter a billboard
that has me using your product and loving it. How is that going to change my
attitude toward the product? In a line of four or five studies that just got published in the
Journal of Advertising, you’ll like the product more. You won’t have a conscious memory of
me being in the billboard later on but you’ll still think the product is better for some
reason. So very powerful. And then connection with a future self. A line of studies we ran
with Laura Carstensen at the Center for Longevity at Stanford. She’s trying to have people have
a good time in their older age; to have a very enriching life. Children today aren’t
saving any money. It’s going to be a crisis when they do the math about saving. So
we brought subjects into the lab and we aged them. And so, 18-year-old subjects
went to the virtual mirror, saw themselves as they’re going to look when they’re 65 compared
to seeing themselves at their own age. And the idea is that an 18-year-old can’t even imagine
what it’s like to be 30, let alone 65. If I’m trying to get him to save money, let’s
show him what he’s going to look like when he’s 65. And so, we did this and then the
dependent variable was we can give them some money now or they can wait and get more money
later to put in savings. And what we’re showing is that when you show the future self,
compared to when you ask them to imagine the future self, you get more savings. Very
exciting results. So I’m going to shift and conclude very quickly because I want to leave
time for Q and A, about some work that I’m very excited about because I think it’s got
some potential to make the world a better place. This asks: does it help if I can
walk in someone else’s shoes? And the first study was done with environmentalism. So again,
VR can be thought of as a way to help change those very entrenched, hard-to-change behaviors.
We all know we’re not supposed to drive alone to work. We all know we’re supposed to take
shorter showers. And what we did here builds on an op-ed that came out in the New York Times
that said if you’re using that beautiful, fluffy, non-recyclable toilet paper, over
the course of your life, you’re sawing down two virgin trees. And so, what we did is half
our subjects came in. All were told that statistic. Half our subjects read a beautifully written
narrative about what it would be like to cut down a tree. The other half were forced to
use this apparatus, a haptic device that simulated a chainsaw, and literally cut down
a tree, hear it thunderously crash around them and the birds fly away, and basically
step into the shoes of a logger and experience what it’s like to do that. And then what we
did was this: 45 minutes later, while the subjects were leaving, Grace Ahn, who is visibly pregnant, runs in.
She knocks over a glass of water and says, “I’m pregnant. Can you help me please clean
this up?” Subjects who got the virtual experience clean up the mess using 20 percent less
paper than subjects who got the written experience. So think about it. Environmentalism: we all
know we’re supposed to be green, but it’s these hard to change experiences that this
disembodied virtual avatar experience can change. The next one deals with diversity
training. And we’ve done a lot of work with racism, with ageism, with sexism. The study
I want to just briefly talk about deals with attitudes toward the handicapped. We made
subjects color blind which is not too traumatic of a handicap. Half of our subjects imagined
what it would be like to not be able to differentiate between red and green. The other half we
robbed of the ability to see red and green via a filter on the virtual world. And so they
had to go through some tasks where it was very difficult to do the task because you
couldn’t tell the colors. Then later on we asked them to volunteer their time surfing
the web and writing down websites that were hard for red-green color blind people to
see. And people in the VR condition, compared to the imagine-you-were-color-blind condition,
spent 20 minutes longer on average helping color blind people. Let
me conclude. Avatars can be more human than human. That’s actually a chapter in our book.
It really talks about how avatars can make me not just as good a teacher as I am physically but
a better one. There are really special, cool ways that we can make the world a better place.
And immersion is going to be a game changer. If people are spending so much time online
right now and it’s a flat 2D screen, what’s going to happen when, you know, we’re
literally immersed in the content, when perceptually everything feels real,
which is what we’re moving toward? And finally, ethics are a great thing to talk about. All
these things can be viewed as deception. If I’m transforming my avatar, I am really hiding,
in some ways, who I am. All of these things may not be worth doing. It’s a fantastic debate.
So I would love to get at least 13 minutes of your questions and thank you for your attention. [Applause]
>>Male presenter: That was a very good talk.
And my goal has always been to get to 6 feet. But since I’m two inches short, my goal is
to make 60,000 dollars more now. So if you have any questions, please come up to the
mic so that the folks on VC can also hear it.
>>Female #1: So I have a question about several
of the experiments, like the mirroring and the face blending. The subjects are
fairly unsophisticated in the techniques that can be created in a virtual world. Do you
get any sense that over time humans will become more sophisticated about what’s happening
in this world and what’s being done and will that make a difference to their reactions
in this space? Because we seem to get more sophisticated about other technology.
>>Jeremy: It’s a great question. So the question
is subjects in my studies may not have known anything about VR and as we get used to the
technology, none of this is going to work. I think there’s good truth to that: there
is a novelty effect, and what we’re going to see is an arms race very similar to spam
and spam blockers. People using these transformations, people realizing they’re not real, and then
someone having to go to the next level and going back and forth. One thing I like to
say is my purpose is to inoculate. With the political morphing, I get approached by political
consultants sometimes. I just don’t do that work. I view my goal as informing the public.
And when you consciously are aware of this stuff, so when I tell you that candidate has
stolen your face, that backfires. And so, this is truly a case of knowledge is power.
And then the question is how sophisticated are the users compared to the people trying
the transformations.
>>Male #3: So, could you give us an idea of
what things avatars don’t work for? You must have had some negative results over the years.
>>Jeremy: So, great question. There’s many
things that don’t work. Though academia doesn’t reward you for publishing things that don’t
work. One thing we tried was whites and blacks. We took white people and put them in black
avatars and forced them to walk a day in the shoes of someone who’s black, with the idea that
we wanted them to gain empathy. And that was a case where we didn’t actually reduce
racism. We actually increased it. So there’s a fairly large literature on racial priming
which is that for some whites, when you remind them of blacks, that changes their behavior
in a negative way; it’s called priming. Unfortunately American whites generally
have negative associations in general with blacks. We thought if we put you in an avatar
and show you what it’s like to be a different race that would help and this is one instance
where it backfired on us and we’re still doing follow ups on that. I can talk for hours about
the studies that haven’t worked.
>>Male #4: For the studies that rely mainly
on looks, like Veja Du and face morphing, I’m wondering if you’ve ever looked at identical
twins and if they’re still susceptible to the results.
>>Jeremy: It’s a great question. So the question
was do they work on identical twins in a different way. I’d love to do that study. It’s hard
finding enough identical twins. There are scholars who do that, people who study
evolution. But I’ll put that on the list. That’s a great idea.
>>Male #5: So you talked about the trade-off
between real inches and real money. Have you thought about virtual money versus virtual
inches and all those other possible permutations of that?
>>Jeremy: The question is have we thought
about how these things deal with virtual money as opposed to real money. As my colleague
Jim Blascovich points out, quote unquote real money, U.S. dollars, is supposedly a stand-in for gold.
Currency by definition is a representation of something else. But we have looked specifically at
Linden dollars in some of our studies and found similar effects. Great question, thanks.
>>Male #6: So you’ve done a number of different
studies. Which result surprised you so much that it was like, “oh,” you were completely
surprised?
>>Jeremy: Great question. The question is
which study surprised us the most. And there’s a line of work I didn’t get to talk about
in this talk that we call Digital Footprints. How many of you guys have seen the New
Yorker comic, “On the Internet, nobody knows you’re a dog,” that shows a dog at a computer?
We’ve run a study (the military is actually sponsoring a huge program with lots
and lots of people studying this) based on what I call your digital footprint,
which is the stuff you leave behind in a virtual world. So on the web you leave behind cookies.
In a virtual world you leave behind where you walked, everything you’ve said. So in
this online world Second Life you can record everything everybody does; who they’re standing
next to, if they’re flying or walking. We’ve run studies now — I ran a study where I took
80 Stanford students. They spent 10 hours a week in Second Life. And they put on these
buttons that recorded everything they ever did in Second Life. But I also knew who they
were in the first world. They were my students. I knew their gender, I knew their age, I knew
their GPA, I knew their major. I had them fill out personality metrics. And then what
we then did is we used machine learning algorithms which are statistical methods where you can
correlate data with a right and wrong answer. And what we did is we said can I use this
digital footprint, this massive amount of data we’re collecting on a second/second basis.
Think about this: 80 students, ten hours a week, every second collecting about
50 parameters on what they’re doing in the virtual world. Can you take all that data
and find a pattern that lets me detect a man from a woman. Does how you walk in Second
Life tell me your gender or your age? And what we demonstrated is that with extremely
high accuracy, much higher than I thought possible, I can tell a lot about you, your
personality, your race, based on things we thought were anonymous. So the joke is yes
you may think that on the Internet nobody knows you’re a dog but it turns out we know
you’re a dog. We know quite well. The Internet and virtual worlds in particular have this
illusion of being anonymous: it’s not me, it’s an avatar. But it turns out the
digital footprint you leave behind in a virtual world, is more damning, in many ways, or depending
on the point of view more wonderful, than any face to face encounter could be because
there’s a record of every single behavior done by every person. Whereas face to face,
I can do something embarrassing but I can lobby later on for all of you not to tell
anybody; but in a virtual world that doesn’t work.
>>Male presenter: Other questions?
>>Jeremy: Thanks guys. Thanks a lot. Appreciate
it. [Applause]

Business Blueprint Success Story   Alex Henderson



I’m Alex from Captain Cabinets. What we do is we help joiners and builders simplify their life and their business. We take as
much off their plate as we can. Make their life a lot easier. Take the stress out of running their business. When I started I had no plan, I had no idea where I wanted to go
with it and I certainly had no idea how to get out of it. So you find yourself
working 80 to 90 hours a week. The tail starts to wag the dog and I thought I need help. I don’t know what I’m doing. I’ve got my apprenticeship. I’ve
got my trade training. I haven’t got an idea when it comes to running a business. I mean there’s more to marketing than putting out an ad and mailing daily. When I found
Blueprint, the beauty of it for me is they tell you how to do it, and they
don’t only tell you how to do it, they help you do it. So like during the
conferences we sit down and have workshops and they let you do it on the spot,
which for me is great. So now I’ve got a really clear goal of where I want to go
with the business and what I want to do with it. I’ve got an exit strategy in place:
what I want my business to look like in 10 years, knowing what I want it to look like
in two years, knowing what I need to do to get to that position. And then how that
involves the rest of my life as well. We’ve identified an area of the market
which needed servicing. We’ve identified our A grade clients and we’ve gone
looking for them. We might only have four or five really good clients but they
place, you know, up to five or six orders a week with us. We’ve doubled
sales: before, we had like two or three jobs, and now we’d have up to 50
jobs a month. It’s all on cloud-based software so anyone can check it from
their phones, tablet, laptop, whatever, and then we’ve also linked that with our
accounting software, so now we can turn a job straight into an invoice
without having to duplicate everything. Technology is certainly not my strong point but it’s been a huge time saver. We sat down with Sabiha, who’s just awesome when it comes to staff and HR, and it’s not just finding staff, it’s finding good
staff. So now we’ve got a really good group of guys. I’d recommend Business Blueprint to anyone who’s running a small business who is not really sure how to run a business. The content, and the quality of the content, that you get out of Blueprint really is great. It’s not only the content, it’s the people that you meet, and the Blueprint community is
actually pretty strong and honestly one of the best assets, but you can’t put a cost on that either. I think the company name almost doesn’t do itself justice, because it’s more of a life blueprint really; it has affected so many other things. It’s life changing.

How to install dreamweaver cs3 for making html or css pages less than 60 mb



Copy the software somewhere you won’t move or delete it. Open the software and wait a few seconds for it to extract. If it is not opening, it means it has not extracted well; let it extract. Once it has opened, it means it has extracted well. Now check whether it is working: check by making an HTML file and opening it with your browser. Like and subscribe to my channel.