The Iron Man Mark XLII, Tony Stark’s iconic main armor in Marvel’s Iron Man 3, is making a return in a “big” way! Hot Toys is delighted to create more amazing collectible figures for Iron Man fans around the world, and is very excited to officially introduce the stunningly detailed 1/4th scale Mark XLII collectible figure from Iron Man 3!

The movie-accurate 1/4th scale Mark XLII Collectible Figure is strikingly detailed and meticulously crafted based on the likeness of Robert Downey Jr. as Tony Stark/Mark XLII in Iron Man 3. Standing 49cm (20 inches) tall, it features a newly sculpted battle-damaged Tony Stark helmeted head sculpt, an interchangeable helmeted head, specially applied metallic gold, red and silver colors on the armor with weathering effects, a number of interchangeable battle-damaged armor parts, the battery booster Tony Stark used to recharge the armor as seen in the film, LED light-up functions on the eyes, arc reactor and repulsors, LED lights that shine through various areas of the armor, and a figure stand.

Pre-order

Scroll down to see the rest of the pictures. Click on them for bigger and better views.

The question to ask is: What if the Joker was a girl in Christopher Nolan’s “The Dark Knight”, the second part of Nolan’s The Dark Knight Trilogy and a sequel to 2005’s Batman Begins? No disrespect to Heath Ledger, who made the character his very own a… Continue reading →

The Emperor’s Royal Guard, known under the Republic as the Red Guard and under the Empire as the Imperial Royal Guard or Imperial Guard, were the personal bodyguards and assassins of Sheev Palpatine. Armed with force pikes and fully clad in their anony… Continue reading →

The human brain is a sophisticated learning machine, forming rules by memorizing everyday events (“sparrows can fly” and “pigeons can fly”) and generalizing those learnings to apply to things we haven’t seen before (“animals with wings can fly”). Perhaps more powerfully, memorization also allows us to further refine our generalized rules with exceptions (“penguins can’t fly”). As we were exploring how to advance machine intelligence, we asked ourselves the question—can we teach computers to learn like humans do, by combining the power of memorization and generalization?

It’s not an easy question to answer, but by jointly training a wide linear model (for memorization) alongside a deep neural network (for generalization), one can combine the strengths of both to bring us one step closer. At Google, we call it Wide & Deep Learning. It’s useful for generic large-scale regression and classification problems with sparse inputs (categorical features with a large number of possible feature values), such as recommender systems, search, and ranking problems.

How Wide & Deep Learning works

Let’s say one day you wake up with an idea for a new app called FoodIO*. A user of the app just needs to say out loud what kind of food they are craving (the query). The app magically predicts the dish that the user will like best, and the dish gets delivered to the user’s front door (the item). Your key metric is consumption rate—if a dish was eaten by the user, the score is 1; otherwise it’s 0 (the label).

You come up with some simple rules to start, like returning the items that match the most characters in the query, and you release the first version of FoodIO. Unfortunately, you find that the consumption rate is pretty low because the matches are too crude to be really useful (people shouting “fried chicken” end up getting “chicken fried rice”), so you decide to add machine learning to learn from the data.

The Wide model

In the second version, you want to memorize which items work best for each query. So, you train a linear model in TensorFlow with a wide set of cross-product feature transformations to capture how the co-occurrence of a query-item feature pair correlates with the target label (whether or not an item is consumed). The model predicts the probability of consumption P(consumption | query, item) for each item, and FoodIO delivers the item with the highest predicted consumption rate. For example, the model learns that the feature AND(query=”fried chicken”, item=”chicken and waffles”) is a huge win, while AND(query=”fried chicken”, item=”chicken fried rice”) doesn’t get as much love even though the character match is higher. In other words, FoodIO 2.0 does a pretty good job memorizing what users like, and it starts to get more traction.
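As an illustration only, here is a framework-free sketch of the memorization idea behind the wide model (the real FoodIO model is a TensorFlow linear model over cross-product features; the traffic log below is made up):

```python
from collections import defaultdict

def train_wide(log):
    """Estimate P(consumption | query, item) per cross-product feature
    AND(query, item) by counting consumption outcomes in the traffic log."""
    counts = defaultdict(lambda: [0, 0])  # (query, item) -> [consumed, shown]
    for query, item, consumed in log:
        counts[(query, item)][0] += consumed
        counts[(query, item)][1] += 1
    return {pair: c / n for pair, (c, n) in counts.items()}

def recommend(model, query, items):
    """Return the item with the highest memorized consumption rate."""
    return max(items, key=lambda item: model.get((query, item), 0.0))

# Hypothetical traffic log: (query, item, consumed-or-not label).
log = [
    ("fried chicken", "chicken and waffles", 1),
    ("fried chicken", "chicken and waffles", 1),
    ("fried chicken", "chicken fried rice", 0),
    ("fried chicken", "chicken fried rice", 1),
]
model = train_wide(log)
print(recommend(model, "fried chicken",
                ["chicken and waffles", "chicken fried rice"]))
# → chicken and waffles
```

Despite the stronger character match of “chicken fried rice”, the memorized cross-feature for “chicken and waffles” wins, which is exactly the behavior described above.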

The Deep model

Later on, you discover that many users are saying they’re tired of the recommendations. They’re eager to discover similar but different cuisines with a “surprise me” state of mind. So you brush up on your TensorFlow toolkit again and train a deep feed-forward neural network for FoodIO 3.0. With your deep model, you’re learning lower-dimensional dense representations (usually called embedding vectors) for every query and item. With that, FoodIO is able to generalize by matching items to queries that are close to each other in the embedding space. For example, you find that people who asked for “fried chicken” often don’t mind having “burgers” as well.
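To make the embedding-space intuition concrete, here is a toy sketch with hand-made 2-D vectors (a real deep model would learn these representations from data; the vectors and items below are invented for illustration):

```python
import math

# Hand-made 2-D embedding vectors for illustration only.
embedding = {
    "fried chicken": (0.9, 0.2),
    "burgers":       (0.8, 0.3),   # comfort food: close to "fried chicken"
    "sushi":         (0.1, 0.9),   # far away in the embedding space
}

def cosine(u, v):
    """Cosine similarity: higher means closer in the embedding space."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

query = embedding["fried chicken"]
ranked = sorted(["burgers", "sushi"],
                key=lambda item: cosine(query, embedding[item]),
                reverse=True)
print(ranked)  # → ['burgers', 'sushi']
```

Because “burgers” sits near “fried chicken” in this space, it ranks above “sushi” even though no query-item pair was ever memorized, which is the generalization behavior described above.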

Combining Wide and Deep models

However, you discover that the deep neural network sometimes generalizes too much and recommends irrelevant dishes. You dig into the historic traffic and find that there are actually two distinct types of query-item relationships in the data.

The first type of query is very targeted. People shouting very specific items like “iced decaf latte with nonfat milk” really mean it. Just because it’s pretty close to “hot latte with whole milk” in the embedding space doesn’t mean it’s an acceptable alternative. And there are millions of these rules where the transitivity of embeddings may actually do more harm than good. On the other hand, queries that are more exploratory, like “seafood” or “italian food”, may be open to more generalization and to discovering a diverse set of related items. Having realized this, you have an epiphany: why choose between wide and deep models? Why not both?

Finally, you build FoodIO 4.0 with Wide & Deep Learning in TensorFlow. As shown in the graph above, the sparse features like query=”fried chicken” and item=”chicken fried rice” are used in both the wide part (left) and the deep part (right) of the model. During training, the prediction errors are backpropagated to both sides to train the model parameters. The cross-feature transformation in the wide model component can memorize all those sparse, specific rules, while the deep model component can generalize to similar items via embeddings.
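A minimal numeric sketch of the combination step, assuming the usual formulation in which the wide and deep components each contribute a logit that is summed before a sigmoid (the logit values below are hypothetical):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def combined_score(wide_logit, deep_logit):
    """Joint prediction: the wide and deep logits are summed before the
    sigmoid; in joint training, prediction error backpropagates to both."""
    return sigmoid(wide_logit + deep_logit)

# Hypothetical logits: the wide part strongly remembers a specific rule,
# while the deep part adds a milder similarity-based signal.
wide_logit = 2.0   # memorized: AND(query="fried chicken", item="chicken and waffles")
deep_logit = 0.5   # embedding similarity between query and item
print(round(combined_score(wide_logit, deep_logit), 3))  # → 0.924
```

Because both parts feed a single prediction, a sparse memorized rule can override or reinforce the embedding-based score, matching the division of labor described above.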

Wider. Deeper. Together.

We’re excited to share the TensorFlow API and implementation of Wide & Deep Learning with you, so you can try out your ideas with it and share your findings with everyone else. To get started, check out the code on GitHub and our TensorFlow tutorials on Linear Models and Wide & Deep Learning.

It’s one of the most iconic outfits in Star Wars, and sci-fi in general: The metal bikini Princess Leia is forced into by Jabba the Hutt as seen in Star Wars Episode VI: Return of the Jedi. But it’s not without controversy — and Disney may be making moves to remove the skimpy outfit from future Star Wars marketing and merchandise for good.

Science Ninja Team Gatchaman (科学忍者隊ガッチャマン Kagaku Ninjatai Gatchaman) is a five-member superhero team composed of the main characters of several anime created by Tatsuo Yoshida, originally produced in Japan by Tatsunoko Productions and later adapted into several English-language versions. The team is also known as Gatchaman.

The original series, produced in 1972, was eponymously entitled Kagaku Ninja Tai Gatchaman and is best known in the English-speaking world through the adaptation entitled Battle of the Planets (1978). An additional English adaptation followed with G-Force: Guardians of Space (1986).

This is Play Toy HK VINART Gatchaman 11-inch tall Vinyl Figures (G1 – G5). Each vinyl figure comes with a main body, removable mask and weapon. Most are approximately 11 inches tall, except for G4 Jinpei, the Swallow, whose figure is shorter: in the story Jinpei is about ten or eleven years old, while the rest of the team are in their late teens.

Scroll down to see the rest of the pictures. Click on them for bigger and better views.

VINART Gatchaman Ken Vinyl Figure – G1 / the Eagle. Ken Washio (鷲尾 健 Washio Ken), a pilot, is the leader of the Science Ninja Team; “Gatchaman” designates the team leader. Ken’s father disappeared during a flight and became Red Impulse, so Ken grew up not knowing his father and was raised by Dr. Nambu. He is called Mark in Battle of the Planets. His weapon is the Razor Sonic boomerang. His mecha is an airplane.

VINART Gatchaman Joe Vinyl Figure – G2 / the Condor. Joe Asakura (ジョー 浅倉) is an Italian of Japanese descent. A race car driver, he is the sub-leader of the team. Joe was born George Asakura (ジョージ 浅倉 Jōji Asakura), the son of Giuseppe Asakura and his wife Caterina, members of Galactor who were killed by a Galactor rose bomb when they tried to escape. Dr. Nambu rescued the boy, named him Jō to hide him from Galactor, and raised him as his son. He is called Jason in Battle of the Planets. His weapons are the harpoon pistol and shuriken. His mecha is a race car.

VINART Gatchaman Jun Vinyl Figure – G3 / the Swan. Jun (ジュン) is an American of Japanese descent. Raised in an orphanage, her last name is not disclosed in the anime. In her free time she enjoys riding her motorcycle, and she runs Snack Bar J. She is called Princess in Battle of the Planets. Her weapon is the yo-yo. Her mecha is a motorcycle.

VINART Gatchaman Ryu Vinyl Figure – G5 / the Owl. Ryu Nakanishi, a fisherman’s son, is the manager of a yacht harbor and the main pilot of the God Phoenix. He is the only member of the team who has a family (parents and a younger brother). He is called Tiny Harper in Battle of the Planets. His weapon is the harpoon pistol, though he mostly uses his fists. His mecha is the God Phoenix.

VINART Gatchaman Jinpei Vinyl Figure – G4 / the Swallow. Jinpei (甚平) was also an orphan and grew up with Jun. His last name is not disclosed in the anime either, and he lives in Snack Bar J with Jun. He is called Keyop in Battle of the Planets. His weapon is the bolas. His mecha is a dune buggy.

This week, Las Vegas hosts the 2016 Conference on Computer Vision and Pattern Recognition (CVPR 2016), the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. As a leader in computer vision research, Google has a strong presence at CVPR 2016, with many Googlers presenting papers and invited talks at the conference, tutorials and workshops.

We congratulate Google Research Scientist Ce Liu and Google Faculty Advisor Abhinav Gupta, who were selected as this year’s recipients of the PAMI Young Researcher Award for outstanding research contributions within computer vision. We also congratulate Googler Henrik Stewenius for receiving the Longuet-Higgins Prize, a retrospective award that recognizes up to two CVPR papers from ten years ago that have made a significant impact on computer vision research. He received it for his 2006 CVPR paper “Scalable Recognition with a Vocabulary Tree”, co-authored with David Nister during their time at the University of Kentucky.

If you are attending CVPR this year, please stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving interesting problems for hundreds of millions of people. The Google booth will also showcase several recent efforts, including the technology behind Motion Stills, a live demo of neural network-based image compression and TensorFlow-Slim, the lightweight library for defining, training and evaluating models in TensorFlow. Learn more about our research being presented at CVPR 2016 in the list below (Googlers highlighted in blue).

Kingsman: The Secret Service is a 2014 British-American spy action comedy film directed by Matthew Vaughn, and based on the comic book The Secret Service, created by Dave Gibbons and Mark Millar. It follows the recruitment and training of a potential secret agent, Gary “Eggsy” Unwin (Taron Egerton), into a secret spy organisation. Eggsy joins a mission to tackle a global threat from Richmond Valentine (Samuel L. Jackson), a wealthy megalomaniac. Colin Firth is cast as Harry Hart / Galahad, Eggsy’s mentor and a Kingsman agent.

At Google, we’re passionate about empowering children to create and explore with technology. We believe that when children learn to code, they’re not just learning how to program a computer—they’re learning a new language for creative expression and are developing computational thinking: a skillset for solving problems of all kinds.

Today, we’re happy to announce Project Bloks, a research collaboration between Google, Paulo Blikstein (Stanford University) and IDEO with the goal of creating an open hardware platform that researchers, developers and designers can use to build physical coding experiences. As a first step, we’ve created a system for tangible programming and built a working prototype with it. We’re sharing our progress before conducting more research over the summer to inform what comes next.

Physical coding

Kids are inherently playful and social. They naturally play and learn by using their hands, building stuff and doing things together. Making code physical – known as tangible programming – offers a unique way to combine the way children innately play and learn with computational thinking.

However, designing kits for tangible programming is challenging—requiring the resources and time to develop both the software and the hardware. Our goal is to remove those barriers. By creating an open platform, Project Bloks will allow designers, developers and researchers to focus on innovating, experimenting and creating new ways to help kids develop computational thinking. Our vision is that, one day, the Project Bloks platform becomes for tangible programming what Blockly is for on-screen programming.

The Project Bloks system

We’ve designed a system that developers can customise, reconfigure and rearrange to create all kinds of different tangible programming experiences.

A birdseye view of the customisable and reconfigurable Project Bloks system

The Project Bloks system is made up of three core components: the “Brain Board”, “Base Boards” and “Pucks”. When connected together, they create a set of instructions that can be sent to connected devices, such as toys or tablets, over WiFi or Bluetooth.

The three core components of the Project Bloks system

Pucks: abundant, inexpensive, customisable physical instructions

Pucks are what make the Project Bloks system so versatile. They help bring the infinite flexibility of software programming commands to tangible programming experiences. Pucks can be programmed with different instructions, such as ‘turn on or off’, ‘move left’ or ‘jump’. They can also take the shape of many different interactive forms—like switches, dials or buttons. With no active electronic components, they’re also incredibly cheap and easy to make. At a minimum, all you’d need to make a puck is a piece of paper and some conductive ink.

Pucks allow an endless number of different domain-specific physical instructions to be created and customised cheaply and easily.
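As a purely hypothetical sketch of how a connected toy might act on a sequence of puck instructions like those above (the Toy class, its state, and the instruction handling are illustrative inventions, not part of Project Bloks):

```python
class Toy:
    """Hypothetical connected toy that interprets puck instructions
    relayed by the Brain Board, one at a time, in order."""

    def __init__(self):
        self.on = False
        self.x = 0       # horizontal position
        self.jumps = 0   # number of jumps performed

    def run(self, instructions):
        for instruction in instructions:
            if instruction == "turn on":
                self.on = True
            elif instruction == "turn off":
                self.on = False
            elif instruction == "move left" and self.on:
                self.x -= 1
            elif instruction == "jump" and self.on:
                self.jumps += 1

toy = Toy()
toy.run(["turn on", "move left", "move left", "jump"])
print(toy.x, toy.jumps)  # → -2 1
```

The point of the sketch is only that each puck contributes one symbolic instruction, and the receiving device decides what that instruction means.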

Base Boards: a modular design for diverse tangible programming experiences

Base Boards read a Puck’s instruction through a capacitive sensor. They act as a conduit for a Puck’s command to the Brain Board. Base Boards are modular and can be connected in sequence and in different orientations to create different programming flows and experiences.

The modularity of the Base Boards means they can be arranged in different configurations and flows

Each Base Board is fitted with a haptic motor and LEDs that can be used to give end-users real time feedback on their programming experience. The Base Boards can also trigger audio feedback from the Brain Board’s built-in speaker.

Brain Board: control any device that has an API over WiFi or Bluetooth

The Brain Board is the processing unit of the system, built on a Raspberry Pi Zero. It also provides the other boards with power, and contains an API to receive and send data to the Base Boards. It sends the Base Boards’ instructions to any device with WiFi or Bluetooth connectivity and an API.

As a whole, the Project Bloks system can take on different form factors and be made out of different materials. This means developers have the flexibility to create diverse experiences that can help kids develop computational thinking: from composing music using functions to playing around with sensors or anything else they care to invent.

The Project Bloks system can be used to create all sorts of different physical programming experiences for kids

The Coding Kit

To show how designers, developers and researchers might make use of the system, the Project Bloks team worked with IDEO to create a reference device, called the Coding Kit. It lets kids learn basic concepts of programming by allowing them to put code bricks together to create a set of instructions that can be sent to control connected toys and devices—anything from a tablet, to a drawing robot or educational tools for exploring science like LEGO® Education WeDo 2.0.

What’s next?

We are looking for participants (educators, developers, parents and researchers) from around the world who would like to help shape the future of computer science education by remotely taking part in our research studies later in the year. If you would like to be part of our research study, or simply receive updates on the project, please sign up.

If you want more context and detail on Project Bloks, you can read our position paper.

Finally, a big thank you to the team beyond Google who’ve helped us get this far—including the pioneers of tangible learning and programming who’ve inspired us and informed so much of our thinking.