OTN TechBlog


Why Your Developer Story is Important

Tue, 2018-11-06 02:45

Stories are a window into life. When they resonate, they provide insights into our own lives or the lives of others. They can help us transmit knowledge, pass on traditions, solve present-day problems, or imagine alternate realities. Open source software is an example of an alternate reality in software development, where proprietary code has been replaced in large part with shared code that is free and open. Why is this relevant not only to developers but to everyone who works in technology? It is human nature to want to keep growing, learning, and sharing.

 

With this in mind, I started 60 Second Developer Stories and tried them out at various Oracle Code events, at developer conferences, and now at Oracle OpenWorld/Code One 2018. For the latter we had a Video Hangout in the Groundbreakers Hub at Code One, where anyone with a story to share could do so. We livestream each story via Periscope/Twitter, record it, and edit and post it later on YouTube. In the Video Hangout we use a green screen and, through the miracle of chroma-key technology, drop in a cool backdrop. Below are some photos of the Video Hangout as well as the ideas we give as suggestions.


  •     Share what you learned on your first job.
  •     Share a best coding practice.
  •     Explain how a tool or technology works.
  •     What have you learned recently about building an app?
  •     Share a work-related accomplishment.
  •     What's the best decision you ever made?
  •     What's the worst mistake you ever made, and the lesson learned?
  •     What is one thing you learned from a mentor or peer that has really helped you?
  •     Any story that you want to share and that the community can benefit from.


Here are some FAQs about the 60 Second Developer Story

 

Q1. I am too shy, and since this is live, what if I get it wrong?

A1. It is your story; there is no right or wrong. If you mess up, it's not a problem; we can do a retake.

 

Q2. There are so many stories, how do I pick one?

A2. Share something specific: an event with a beginning, a middle, and an end. Ideally there was a challenge or obstacle and you overcame it. As long as it is meaningful to you, it is worth sharing.

 

Q3. What if it’s not exactly 60 seconds, if it’s shorter or longer?

A3. Sixty seconds is a guideline. I will usually show you a cue card to let you know when you have 30 seconds and 15 seconds left. A little bit over or under is not a big deal.

 

Q4. When can I see the results?

A4. Usually immediately. Your story appears on whatever Periscope/Twitter handle we are sharing on, and if you have a personal Twitter handle, we tweet that before you go live, so it will show up on your feed.

 

Q5. What if I am not a developer?

A5. We use "developer" in a broad sense. It doesn't matter if you are a DBA, an analyst, or anything else. If you are involved with technology and have a story to share, we want to hear it.

 

 

Here is an example of a 60 Second Developer Story.

We hope to have the Video Hangout at future Oracle Code and other events, and we look forward to hearing your 60 Second story.

New in Developer Cloud - Fn Support and Wercker Integration

Mon, 2018-11-05 12:19

Over the weekend we rolled out an update to your Oracle Developer Cloud Service instances that introduces several new features. In this blog post we'll quickly review two of them: support for the Fn project and integration with the Wercker CI/CD solution. These new features further enhance the scope of CI/CD functionality that you get in our team development platform.

Project Fn Build Support

Fn is an open-source function-as-a-service platform led by Oracle, aimed at developers looking to build portable functions in a variety of languages. If you are not familiar with Project Fn, this blog is a good intro to why you should care, and you can learn more through the Fn project's home page on GitHub.

In the latest version of Developer Cloud you have a new option in the build-steps menu that lets you define various Fn-related commands as part of your build process. For example, if your Fn project code is hosted in the Git repository provided by your DevCS project, you can use the build step to automate building and deploying the function you created.
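As a rough sketch, the commands such a build step automates follow the standard Fn CLI flow shown below. The app and function names here are hypothetical, and this assumes the Fn CLI and an Fn server are available; the script only prints the commands rather than executing them, since it is a sketch:

```shell
# Sketch of the Fn CLI flow a DevCS build step could automate.
# "car-app" and "hello-fn" are hypothetical names.
APP=car-app

run() { echo "+ $*"; }   # swap 'echo' for real execution outside this sketch

run fn build                   # build the function image from func.yaml
run fn deploy --app "$APP"     # deploy the new version to the app
run fn invoke "$APP" hello-fn  # smoke-test the deployed function
```

A build step configured in the DevCS UI would wire the equivalent of these commands into the pipeline against the project's Git repository.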

Fn Build Step

Wercker/ Oracle Container Pipelines Integration

A while back Oracle purchased a Docker-native CI/CD solution called Wercker, which is now also offered as part of Oracle Cloud Infrastructure under the name Oracle Container Pipelines. Wercker is focused on CI/CD automation for Docker- and Kubernetes-based microservices. As you probably know, we also offer similar support for Docker and Kubernetes in Developer Cloud Service, which supports declarative definition of Docker build steps and can run kubectl scripts in its build pipelines.

If you have an investment in Wercker-based CI/CD and you want a more complete agile/DevOps feature set - such as the functionality offered by Developer Cloud Service (including free private Git repositories, issue tracking, agile boards, and more) - you can now integrate the two solutions without losing your investment in Wercker pipelines.

For a while now, Oracle Container Pipelines has supported picking up code directly from a Git repository hosted in Developer Cloud Service.

Wercker selecting DevCS

Now we have added support for Developer Cloud Service to invoke pipelines you defined in Wercker directly as part of its build jobs and pipelines. Once you provide DevCS with your personal token for logging into Wercker, you can pick the specific applications and pipelines that you would like to execute as part of your build jobs.

Wercker build step

 

There are several other new features and enhancements in this month's release of Oracle Developer Cloud; you can read about those on our What's New page.

 

Making an IoT Badge – #badgelife going corporate

Thu, 2018-11-01 15:33

By Noel Portugal,  Senior Cloud Experience Developer at Oracle

 

Code Card 2018

For years I've been wanting to create something fun with the almighty esp8266 WiFi chip. I started experimenting with the esp8266 almost exactly four years ago. Back then there were no Arduino, Lua, or even MicroPython ports for the chip, only the C Espressif SDK. Today it is fairly easy to write firmware for the ESP given how many documented projects are out there.

IoT Badge by fab-lab.eu

Two years ago I came very close to actually producing something with the esp8266. We, the AppsLab team, partnered with the Oracle Technology Network team (now known as the Oracle Groundbreakers team) to offer an IoT workshop at Oracle OpenWorld 2016. I reached out to friend-of-the-lab Guido Burger from fab-lab.eu, and he came up with a clever design for an IoT badge - the Swiss Army knife of IoT dev badges/kits. Unfortunately, we ran out of time to mass-produce this badge and had to shelve the idea.

Instead, we decided that year to use an off-the-shelf NodeMCU board to introduce attendees to hardware that can talk to the cloud. The next year, we updated the IoT workshop curriculum to use the Wio Node board from Seeed Studio.

Fast forward to 2018. I had been following emerging use cases for e-ink screens, and I started experimenting with them. Then the opportunity came: we needed something to highlight how easy it is to deploy serverless functions with the Fn project. A physical device that could retrieve content from the cloud and display it was the perfect answer for me.

I reached out to Squarofumi, the creators of Badgy, and we worked together to come up with the right specs for what we ended up calling the Code Card. The Code Card is an IoT badge powered by the esp8266, a rechargeable coin battery, and an e-ink display.

I suggested using the same technique I used to create my smart esp8266 button. When either button A or B is pressed, it sets the esp8266 enable pin high; the first thing the firmware does is keep the pin high until we are done making an HTTP request and updating the e-ink screen. When we are done, we set the enable pin low and the chip turns off completely (not standby). This allows the battery to last much longer.

To make it even easier for busy attendees to get started, I created a web app that was included in the official event app. The Code Card Designer lets you choose from different templates and assign them to a button press (short and long press).

You can also choose an icon from those pre-loaded in the firmware. Sadly, at the last minute I had to remove one of the coolest features: the ability to upload your own picture. The feature was just not reliable enough and often failed. With more time it can be re-introduced.

After attendees used the Code Card Designer, they were ready for more complex stuff. All they needed to do was connect the Card to their laptops and talk to it over serial. I created a custom Electron terminal to make it easier to access a custom CLI for changing the button endpoints and SSID information.

A serverless function or any other endpoint returning the required JSON is all that is needed to start modifying your Card.
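To give a feel for this, here is a sketch of what such an endpoint might return. The field names below are purely illustrative, not the card's real contract (the actual schema is defined in the published Arduino source):

```shell
# Emit a hypothetical Code Card payload; field names are illustrative only.
cat <<'EOF'
{
  "template": "template1",
  "title": "Hello, Code One!",
  "subtitle": "Served from a serverless Fn function",
  "icon": "oracle"
}
EOF
```

Any HTTP endpoint that returns JSON shaped like this - an Fn function, or even a static file - could then drive what the card displays on a button press.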

 

I published the Arduino source code along with other documentation. It didn't take long for attendees to start messing around with the C code's image arrays to change their icons.

Lastly, if you paid attention you may have noticed that we added two Grove headers so you can connect analog or digital sensors. More fun!

Go check out and clone the whole GitHub repo. You can prototype your own "badge" using an off-the-shelf e-ink board similar to this one.

#badgelife!

Oracle Code One, Day Four Updates and Wrap Up

Fri, 2018-10-26 18:18

It's been an educational, inspirational, and insightful four days at Oracle Code One in San Francisco. This was the first time Oracle Code One and Oracle OpenWorld were run side by side. Attendees chose from the 2,500 sessions, a majority of them featuring customers and partners who overcame real-world challenges. We also had an exhibition floor with Oracle Code One partners, and the Groundbreakers Hub, where attendees toyed around with blockchain, IoT, AI, and other emerging technologies. Personally, I felt inspired by a team of high school students who used design thinking, thermographic cameras, and pattern recognition to help detect early-stage cancer.

Java Keynote

The highlight of Day 1 was the Java keynote. Matthew McCullough from GitHub talked about the importance of building a development community and growing it one developer at a time. He also shared that Java has been the second most popular language on GitHub, behind JavaScript. Rafer Hazen, manager of the data pipelines team at GitHub, shared similar views on Java:

“Java’s strengths in parallelism and concurrency, its performance, its type system, and its massive ecosystem all make it a really good fit for building data infrastructure.”

Developers from Oracle then unveiled Project Skara, which can be used for the code review and code management practices of the JDK (Java Development Kit).

Georges Saab, Vice President at Oracle, announced the following fulfilled commitments, which were originally made last year:

  • Making Java more open: remaining closed-source features have been contributed to Open JDK
  • Delivering enhancements and innovation faster: Oracle is adopting a predictable 6-month cadence so that developers can access new features sooner
  • Continuing support for the Java ecosystem: specific releases will be provided with LTS (long-term support)

Mark Reinhold, Chief Architect of the Java Platform, then elaborated on major architectural changes to the Java platform. Though Oracle has moved to a six-month release cadence with certain builds supported long-term (LTS builds), he reiterated that "Java is still free." Previously closed-source features such as Application Class-Data Sharing, Java Flight Recorder, and Java Mission Control are now available as open source.

Mark also showcased Java’s features to improve developer productivity and program performance, in the face of evolving programming paradigms. Key projects to meet these two goals include Amber, Loom, Panama, and Valhalla.

Code One Keynote

The Code One Keynote on Tuesday was kicked-off by Amit Zavery, Executive Vice President at Oracle, who elaborated on major application trends:

  • Microservices and Serverless Architectures, which provide better infrastructure efficiency and developer productivity

  • DevSecOps, with a move to NoOps, which requires a different mindset in engineering teams

  • The importance of open source, which was also highlighted in Mark Reinhold’s talk at the Java keynote

  • The need for Digital Assistants, which provide a different interface for interaction, and a different UX requirement

  • Blockchain-based distributed transactions and ledgers

  • The importance of embedding AI/ML into applications

Amit also covered Oracle Cloud Platform’s comprehensive portfolio, which spans across the application development trends above, as well as other areas.

Dee Kumar, Vice President for Marketing and Developer Relations at CNCF, talked about digital transformation which depends on cloud native computing and open source. Per Dee, Kubernetes is second only to Linux when measured by number of authors. Dee emphasized that containerization is the first step in becoming a cloud native organization.

For organizations considering cloud native technology, the benefits of cloud native projects, per the CNCF's bi-annual surveys, include:

  • Faster deployment time

  • Improved scalability

  • Cloud portability

Matt Thompson, Vice President of Developer Engagement and Evangelism, hosted a session about “Building in the Cloud.” Matt Baldwin and Manish Kapur from Oracle conducted live demos featuring chatbots/digital assistants (conversational interfaces), serverless functions, and blockchain ledgers.

Groundbreaker Panel and Award Winners

Also on Tuesday, Stephen Chin led a talk on the Oracle Groundbreakers Awards through which Oracle seeks to recognize technology innovators. The Groundbreakers Award Winners for 2018 are:

  • Neha Narkhede: co-creator of Apache Kafka

  • Guido van Rossum: Creator of Python

  • Doug Cutting: Co-creator of Hadoop

  • Graeme Rocher: Creator of Grails

  • Charles Nutter: Co-creator of JRuby

In addition, Stephen recognized the Code One stars, individuals who were the best speakers at the conference, evangelists of open source and emerging technologies, and leaders in the community.

Duke’s Choice Award Winners

The Java team, represented by Georges Saab, also announced winners of the Duke’s Choice Awards, which were given to top projects and individuals in the Java community. Award winners included:

Customer Spotlight

We had customers join us to talk further about their use of Oracle Cloud Platform:

  • Mitsubishi Electric: Leveraged Oracle Cloud for AI, IoT, SaaS, and PaaS to achieve 60% increase in operating rate, 55% decrease in manual processes, and 85% reduction in floor space

  • CargoSmart: Used Blockchain to integrate custom ERP and Supply Chain on Oracle Database. Achieved 65% reduction in time taken to collect, consolidate, and confirm data.

  • AllianceData: Moved over 6TB to Oracle Cloud Infrastructure – PeopleSoft, EPM, Exadata, and Windows, thereby saving $1M/year in licensing and support

  • AkerBP: Achieved elastic scalability with Oracle Cloud, running reports in seconds instead of 20 minutes and eliminating downtimes due to database patching

Groundbreakers Hub

The Groundbreakers Hub featured a number of interesting demos on AI and chatbots, personalized manufacturing leveraging Oracle’s IoT Cloud, robotics, and even early-stage cancer detection. Here are some of the highlights.

Personalized Manufacturing using Oracle IoT Cloud

This was one of the most popular areas in the Hub. Here is how the demo worked:

  • A robotic arm grabbed a piece of inventory (a coaster) using a camera. The camera used computer vision to detect placement of the coaster.

  • The arm then moved across and dropped the coaster onto a conveyer belt

  • The belt moved past a laser engraver, which engraved custom text, like your name, on the coaster

Oracle Cloud, including IoT Cloud and SCM (Supply Chain Management) Cloud, was leveraged throughout this process to monitor the production equipment, inventory, and engraving. Check out the video clip below.

3D Rendering with Raspberry Pis and Oracle Cloud

Another cool spot was the "Bullet Time" photo booth. Using fifty Raspberry Pis equipped with cameras, images were captured all around me. These images were then sent to the Oracle Cloud to be stitched together. The final output -- a video -- was sent to me via SMS.

Cancer Detection by High School Students

We also had high school students from DesignTech, which is supported by the Oracle Education Foundation. Among many projects, these students created a device to detect early-stage cancer using a thermographic (heat-sensitive) camera and a touchscreen display. An impressive high school project!

Summary

Java continues to be a leading development language, and is used extensively at companies such as Github. To keep pace with innovation in the industry, Java is moving to a 6-month release cadence. Oracle has a keen interest in emerging technologies, such as AI/ML, Blockchain, Containers, Serverless Functions and DevSecOps/NoOps. Oracle recognized innovators and leaders in the industry through the Groundbreakers Awards and Duke’s Choice Awards.

That’s just some of the highlights from Oracle Code One 2018. We look forward to seeing you next time!

 

All Things Developer at Oracle Code One - Day 3

Thu, 2018-10-25 15:25

 

Community matters! When it comes to a code conference, it has to be about the community, and Stephen Chin and his superheroes proved that on stage last night with the Code One Avengers keynote. The action-packed superheroes stole the thunder of Code One on Day 3.

Some of us were backstage with the superheroes, and the excitement and energy were just phenomenal. We want to tell this story in pictures, but what are these Avengers fighting for?

We will, of course, start with Dr. Strange's address to his fellow superheroes of code, which drew more than a quarter million viewers on Twitter. And then his troupe followed! The mural, comic strips, animations, screenplay, and cast came together brilliantly. Congrats to the entire superheroes team!

Here are some highlights from the keynote to recap:

https://www.facebook.com/OracleCodeOne/videos/2486105168071342/

The Oracle Code One Team Heads to CloudFest18

The remaining thunder was stolen by Portugal. The Man, Beck, and Bleachers at the CloudFest18 rock concert at AT&T Park. Jam-packed with Oracle customers, employees, and partners from TCS, the park was just electric!

Hands-on Labs Kept Rolling!

Here is the NoSQL hands-on lab in action, delivered by the crew. One API to many NoSQL databases!

The Groundbreakers Hub was Busy!

The Hub was busy with Pepper, more Groundbreaker live interviews, video hangouts, Zip labs, Code Card pickups, bullet-time photo booths, superhero escape rooms, Hackergarten, and our favorite cloud DJ - Sonic Pi! Stephen Chin recaps what's hot at the Hub right here.

And a quick run of the bullet time photo booth. Rex Wang in action!

Sam Craft, our first Zip lab winner!

Code One Content in Action

Click here for a quick 30-second recap of other things on Day 3 at Oracle Code One.

Groundbreaker live interviews with Jesse Butler and Karthik Gaekwad on cloud native technologies and the Fn project -

https://twitter.com/OracleDevs/status/1055169708192751616

Groundbreaker live interview on AI and ML

https://twitter.com/OracleDevs/status/1055183021316292608

Groundbreaker live interviews on building RESTful APIs - 

https://twitter.com/OracleDevs/status/1055230551324483584

Groundbreaker live interviews with the design tech school on The All Jacked Up Project - 

https://twitter.com/OracleDevs/status/1055224092456960000

Groundbreaker live interviews on NetBeans

https://twitter.com/OracleDevs/status/1055206463809843200

And interesting live video hangouts on diversity in tech and women in tech

https://twitter.com/groundbreakers/status/1055161816333017088

https://twitter.com/groundbreakers/status/1055147716903239681

All Things Developer at Oracle Code One - Day 2

Wed, 2018-10-24 02:27
Live from Oracle Code One - Day Two

There was tons of action today at Oracle Code One. From Zip labs and challenges, to an all-women developer community breakfast, the Duke's Choice Awards, the Oracle Code keynotes, and the debut Groundbreaker Awards, it was all happening at Code One. Pepper was quite busy, and so was the blockchain beer bar!

Zip Labs, Zip Lab Challenges and Hands-on Labs

Zip labs are running all four days. So if you want to dabble with Oracle Cloud, or learn how to provision its various services, go up to the second floor of Moscone West and sign up.

You can sign in for a 15-minute lab challenge on Oracle Cloud content and see your name on the leaderboard as the person to beat. Choose from labs including Oracle Autonomous Data Warehouse, Oracle Autonomous Transaction Processing, and Virtual Machines.

Hands-on labs are ongoing every day, but today's container-native labs were quite a hit.

Oracle Women's Leadership Developer Community Breakfast

We had breakfast this morning with several women developers from across the globe. It was quite insightful to learn about their lives and experiences in code.

The Duke's Choice Awards and Groundbreaker Live Interviews

Georges Saab announced the Duke's Choice Award winners at Code One today.

Some exciting Groundbreaker live interviews:

Jim Grisanzio and Gerald Venzl talk about Oracle Autonomous Database

Bob Rhubart, Ashley Sullivan, and the Design Tech students discuss the Vida Cam project

The Oracle Code One Keynotes and Groundbreaker Awards in Pictures

Building Next-Gen Cloud Native Apps with Embedded Intelligence, Chatbots, and Containers: Amit Zavery, Executive Vice President, PaaS Development, Oracle talks about how developers can leverage the power of the Oracle cloud.

Making Cloud Native Computing Universal and Sustainably Harnessing the Power of Open Source: Dee Kumar, Vice President, Cloud Native Computing Foundation congratulates Oracle on successfully becoming a Platinum member of CNCF.

 

 

Building for the Cloud: Matt Thompson, Developer Engagement and Evangelism, Oracle Cloud Platform talks about how a cloud works best - when it is open, secure, and all things productive for the developer. 

 

 

 

Demos: Serverless, Chatbots, Blockchain...

 

Manish Kapur, Director of Product Management for Cloud Platform, showed a cool demo of a new serverless/microservices-based cloud architecture for selling and buying a car.

 

 

Matt Baldwin talked about the DNA of Blockchain and how it is used in the context of selling and buying a car.

 

 

And the Oracle Code One Groundbreaker Awards go to:

 

Stephen Chin, Director of Developer Community, announces the debut Groundbreaker awards and moderates a star panel with the winners.

 

 

We had more than 200K viewers of this panel on the Oracle Code One Twitter live stream today! There were lots of interesting and diverse questions for the panel from the Oracle Groundbreaker Twitter channel. For more information about Oracle Groundbreakers, click here. And now, on to Day 3 of Code One!

 

Oracle Database 18c XE on Oracle Cloud Infrastructure: A Mere Yum Install Away

Tue, 2018-10-23 14:38

It's a busy week at OpenWorld 2018. So busy that we didn't get around to mentioning that Oracle Database 18c Express Edition is now available on Oracle Cloud Infrastructure (OCI) yum servers! This means it's easy to install this full-featured Oracle Database for developers on an OCI compute shape without incurring any extra networking charges. In this blog post I demonstrate how to install, configure, and connect to Oracle Database 18c XE on OCI.

Installing Oracle Database 18c XE on Oracle Cloud Infrastructure

From a compute shape in OCI, grab the latest version of the repo definition from the yum server local to your region as follows:

cd /etc/yum.repos.d
sudo mv public-yum-ol7.repo public-yum-ol7.repo.bak
export REGION=`curl http://169.254.169.254/opc/v1/instance/ -s | jq -r '.region'| cut -d '-' -f 2`
sudo -E wget http://yum-$REGION.oracle.com/yum-$REGION-ol7.repo

Enable the ol7_oci_included repo:

sudo yum-config-manager --enable ol7_oci_included

Here you see the Oracle Database 18c XE RPM is available in the yum repositories:

$ yum info oracle-database-xe-18c
Loaded plugins: langpacks, ulninfo
Available Packages
Name        : oracle-database-xe-18c
Arch        : x86_64
Version     : 1.0
Release     : 1
Size        : 2.4 G
Repo        : ol7_oci_included/x86_64
Summary     : Oracle 18c Express Edition Database
URL         : http://www.oracle.com
License     : Oracle Corporation
Description : Oracle 18c Express Edition Database

Let's install it.

$ sudo yum install oracle-database-xe-18c
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-xe-18c.x86_64 0:1.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================
 Package                    Arch      Version    Repository          Size
=========================================================================================================
Installing:
 oracle-database-xe-18c     x86_64    1.0-1      ol7_oci_included    2.4 G

Transaction Summary
=========================================================================================================
Install  1 Package

Total download size: 2.4 G
Installed size: 5.2 G
Is this ok [y/d/N]: y
Downloading packages:
oracle-database-xe-18c-1.0-1.x86_64.rpm                                       | 2.4 GB  00:01:13
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oracle-database-xe-18c-1.0-1.x86_64                                                   1/1
[INFO] Executing post installation scripts...
[INFO] Oracle home installed successfully and ready to be configured.
To configure Oracle Database XE, optionally modify the parameters in '/etc/sysconfig/oracle-xe-18c.conf'
and then execute '/etc/init.d/oracle-xe-18c configure' as root.
  Verifying  : oracle-database-xe-18c-1.0-1.x86_64                                                   1/1

Installed:
  oracle-database-xe-18c.x86_64 0:1.0-1

Complete!
$

Configuring Oracle Database 18c XE

With the software now installed, the next step is to configure it:

$ sudo /etc/init.d/oracle-xe-18c configure
Specify a password to be used for database accounts. Oracle recommends that the password entered should
be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and
1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password: **************
Enter SYSTEM user password: ************
Enter PDBADMIN User Password: **************
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete 31% complete 34% complete 38% complete 41% complete 43% complete
Completing Database Creation
47% complete 50% complete
Creating Pluggable Databases
54% complete 71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at: /opt/oracle/cfgtoollogs/dbca/XE.
Database Information:
Global Database Name:XE
System Identifier(SID):XE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.
Connect to Oracle Database using one of the connect strings:
Pluggable database: instance-20181023-1035/XEPDB1
Multitenant container database: instance-20181023-1035
Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE

Connecting to Oracle Database 18c XE

To connect to the database, use the oraenv script to set the necessary environment variables, entering XE as the ORACLE_SID.

$ . oraenv
ORACLE_SID = [opc] ? XE
ORACLE_BASE environment variable is not being set since this information is not available for the
current user ID opc. You can set ORACLE_BASE manually if it is required.
Resetting ORACLE_BASE to its previous value or ORACLE_HOME
The Oracle base has been set to /opt/oracle/product/18c/dbhomeXE
$

Then, connect as usual using sqlplus:

$ sqlplus sys/OpenWorld2018 as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Tue Oct 23 19:13:23 2018
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

SQL> select 1 from dual;

         1
----------
         1

SQL>

Conclusion

Whether you are a developer looking to get started quickly building applications on your own full-featured Oracle Database, or an ISV prototyping solutions that require an embedded database, installing Oracle Database XE on OCI is an excellent way to get started. With Oracle Database 18c XE available as an RPM inside OCI via yum, it doesn't get any easier.

All Things Developer at Oracle CodeOne - Day 1 Recap

Tue, 2018-10-23 00:45

Live from Oracle CodeOne - Day One

A lot of action, energy, and fun here on the first day of Oracle CodeOne 2018. From all the fun at the Developers Exchange to the cool things at the Groundbreakers Hub, we've covered it all for you! So let's get started!

Here's a one minute recap that tells the day one story, in well, a minute!

The Groundbreakers Hub.

We announced a new developer brand at Oracle CodeOne today, and it is... Groundbreakers - yes, you got it! The Groundbreakers Hub is the lounge for developers, nerds, geeks, and tech enthusiasts - anyone who wants to hang out with the fellow developer community.

There's the Groundbreaker live stage, where we got customers talking about their experiences with Oracle Cloud - over 30 great stories on record today. Kudos to the interviewers, Javed Mohammed and Bob Rhubart.

The video hangout was a casual, sassy corner to share stories of the code you built, the best app you created, the best developer you met, or the most compelling lesson you've ever learned.

Don't forget to chat with Pepper, our chatbot that will tell you what's on at CodeOne or anything at all.

Also, check out the commit mural that commemorates the first annual CodeOne and the new Groundbreaker community.

There's some blockchain beer action too! Select the beer you want to taste using the Blockchain app and learn all about its origins! 

The Keynotes and Sessions

Keynote: The Future of Java is Today

The BIG keynote first: the future of Java is today! An all-things-Java keynote by Mark Reinhold (Chief Architect of the Java Platform Group at Oracle) and Georges Saab (VP of Development at Oracle). It was a full house of developers, who flocked to a very informative session on the evolution of Java to become more secure, stable, and rich.

A lot of insight into the new enhancements around Java, with recent additions to the language and the platform. Matthew McCullough (Vice President of Field Services, GitHub) and Rafer Hazen (Data Pipelines Engineering Manager, GitHub) also talked about GitHub, Java, and OpenJDK collaboration.

We streamed this live via our social channels to a viewership of about half a million developers worldwide!! Here are some snippets of the backstage excitement from the crew.

Big Session Today: Emerging Trends and Technologies with Modern App Dev

Siddhartha Agarwal took the audience through all things app dev at Oracle - Cloud-native application development, DevSecOps, AI and conversational AI, Open Source software, Blockchain platform and more!

He was supported by Suhas Uliyar (VP, Bot AI and Mobile Product Management) and Manish Kapur (Director of App Dev, Product Management), who helped tell this modern app dev story via demos.

The Developer's Exchange

Lots of good tech (and swag) on the Oracle Developers Exchange floor for developers to flock to: Pivotal, JFrog, IBM, Red Hat, AppDynamics, Datadog...the list goes on. But check out a few fashionable booths right here.

Now, onto day two - Tuesday (10/23)! Lots of keynotes, fireside chats, DJs and music, demos, hubs, and labs await! Thanks to Anaplan for providing delicious free food, snacks, and drinks to all the visitors who checked in with them!

 

All Things Developer at Oracle CodeOne. Spotlight APIs.

Fri, 2018-10-19 20:59
Code One - It’s Here!

We're just a few days away from Oracle's biggest conference for developers, now known as Code One. JavaOne morphed into Code One to extend support to more developer ecosystems - languages, open source projects, and cloud-native foundations. So, first, the plugs: if you'd like to be a part of the Oracle Code One movement and have not already registered, you can still do it. You can get lost, yes - it's a large conference with lots of sessions and other moving parts - but we've tried to make things simple for you here to plan your calendar. Look through these to find the right tracks and session types for you.

There are some exciting keynotes you don't want to miss - The Future of Java, the Power of Open Source, Building Next-Gen Cloud Native Apps Using Emerging Tech, the Groundbreaker Code Avenger sessions, and fireside chats! And now for the fun stuff, because our conference is not complete without that - there's the Cloud Fest! Get ready to be up all night with Beck, Portugal. The Man, and Bleachers. And if you are up, get your nerdy kids to the code camp over the weekend. It's Oracle Code for Kids time, inspiring the next generation of developers!

The prelude to Code One wouldn't be complete without talking about the Groundbreakers Hub. A few things that you HAVE to check out: the Blockchain Beer - try beers brewed with the help of blockchain technology, which enabled the microbrewery to accurately estimate the correct combination of raw materials for different types of beer, then vote for your favorite beer on our mobile app - it's pretty cool! Experience the bullet time photo booth, the chatbot with Pepper, and the Code Card (an IoT card that you can program using Fn Project serverless technologies, with an embedded WiFi chip, an e-ink screen, and a few fun buttons). Catch all the hub action if you're there!

The Tech that Matters to Developers: Powerful APIs

We've talked about a lot of tech here, but there are a few things that are closer to the developer's heart - things that make life more straightforward, stuff used every hour of every day. One such technology is the API. I am not going to explain what APIs are, because if you are a developer, you know this. APIs are a mechanism to dial down the heavy-duty code and add powerful functionality to a website, app, or platform without extensive coding - there, I said it.

But even for developers, it is essential to understand the system of engagement around designing and maintaining sound and healthy APIs. The cleaner the API, the better the user experience and performance of the app or platform in question. Since APIs are reusable, it is essential to understand what goes into making an API an excellent one. And different types of APIs require different types of love.

API Strategy with Business Outcomes

First, there is a class of APIs powering chatbots and the digital experience of customers, where UX becomes one of the most significant driving factors. Second, APIs help to monetize existing data and assets. Here, organizations treat the API as a product, dealing with performance, scale, policy, and governance so that consumers have an API 360 experience.

Third and fourth - APIs are used for operational efficiency and cost savings, and they are also used for creating exchange/app systems like the app stores!  So now, taking these four areas and establishing a business outcome is critical to driving the API strategy. And the API strategy entails good design as you’ll hear in Robert’s podcast below.

Design Matters Podcast by Robert Wunderlich

Beyond Design - Detailing the API Lifecycle

Once you have followed the principles of good API design and established the documentation based on the business outcome, it comes down to lifecycle management - building, deploying, governing, and managing APIs for scale and performance, and looping the analytics back to deliver the expected experience. On the other side, there is consumption, where developers should be able to discover these APIs and start using them.

And then there’s the Oracle way with APIs. Vikas Anand, VP of Product Management for SOA, Integration, and API Cloud tells how this happens.

API 360 Podcast by Vikas Anand

API Action at Code One

A lot is happening there! Hear from the customers directly on how Oracle’s API Cloud has helped to design and manage world-class APIs. Here are a few do-not-miss sessions, but you can always visit the Oracle Code page to discover more. See you there!

How Rabobank is using APICS to Achieve API Success

How RTD Connexions and Graco are using the API Success

How Ikea is using APICS to Achieve API Success 

Keynote: AI Powered Autonomous Integration Cloud and API Cloud Service

API Evolution Challenges by NFL

Evolutionary Tales of API by NFL

Vector API for Java by Netflix

Using Kubernetes in Hybrid Clouds -- Join Us @ Oracle OpenWorld

Thu, 2018-10-18 21:03

By now you have probably heard of the term cloud native. Cloud Native Computing Foundation (CNCF) defines cloud native as a set of technologies that “empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.” Cloud native is characterized by the use of containers and small, modular services – microservices -- which are managed by orchestration software.

In this blog post, we will cover the relationship between containers, Kubernetes, and hybrid clouds. For more on this topic, please join us at Oracle OpenWorld for Kubernetes in an Oracle Hybrid Cloud [BUS5722].

Containers and Kubernetes

In the most recent CNCF survey of 2,400 respondents, use of cloud native technologies in production has grown by over 200%, and 45% of companies run 250 or more containers.

Leveraging many containerized applications requires orchestration software that can run, deploy, scale, monitor, manage, and provide high availability for hundreds or thousands of microservices. These microservices are easier and faster to develop and upgrade since development and updates of each microservice can be independently completed, without affecting the overall application. Once a new version of a microservice is tested, it can then be pushed into production to replace the existing version without any downtime.

Hybrid Clouds

Hybrid clouds can reduce downtime and ensure application availability. For example, in a hybrid cloud model you can leverage an on-premises datacenter for your production workloads, and leverage Availability Domains in Oracle Cloud for your DR deployments to ensure that business operations are not affected by a disaster. Whereas in a traditional on-premises datacenter model you would hire staff to manage each of your geographically dispersed datacenters, you can now offload the maintenance of infrastructure and software to a public cloud vendor such as Oracle Cloud. In turn, this reduces your operational costs of managing multiple datacenter environments.

Why Kubernetes and Hybrid Clouds are like Peanut Butter and Jelly

To make the best use of a hybrid cloud, you need to be able to easily package an application so that it can be deployed anywhere, i.e. you need portability. Docker containers provide the easiest way to do this since they package the application and its dependencies to be run in any environment, on-premises datacenters or public clouds. At the same time, they are more efficient than virtual machines (VMs) as they require less compute, memory, and storage resources. This makes them more economical and faster to deploy than VMs.

Oracle’s Solution for Hybrid Clouds

Oracle Cloud is a public cloud offering that offers multiple services for containers, including Oracle Container Engine for Kubernetes (OKE). OKE is certified by CNCF, and is managed and maintained by Oracle. With OKE, you can get started with a continuously up to date container orchestration platform quickly – just bring your container apps. For hybrid use cases, you can couple Kubernetes in your data center with OKE, and then move workloads or mix workloads as needed.

To get more details and real-world insight with OKE and hybrid use cases, please join us at Oracle OpenWorld for the following session where Jason Looney from Beeline will be presenting with David Cabelus from Oracle Product Management:

Kubernetes in an Oracle Hybrid Cloud [BUS5722]

Wednesday, Oct 24, 4:45 p.m. - 5:30 p.m. | Moscone South - Room 160

David Cabelus, Senior Principal Product Manager, Oracle

Jason Looney, VP of Enterprise Architecture, Beeline

Podcast: On Microservices Design and Implementation

Tue, 2018-10-16 23:00

"Like buying a Ferrari and towing it around with a horse."

That's how Java Champion and Microservices Patterns (Manning Publications) author Chris Richardson describes the approach some organizations take to implementing microservices. It's often a matter of faulty motivation. 

In helping organizations around the world get started with microservices, the first question Chris asks his clients is, "Why do you want microservices?" The responses are often surprising. "I've talked to people who viewed microservices as magic pixie dust. You sprinkle it on things and everything will be better," Chris reports.

Another problem is the mistaken belief that the goal is microservices. "Microservices is a means to an end," Chris says. From a DevOps perspective, "there are two really good metrics," Chris explains. "One of them is deployment frequency, how often you're deploying into production. The other one is lead time, the time from commit to deploy. To me, those are the two metrics that you should be optimizing. Microservices is the way to get there."
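As a toy illustration of the two metrics Chris mentions, here is how one might compute them from a deployment log; the records below are entirely invented sample data.

```python
from datetime import datetime, timedelta

# Hypothetical (commit time, deploy time) pairs for one service.
deployments = [
    (datetime(2018, 10, 1, 9, 0),  datetime(2018, 10, 1, 11, 0)),
    (datetime(2018, 10, 2, 14, 0), datetime(2018, 10, 2, 15, 30)),
    (datetime(2018, 10, 4, 10, 0), datetime(2018, 10, 4, 10, 45)),
]

def deployment_frequency(deploys, window_days):
    """Deployments per day over the observation window."""
    return len(deploys) / window_days

def mean_lead_time(deploys):
    """Average time from commit to deploy."""
    total = sum((deployed - committed for committed, deployed in deploys),
                timedelta())
    return total / len(deploys)

print(deployment_frequency(deployments, window_days=7))
print(mean_lead_time(deployments))
```

Tracking these two numbers over time, rather than counting services, is the optimization target Chris describes; microservices are just one way to move them.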

Oracle Developer Champion Lucas Jellema, CTO and consulting IT architect with AMIS Services, also advises against diving blindly into microservices. "First step back and consider why you are thinking about microservices in the first place," Lucas says. "You choose microservices because there are issues you want to overcome, challenges you want to deal with. In the end, the only thing that really matters is that IT provides business value, and you can only provide business value if you have properly running applications that provide you with the right business functionality. The typical challenge that we try to address using microservices is to evolve that functionality in an agile fashion without much effort and without too much cost. If microservices can help with that then we want to have them. But we don't want to have them because everyone is talking about them."

Fellow Developer Champion Luis Weir, CTO of the Oracle Practice at Capgemini, sees the human factor as the greatest challenge in microservice adoption and development. One of his customers has "a very clear need to implement this style of architecture," Luis says. "But it's been a nightmare to align the teams and make them work in a way that's aligned with this architecture style, as opposed to operating in a very traditional ITIL style where you need to hand off between three different teams and do everything in a waterfall way."

Luis finds that many organizations are stuck in the past. "Not all organizations are into the DevOps way of doing things," Luis explains. Some departments may be adopting agile practices and the like. "But the IT side of the organization, in many cases, is not. They're trying to digitalize a lot of legacy. So it's a little more complicated. You're dealing with people, educating them on how to become more agile or how to think about breaking an elephant into smaller pieces." Luis admits that it's not easy.

What makes the transition so challenging, according to Oracle ACE Sven Bernhardt, a solution architect with OPITZ Consulting, is the need for "total cultural change" within the organization. "They must embrace the adaptability, the changeability. They must have a different fault tolerance, a different culture for dealing with software failures and also with failures on decisions with respect to technologies which are used for implementing a specific business functionality as a microservice," Sven explains. Too many organizations underestimate the necessary cultural change and find that switching to a DevOps mindset is a tough slog.

But that's just a fraction of what you'll learn in this podcast, as the panel addresses ways to meet the various challenges found in getting to microservices. Listen!

BTW: Each of the panelists will present sessions at Oracle Code One and Oracle OpenWorld 2018. See the list of sessions below.

The Panelists

Listed alphabetically

Sven Bernhardt
Oracle ACE
Solution Architect, OPITZ Consulting
Twitter LinkedIn

Code One Sessions
  • Integration Reloaded: Integration Solutions Based on Reactive Principles [DEV5306]
    With Arturo Viveros, Principal Consultant, Sysco AS
    Thursday, Oct 25, 11:00 a.m. - 11:45 a.m. | Moscone West - Room 2008
  • Implementing a Low TCO Poly-Cloud Microservices Solution with Oracle Cloud [BUS2272]
    With Lucas Jellema, CTO, AMIS Services BV
    José Rodrigues, BPM And Webcenter Business Manager, Link Consulting
    Monday, Oct 22, 5:45 p.m. - 6:30 p.m. | Marriott Marquis (Golden Gate Level) - Golden Gate C2
Lucas Jellema
Oracle Developer Champion
Oracle ACE Director
CTO, Consulting IT Architect, AMIS Services
Twitter LinkedIn  

Code One Sessions
  • A Cloud- and Container-Based Approach to Microservices-Powered Workflows [BOF4977]
    Tuesday, Oct 23, 7:30 p.m. - 8:15 p.m. | Moscone West - Room 2006
  • 50 Shades of Data: How, When, Why—Big, Relational, NoSQL, Elastic, Graph, Event [DEV4976]
    Monday, Oct 22, 10:30 a.m. - 11:15 a.m. | Moscone West - Room 2007
  • Implementing Microservices on Oracle Cloud: Open, Manageable, Polyglot, and Scalable [BOF4978]
    Monday, Oct 22, 7:30 p.m. - 8:15 p.m. | Moscone West - Room 2012
  • Oracle Cloud Soaring: Live Demo of a Poly-Cloud Microservices Implementation [DEV4979]
    With Guido Schmutz, Principal Consultant - Technology Manager, Trivadis AG
    Luis Weir, CTO - Oracle Practice, Capgemini UK Plc
    Wednesday, Oct 24, 2:30 p.m. - 3:15 p.m. | Moscone West - Room 2018
  • Implementing a Low TCO Poly-Cloud Microservices Solution with Oracle Cloud [BUS2272]
    With Sven Bernhardt, Solution Architect, OPITZ Consulting
    José Rodrigues, BPM And Webcenter Business Manager, Link Consulting
    Monday, Oct 22, 5:45 p.m. - 6:30 p.m. | Marriott Marquis (Golden Gate Level) - Golden Gate C2
Chris Richardson
Java Champion
Founder, Eventuate, Inc.
Twitter LinkedIn  

Code One Session
  • Developing Asynchronous, Message-Driven Microservices [DEV5252]
    Wednesday, Oct 24, 11:30 a.m. - 12:15 p.m. | Moscone West - Room 2001
Luis Weir
Oracle Developer Champion
Oracle ACE Director
CTO, Oracle Practice, Capgemini
Twitter LinkedIn  

Code One Sessions
  • The Seven Deadly Sins of API Design [DEV4921]
    Tuesday, Oct 23, 4:00 p.m. - 4:45 p.m. | Moscone West - Room 2020
  • Oracle Cloud Soaring: Live Demo of a Poly-Cloud Microservices Implementation [DEV4979]
    With Lucas Jellema, CTO, AMIS Services BV
    Guido Schmutz, Principal Consultant - Technology Manager, Trivadis AG
    Wednesday, Oct 24, 2:30 p.m. - 3:15 p.m. | Moscone West - Room 2018
Additional Resources Coming Soon

Groundbreakers Neha Narkhede, Charles Nutter, Graeme Rocher, Guido van Rossum, and Doug Cutting examine the forces shaping IT in this special panel discussion recorded at Oracle Code One.

Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

Get going quickly with Command Line Interface for Oracle Cloud Infrastructure using Docker container

Sat, 2018-10-13 19:30

Originally published at technology.amis.nl on October 14, 2018.

Oracle Cloud Infrastructure is Oracle's second-generation infrastructure-as-a-service offering that supports many components, including compute nodes, networks, storage, Kubernetes clusters, and Database as a Service. Oracle Cloud Infrastructure can be administered through a GUI - a browser-based console - as well as through a REST API and the OCI Command Line Interface. Oracle also offers a Terraform provider that allows automated, scripted provisioning of OCI artefacts.

This article describes an easy approach to get going with the Command Line Interface for Oracle Cloud Infrastructure — using the oci-cli Docker image. Using a Docker container image and a simple configuration file, oci commands can be executed without locally having to install and update the OCI Command Line Interface (and the Python runtime environment) itself.

These are the steps to get going on a Linux or Mac Host that contains a Docker engine:

  • create a new user in OCI (or use an existing user) with appropriate privileges; you need the OCID for the user
  • also make sure you have the name of the region and the OCID for the tenancy on OCI
  • execute a docker run command to prepare the OCI CLI configuration file
  • update the user in OCI with the public key created by the OCI CLI setup action
  • edit the .profile to associate the oci command line instruction on the Docker host with running the OCI CLI Docker image

At that point, you can locally run any OCI CLI command against the specified user and tenant — using nothing but the Docker container that contains the latest version of the OCI CLI and the required runtime dependencies.

In more detail, the steps look like this:

Create a new user in OCI

(or use an existing user) with appropriate privileges; you need the OCID for the user

You can reuse an existing user or create a fresh one — which is what I did. This step I performed in the OCI Console:


I then added this user to the group Administrators.


And I noted the OCID for this user:


Also make sure you have the name of the region and the OCID for the tenancy on OCI.

Execute a docker run command to prepare the OCI CLI configuration file

On the Docker host machine, create a directory to hold the OCI CLI configuration files. These files will be made available to the CLI tool by mounting the directory into the Docker container.

mkdir ~/.oci

Run the following Docker command:

docker run --rm --mount type=bind,source=$HOME/.oci,target=/root/.oci -it stephenpearson/oci-cli:latest setup config

This starts the OCI CLI container in interactive mode - with the ~/.oci directory mounted into the container at /root/.oci - and executes the setup config command on the OCI CLI (see https://docs.cloud.oracle.com/iaas/tools/oci-cli/latest/oci_cli_docs/cmdref/setup/config.html).

This command will start a dialog that results in the OCI config file being written to /root/.oci inside the container and to ~/.oci on the Docker host. The dialog also results in a private and public key file in that same directory.


Here is the content of the config file that the dialog has generated on the Docker host:

Update the user in OCI with the public key created by the OCI CLI setup action

The contents of the file that contains the public key — ~/.oci/oci_api_key_public.pem in this case — should be configured on the OCI user — kubie in this case — as API Key:

Create shortcut command for OCI CLI on Docker host

We did not install the OCI CLI on the Docker host — but we can still make it possible to run the CLI commands as if we did. If we edit the .profile file to associate the oci command line instruction on the Docker host with running the OCI CLI Docker image, we get the same experience on the host command line as if we did install the OCI CLI.

Edit ~/.profile and add this line:

oci() { docker run --rm --mount type=bind,source=$HOME/.oci,target=/root/.oci stephenpearson/oci-cli:latest "$@"; }

On the Docker host I can now run OCI CLI commands; these are passed to the Docker container, which uses the configuration in ~/.oci to connect to the OCI instance.
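For teams scripting against the CLI, the same wrapper idea can be expressed in Python. The sketch below only assembles the docker run argument list used above (image name and mount paths are taken from this article); the compartment OCID is an invented placeholder, and nothing is executed.

```python
import os

def build_oci_argv(*cli_args, home=None):
    """Build the `docker run` argument list equivalent to the shell
    wrapper above; this function only constructs the command."""
    home = home or os.path.expanduser("~")
    return [
        "docker", "run", "--rm",
        "--mount", f"type=bind,source={home}/.oci,target=/root/.oci",
        "stephenpearson/oci-cli:latest",
        *cli_args,
    ]

# Example: the policy listing shown later in the article.
compartment_id = "ocid1.compartment.oc1..example"  # placeholder value
argv = build_oci_argv("iam", "policy", "list",
                      "--compartment-id", compartment_id, "--all")
# subprocess.run(argv) would invoke the containerized CLI.
print(argv[:3])
```

Keeping the argument construction in a pure function makes it trivial to unit-test the wrapper without a Docker daemon present.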

Run OCI CLI command on the Host

We are now set to run OCI CLI commands - even though we did not actually install the OCI CLI and the Python runtime environment.

Note: most commands we run will require us to pass the compartment ID of the OCI compartment against which we want to perform an action. It is convenient to set an environment variable to the compartment OCID value and then refer to the variable in all CLI commands.

For example:

export COMPARTMENT_ID=ocid1.tenancy.oc1..aaaaaaaaot3ihdt

Now to list all policies in this compartment:

oci iam policy list --compartment-id $COMPARTMENT_ID --all

And to create a new policy — one that I need in order to provision a Kubernetes cluster:

oci iam policy create --name oke-service --compartment-id $COMPARTMENT_ID --statements '[ "allow service OKE to manage all-resources in tenancy"]' --description 'policy for granting rights on OKE to manage cluster resources'

Or to create a new compartment:

oci iam compartment create --compartment-id $COMPARTMENT_ID --name oke-compartment --description "Compartment for OCI resources created for OKE Cluster"

From here on, it is just regular OCI CLI work, just as if it had been installed locally. But by using the Docker container, we keep our system tidy and we can easily benefit from the latest version of the OCI CLI at all times.

Resources
OCI CLI Command Reference — https://docs.cloud.oracle.com/iaas/tools/oci-cli/latest/oci_cli_docs/index.html

Terraform Provider for OCI: https://www.terraform.io/docs/providers/oci/index.html

GitHub repo for OCI CLI Docker — https://github.com/stephenpearson/oci-cli

Matrix Bullet Time Demo Take Two

Fri, 2018-10-12 10:57

By Christopher Bensen and Noel Portugal, Cloud Experience Developers

If you are attending Oracle Code One or OpenWorld 2018, you will be happy to hear that the Matrix Bullet Time Demo will be there. You can experience it by coming to Moscone West, in the Developer Exchange and inside the Groundbreakers Hub.

Last year we went into the challenges of building the Matrix Bullet Time Demo (https://developer.oracle.com/java/bullet-time). A lot of problems were encountered after that article was published so this year we pulled the demo out of storage, dusted it off and began refurbishing the demo so it could make a comeback. The first challenge was trying to remember how it all worked.

Let's back up a bit and describe what we've built here so you don't have to read the previous article. The idea is to create a demo that takes simultaneous photos from cameras placed around a subject, then stitches these photos together to form a movie. The intended final effect is for it to appear as though the camera is moving around a subject frozen in time. To do this we used 60 individual Raspberry Pi 3 single-board computers with Raspberry Pi cameras.

Besides all of the technical challenges, there are some logistical challenges. When set up, the demo is huge! It forms a ten-foot-diameter circle and needs even more space for the mounting system. Not only is it huge, it's delicate: wires big and small go everywhere. Fifteen Raspberry Pi 3s are mounted to each of the four lighting track gantries, and they are precarious at best. And to top it off, we have to transport this demo to wherever we set it up and back; an absolutely massive crate was built that requires an entire truck. Because of these logistical challenges, the demo was only used at OpenWorld and the keynote at JavaOne.

Last year at OpenWorld the demo was not working for the full length of the show. One of the biggest reasons is that aligning 60 cameras to a single point is difficult at best, and impossible with a precariously delicate mounting system. So software image stabilization was written - by Richard Bair, on the floor under the demo.

If you read the previous article about Bullet Time, you'd know a lighting track system was used to provide power. One of the benefits of a lighting track system is that it handles power distribution: you provide the 120-volt AC input power to the track, and it carries that power through copper wires built into the track. At any point where you want a light, you use a mount designed for the track system, which transfers the power through the mount to the light. In our case, a 48-volt DC power supply sends 20 amps through the wires designed for 120 volts AC, and each camera has a small voltage regulator to step down to the 5 volts DC required for a Raspberry Pi. The brilliance of this system is that it makes it easy to distribute power, with the shutter release and photo transfer handled over WiFi. Unfortunately, WiFi is unreliable at a conference - there are far too many devices jamming up the spectrum - so we had to run individual Ethernet cables to each camera, which is exactly what we were trying to avoid by using the lighting track system. So we end up with an Ethernet harness strapped to the track.

Once we opened up the crate and set up Bullet Time, only one camera was not functioning. On the software side there are four parts:

 

  1. A tablet that the user interacts with, providing a name, an optional mobile number, and a button to start the countdown to take the photo.
  2. The Java server, which receives the countdown and sends out a UDP packet telling the Raspberry Pi cameras to take a photo. The server also receives the photos and stitches them together to make the video.
  3. Python code running on each Raspberry Pi, which listens for a UDP packet telling it to take a photo and where to send it.
  4. The cloud software, which uploads the video to a YouTube channel and sends the user a text message with the link.

The overall system works like this:

  1. The user would input their name on the Oracle JavaScript Extension Toolkit (Oracle JET) web UI we built for this demo, which is running on a Microsoft Surface tablet.
  2. The user would then click a button on the Oracle JET web UI to start a 10-second countdown.
  3. The web UI would invoke a REST API on the Java server to start the countdown.
  4. After a 10-second delay, the Java server would send a multicast message to all the Raspberry Pi units at the same moment instructing them to take a picture.
  5. Each camera would take a picture and send the picture data back up to the server.
  6. The server would make any adjustments necessary to the picture (see below), and then using FFMPEG, the server would turn those 60 images into an MP4 movie.
  7. The server would respond to the Oracle JET web UI's REST request with a link to the completed movie.
  8. The Oracle JET web UI would display the movie.
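The trigger mechanism in steps 4 and 5 can be sketched in Python, the language the Raspberry Pi listeners actually use. This is a simplified, hypothetical version: it uses plain UDP on loopback rather than multicast, and the port number and message format are invented for illustration, not the demo's actual protocol.

```python
import socket

PORT = 5005  # assumed port, not the demo's real value

def make_listener():
    """What each Raspberry Pi runs: wait for the trigger packet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", PORT))
    sock.settimeout(2.0)  # don't block forever if the packet is lost
    return sock

def send_trigger(reply_host="127.0.0.1", reply_port=6000):
    """What the Java server does: tell every camera to shoot and
    where to send the resulting photo."""
    msg = f"SNAP {reply_host}:{reply_port}".encode()
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    out.sendto(msg, ("127.0.0.1", PORT))
    out.close()

listener = make_listener()
send_trigger()
data, _ = listener.recvfrom(1024)
listener.close()
print(data.decode())  # SNAP 127.0.0.1:6000
```

In the real demo a multicast address would replace the loopback destination, so a single send reaches all 60 Pis at the same moment.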

In general, this system worked really well. The primary challenge that we encountered was getting all 60 cameras to focus on exactly the same point in space. If the cameras were not precisely focused on the same point, then it would seem like the "virtual" camera (the resulting movie) would jump all over the place. One camera might be pointed a little higher, the next a little lower, the next a little left, and the next rotated a little. This would create a disturbing "bouncy" effect in the movie.

We took two approaches to solve this. First, each Raspberry Pi camera was mounted with a series of adjustable parts, such that we could manually visit each Raspberry Pi and adjust the yaw, pitch, and roll of each camera. We would place a tripod with a pyramid target mounted to it in the center of the camera helix as a focal point, and using a hand-held HDMI monitor we visited each camera to manually adjust the cameras as best we could to line them all up on the pyramid target. Even so, this was only a rough adjustment and the resulting videos were still very bouncy.

The next approach was a software-based approach to adjusting the translation (pitch and yaw) and rotation (roll) of the camera images. We created a JavaFX app to help configure each camera with settings for how much translation and rotation was necessary to perfectly line up each camera on the same exact target point. Within the app, we would take a picture from the camera. We would then click the target location, and the software would know how much it had to adjust the x and y axis for that point to end up in the dead center of each image. Likewise, we would rotate the image to line it up relative to a "horizon" line that was superimposed on the image. We had to visit each of the 60 cameras to perform both the physical and virtual configuration.

Then at runtime, the server would query the cameras to get their adjustments. When images were received from the cameras (see step 6 above), we used the Java 2D API to transform those images according to the previously configured translation and rotation values. We also had to crop the images, so we adjusted each Raspberry Pi camera to take the highest-resolution image possible and then cropped it to 1920x1080 for the resulting hi-def movie.
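The alignment math behind those per-camera adjustments can be sketched as follows. This is a hedged Python approximation of the kind of transform described (the real server used the Java 2D API); the target coordinates and roll angle below are invented example values.

```python
import math

WIDTH, HEIGHT = 1920, 1080  # the article's final crop size

def align(point, target, roll_degrees):
    """Map an image coordinate through one camera's correction:
    translate so the configured target lands at the image center,
    then rotate about the center to level the horizon."""
    cx, cy = WIDTH / 2, HEIGHT / 2
    tx, ty = target
    # Translation that moves the clicked target to the image center.
    x = point[0] + (cx - tx)
    y = point[1] + (cy - ty)
    # Rotation about the image center by the configured roll.
    a = math.radians(roll_degrees)
    rx = cx + (x - cx) * math.cos(a) - (y - cy) * math.sin(a)
    ry = cy + (x - cx) * math.sin(a) + (y - cy) * math.cos(a)
    return rx, ry

# The configured target itself always ends up dead center,
# regardless of the roll correction.
print(align((1000, 500), target=(1000, 500), roll_degrees=3.7))  # (960.0, 540.0)
```

Applying the same two-step correction to every camera is what keeps the "virtual" camera steady as the movie cuts from one physical camera to the next.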

If we were to build Bullet Time version 2.0, we'd make a few changes, such as powering the Raspberry Pis using PoE, replacing the lighting track with a stronger, less flexible rolled-aluminum square tube in eight sections rather than four, and upgrading the camera module with a better lens. But overall this is a fun project with a great user experience.

 

Microservices From Dev To Deploy, Part 3: Local Deployment & The Angular UI

Tue, 2018-10-09 10:28

In this series, we're taking a look at how microservice applications are built.  In part 1 we learned about the new open source framework from Oracle called Helidon and learned how it can be used with both Java and Groovy in either a functional, reactive style or a more traditional Microprofile manner.  Part 2 acknowledged that some dev teams have different strengths and preferences and that one team in our fictional scenario used NodeJS with the ExpressJS framework to develop their microservice.  Yet another team in the scenario chose to use Fn, another awesome Oracle open source technology to add serverless to the application architecture.  Here is an architecture diagram to help you better visualize the overall picture:

techcorp-architecture-overview

It may be a contrived and silly scenario, but I think it properly represents the diversity of skills and preferences that is the reality of many teams building software today. The ultimate destination of this journey is a deployment of all the divergent pieces of this application on the Oracle Cloud, and we're nearly at that point. But before we get there, let's take a look at how all of these backend services come together in a unified frontend.

Before we get started, if you're playing along at home you might want to first make sure you have access to a local Kubernetes cluster. For testing purposes, I've built my own cluster using a few Raspberry Pis (following the instructions here), but you can get a local testing environment up and running with minikube pretty quickly. Don't forget to install kubectl; you'll need the command-line tools to work with the cluster that you set up.

With the environment set up, let's revisit Chris' team, who you might recall from part 1 built out a weather service backend using Groovy with Helidon SE.  The Gradle 'assemble' task gives them their JAR file for deployment, but Helidon also includes a few other handy features: a Dockerfile and a Kubernetes YAML template to speed up deploying to a K8S cluster.  When you use the Maven archetype (as Michiko's team did in part 1) these files are automatically copied to the 'target' directory along with the JAR, but since Chris' team is using Groovy with Gradle, they had to make a slight modification to the build script to copy the templates and slightly modify the paths within them.  The build.gradle script they used now includes the following tasks:

task copyDocker(type: Copy) {
    from "src/main/docker"
    into "build"
    doLast {
        def d = new File('build/Dockerfile')
        def dfile = d.text.replaceAll('\\$\\{project.artifactId\\}', project.name)
        dfile = dfile.replaceAll("COPY ${project.name}", "COPY libs/${project.name}")
        d.write(dfile)
    }
}

task copyK8s(type: Copy) {
    from "src/main/k8s"
    into "build"
    doLast {
        def a = new File('build/app.yaml')
        def afile = a.text.replaceAll('\\$\\{project.artifactId\\}', project.name)
        a.write(afile)
    }
}

copyLibs.dependsOn jar
copyDocker.dependsOn jar
copyK8s.dependsOn jar
assemble.dependsOn copyLibs
assemble.dependsOn copyDocker
assemble.dependsOn copyK8s

So now, when Chris' team performs a local build they receive a fully functional Dockerfile and app.yaml file to help them quickly package the service into a Docker container and deploy that container to a Kubernetes cluster.  The process now becomes:

  1. Write Code
  2. Test Code
  3. Build JAR (gradle assemble)
  4. Build Docker Container (docker build / docker tag)
  5. Push To Docker Registry (docker push)
  6. Create Kubernetes Deployment (kubectl create)
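Spelled out as commands, steps 3 through 6 might look like the following session (the image name, tag, and registry host are hypothetical placeholders, not values from the project):

```
# 3. Build the JAR (also copies the Dockerfile and app.yaml templates into build/)
gradle assemble

# 4. Build and tag the Docker container
docker build -t weather-service:1.0 build/
docker tag weather-service:1.0 registry.example.com/techcorp/weather-service:1.0

# 5. Push to the Docker registry
docker push registry.example.com/techcorp/weather-service:1.0

# 6. Create the Kubernetes deployment from the generated template
kubectl create -f build/app.yaml
```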

Which, if condensed into a quick screencast, looks something like this:

[Screencast: building the JAR with Helidon SE and deploying]

When the process is repeated for the rest of the backend services, the frontend team led by Ava is able to integrate the backend services into the Angular 6 frontend that they have been working on.  They start by specifying the deployed backend base URLs in their environment.ts file.  Angular uses this file to provide a flexible way to manage global application variables that have different values per environment.  For example, an environment.prod.ts file can have its own set of production-specific values that will be substituted when an `ng build --prod` is performed.  The default environment.ts is used if no environment is specified, so the team uses that file for development and has set it up with the following values:

export const environment = {
  production: false,
  stockApiBaseUrl: 'http://192.168.0.160:31002',
  weatherApiBaseUrl: 'http://192.168.0.160:31000',
  quoteApiBaseUrl: 'http://192.168.0.160:31001',
  catApiBaseUrl: 'http://localhost:31004',
};
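The `--prod` substitution is wired up in the Angular CLI build configuration; a minimal angular.json fragment showing how it works might look like the following (this is an assumption about the team's project, using the CLI's default file layout):

```
"configurations": {
  "production": {
    "fileReplacements": [
      {
        "replace": "src/environments/environment.ts",
        "with": "src/environments/environment.prod.ts"
      }
    ]
  }
}
```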

The team then creates services corresponding to each microservice.  Here's the weather.service.ts:

import {Injectable} from '@angular/core';
import {HttpClient} from '@angular/common/http';
import {environment} from '../../environments/environment';

@Injectable({
  providedIn: 'root'
})
export class WeatherService {
  private baseUrl: string = environment.weatherApiBaseUrl;

  constructor(
    private http: HttpClient,
  ) { }

  getWeatherByCoords(coordinates) {
    return this.http
      .get(`${this.baseUrl}/weather/current/lat/${coordinates.lat}/lon/${coordinates.lon}`);
  }
}

They then call the services from the view component:

getWeather() {
  this.weather = null;
  this.weatherLoading = true;
  this.locationService.getLocation().subscribe((result) => {
    const response: any = result;
    const loc: Array<string> = response.loc.split(',');
    const lat: string = loc[0];
    const long: string = loc[1];
    this.weatherService.getWeatherByCoords({lat: lat, lon: long})
      .subscribe(
        (weather) => {
          this.weather = weather;
        },
        (error) => {},
        () => {
          this.weatherLoading = false;
        }
      );
  });
}

Once they've completed this for all of the services, the corporate vision of a throwback homepage is starting to look like a reality:

[Image: homepage UI]

In three posts we've followed TechCorp's journey to developing an internet homepage application from idea, to backend service creation, and on to integrating the backend with a modern JavaScript frontend built with Angular 6.  In the next post of this series we will see how this technologically diverse application can be deployed to Oracle's Cloud.

Microservices From Dev To Deploy, Part 2: Node/Express and Fn Serverless

Fri, 2018-10-05 08:08

In our last post, we were introduced to a fictional company called TechCorp, run by an entrepreneur named Lydia, whose goal is to bring the world back to the glory days of the internet homepage. Lydia’s global team of remarkable developers is implementing her vision with a microservice architecture, and we learned about Chris and Michiko, who have teams in London and Tokyo.  These teams built out a weather service and a quote service using Helidon, a microservice framework by Oracle.  Chris’ team used Helidon SE with Groovy and Michiko’s team chose Java with Helidon MP.  In this post, we’ll look at Murielle and her Bangalore crew, who are building a stock service using NodeJS with Express, and Dominic and the Melbourne squad, who have the envious task of building out a random cat image service with Java on Oracle Fn (a serverless technology).

It’s clear Helidon makes both functional and Microprofile style services straightforward to implement.  But, despite what I personally may have thought 5 years ago, it is getting impossible to ignore that NodeJS has exploded in popularity.  Stack Overflow’s most recent survey shows over 69% of respondents selecting JavaScript as the “Most Popular Technology” among Programming, Scripting and Markup Languages, and Node comes in atop the “Framework” category with greater than 49% of the respondents preferring it.  It’s a given that people are using JavaScript on the frontend, and it’s more and more likely that they are taking advantage of it on the backend too, so it’s no surprise that Murielle’s team decided to use Node with Express to build out the stock service.
 
We won’t dive too deep into the Express plumbing for this service, but let’s have a quick look at the method to retrieve the stock quote:
var express = require('express');
var router = express.Router();
var config = require('config');
var fetch = require("node-fetch");

/* GET stock quote */
/* jshint ignore:start */
router.get('/quote/:symbol', async (req, res, next) => {
  const symbol = req.params.symbol; // req.params, rather than the deprecated req.param()
  const url = `${config.get("api.baseUrl")}/?function=GLOBAL_QUOTE&symbol=${symbol}&apikey=${config.get("api.apiKey")}`;
  try {
    const response = await fetch(url);
    const json = await response.json();
    res.send(json);
  } catch (error) {
    res.send(JSON.stringify(error));
  }
});
/* jshint ignore:end */

module.exports = router;
Using fetch (in an async manner), this method calls the stock quote API, passes along the symbol that it received via the URL parameters, and returns the stock quote as a JSON string to the consumer.  Here’s how that might look when we hit the service locally:
[Image: stock service response]
Murielle’s team can expand the service in the future to provide historical data, cryptocurrency lookups, or whatever the business needs demand, but for now it provides a current quote based on the symbol it receives.  The team creates a Dockerfile and Kubernetes config file for deployment which we’ll take a look at in the future.
 
Dominic’s team down in Melbourne has been doing a lot of work with serverless technologies.  Since they’ve been tasked with a priority feature – random cat images – they feel that serverless is the way to go to deliver this feature, and they set about using Fn to build the service.  It might seem out of place to consider serverless in a microservice architecture, but it undoubtedly has a place and fulfills the stated goals of the microservice approach: flexible, scalable, focused and rapidly deployable.  Dominic’s team has done all the research on serverless and Fn and is ready to get to work, so the developers installed a local Fn server and followed the quickstart for Java to scaffold out a function.
 
Once the project was ready to go Dominic’s team modified the func.yaml file to set up some configuration for the project, notably the apiBaseUrl and apiKey:
schema_version: 20180708
name: cat-svc
version: 0.0.47
runtime: java
build_image: fnproject/fn-java-fdk-build:jdk9-1.0.70
run_image: fnproject/fn-java-fdk:jdk9-1.0.70
cmd: codes.recursive.cat.CatFunction::handleRequest
format: http
config:
  apiBaseUrl: https://api.thecatapi.com/v1
  apiKey: [redacted]
triggers:
- name: cat
  type: http
  source: /random
The CatFunction class is basic.  A setUp() method, annotated with @FnConfiguration, gives access to the function context, which contains the config info from the YAML file, and initializes the variables for the function.  Then the handleRequest() method makes the HTTP call, again using a client library called Unirest, and returns the JSON containing the link to the crucial cat image.
public class CatFunction {

    private String apiBaseUrl;
    private String apiKey;

    @FnConfiguration
    public void setUp(RuntimeContext ctx) {
        apiBaseUrl = ctx.getConfigurationByKey("apiBaseUrl").orElse("");
        apiKey = ctx.getConfigurationByKey("apiKey").orElse("");
    }

    public OutputEvent handleRequest(String input) throws UnirestException {
        String url = apiBaseUrl + "/images/search?format=json";
        HttpResponse<JsonNode> response = Unirest
                .get(url)
                .header("Content-Type", "application/json")
                .header("x-api-key", apiKey)
                .asJson();
        OutputEvent out = OutputEvent.fromBytes(
                response.getBody().toString().getBytes(),
                OutputEvent.Status.Success,
                "application/json"
        );
        return out;
    }
}
To test the function, the team deploys the function locally with:
fn deploy --app cat-svc --local
And tests that it is working:
curl -i \
  -H "Content-Type: application/json" \
  http://localhost:8080/t/cat-svc/random
Which produces:
HTTP/1.1 200 OK
Content-Length: 112
Content-Type: application/json
Fn_call_id: 01CRGBAH56NG8G00RZJ0000001
Xxx-Fxlb-Wait: 502.0941ms
Date: Fri, 28 Sep 2018 15:04:05 GMT

[{"id":"ci","categories":[],"url":"https://24.media.tumblr.com/tumblr_lz8xmo6xYV1r0mbi6o1_500.jpg","breeds":[]}]
Success!  Dominic’s team created the cat service before lunch and spent the rest of the day looking at random cat pictures.
 
Now that all 4 teams have implemented their respective services using various technologies, you might be asking yourself why it was necessary to implement such trivial services on the backend instead of calling the third-party APIs directly from the frontend.  There are several reasons; let's take a look at just a few of them:
 
One reason to implement this functionality via a server-based backend is that third-party APIs can be unreliable and/or rate limited.  By proxying the API through their own backend, the teams can apply caching and rate limiting of their own design to reduce demand on the third-party API and work around potential downtime or rate limiting for a service over which they have limited or no control.
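As a rough illustration of the caching idea (a sketch, not code from the TechCorp project; all names here are invented), a tiny TTL cache can wrap any async lookup so repeated requests within the window never hit the third-party API:

```javascript
// Minimal TTL cache for async lookups; keys map to { value, expires }.
function cached(fn, ttlMs) {
  const store = new Map();
  return async function (key) {
    const hit = store.get(key);
    if (hit && hit.expires > Date.now()) {
      return hit.value; // served from cache, no upstream call
    }
    const value = await fn(key);
    store.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}

// Example: wrap a stand-in for the upstream quote lookup with a 60s cache.
let upstreamCalls = 0;
const fetchQuote = async (symbol) => {
  upstreamCalls++;                  // count real upstream hits
  return { symbol, price: 123.45 }; // stand-in for the real API call
};
const getQuote = cached(fetchQuote, 60000);
```

An Express route handler would then simply `await getQuote(req.params.symbol)`, and each distinct symbol would reach the upstream API at most once per minute.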
 
Secondly, the teams are given the luxury of controlling the data before it’s sent to the client.  If it is allowed within the API terms, and the business needs require them to supplement the data with other third-party or user data, they can reduce the client’s CPU, memory, and bandwidth demands by augmenting or modifying the data before it even gets to the client.
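As a further sketch (again, not the project's actual code), the stock service could trim a verbose GLOBAL_QUOTE-style payload down to just the fields the homepage needs before responding; the upstream field names below are assumptions about that response shape:

```javascript
// Reduce a verbose upstream quote payload to the minimal shape the UI consumes.
// The upstream field names here are assumptions for illustration.
function toClientQuote(upstream) {
  const q = upstream['Global Quote'] || {};
  return {
    symbol: q['01. symbol'],
    price: Number(q['05. price']),
    changePercent: q['10. change percent'],
  };
}
```

The route handler would call `res.send(toClientQuote(json))` instead of forwarding the raw payload.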
 
Finally, CORS restrictions in the browser can be circumvented by calling the API from the server (and if you've ever had CORS block your HTTP calls in the browser you can definitely appreciate this!).
 
TechCorp has now completed the initial microservice development sprint of their project.  In the next post, we’ll look at how these 4 services can be deployed to a local Kubernetes cluster and we'll also dig into the Angular front end of the application.
 

Microservices From Dev To Deploy, Part 1: Getting Started With Helidon

Wed, 2018-10-03 11:27

Microservices are undoubtedly popular.  There have been plenty of great posts on this blog that explain the advantages of using a microservice approach to building applications (or “why you should use them”).  And the reasons are plentiful: flexibility to allow your teams to implement different services with their language/framework of choice, independent deployments, scalability, and improved build and test times are among the many factors that make a microservice approach preferable to many dev teams nowadays.  It’s really not much of a discussion anymore, as studies have shown that nearly 86% of respondents believe that a microservice approach will be their default architecture within the next 5 years.  As I mentioned, the question of “why microservices” has long been answered, so in this short blog series I’d like to answer the question of “how” to implement microservices in your organization.  Specifically, how Oracle technologies can help your dev team implement a maintainable, scalable solution that is easy to test, develop, and deploy for your microservice applications.

To keep things interesting I thought I’d come up with a fictional scenario that we can follow as we take this journey.  Let’s imagine that a completely fabricated startup called TechCorp has just secured $150M in seed funding for their brilliant new project.  TechCorp’s founder Lydia is very nostalgic and she longs for the “good old days” when 56k modems screeched and buzzed their way down the on-ramp to the “interwebs” and she’s convinced BigCity Venture Capital that personalized homepages are about to make a comeback in a major way.  You remember those, right?  Weather, financials, news – even inspiring quotes and funny cat pictures to brighten your day.  With funding secured Lydia set about creating a multinational corporation with several teams of “rock star” developers across the globe.  Lydia and her CTO Raj know all about microservices and plan on having their teams split up and tackle individual portions of the backend to take advantage of their strengths and ensure a flexible and reliable architecture.

Team #1:
Location:  London
Team Lead:  Chris
Focus:  Weather Service
Language:  Groovy
Framework:  Oracle Helidon SE with Gradle

Team #2:
Location:  Tokyo
Team Lead:  Michiko
Focus:  Quote Service
Language:  Java
Framework:  Oracle Helidon MP with Maven

Team #3:
Location:  Bangalore
Team Lead:  Murielle
Focus:  Stock Service
Language:  JavaScript/Node
Framework:  Express

Team #4:
Location:  Melbourne
Team Lead:  Dominic
Focus:  Cat Picture Service
Language:  Java
Framework:  Oracle Fn (Serverless)

Team #5
Location:  Atlanta
Team Lead:  Ava
Focus:  Frontend
Language:  JavaScript/TypeScript
Framework:  Angular 6

As you can see, Lydia has put together quite a globally diverse group of teams with a wide-ranging set of skills and experience.  You’ll also notice some non-Oracle technologies in their selections which you might find odd in a blog post focused on Oracle technology, but that’s indicative of many software companies these days.  Rarely do teams focus solely on a single company’s stack anymore.  While we’d love it if they did, the reality is that teams typically have strengths and preferences that come into play.  I’ll show you in this series how Oracle’s new open source Helidon framework and Fn Serverless project can be leveraged to build microservices and serverless functions, but also how a team can deploy their entire stack to Oracle’s cloud regardless of the language or framework used to build the services that comprise their application.  We'll dive slightly deeper into Helidon than an introductory post, so you might want to first read this introductory blog post and the tutorial before you read the rest of this post.

Let’s begin with Team #1, which has been tasked with building out the backend for retrieving a user’s local weather.  They’re a Groovy team, but they’ve heard good things about Oracle’s new microservice framework Helidon, so they’ve chosen to use this new project as an opportunity to learn the framework and see how well it works with Groovy and Gradle as a build tool.  Team lead Chris has read through the Helidon tutorial and created a new application using the quickstart examples, so his first task is to transform the Java application that was created into a Groovy application.  The first step for Chris, in this case, is to create a Gradle build file and make sure that it includes all of the necessary Helidon dependencies as well as a Groovy dependency.  Chris also adds a ‘copyLibs’ task to make sure that all of the dependencies end up where they need to be when the project is built.  The build.gradle file looks like this:

apply plugin: 'java'
apply plugin: 'maven'
apply plugin: 'groovy'
apply plugin: 'application'

mainClassName = 'codes.recursive.weather.Main'
group = 'codes.recursive.weather'
version = '1.0-SNAPSHOT'
description = """A simple weather microservice"""

sourceSets.main.resources.srcDirs = ["src/main/groovy", "src/main/resources"]
sourceCompatibility = 1.8
targetCompatibility = 1.8
tasks.withType(JavaCompile) {
    options.encoding = 'UTF-8'
}

ext {
    helidonversion = '0.10.0'
}

repositories {
    maven { url "http://repo.maven.apache.org/maven2" }
    mavenLocal()
    mavenCentral()
}

configurations {
    localGroovyConf
}

dependencies {
    localGroovyConf localGroovy()
    compile 'org.codehaus.groovy:groovy-all:3.0.0-alpha-3'
    compile "io.helidon:helidon-bom:${project.helidonversion}"
    compile "io.helidon.webserver:helidon-webserver-bundle:${project.helidonversion}"
    compile "io.helidon.config:helidon-config-yaml:${project.helidonversion}"
    compile "io.helidon.microprofile.metrics:helidon-metrics-se:${project.helidonversion}"
    compile "io.helidon.webserver:helidon-webserver-prometheus:${project.helidonversion}"
    compile group: 'com.mashape.unirest', name: 'unirest-java', version: '1.4.9'
    testCompile 'org.junit.jupiter:junit-jupiter-api:5.1.0'
}

// define a custom task to copy all dependencies in the runtime classpath
// into build/libs/libs
// uses built-in Copy task
task copyLibs(type: Copy) {
    from configurations.runtime
    into 'build/libs/libs'
}

// add it as a dependency of built-in task 'assemble'
copyLibs.dependsOn jar
assemble.dependsOn copyLibs

// default jar configuration
// set the main classpath
jar {
    archiveName = "${project.name}.jar"
    manifest {
        attributes(
            'Main-Class': "${mainClassName}",
            'Class-Path': configurations.runtime.files.collect { "libs/$it.name" }.join(' ')
        )
    }
}

With the build script set up, Chris’ team goes about building the application.  Helidon SE makes it pretty easy to build out a simple service.  To get started you really only need a few classes: a Main.groovy (notice that the Gradle script identifies the mainClassName with a path to Main.groovy) which creates the server, sets up routing, configures error handling and optionally sets up metrics for the server.  Here’s the entire Main.groovy:

final class Main {

    private Main() { }

    private static Routing createRouting() {
        MetricsSupport metricsSupport = MetricsSupport.create()
        MetricRegistry registry = RegistryFactory
                .getRegistryFactory()
                .get()
                .getRegistry(MetricRegistry.Type.APPLICATION)
        return Routing.builder()
                .register("/weather", new WeatherService())
                .register(metricsSupport)
                .error(NotFoundException.class, { req, res, ex ->
                    res.headers().contentType(MediaType.APPLICATION_JSON)
                    res.status(404).send(new JsonGenerator.Options().build().toJson(ex))
                })
                .error(Exception.class, { req, res, ex ->
                    ex.printStackTrace()
                    res.headers().contentType(MediaType.APPLICATION_JSON)
                    res.status(500).send(new JsonGenerator.Options().build().toJson(ex))
                })
                .build()
    }

    static void main(final String[] args) throws IOException {
        startServer()
    }

    protected static WebServer startServer() throws IOException {
        // load logging configuration
        LogManager.getLogManager().readConfiguration(
                Main.class.getResourceAsStream("/logging.properties"))

        // By default this will pick up application.yaml from the classpath
        Config config = Config.create()

        // Get webserver config from the "server" section of application.yaml
        ServerConfiguration serverConfig = ServerConfiguration.fromConfig(config.get("server"))

        WebServer server = WebServer.create(serverConfig, createRouting())

        // Start the server and print some info.
        server.start().thenAccept({ NettyWebServer ws ->
            println "Web server is running at http://${config.get("server").get("host").asString()}:${config.get("server").get("port").asString()}"
        })

        // Server threads are not daemon threads. No need to block. Just react.
        server.whenShutdown().thenRun({
            Unirest.shutdown()
            println "Web server has been shut down. Goodbye!"
        })

        return server
    }
}

Helidon SE uses a YAML file named application.yaml, located in src/main/resources, for configuration.  You can store server-related config as well as any application variables in this file.  Chris’ team puts a few variables related to the API in this file:

app:
  apiBaseUrl: "https://api.openweathermap.org/data/2.5"
  apiKey: "[redacted]"
server:
  port: 8080
  host: 0.0.0.0

Looking back at the Main class, notice where the endpoint “/weather” is registered and pointed at the WeatherService. That’s the class that’ll do all the heavy lifting when it comes to getting weather data.  Helidon SE services implement the Service interface.  This interface has an update() method that is used to establish sub-routes for the given service and point those sub-routes at private methods of the service class.  Here’s what Chris’ team came up with for the update() method:

void update(Routing.Rules rules) {
    rules
        .any(this::countAccess as Handler)
        .get("/current/city/{city}", this::getByLocation as Handler)
        .get("/current/id/{id}", this::getById as Handler)
        .get("/current/lat/{lat}/lon/{lon}", this::getByLatLon as Handler)
        .get("/current/zip/{zip}", this::getByZip as Handler)
}

Chris’ team creates 4 different routes under “/weather”, giving the consumer the ability to get the current weather in 4 separate ways (by city, id, lat/lon or zip code).  Note that since we’re using Groovy, we have to cast the method references as io.helidon.webserver.Handler or we’ll get an exception.  We’ll take a quick look at just one of those methods, getByZip():

private void getByZip(ServerRequest request, ServerResponse response) {
    def zip = request.path().param("zip")
    def weather = getWeather([(ZIP): zip])
    response.headers().contentType(MediaType.APPLICATION_JSON)
    response.send(weather.getBody().getObject().toString())
}

The getByZip() method grabs the zip parameter from the request and calls getWeather(), which uses a client library called Unirest to make an HTTP call to the chosen weather API and returns the current weather to getByZip() which sends the response to the browser as JSON:

private HttpResponse<JsonNode> getWeather(Map params) {
    return Unirest
        .get("${baseUrl}/weather?${params.collect { it }.join('&')}&appid=${apiKey}")
        .asJson()
}

As you can see, each service method gets passed two arguments when called by the router – the request and response (as you might have guessed if you’ve worked with a microservice framework before).  These arguments allow the developer to grab URL parameters, form data or headers from the request and set the status, body or headers into the response as necessary.  Once the team builds out the entire weather service they are ready to execute the Gradle run task to see everything working in the browser.

[Image: weather service response]

Cloudy in London?  A shocking weather development!

There’s obviously more to Helidon SE, but as you can see it doesn’t take a lot of code to get a basic microservice up and running. We’ll take a look at deploying the services in a later post, but Helidon makes that step trivial with baked-in support for generating Dockerfiles and Kubernetes config files.

Let’s switch gears now and look at Michiko’s team, which was tasked with building out a backend to return random quotes, since no personalized homepage would be complete without such a feature.  The Tokyo team prefers to code in Java, uses Maven to manage compilation and dependencies, and is quite familiar with the Microprofile family of APIs.  Michiko’s team also decided to use Helidon, but given their Microprofile expertise they chose Helidon MP over the more reactive, functional style of SE because it provides recognizable APIs like JAX-RS and CDI that they have been using for years.  Like Chris’ team, they rapidly scaffold out a skeleton application with the MP quickstart archetype and set out configuring their Main.java class.  The main method of that class calls startServer(), which is slightly different from the SE method but accomplishes the same task – starting up the application server using a config file (this one named microprofile-config.properties and located in /src/main/resources/META-INF):

protected static Server startServer() throws IOException {
    // load logging configuration
    LogManager.getLogManager().readConfiguration(
            Main.class.getResourceAsStream("/logging.properties"));

    // Server will automatically pick up configuration from
    // microprofile-config.properties
    Server server = Server.create();
    server.start();
    return server;
}

Next, they create a beans.xml file in /src/main/resources/META-INF so the CDI implementation can pick up their classes:

<?xml version="1.0" encoding="UTF-8"?>
<beans>
</beans>

Then they create the JAX-RS application, adding the resource class(es) as needed:

@ApplicationScoped
@ApplicationPath("/")
public class QuoteApplication extends Application {
    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> set = new HashSet<>();
        set.add(QuoteResource.class);
        return Collections.unmodifiableSet(set);
    }
}

And create the QuoteResource class:

@Path("/quote")
@RequestScoped
public class QuoteResource {

    private static String apiBaseUrl = null;

    @Inject
    public QuoteResource(@ConfigProperty(name = "app.api.baseUrl") final String apiBaseUrl) {
        if (this.apiBaseUrl == null) {
            this.apiBaseUrl = apiBaseUrl;
        }
    }

    @SuppressWarnings("checkstyle:designforextension")
    @Path("/random")
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String getRandomQuote() throws UnirestException {
        String url = apiBaseUrl + "/posts?filter[orderby]=rand&filter[posts_per_page]=1";
        HttpResponse<JsonNode> quote = Unirest.get(url).asJson();
        return quote.getBody().toString();
    }
}

Notice the use of constructor injection to get a configuration property and the simple annotations for the path, HTTP method and content type of the response. The getRandomQuote() method again uses Unirest to make a call to the quote API and return the result as a JSON string.  Running the mvn package task and executing the resulting JAR starts the application and results in the following:

[Image: quote service response]

Michiko’s team has successfully built the initial implementation of their quote microservice on a flexible foundation that will allow the service to grow with time as the user base expands and additional funding rolls in from the excited investors!  As with the SE version, Helidon MP generates a Dockerfile and Kubernetes app.yaml file to assist the team with deployment.  We’ll look at deployment in a later post in this series.

In this post, we talked about a fictitious startup getting into microservices for their heavily funded internet homepage application.  We looked at the Helidon microservice framework which provides a reactive, functional style version as well as a Microprofile version more suited to Java EE developers who are comfortable with JAX-RS and CDI.  Lydia’s teams are moving rapidly to get their backend architecture built out and are well on their way to implementing her vision for TechCorp.  In the next post, we’ll look at how Murielle and Dominic’s teams build out their services and in future posts we’ll see how all of the teams ultimately test and deploy the services into production.

Oracle Offline Persistence Toolkit — After Request Sync Listener

Thu, 2018-09-27 21:30

Originally published at andrejusb.blogspot.com 

In my previous post, we learned how to handle replay conflicts (Oracle Offline Persistence Toolkit — Reacting to Replay Conflict). Another important thing to know is how to handle the response from a request that was replayed during sync (we are talking here about PATCH). It is not as obvious as handling the response from a direct REST call in a callback (there is no callback for a response that is synchronized later). You may wonder why you would need to handle a response after a successful sync. Well, there could be multiple reasons; for instance, you may read the returned value and update the value stored on the client.

The listener is registered in the Persistence Manager configuration by adding an event listener of type syncRequest for the given endpoint:


This is the listener code. We get the response, read the change indicator value (it was updated on the backend, and the new value is returned in the response) and store it locally on the client. Additionally, we maintain an array mapping change indicator values to updated row IDs (in my next post I will explain why this is needed). The after-request listener must return a promise:
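The listener itself appears as an image in the original post; as a hedged sketch only, such a listener might look like the following (the event shape, the field names ChangeIndicator and EmployeeId, and the returned action object are assumptions based on the toolkit's syncRequest listener pattern, not the post's exact code):

```javascript
// After-request sync listener: read the new change indicator from the
// replayed response and remember it on the client.
const changeIndicatorMap = []; // maps change indicator value -> updated row id

function afterRequestListener(event) {
  // clone() so the response body can still be read elsewhere (assumption:
  // the event carries a fetch-style Response object)
  return event.response.clone().json().then(function (payload) {
    const newIndicator = payload.ChangeIndicator; // field name is an assumption
    changeIndicatorMap.push({ indicator: newIndicator, rowId: payload.EmployeeId });
    console.log('New change indicator: ' + newIndicator);
    return Promise.resolve({ action: 'continue' });
  });
}
```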


At runtime, when the request sync is executed, you should see a message printed in the log showing the new change indicator value:


Double-check the payload to make sure the request was submitted with the previous value:


Check the response; you will see the new value for the change indicator (the same as in the after-request listener):


Sample code can be downloaded from the GitHub repository.

Which Way to Go: Code One Presenters Help You Select Which Sessions to Attend

Thu, 2018-09-27 19:54

According to the Oracle Code One session catalog, there are 625 HOLs, BOFs, and other sessions for you to choose from when planning your itinerary over the 3 1/2 days of that event. Math was never my strong suit, but by my calculation that's roughly 15 sessions for every hour the doors are open. Unless you have developed the Dr. Manhattan-like ability to be in several places at once, you're going to have to make some tough decisions about which sessions to attend.

In the interest of helping you to make that choice we've put together a series of interviews with 30 of the most recognized and influential thought-leaders who are presenting at Code One. Each video provides you with deep technical background on the sessions these pros will present, and you'll even learn about the sessions they plan to attend.

The videos below are just a sample to get you started.

Chris Richardson

Eventuate founder and Java Champion Chris Richardson is the go-to pro for expertise on microservice architecture. In this interview he previews his session Developing Message-Driven Asynchronous Microservices, and talks about his upcoming book, Microservices Patterns (2018, Manning Publications).

Sebastian Daschner

No stranger to developer conferences, Java Champion/Developer Champion Sebastian Daschner will present four Code One technical sessions, and serve as a keynote panelist. What can you expect from these appearances? Who better to explain than Sebastian himself?

Trisha Gee

The ever-busy Trisha Gee, developer advocate at JetBrains and a Java Champion, will travel from her home in Spain all the way to San Francisco to present five sessions at Code One. Trisha takes you into the nuts and bolts of each session in this preview.

Josh Long

Java Champion Josh Long, Spring Developer Advocate at Pivotal, teams up with Trisha Gee at Code One to present Fully Reactive: Spring, Kotlin, JavaFX, and MongoDB Playing Together. Why should you add this session to your schedule? We'll let Josh explain.

 

While this is just a small sampling of the full schedule of Code One sessions, we hope these videos will provide some help in building out your itinerary. Of course, with the stellar roster of Code One presenters, any way you go will be OK. There are no bad choices.

  Additional Resources

Complete Code One Preview Playlist

Code One Featured Speakers

Code One Session Catalog

Code One Home Page

 

Generic Docker Container Image for running and live reloading a Node application based on a ...

Fri, 2018-09-21 21:30

Originally published at technology.amis.nl

My desire: find a way to run a Node application from a Git(Hub) repository using a generic Docker container, and be able to refresh the running container on the fly whenever the sources in the repo are updated. Building a container image for each application, and rebuilding it upon each change to that application, is too cumbersome and time-consuming for certain situations, including rapid development/test cycles and live demonstrations. I am looking for a convenient way to run a Node application anywhere I can run a Docker container, without having to build and push a container image, and to continuously update the running application in mere seconds rather than minutes. This article describes what I created to address that requirement.

The key ingredient in this story is nodemon, a tool that monitors the file system for changes in a Node.js application and automatically restarts the server when it detects them. What I had to put together:

  • a generic Docker container based on the official Node image, with npm and a Git client inside
  • nodemon, to monitor the application sources
  • a background Node application that can refresh the application from the Git repository, upon an explicit request, on a job schedule, or when triggered by a Git webhook
  • an environment variable GITHUB_URL holding the URL of the source Git repository for the Node application
  • a startup script that runs when the container is started for the first time (cloning from the Git repo specified through GITHUB_URL and running the application with nodemon) or restarted (just running the application with nodemon)

I have been struggling a little bit with the Docker syntax and operations (CMD vs RUN vs ENTRYPOINT) and with Linux bash shell scripting, and I am sure my result can be improved upon.
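For reference, the distinction that tripped me up boils down to a few lines. This is a generic illustration, not the Dockerfile used in this article:

```dockerfile
FROM node:8
# RUN executes at image build time; its effects are baked into the image.
# It is the right instruction for one-off setup such as installs or chmod.
RUN npm install -g nodemon
# ENTRYPOINT is the executable launched every time the container starts.
ENTRYPOINT ["node"]
# CMD supplies default arguments to ENTRYPOINT (or the default command when
# no ENTRYPOINT is set); if a Dockerfile contains several CMD instructions,
# only the last one takes effect.
CMD ["server.js"]
```

So any setup that must happen once, during the build, belongs in a RUN instruction rather than a CMD.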

The Dockerfile that builds the Docker container with all generic elements looks like this:

FROM node:8

# copy the Node Reload server - exposed at port 4500
COPY package.json /tmp
COPY server.js /tmp
RUN cd /tmp && npm install
EXPOSE 4500

RUN npm install -g nodemon

COPY startUpScript.sh /tmp
COPY gitRefresh.sh /tmp
RUN chmod +x /tmp/startUpScript.sh /tmp/gitRefresh.sh

ENTRYPOINT ["sh", "/tmp/startUpScript.sh"]

Feel free to pick any other Node base image from https://hub.docker.com/_/node/, for example node:10.

The startUpScript executed whenever the container starts up is shown below. It takes care of the initial cloning of the Node application from the Git(Hub) URL into the directory /tmp/app and of running that application with nodemon. Note the trick (inspired by a StackOverflow answer) to run a block of logic only the very first time the container is started.

#!/bin/sh
CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER"
if [ ! -e $CONTAINER_ALREADY_STARTED ]; then
    touch $CONTAINER_ALREADY_STARTED
    echo "-- First container startup --"
    # YOUR_JUST_ONCE_LOGIC_HERE
    cd /tmp
    # prepare the actual Node app from GitHub
    mkdir app
    git clone $GITHUB_URL app
    cd app
    # install dependencies for the Node app
    npm install
    # start both the reload app and (using nodemon) the actual Node app
    cd ..
    (echo "starting reload app") &
    (echo "start reload"; npm start; echo "reload app finished") &
    cd app; echo "starting nodemon for app cloned from $GITHUB_URL"; nodemon
else
    echo "-- Not first container startup --"
    cd /tmp
    (echo "starting reload app and nodemon") &
    (echo "start reload"; npm start; echo "reload app finished") &
    cd app; echo "starting nodemon for app cloned from $GITHUB_URL"; nodemon
fi

The startup script runs the live reloader application in the background, using (echo "start reload"; npm start) &. The final ampersand (&) takes care of running the command in the background. The npm start command runs the server.js file in /tmp. This server listens on port 4500 for requests. When a request is received at /reload, the application executes the gitRefresh.sh shell script, which performs a git pull in the /tmp/app directory into which the repository was cloned.

 

const RELOAD_PATH = '/reload'
const GITHUB_WEBHOOK_PATH = '/github/push'

var http = require('http');
var server = http.createServer(function (request, response) {
    console.log(`method ${request.method} and url ${request.url}`)
    if (request.method === 'GET' && request.url === RELOAD_PATH) {
        console.log(`reload request starting at ${new Date().toISOString()}...`);
        refreshAppFromGit();
        response.write(`RELOADED!!${new Date().toISOString()}`);
        response.end();
        console.log('reload request handled...');
    } else if (request.method === 'POST' && request.url === GITHUB_WEBHOOK_PATH) {
        let body = [];
        request.on('data', (chunk) => { body.push(chunk); })
            .on('end', () => {
                body = Buffer.concat(body).toString();
                // at this point, `body` has the entire request body stored in it as a string
                console.log(`GitHub WebHook event handling starting ${new Date().toISOString()}...`);
                // ... (see code in GitHub Repo https://github.com/lucasjellema/docker-node-run-live-reload/blob/master/server.js)
                console.log("This commit involves changes to the Node application, so let's perform a git pull ")
                refreshAppFromGit();
                response.write('handled');
                response.end();
                console.log(`GitHub WebHook event handling complete at ${new Date().toISOString()}`);
            });
    } else {
        // respond
        response.write('Reload is live at path ' + RELOAD_PATH);
        response.end();
    }
});
server.listen(4500);
console.log('Server running and listening at Port 4500');

var shell = require('shelljs');
var pwd = shell.pwd()
console.info(`current dir ${pwd}`)

function refreshAppFromGit() {
    if (shell.exec('./gitRefresh.sh').code !== 0) {
        shell.echo('Error: Git Pull failed');
        shell.exit(1);
    }
}
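The elided part of the webhook handler decides whether the pushed commits actually touch the Node application. A minimal sketch of that kind of check, assuming a parsed GitHub push-event payload; this is illustrative, not the actual code from the repository:

```javascript
// Hypothetical helper: inspect a parsed GitHub push-event payload and decide
// whether any commit touched files that warrant a reload. The ignoredPrefixes
// parameter is an assumption for illustration, not part of the real server.js.
function pushTouchesApp(payload, ignoredPrefixes = ['docs/', 'README']) {
  const commits = payload.commits || [];
  // Collect every file path added, modified, or removed across all commits.
  const files = commits.flatMap(c =>
    [...(c.added || []), ...(c.modified || []), ...(c.removed || [])]);
  // Reload only when at least one file falls outside the ignored prefixes.
  return files.some(f => !ignoredPrefixes.some(p => f.startsWith(p)));
}
```

A handler built this way would call refreshAppFromGit() only when pushTouchesApp(JSON.parse(body)) returns true, so documentation-only commits do not trigger a pull.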

Using the node-run-live-reload image
Now that you know a little about the inner workings of the image, let me show you how to use it (also see instructions here: https://github.com/lucasjellema/docker-node-run-live-reload).

To build the image yourself, clone the GitHub repo and run

docker build -t "node-run-live-reload:0.1" .

using, of course, your own image tag if you like. I have pushed the image to Docker Hub as lucasjellema/node-run-live-reload:0.1. You can use this image like this:

docker run --name express -p 3011:3000 -p 4505:4500 -e GITHUB_URL=https://github.com/shapeshed/express_example -d lucasjellema/node-run-live-reload:0.1

In a terminal window, we can follow the logging from within the container using

docker logs express --follow

After the application has been cloned from GitHub, npm has installed the dependencies and nodemon has started the application, we can access it at <host>:3011 (because of the port mapping in the docker run command):


When the application sources are updated in the GitHub repository, we can use a GET request (from curl or the browser) to <host>:4505/reload to refresh the container with the latest application definition:


The logging from the container indicates that a git pull was performed — and returned no new sources:


Because there are no changed files, nodemon will not restart the application in this case.

One requirement at the moment is that the Node application has, in its root directory, a package.json with a scripts.start entry; nodemon uses that entry as the instruction for how to run the application. The same package.json is used by npm install to install the required libraries for the Node application.
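For example, a minimal package.json satisfying this requirement could look as follows (the name, entry file, and versions are just placeholders):

```json
{
  "name": "express_example",
  "version": "1.0.0",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.16.0"
  }
}
```

With this file in place, nodemon effectively runs node app.js and restarts it whenever the monitored sources change, while npm install pulls in the listed dependencies.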

Summary

The next figure gives an overview of what this article has introduced. If you want to run a Node application whose sources are available in a GitHub repository, then all you need is a Docker host and these are your steps:

  1. Pull the Docker image: docker pull lucasjellema/node-run-live-reload:0.1 (this image currently contains the Node 8 runtime, npm, nodemon, a git client and the reloader application) 
    Alternatively: build and tag the container yourself.
  2. Run the container image, passing the GitHub URL of the repo containing the Node application; specify the required port mappings for the Node application and for the reloader (port 4500): docker run --name express -p 3011:3000 -p 4500:4500 -e GITHUB_URL=<GITHUB REPO URL> -d lucasjellema/node-run-live-reload:0.1
  3. When the container is started, it will clone the Node application from GitHub
  4. Using npm install, the dependencies for the application are installed
  5. Using nodemon the application is started (and the sources are monitored, so that the application is restarted upon changes)
  6. Now the application can be accessed at the host running the Docker container on the port as mapped per the docker run command
  7. With an HTTP request to the /reload endpoint, instruct the reloader application in the container to refresh the application
  8. The reloader performs a git pull of the sources from the GitHub repository and runs npm install to fetch any changed or added dependencies
  9. if any sources were changed, nodemon will now automatically restart the Node application
  10. the upgraded Node application can be accessed

Note: alternatively, a webhook trigger can be configured, making it possible to trigger the application reload automatically upon commits to the GitHub repo. Just as with a regular CD pipeline, this means running Node applications can be upgraded automatically.


Next Steps

Some next steps I am contemplating with this generic container image — and I welcome your pull requests — include:

  • allow an automated periodic application refresh to be configured through an environment variable on the container (and/or through a call to an endpoint on the reload application) instructing the reloader to do a git pull every X seconds.
  • use https://www.npmjs.com/package/simple-git instead of shelljs plus local Git client (this could allow usage of a lighter base image — e.g. node-slim instead of node)
  • force a restart of the Node application, even if it has not changed at all
  • allow for alternative application startup scenarios besides running the scripts.start entry in the package.json in the root of the application
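The first of those ideas, a configurable periodic refresh, could be sketched roughly like this. REFRESH_INTERVAL is a hypothetical environment variable and refreshAppFromGit is the reloader's existing function; none of this is implemented yet:

```javascript
// Hypothetical sketch of a configurable periodic refresh for the reloader.
// intervalSeconds would come from an assumed REFRESH_INTERVAL environment
// variable (in seconds); zero, negative, or unset disables the schedule.
function schedulePeriodicRefresh(refreshFn, intervalSeconds) {
  if (!intervalSeconds || intervalSeconds <= 0) return null;
  // Invoke refreshFn every intervalSeconds seconds.
  const timer = setInterval(refreshFn, intervalSeconds * 1000);
  // Do not let this timer alone keep the Node process alive.
  if (timer.unref) timer.unref();
  return timer;
}

// In server.js this could be wired up as:
// schedulePeriodicRefresh(refreshAppFromGit,
//     parseInt(process.env.REFRESH_INTERVAL || '0', 10));
```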
Resources

GitHub Repository with the resources for this article — including the Dockerfile to build the container: https://github.com/lucasjellema/docker-node-run-live-reload

My article on my previous attempt at creating a generic Docker container for running a Node application from GitHub: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/

Article and Documentation on nodemon: https://medium.com/lucjuggery/docker-in-development-with-nodemon-d500366e74df and https://github.com/remy/nodemon#nodemon

NPM module shelljs that allows shell commands to be executed from Node applications: https://www.npmjs.com/package/shelljs

Podcast: DevOps to NoOps: State of Play

Tue, 2018-09-18 23:00

What is the current state of play in DevOps? What forces are having the greatest impact on the evolution and adoption of DevOps? Is NoOps a valid prospect for the future? Those questions notwithstanding, one thing is certain: while everybody is talking about DevOps, getting from talk to action is proving to be a substantial hurdle for many organizations.

"What I see so far is lack of knowledge," says podcast panelist Davide Fiorentino. "People don't know the tools. Most of the time they don't know what they are talking about." In some cases the problem can be a lot like trying to turn a battleship.

As panelist Bert Jan Schrijver explains, "it's typically easier for smaller organizations to move to a definite way of working, and a bit harder for larger organizations," where the stakes can be high. "I typically try to find organization projects to work on where the IT department has no more than 50 to 60 people. Then there's a good opportunity to get the organization in the right mindset and to get everybody on deck."

But in Bert's experience, smaller doesn't always mean easier. "It can be easier to convince 1500 people who have the same mindset than 50 people who are basically against all that you're saying."

In that situation management support can be invaluable. "It's always been about having unconditional support in all levels of the organization, especially in management," Bert says. "Because when you're changing an organization you're always going to hit resistance. And if you're going to get resistance from somebody who's higher up in the tree than you, then you better have support from that person's manager."

"The key to working as a DevOps team is not being blocked by people or departments outside your team that you don't have influence on," Bert adds. "A true DevOps team is a cross-functional team which is a team that can do anything necessary to go from idea to working software in production."

"That's a very important point!" agrees Michael Hutterman. "I really appreciate the ops guys having strong experiences and skills about non-functional parts of the solution, and running and scaling out infrastructure."

Of course, there is a lot more to getting from DevOps talk to real transformation, and what you're reading here is only a fraction of the insight Davide, Bert, and Michael offer in this podcast. So strap on your headphones and dig in.

BTW: Each of these panelists has sessions on the schedule for Oracle Code One, Oct 22-25, 2018 in San Francisco, CA. If you haven't already done so, there's plenty of time to register for that event. You'll find information on those sessions below.

Special thanks to my Developer Community colleague Javed Mohammed for his help in organizing this program, and for co-hosting the discussion.

The Panelists

Davide Fiorentino
Principal DevOps Engineer, Cambridge Broadband Networks Limited (CBNL)
Consultant, Food and Agriculture Organization, United Nations

Twitter LinkedIn

Code One Session:

  • DevOps in Action [BOF5289]
    Monday, Oct 22, 7:30 p.m. - 8:15 p.m. | Moscone West - Room 2009
Michael Hutterman
Java Champion
Oracle Developer Champion
Independent DevOps Consultant

Twitter LinkedIn

Code One Session:
  • Continuous Delivery/DevOps: Live Cooking Show [DEV4762]
    Monday, Oct 22, 2:30 p.m. - 3:15 p.m. | Moscone West - Room 2010
Bert Jan Schrijver
Java Champion
Oracle Developer Champion
CTO, OpenValue
Software Craftsman, JPoint

Twitter LinkedIn

Code One Sessions:
  • Better Software, Faster: Principles of Continuous Delivery and DevOps [DEV5118]
    Monday, Oct 22, 4:00 p.m. - 4:45 p.m. | Moscone West - Room 2010
  • Angular for Java Developers [DEV4345]
    Wednesday, Oct 24, 10:30 a.m. - 11:15 a.m. | Moscone West - Room 2003
  • Microservices in Action at the Dutch National Police [DEV4344]
    Monday, Oct 22, 2:30 p.m. - 3:15 p.m. | Moscone West - Room 2007
Javed Mohammed
Podcast Co-Host
Systems Community Manager, Oracle

Twitter LinkedIn 


Additional Resources

Coming Soon

Talking about microservices is a useful thing. But at some point the talk has to stop and the real work has to begin. And that's when the real challenges appear. In this upcoming podcast a panel of experts discusses how to overcome the challenges inherent in designing microservices that will fulfill their potential.

Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

Pages