OTN TechBlog


Podcast: JET-Propelled JavaScript

Tue, 2019-02-19 23:00

JavaScript has been around since 1995. But a lot has changed in nearly a quarter-century. No longer limited to the browser, JavaScript has become a full-fledged programming language, finding increasing use in enterprise application development. In this program a panel of experts explores the evolution of JavaScript, discusses how it is used in modern development projects, and then takes a close look at the Oracle JavaScript Extension Toolkit, otherwise known as JET. Take a listen!

This program is Oracle Groundbreakers Podcast #363. It was recorded on Thursday, January 17, 2019.

The Panelists (listed alphabetically)

Joao Tiago Abreu
Software Engineer and Oracle JET Specialist, Crossjoin Solutions, Portugal

Andrejus Baranovskis
Oracle Groundbreaker Ambassador
Oracle ACE Director
CEO & Oracle Expert, Red Samurai Consulting

Luc Bors
Oracle Groundbreaker Ambassador
Oracle ACE Director
Partner & Technical Director, eProseed, Netherlands

John Brock
Senior Manager, Product Management, Development Tools, Oracle, Seattle, WA

Daniel Curtis
Oracle Front End Developer, Griffiths Waite, UK
Author of Practical Oracle JET: Developing Enterprise Applications in JavaScript (June 2019, Apress)

Additional Resources

Coming Soon
  • DevOps, Streaming, Liquid Software, and Observability. Featuring panelists Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov
  • Polyglot Programming and GraalVM. Featuring panelists Rodrigo Botafogo, Roberto Cortez, Dr. Chris Seaton, Oleg Selajev.
  • Serverless and the Fn Project. A discussion of where Serverless fits in the IT landscape. Panelists TBD.
Subscribe

Never miss an episode! Subscribe to the Oracle Groundbreakers Podcast.

Participate

If you have a topic suggestion for the Oracle Groundbreakers Podcast, or if you are interested in participating as a panelist, please post a comment. We'll get back to you right away.

Setting up OCI Compute and Storage for Builds on Oracle Developer Cloud

Mon, 2019-02-18 17:59

With the 19.1.3 release of Oracle Developer Cloud, we now support OCI-based build slaves for continuous integration and continuous deployment. You can now use OCI Compute for the Build VMs and OCI Storage for artifact storage. This blog will help you understand how to configure the OCI account for Compute and Storage in Oracle Developer Cloud.

How do you get to the OCI Account configuration screen in Developer Cloud?

If your user has Organization Administrator privileges, you will land on the Organization tab by default after you log in to your Developer Cloud instance. In the Organization screen, click the OCI Account tab.

Note: You will not be able to access this tab if you do not have Organization Administrator privileges.

 

Existing users of Developer Cloud will see their OCI Classic account configuration and will notice that, unlike the previous version, both the Compute and Storage configurations have now been consolidated into a single screen. Click the Edit button to configure the OCI account.

Click the OCI radio button to open the form for configuring an OCI account. This wizard will help you configure both compute and storage for OCI to be used on Developer Cloud.

 

 

Before we look at what each field in the wizard means and where to retrieve its value from the OCI console, let's understand what the message displayed at the top of the Configure OCI Account wizard (shown in the screenshot below) means:

 

It means that if you change from OCI Classic to an OCI account, the Build VMs that were created using Compute on OCI Classic will be migrated to OCI-based Build VMs. It also gives the count of the existing Build VMs created using OCI Classic compute that will be migrated. This change will also automatically migrate the build and Maven artifacts from Storage Classic to OCI Storage.

Prerequisite for the OCI Account configuration:

You should have access to the OCI account and you should also have a native OCI user with the Admin privilege created in the OCI instance.

Note: You will not be able to use an IDCS user, or the user with which you log in to the Oracle Cloud MyServices console, unless that user also exists as a native OCI user.

By native user, we mean a user (e.g., ociuser) that you can see in the Governance & Administration > Identity > Users tab of the OCI console, as shown in the screenshot below. If you don't have one, create a user by following this link.

OCI Account Configuration:

Below is a list of the values, an explanation of what each one is, and a screenshot of the OCI console showing where it can be found. You will need these values to configure the OCI account in Developer Cloud.

Tenancy OCID: This is the cloud tenancy identifier in OCI. Go to Governance and Administration > Administration > Tenancy Details in the OCI console. Under Tenancy Information, click the Copy link for the Tenancy OCID.

 

User OCID: The ID of the native OCI user. Go to Governance and Administration > Identity > Users in the OCI console. For the user of your choice, click the Copy link for the User OCID.

 

Home Region: In the OCI console, look at the top right corner and you should find the region for your tenancy, as highlighted in the screenshot below.

 

Private Key: The user has to generate a public and private key pair in PEM format. The public key has to be configured in the OCI console; use this link to understand how to create the key pair. Go to Governance and Administration > Identity > Users in the OCI console, select the user by clicking the username link, click the Add Public Key button, and configure the public key there. The private key needs to be pasted into the Private Key field of the Configure OCI Account wizard in Developer Cloud.
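If you use OpenSSL, generating the key pair generally looks like this sketch (the file paths are only examples):

openssl genrsa -out ~/.oci/oci_api_key.pem 2048
chmod go-rwx ~/.oci/oci_api_key.pem
openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem

The contents of oci_api_key_public.pem are what you paste into the Add Public Key dialog.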

 

Passphrase: If you provided a passphrase while generating the private key, enter it here; otherwise, leave this field empty.

Fingerprint: This is the fingerprint value of the OCI user whose OCID you copied earlier from the OCI console. Go to Governance and Administration > Identity > Users in the OCI console, select the user by clicking the username link, and, for the public key you created, copy the fingerprint value as shown in the screenshot below.
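You can also compute the fingerprint locally rather than copying it from the console; with OpenSSL (the key path below is a placeholder), the usual invocation is:

openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c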

 

Compartment OCID: You can select the root compartment, whose OCID is the same as the Tenancy OCID, but it is recommended that you create a separate compartment for the Developer Cloud Build VMs for better management. You can create a new compartment by going to Governance and Administration > Identity > Compartments in the OCI console and clicking the Create Compartment button; give it a Compartment Name and Description of your choice and select the root compartment as the Parent Compartment.

Click the link in the OCID column for the compartment that you created, and then click the Copy link to copy the compartment OCID (the compartment is named DevCSBuild in this example).

 

Storage Namespace: This is the storage namespace where the artifacts will be stored on OCI. Go to Governance and Administration > Administration > Tenancy Details in the OCI console. Under Object Storage Settings, copy the Storage Namespace name as shown in the screenshot below.
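If you already have the OCI CLI configured, several of these values can also be fetched from the command line instead of the console; for example (the tenancy OCID is a placeholder):

oci os ns get
oci iam user list --compartment-id ocid1.tenancy.oc1..example
oci iam compartment list --compartment-id ocid1.tenancy.oc1..example

The first command returns the storage namespace; the other two list the user and compartment OCIDs referenced above.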

 

After you have entered all the values, select the checkbox to accept the terms and conditions. Click the Validate button. If validation is successful, click the Save button to complete the OCI account configuration.

 

You will get a confirmation dialog for the account switch from OCI Classic to OCI. Select the checkbox and click the Confirm button. By doing this, you consent to migrating the VMs and the build and Maven artifacts to OCI compute and storage, respectively. This action will also remove the artifacts from Storage Classic.

On confirmation, you should see the OCI account configured with the provided details. You can edit it at any time by clicking the Edit button.

 

You can check for the Maven and build artifacts in the projects to confirm the migration.

 

To learn more about Oracle Developer Cloud, please refer to the documentation.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.

Oracle Cloud on a Roll: From One ‘Next Big Things’ Session to Another…

Sat, 2019-02-09 02:00

The Oracle Open World Showcase in London this January

We wrapped up an exciting Open World in London last month with a spotlight on all things Oracle Cloud. Hands-on Labs and demos took center stage to showcase the hottest use cases in apps and converged infrastructure (IaaS + PaaS). 

We ran a series of use cases spanning autonomous databases and analytics; platform solutions for SaaS, like a digital assistant (chatbot), app and data integration, and API gateways for any SaaS play across verticals; and cloud-native application development on OCI. Several customers joined us on stage during the keynote streams to share their experiences and demonstrate the richness of Oracle's offering.

Macty’s (an Oracle Scaleup partner) Move from AWS to the Oracle Cloud

Macty is one such customer; they transitioned from AWS to Oracle Cloud to build their fashion e-commerce platform, with a focus on AI/ML to power visual search. Navigating AWS was hard for Macty: expensive support, complex pricing choices, a lack of automated backups for select devices, and delays in reaching support staff were some of the reasons why Macty moved to Oracle Cloud Infrastructure.

Macty used Oracle's bare metal GPUs to train deep learning models. They used compartments to isolate customers and bill them correctly, and the DevCS platform (Terraform and Ansible) to update and check the environment from a deployment and configuration perspective.

Macty’s CEO @Susana Zoghbi presented the Macty success story with the VP of Oracle Cloud, Ashish Mohindroo. She demonstrated the power of the Macty chatbot (through Facebook Messenger) that was built on Oracle’s platform to enable e-commerce vendors to engage with their customers better. 

The other solutions that Macty brings with their AI/API powered platform are: a recommendation engine to complete the look in real time, find similar items, customize the fashion look, and get customer analytics to connect e-commerce with the in-store experience. Any of these features can be used by e-commerce stores to delight their customers and up their game against big retailers.

And now, Oracle Open World is Going to Dubai!

Ashish Mohindroo, VP of Oracle Cloud, will keynote the Next Big Things session again, this time at Oracle Open World in Dubai next week. He will be accompanied by Asser Smidt, Founder of BotSupply (an Oracle Scaleup partner). BotSupply assists companies with conversational bots, has an award-winning multi-lingual NLP, and is also a leader in conversational design.

While Ashish and Asser are going to explore conversational AI and design via bots powered by Oracle Cloud, Ashish is also going to elaborate on how Oracle Blockchain and Oracle IoT are becoming building blocks for extending modern applications in his 'Bringing Enterprises to Blockchain' session. He will be accompanied by Ghassan Sarsak from ICS Financial Services and Thrasos Thrasyvoulu from the Oracle Cloud Platform App Dev team.

Last but not least, Ashish will explain how companies can build compelling user interfaces with augmented reality (AR/VR) and show how content is at the core of this capability. Oracle Content Cloud makes it easy for customers to build these compelling experiences on any channel: mobile, web, and other devices. If you're in Dubai next week, swing by Open World to catch the action.

 

OSvC BUI Extension - How to create a library extension

Fri, 2019-02-08 13:45

Library is an extension type available as part of the BUI Extensibility Framework. If you are not familiar with the library concept, a library is a collection of non-volatile resources or implementations of behavior that can be invoked from other programs (in our case, across extensions that share the same resources or behaviors). For example, suppose your extension project requires common behavior such as a method for authentication, a global variable, or a method for trace/logging. In this case, a library is a useful approach because it wraps all the common methods in a single extension that can be invoked from the others, which keeps your project from inconsistently repeating methods across different extensions. The following benefits can be observed when this approach is used:

  1. centralized maintenance of core methods;
  2. reduced size of other extensions, which might improve content download time;
  3. standardized methods;
  4. and others...
 

Before we go further, let's see what this sample code delivers.

  • Library
    • myLibrary.js: This file includes a common implementation of behavior such as a method to trace, to return authentication credentials and to execute ROQL queries.
  • myGlobalHeaderMenu
    • init.html: This file initializes the required .js files. (If you have experience with require.js, you are probably wondering why we don't use it here. We can cover require.js in another post; the library concept is still needed regardless.)
    • js/myGlobalHeaderMenu.js: This file creates our user interface extension. We want to see a header menu with a thumbs-up icon, as we did before. Since this is sample code, we want something simple that triggers the methods implemented in the library so we can see it in action.
 

The global header menu invokes a trace log and a ROQL query function implemented as part of the library sample code. When the thumbs-up is clicked, a ROQL query statement ("select count(*) from accounts") is passed as a parameter to a function implemented in the library. The result is presented by another library behavior, which was defined to trace any customization. To turn the trace log function on, we've implemented a local storage item (localStorage.setItem('debugMyExtension',true);), as you can see in the animated GIF below.

 

It will make more sense in the next section of this post, where you can read the code with comments to understand the logic under the hood. For now, let's see what you should expect when this sample code is uploaded to your site.

 

Demo Sample Code

 

Here is the sample code to create a ‘Global Header Menu’ and a ‘Library.’ Please download it from the attachment and add each of the add-ins as an Agent Browser UI extension: select Console for the myGlobalHeaderMenu extension, with init.html as the init file. Lastly, upload myLibrary.js as a new extension (the extension name should be Utilities) and select Library as the extension type.

 

Library  

Here is the code implemented in myLibrary.js. Read the inline comments for a better understanding.

/* As mentioned in other posts, we want to keep the app name and app version consistent
   for each extension. Later, it will help us to better troubleshoot and read the logs
   provided by BUI Extension Log Viewer. */
var appName = "UtilityExtension";
var appVersion = "1.0";

/* We have created this function in order to troubleshoot our extensions. You don't want
   your extension tracing for all agents, so in this sample code we use local storage to
   check whether trace mode is on or off. To turn tracing on, with the console object
   open, set a local item as follows: localStorage.setItem('debugMyExtension', true); */
let myExtensions2log = function(e){
    if (localStorage.getItem('debugMyExtension') == 'true')
        window.console.log("[My Extension Log]: " + e);
}

/* Authentication is required to connect to Oracle Service Cloud APIs. This function
   returns the current session token and the REST API end-point; you don't want to have
   this information hard-coded. */
let myAuthentication = new Promise(function(resolve, reject){
    ORACLE_SERVICE_CLOUD.extension_loader.load(appName, appVersion).then(function(extensionProvider){
        extensionProvider.getGlobalContext().then(function(globalContext){
            _urlrest = globalContext.getInterfaceServiceUrl("REST");
            _accountId = globalContext.getAccountId();
            globalContext.getSessionToken().then(
                function(sessionToken){
                    resolve({'sessionToken': sessionToken, 'restEndPoint': _urlrest, 'accountId': _accountId});
                });
        });
    });
});

/* This function receives a ROQL statement and returns the result object. With this
   function, other extensions can send a ROQL statement and receive a JSON object as
   the result. */
let myROQLQuery = function(param){
    return new Promise(function(resolve, reject){
        var xhr = new XMLHttpRequest();
        myAuthentication.then(function(result){
            xhr.open("GET", result['restEndPoint'] + "/connect/latest/queryResults/?query=" + param, true);
            xhr.setRequestHeader("Authorization", "Session " + result['sessionToken']);
            xhr.setRequestHeader("OSvC-CREST-Application-Context", "UtilitiesExtension");
            xhr.onload = function(e) {
                if (xhr.readyState === 4) {
                    if (xhr.status === 200) {
                        var obj = JSON.parse(xhr.responseText);
                        resolve(obj);
                    } else {
                        reject('myROQLQuery from Utilities Library has failed');
                    }
                }
            }
            xhr.onerror = function (e) {
                console.error(xhr.statusText);
            };
            xhr.send();
        });
    });
}

myGlobalHeaderMenu

 

init.html

This is the init.html file. The important part here is to understand the src path. If you are not familiar with src paths, here is a quick explanation. Notice that each extension resides in a directory, and the idea is to work with directory paths.

 

/   = Root directory

.   = This location

..  = Up a directory

./  = Current directory

../ = Parent of current directory

../../ = Two directories backwards

 

In our case, it is ./../[Library Extension Name]/[Library File name]  -> "./../Utilities/myLibrary.js"

<!-- This HTML file was created to call the required files to run this extension -->
<!-- myLibrary is the first extension to be called. This file has the common resources that are needed to run the second .js file -->
<script src="./../Utilities/myLibrary.js"></script>
<!-- myGlobalHeaderMenu is the main extension, which will create the Global Header Menu and call myLibrary for dependent resources -->
<script src="./js/myGlobalHeaderMenu.js"></script>

 

js/myGlobalHeaderMenu.js

let myHeaderMenu = function(){
    ORACLE_SERVICE_CLOUD.extension_loader.load("GlobalHeaderMenuItem", "1.0").then(function (sdk) {
        sdk.registerUserInterfaceExtension(function (IUserInterfaceContext) {
            IUserInterfaceContext.getGlobalHeaderContext().then(function (IGlobalHeaderContext) {
                IGlobalHeaderContext.getMenu('').then(function (IGlobalHeaderMenu) {
                    var icon = IGlobalHeaderMenu.createIcon("font awesome");
                    icon.setIconClass("fas fa-thumbs-up");
                    IGlobalHeaderMenu.addIcon(icon);
                    IGlobalHeaderMenu.setHandler(function (IGlobalHeaderMenu) {
                        myROQLQuery("select count(*) from accounts").then(function(result){
                            result["items"].forEach(function(rows){
                                rows["rows"].forEach(function(value){
                                    myExtensions2log(value);
                                })
                            });
                        });
                    });
                    IGlobalHeaderMenu.render();
                });
            });
        });
    });
}

myHeaderMenu();

We hope that you find this post useful. We encourage you to try the sample code from this post and let us know what modifications you have made to enhance it. What other topics would you like to see next? Let us know in the comments below.

Oracle Functions: Serverless On Oracle Cloud - Developer's Guide To Getting Started (Quickly!)

Fri, 2019-02-08 13:44

Back in December, as part of our larger announcement about several cloud native services, we announced a new service offering called Oracle Functions. Oracle Functions can be thought of as Functions as a Service (FaaS), or hosted serverless that utilizes Docker containers for execution.  The offering is built upon the open source Fn Project, which itself isn't new, but the ability to quickly deploy your serverless functions and invoke them via Oracle's Cloud makes implementation much easier than it was previously.  This service is currently in Limited Availability (register here if you'd like to give it a try), but recently I have been digging into the offering and wanted to put together some resources to make things easier for developers looking to get started with serverless on Oracle Cloud. This post will go through the necessary steps to get your tenancy configured and to create, deploy, and invoke your first application and function with Oracle Functions.

Before getting started you'll need to configure your Oracle Cloud tenancy.  If you're in the Limited Availability trial, make sure your tenancy is subscribed to the Phoenix region because that's currently the only region where Oracle Functions is available.  To check and/or subscribe to this region, see the following GIF:

Before moving on, if you haven't yet installed the OCI CLI, do so now.  And if you haven't, what are you waiting for?  It's really helpful for doing pretty much anything with your tenancy without having to log in to the console.

The rest of the configuration is a multi-step process that can take some time, and since no one likes to waste time on configuration when they could be writing code and deploying functions, I've thrown together a shell script to perform all the necessary configuration steps for you and get your tenancy completely configured to use Oracle Functions. 

Before we get to the script, please do not simply run this script without reading it over and fully understanding what it does.  The script makes the following changes/additions to your cloud tenancy:

  1. Creates a dedicated compartment for FaaS
  2. Creates an IAM group for FaaS users
  3. Creates a FaaS user
  4. Creates a user auth token that can be later used for Docker login
  5. Adds the FaaS user to the FaaS group
  6. Creates a group IAM policy
  7. Creates a VCN
  8. Creates 3 Subnets within the VCN
  9. Creates an internet gateway for the VCN
  10. Updates the VCN route table to allow internet traffic to hit the internet gateway
  11. Updates the VCN default security list to allow traffic on port 80
  12. Prints a summary of all credentials that it creates
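To give a sense of what the script automates, its first few steps map roughly to OCI CLI calls like the following sketch (the names and OCIDs are placeholders, not values from the actual script):

oci iam compartment create --compartment-id ocid1.tenancy.oc1..example --name faas-demo --description "FaaS compartment"
oci iam group create --name faas-group --description "FaaS users"
oci iam user create --name faas-user --description "FaaS user"
oci iam auth-token create --user-id ocid1.user.oc1..example --description "Docker login token"
oci iam group add-user --group-id ocid1.group.oc1..example --user-id ocid1.user.oc1..example
oci iam policy create --compartment-id ocid1.compartment.oc1..example --name faas-policy --description "FaaS group policy" --statements '["Allow group faas-group to manage all-resources in compartment faas-demo"]'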

That's quite a lot of configuration that you'd normally have to manually perform via the console UI.  Using the OCI CLI via this script will get all that done for you in about 30 seconds.  Before I link to the script, let me reiterate, please read through the script and understand what it does.  You'll first need to modify (or at least verify) some environment variables on lines 1-20 that contain the names and values for the objects you are creating.

So with all the necessary warnings and disclaimers out of the way, here's the script.  Download it and make sure it's executable and then run it.  You'll probably see some failures when it attempts to create the VCN because compartment creation takes a bit of time before it's available for use with other objects.  That's expected and OK, which is why I've put in some auto-retry logic around that point.  Other than that, the script will configure your tenancy for Oracle Functions and you'll be ready to move on to the next step.  Here's an example of the output you might see after running the script:

Next, create a signing key.  I'll borrow from the quick start guide here:

If you'd rather skip heading to the console UI in the final step, you can use the OCI CLI to upload your key like so:

oci iam user api-key upload --user-id ocid1.user.oc1..[redacted]ra --key-file <path-to-key-file> --region <home-region>

Next, open your OCI CLI config file (~/.oci/config) in a text editor, paste the profile section that was generated in the script above and populate it with the values from your new signing key.  
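For reference, a profile section in ~/.oci/config generally has this shape (every value here is a placeholder):

[FAAS]
user=ocid1.user.oc1..example
fingerprint=12:34:56:78:9a:bc:de:f0:12:34:56:78:9a:bc:de:f0
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..example
region=us-phoenix-1
pass_phrase=only-if-your-key-has-one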

At this point we need to make sure you've got Docker installed locally.  I'm sure you do, but if not, head over to the Docker docs and install it for your particular platform.  Verify your installation with:

docker version

While we're here, let's login to Docker using the credentials we generated with the script above:

docker login phx.ocir.io

For username, copy the username from the script output (format <tenancy>/<username>) and the generated auth token will be used as your Docker login password.

Now let's get the Fn CLI installed. Jump to the Fn project on GitHub where you'll find platform specific instructions on how to do that. To be sure all's good, run:

fn version

To see all the available commands with the Fn CLI, refer to the command reference docs. Good idea to bookmark that one!

Cool, now we're ready to finalize your Fn config.  Again, I'll borrow from the Fn quick start for that step:

Log in to your development environment as a functions developer and:

Create the new Fn Project CLI context by entering:

fn create context <my-context> --provider oracle

Specify that the Fn Project CLI is to use the new context by entering:

fn use context <my-context>

Configure the new context with the OCID of the compartment you want to own deployed functions:

fn update context oracle.compartment-id <compartment-ocid>

Configure the new context with the api-url endpoint to use when calling the Fn Project API by entering:

fn update context api-url <api-endpoint>

For example:

fn update context api-url https://functions.us-phoenix-1.oraclecloud.com

Configure the new context with the address of the Docker registry and repository that you want to use with Oracle Functions by entering:

fn update context registry <region-code>.ocir.io/<tenancy-name>/<repo-name>

For example:

fn update context registry phx.ocir.io/acme-dev/acme-repo

Configure the new context with the name of the profile you've created for use with Oracle Functions by entering:

fn update context oracle.profile <profile-name>

And now we're ready to create an application.  In Oracle Functions, an application is a logical grouping of serverless functions that share a common context of config variables that are available to all functions within the application.  The quick start shows how you use the console UI to create an application, but let's stick to the command line here to keep things moving quickly.  To create an application, run the following:

fn create app faas-demo --annotation oracle.com/oci/subnetIds='["ocid1.subnet.oc1.phx.[redacted]ma"]'

You'll need to pass at least one of your newly created subnet IDs in the JSON array to this call above. For high availability, pass additional subnets. To see your app, run:

fn list apps

To create your first function, run the following:

fn init --runtime node faas-demo-func-1

Note, I've used NodeJS in this example, but the runtime support is pretty diverse. You can currently choose from go, java8, java9, java, node, python, python3.6, python3.7, ruby, and kotlin as your runtime. Once your function is generated, you'll see output similar to this:

Creating function at: /faas-demo-func-1
Function boilerplate generated.
func.yaml created.

Go ahead and navigate into the new directory and take a look at the generated files, specifically the func.yaml file, which is a metadata definition file used by Fn to describe your project, its triggers, etc. Leave the YAML file for now and open up func.js in a text editor. It ought to look something like so:

const fdk=require('@fnproject/fdk');
fdk.handle(function(input){
  let name = 'World';
  if (input.name) {
    name = input.name;
  }
  return {'message': 'Hello ' + name}
})

Just a simple Hello World, but your function can be as powerful as you need it to be. It can interact with a DB within the same subnet on Oracle Cloud, or utilize object storage, etc. Let's deploy this function and invoke it. To deploy, run this command from the root directory of the function (the place where the YAML file lives). You'll see some similar output:

fn deploy --app faas-demo

Deploying faas-demo-func-1 to app: faas-demo
Bumped to version 0.0.2
Building image phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.2 .
Parts: [phx.ocir.io [redacted] faas-repo faas-demo-func-1:0.0.2]
Pushing phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.2 to docker registry...
The push refers to repository [phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1]
1bf689553076: Pushed
9703c7ab5d87: Pushed
0adc398bfc34: Pushed
0b3e54ee2e85: Pushed
ad77849d4540: Pushed
5bef08742407: Pushed
0.0.2: digest: sha256:94d9590065a319a4bda68e7389b8bab2e8d2eba72bfcbc572baa7ab4bbd858ae size: 1571
Updating function faas-demo-func-1 using image phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.2...
Successfully created function: faas-demo-func-1 with phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.2

Fn has compiled our function into a Docker container, pushed the Docker container to the Oracle Docker registry, and at this point our function is ready to invoke. Do that with the following command (where the first argument is the application name and the second is the function name):

fn invoke faas-demo faas-demo-func-1

{"message":"Hello World"}

The first invocation will take a bit of time since Fn has to pull the Docker container and spin it up, but subsequent runs will be quick. This isn't the only way to invoke your function; you can also use HTTP endpoints via a signed request, but that's a topic for another blog post.

Now let's add some config vars to the application:

fn update app faas-demo --config defaultName=Person

As mentioned above, config is shared amongst all functions in an application. To access a config var from a function, grab it from the environment variables. Let's update our Node function to grab the config var, deploy it and invoke it:

const fdk=require('@fnproject/fdk');
fdk.handle(function(input){
  let name = process.env.defaultName || 'World';
  if (input.name) {
    name = input.name;
  }
  return {'message': 'Hello ' + name}
})

$ fn deploy --app faas-demo
Deploying faas-demo-func-1 to app: faas-demo
Bumped to version 0.0.3
Building image phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.3 .
Parts: [phx.ocir.io [redacted] faas-repo faas-demo-func-1:0.0.3]
Pushing phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.3 to docker registry...
The push refers to repository [phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1]
7762ea1ed77f: Pushed
1b0d385392d8: Pushed
0adc398bfc34: Layer already exists
0b3e54ee2e85: Layer already exists
ad77849d4540: Layer already exists
5bef08742407: Layer already exists
0.0.3: digest: sha256:c6537183b5b9a7bc2df8a0898fd18e5f73914be115984ea8e102474ccb4126da size: 1571
Updating function faas-demo-func-1 using image phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.3...

$ fn invoke faas-demo faas-demo-func-1
{"message":"Hello Person"}

So that's the basics on getting started quickly developing serverless functions with Oracle Functions. The Fn project has much more to offer and I encourage you to read more about it. If you're interested in taking a deeper look, make sure to sign up for access to the Limited Availability program.

Expertise in Your Ear: Top 10 Groundbreakers Podcasts for 2018

Wed, 2019-01-30 07:13

On March 27, 2009 the Arch2Arch Podcast released its first program. Over nearly ten years, more than 360 programs, and several name changes, what is now known as the Oracle Groundbreakers Podcast enters 2019 as the most downloaded Oracle podcast, a status it has maintained since early 2015. The credit for that accomplishment is shared by an incredible roster of panelists who have taken the time to lend their voices and share their insight and expertise, and by a faithful and growing audience that keeps listening year after year.

The list below reflects the top ten most-downloaded Groundbreakers Podcast programs for the past year. Big thanks and congratulations to the panelists for putting these programs on top! (Note that several people appeared in multiple programs.)

Also note that IT careers, like IT itself, are constantly evolving. The titles and jobs listed for the panelists are those they held at the time the particular podcast was recorded.

1 Chatbots: First Steps and Lessons Learned - Part 1

Released September 19, 2017

Chatbot development comes with a unique set of requirements and considerations that may prove challenging to those making their first excursion into this new breed of services. This podcast features a panel of developers who have been there, done that, and are willing to talk about it.

2 Blockchain: Beyond Bitcoin

Released December 20, 2017

Blockchain has emerged from under crypto-currency’s shadow to become a powerful trend in enterprise IT -- and something that should be on every developer's radar. This program assembles a panel of Blockchain experts to discuss the technology's impact, examine use cases, and offer suggestions for developers who want to learn more in order to take advantage of the opportunities blockchain represents.

  • Lonneke Dikmans (Oracle ACE Director, Groundbreaker Ambassador, Head of Center of Excellence, eProseed)
  • John King (Senior Principal Enablement Specialist, Oracle)
  • Robert van Mölken (Oracle ACE, Groundbreaker Ambassador, Senior Integration / Cloud Specialist, AMIS)
  • Arturo Viveros (Oracle ACE, Groundbreaker Ambassador, Principal Architect, Sysco AS)
3 DevOps in the Real World: Culture, Tools, Adoption

Released February 21, 2018

Is the heat behind DevOps driving adoption? Are organizations on the adoption path making headway in the cultural and technological changes necessary for DevOps success? A panel of DevOps experts discusses these and other issues in this wide-ranging conversation.

4 Jfokus Panel: Building a New World Out of Bits

Released January 17, 2018

In this freewheeling conversation a panel of JFokus 2018 speakers discusses the trends and technologies that have captured their interest, the work that consumes most of their time, and the issues that concern them as IT professionals.

  • Jesse Anderson (Data Engineer, Creative Engineer, Managing Director, Big Data Institute)
  • Benjamin Cabé (IoT Program Manager, Evangelist, Eclipse Foundation)
  • Kevlin Henney (Consultant, programmer, speaker, trainer, writer, owner, Curbralan)
  • Siren Hofvander (Chief Security Officer, Min Doktor)
  • Dan Bergh Johnsson (Agile aficionado, Domain Driven Design enthusiast, code quality craftsman, Omegapoint)
5 On Microservice Implementation and Design

Released October 17, 2018

Microservices are a hot topic. But that's exactly the wrong reason to dive into designing and implementing microservices. Before you do that, check out what this panel of experts has to say about what makes microservices a wise choice.

  • Sven Bernhardt (Oracle ACE; Solution Architect, OPITZ Consulting)
  • Lucas Jellema (Oracle ACE Director; Groundbreaker Ambassador; CTO, Consulting IT Architect, AMIS Services)
  • Chris Richardson (Java Champion, Founder, Eventuate, Inc.)
  • Luis Weir (Oracle ACE Director; Groundbreaker Ambassador; CTO, Oracle Practice, Capgemini)
6 Women in Technology: Motivation and Momentum

Released February 6, 2018

Community leaders share insight on what motivated them in their IT careers and how they lend their expertise and energy in driving momentum in the effort to draw more women into technology.

  • Natalie Delemar (Senior Consultant, Ernst and Young; President, ODTUG Board of Directors)
  • Heli Helskyaho (Oracle ACE Director; CEO, Miracle Finland; Ambassador, EMEA Oracle Usergroups Community)
  • Michelle Malcher (Oracle ACE Director; Security Architect, Extreme Scale Solutions)
  • Kellyn Pot'Vin-Gorman (Technical Intelligence Manager, Office of CTO, Delphix; President, Board Of Directors, Denver SQL Server User Group)
  • Laura Ramsey (Manager, Database Technology and Developer Communities, Oracle)
7 What's Hot? Tech Trends That Made a Real Difference in 2017

Released November 15, 2017

Forget the hype! Which technologies made a genuine difference in the work of software developers in 2017? We gathered five highly respected developers in a tiny hotel room in San Francisco, tossed in a couple of microphones, and let the conversation happen.

  • Lonneke Dikmans (Oracle ACE Director, Groundbreaker Ambassador, Chief Product Officer, eProseed)
  • Lucas Jellema (Oracle ACE Director, Groundbreaker Ambassador, Chief Technical Officer, AMIS Services)
  • Frank Munz (Oracle ACE Director, Software Architect, Cloud Evangelist, Munz & More)
  • Pratik Patel (Java Champion, Chief Technical Officer, Triplingo, President, Atlanta Java Users Group)
  • Chris Richardson (Java Champion, Founder, Chief Executive Officer, Eventuate Inc.)
8 Developer Evolution: What's Rockin’ Roles in IT?

Released August 15, 2018

Powerful forces are driving change in long-established IT roles. This podcast examines the trends and technologies behind this evolution, and looks at roles that may emerge in the future.

  • Rolando Carrasco (Oracle ACE, Groundbreaker Ambassador, Co-owner, Principal SOA Architect, S&P Solutions)
  • Martin Giffy D'Souza (Oracle ACE Director, Director of Innovation, Insum Solutions)
  • Mark Rittman (Oracle ACE Director, Chief Executive Officer, MJR Analytics)
  • Phil Wilkins (Oracle ACE, Senior Consultant, Capgemini)
9 Chatbots: First Steps and Lessons Learned - Part 2

Released October 18, 2017

This podcast continues the discussion of chatbot development with an entirely new panel of developers who also had the opportunity to work with that same Oracle Intelligent Bots beta release.

  • Mia Urman (Oracle ACE Director; Chief Executive Officer, AuraPlayer Limited)
  • Peter Crew (Director, SDS Group; Chief Technical Officer, MagiaCX Solutions)
  • Christoph Ruepprich (Oracle ACE; Infrastructure Senior Principal, Accenture Enkitec Group)
10 Beyond Chatbots: An AI Odyssey

Released April 18, 2018

This program looks beyond chatbots to explore artificial intelligence -- its current capabilities, staggering potential, and the challenges along the way.

 

Get Involved

Most Oracle Groundbreakers Podcast programs are the result of suggestions by community members. If there is a topic you'd like to have discussed on the program, post a comment here, or contact me (@ArchBeatDev / email). You can even serve as a panelist or a guest host/producer!

Subscribe

Never miss an episode! Subscribe to the Oracle Groundbreakers Podcast.

Code Merge as part of a Build Pipeline in Oracle Developer Cloud

Mon, 2019-01-28 12:11

This blog will help you understand how to use code merge as part of a build pipeline in Oracle Developer Cloud. You’ll use out-of-the-box build job functionality only. This information should also help you see how useful this feature can be for developers in their day-to-day development work.

Creating a New Git Repository

Click Project Home in the navigation bar. In the Project page, select a project to use (I chose DemoProject), and then click the + Create Repository button to create a new repository. I’ll use this repository for the code merge in this blog.

In the New Repository dialog box, enter a name for the repository. I used MyMergeRepo, but you can use whatever name you want. Then, select the Initialize repository with README file option and click the Create button.

Creating the New Branch

Click Git in the navigation bar. In the Refs view of the Git page, from the Repositories drop-down list, select MyMergeRepo.git. Click on the + New Branch button to create a new branch.

In the New Branch dialog, enter a unique branch name. I used change, but you can use any name you want. Select the appropriate Base branch from the drop-down list. For this repository, master branch is the only option we have. Click the Create button to create the new branch.

 

Creating the Build Job Configuration

In the navigation bar, click Builds. In the Jobs tab, click on the + Create Job button to create a new build job.

 

In the New Job dialog, enter a unique name for the job. I'll use MergeCode, but you can enter any name you want. Select the Use for Merge Request checkbox and the Create New option, and then select any software template from the drop-down list. You don't need a specific software bundle to execute a merge; the required software bundle, which by default is part of any software template you create, is sufficient. Finally, click the Create Job button.

Note: If you are new to creating Build VM templates and Build VMs, see Set Up the Build System.

 

When you create a build job with the Use for Merge Request checkbox selected, the merge request parameters are placed in the Repository and Branch fields of the Source Control tab. You can also select the Automatically perform build on SCM commit checkbox.

In the Build Parameters tab, you’ll notice that Merge Request parameters like GIT_REPO_URL, GIT_REPO_BRANCH, and MERGE_REQ_ID were added automatically. After reviewing it, click on the Save button.

 

Creating the Merge Request

In the navigation bar, click Merge Requests.  Then click on the + Create Merge Request button.

In the New Merge Request wizard, select the Git repository (MyMergeRepo), the target branch (master), and the review branch (change). You won’t see any commits because we haven’t done any yet. Click the Next button to advance to the second page.

On the Details page, select MergeCode for Linked Builds and select a reviewer. If you created an issue that needs to be linked to the merge request, link it with Linked Issues. Click the Next button to advance to the last page.

You can change the description for the merge request or just use the default one. Then click the Create button to create the merge request.

In the Linked Builds tab, you should see the MergeCode build job as the linked build job.

 

Changing a File and Committing the Change to the Git Repository

In the Git page, select the MyMergeRepo.git repository from the repository drop-down list and the change branch in the branches drop-down list. Then click the README.md file link.

Click the pencil icon to edit the file.

Add some text (any text will do), and then click the Commit button.

 

The code commit triggers the MergeCode build.

 

When a build of a linked job runs, a comment is automatically added to the Conversation tab. When the MergeCode build completes successfully, it auto-approves the merge request and adds itself to the Approve section of the Review Status list, waiting for an approval from the reviewer assigned to the merge request.

Once the reviewer approves the merge request, the review branch code is ready to be merged into the target branch. To merge the code, click the Merge button.

Note: For this merge request, Alex Admin, the user who is logged in, is the reviewer.

 

By including merge request parameters as part of a build job, you can be sure that every commit will be automatically validated to be conflict-free and approved. This comes in handy when multiple commits are linked to a merge request via a build job enabled for merge requests. The merge request will still wait for the assigned reviewer(s) to review the code, approve the changes, and then merge the code into the target branch.

This feature helps developers collaborate efficiently with their team members in their day-to-day development activities.

Happy Coding!

 **The views expressed in this post are my own and do not necessarily reflect the views of Oracle

New Features in Oracle Developer Cloud - January 2019

Sat, 2019-01-26 01:57

We are happy to announce the January update for Oracle Developer Cloud - your team collaboration and CI/CD platform in the Oracle Cloud. Here is a list of the key new features you can now leverage:

Oracle Cloud Infrastructure Compute based Build Servers

Customers can now leverage OCI-based compute and storage instances to run their build pipelines and store their CI/CD artifacts. New wizards allow you to configure OCI-based environments with ease.

OCI Account Configuration Screen

Project Level Password Variable

You can now configure project-level variables that store passwords. You then refer to these password variables in your build scripts, build steps, etc. When a password changes in your system, you'll only need to update it in a single place in Developer Cloud Service, and the new password will be used everywhere it is referenced.

Draft Mode for Wiki Pages

We added support for a draft status for wiki pages, allowing you to create and edit pages before you publish them to the public. As you edit a wiki page, DevCS auto-saves the content, so even if you leave the page without publishing it, you can return to the draft later to complete it and publish the content.

 

New Organization Page

Organization administrators have an updated Organization page that provides easy access to all the admin tasks, making it simpler to configure the environment for your organization, including managing build servers, build templates, project stats, and more.

There's more

These are just some of the highlights in this new version. Make sure to read about the rest in the "What's New" section of the documentation. Also check out the new tutorials and docs to help you leverage the new features. If you run into any questions, you can ask them on our new public Slack channel or on our Oracle Cloud Customer Connect forum.

Bite-Sized Chunks of Expertise: The Top Ten 2 Minute Tech Tips for 2018

Tue, 2019-01-22 23:00

The fans can't be wrong, and the numbers don't lie. 

The list of the Top Ten 2 Minute Tech Tips for 2018 includes insight and expertise covering containers, bots, REST APIs, database performance, and more, as contributed by experts from across the community. 

What's a 2 Minute Tech Tip? As the title implies, a 2 Minute Tech Tip is a short video that presents useful technical information for developers, architects, DBAs, and other roles. 2MTTs cover a wide range of topics, and the presentation can be anything from a simple talking head to a full-on presentation with slides and demos. The manner of presentation and topics are chosen entirely by the people delivering the tips. The only rule is that the tip can't take more than two minutes.

I record some 2MTTs at events, as I did with Sergio Leunissen's #1 video, which was recorded at Oracle Open World. But most are recorded by the tipsters themselves at home, on webcams or mobile devices, as is the case with the #2 tip from Kevlin Henney, which was recorded on the streets of London using a selfie stick.

The list below represents the most-watched 2 Minute Tech Tips for 2018. Congratulations to the year's top tipsters!

1. Zero to Docker Sandbox in 2 Minutes
By Sergio Leunissen
Vice President, Open Source and Virtualization Group, Oracle

2. Concurrency Versus Locking
By Kevlin Henney
Independent Consultant, Trainer, Writer

3. Apache Kafka? Why Should I Care?
By Guido Schmutz
Oracle Groundbreaker Ambassador
Oracle ACE Director
Principal Consultant / Technology Manager, Trivadis

4. What's Wrong with your APEX Application?
By Peter Raganitsch
Oracle ACE Director
CEO, FOEX GmbH

5. Setting Up Basic Intents with Oracle Intelligent Bots
By Matt Hornung
Software Consultant, Fishbowl Solutions

6. Database Performance Tip
By Mohamed Taman
Java Champion, Oracle Groundbreaker Ambassador
Sr. Enterprise Architect, Comtrade Digital Services
Owner / CEO, SiriusXI d.o.o.

7. Java EE 8 Quick Start
By Adam Bien
Java Champion
Independent Java Architect and Developer
Freelance Author

8. Securing an Oracle Data Services REST API
By Blaine Carter
Developer Advocate for Open Source, Oracle

9. Oracle Database: Working with External Tables
By Chris Saxon
Developer Advocate for SQL, Oracle

10. When Should You Use SQL Bind Variables?
By Mark Williams
Principal Consultant, Method R Advocate, Cintra Software and Services

Where is your 2 Minute Tech Tip?

Think you've got an idea for a great tip? Go for it! Anyone is eligible to submit a 2 Minute Tech Tip. All you need is an idea and a camera. These guidelines will help:

  • 1280x720 minimum resolution (Most cell phone cameras record video in this resolution)
  • MP4 or MOV format.
  • You need to record two segments:
    • Intro: ("Hi, My name is Jane Smith, and I'm a developer with XYZ Corp.") The intro does not count against your 2 minutes.
    • Tip (Must not exceed 120 seconds)
    • Intro and tip segments can be in separate video files, or can be combined in a single file.
  • You don't have to fill the entire 2 minutes, but you can't go over.
  • I have Google Drive and Dropbox accounts to facilitate file transfer.
  • If you absolutely can't be bothered to record yourself, I can record you remotely via Skype. 

Watch the videos above to get a sense of what works. Maybe your tip will make the 2019 Top Ten list.

If you have any questions about the process, post a comment, and include your email address. 

Podcast: Database Golden Rules: When (and Why) to Break Them

Tue, 2019-01-15 23:00

American inventor Thomas Edison once said, “Hell, there are no rules here. We're trying to accomplish something.”

What we hope to accomplish with this episode of the Groundbreaker Podcast is an exploration of the idea that the evolution in today’s architectures makes it advantageous, perhaps even necessary, to challenge some long-established concepts that have achieved “golden rule” status as they apply to the use of databases.

Does ACID (Atomicity, Consistency, Isolation, and Durability) still carry as much weight? In today’s environments, how much do rigorous data integrity enforcement, data normalization, and data freshness matter? This program explores those and other questions.

Bringing their insight and expertise to this discussion are three recognized IT professionals who regularly wrestle with balancing the rules with innovation. If you’ve struggled with that same balancing act, you’ll want to listen to this program.

The Panelists (listed alphabetically)

Heli Helskyaho
CEO, Miracle Finland Oy, Finland
Oracle ACE Director

Lucas Jellema
CTO, Consulting IT Architect, AMIS Services, Netherlands
Oracle ACE Director
Oracle Groundbreaker Ambassador

Guido Schmutz
Principal Consultant, Technology Manager, Trivadis, Switzerland
Oracle ACE Director
Oracle Groundbreaker Ambassador

 

Additional Resources

Coming Soon
  • Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov discuss DevOps, streaming, liquid software, and observability in this podcast captured during Oracle Code One 2018
  • What's Up with Serverless? A panel discussion of where Serverless fits in the IT landscape.
  • JavaScript Development and Oracle JET. A discussion of the evolution of JavaScript development and the latest features in Oracle JET.
Subscribe

Never miss an episode! Subscribe to the Oracle Groundbreakers Podcast.

Participate

If you have a topic suggestion for the Oracle Groundbreakers Podcast, or if you are interested in participating as a panelist, please let me know by posting a comment. I'll get back to you right away.

Automated Generation For OCI IAM Policies

Thu, 2019-01-10 10:58

As a cloud developer evangelist here at Oracle, I often find myself playing around with one or more of our services or offerings on the Oracle Cloud.  This of course means I end up working quite a bit in the Identity and Access Management (IAM) section of the OCI console.  It's a pretty straightforward concept, and likely familiar if you've worked with any other cloud provider.  I won't give a full overview here about IAM as it's been covered plenty already and the documentation is concise and easy to understand.  But one task that always ends up taking me a bit longer to accomplish than I'd like it to is IAM policy generation.  The policy syntax in OCI is as follows:

Allow <subject> to <verb> <resource-type> in <location> where <conditions>

Which seems pretty easy to follow - and it is.  The issue I often have, though, is remembering the values to plug in for the variable sections of the policy.  Trying to remember the exact group name, the available verbs and resource types, and the exact compartment name that I want the policy to apply to is troublesome, and it usually ends with me opening two or three tabs to look up exact spellings and case, then flipping over to the docs to get the verb and resource type just right.  So I decided to do something to make my life a little easier when it comes to policy generation, and figured I'd share it with others in case I'm not the only one who struggles with this.
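To make the syntax concrete, here's a hypothetical statement (the group and compartment names are made up for illustration) that grants a Developers group full control of Object Storage resources in a ProjectA compartment; the where clause is optional and can simply be omitted:

Allow group Developers to manage object-family in compartment ProjectA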

So, born out of my frustration and laziness, I present a simple project to help you generate IAM policies for OCI.  The tool is intended to be run from the command line and prompts you to make selections for each variable.  It gives you choices of available options based on actual values from your OCI account.  For example, if you choose to create a policy targeting a specific group, the tool gives you a list of your groups to choose from.  Same with verbs and resource types - the tool has a list of them built in and lets you choose which ones you are targeting instead of referring to the IAM policy documentation each time.  Here's a video demo of the tool in action:

The code itself isn't a masterpiece - there are hardcoded values for verbs and resource types because those aren't exposed via the OCI CLI or SDK in any way.  But it works, and it makes policy generation a bit less painful.  The code behind the tool is located on GitHub, so feel free to submit a pull request to keep the tool up to date or enhance it in any way.  It's written in Groovy and can be run as a Groovy script, or via java -jar.  If you'd rather just get your hands on the binary and try it out, grab the latest release and give it a shot.

The tool uses the OCI CLI behind the scenes to query the OCI API as necessary.  You'll need to make sure the OCI CLI is installed and configured on your machine before you generate a policy.  I decided to use the CLI as opposed to the SDK in order to minimize external dependencies and keep the project as light as possible while still providing value.  Besides, the OCI CLI is pretty awesome and if you work with the Oracle Cloud you should definitely have it installed and be familiar with it.

Please check out the tool and as always, feel free to comment below if you have any questions or feedback.

Controlling Your Cloud - A Look At The Oracle Cloud Infrastructure Java SDK

Wed, 2019-01-02 17:09

A few weeks ago our cloud evangelism team got the opportunity to spend some time on site with some amazing developers from one of Oracle's clients in Santa Clara, CA for a 3-day cloud hackfest.  During the event, one of the developers mentioned that a challenge his team faced was handling file uploads for potentially extremely large files.  I've faced this problem before as a developer and it's certainly challenging.  The web just wasn't really built for large file transfers (though, things have gotten much better in the past few years as we'll discuss later on).  We didn't end up with an opportunity to fully address the issue during the hackfest, but I promised the developer that I would follow-up with a solution after digging deeper into the Oracle Cloud Infrastructure APIs once I got back home.  So yesterday I got down to digging into the process and engineered a pretty solid demo for that developer on how to achieve large file uploads to OCI Object Storage, but before I show that solution I wanted to give a basic introduction to working with your Oracle Cloud via the available SDK so that things are easier to follow once we get into some more advanced interactions. 

Oracle offers several other SDKs (Python, Ruby and Go), but since I typically write my code in Groovy I went with the Java SDK.  Oracle provides a full REST API for working with your cloud, but the SDK provides a nice native solution and abstracts away some of the painful bits of signing your request and making the HTTP calls into a nice package that can be bundled within your application. The Java SDK supports the following OCI services:

  • Audit
  • Container Engine for Kubernetes
  • Core Services (Networking, Compute, Block Volume)
  • Database
  • DNS
  • Email Delivery
  • File Storage
  • IAM
  • Load Balancing
  • Object Storage
  • Search
  • Key Management

Let's take a look at the Java SDK in action, specifically how it can be used to interact with the Object Storage service.  The SDK is open source and available on GitHub.  I created a very simple web app for this demo.  Unfortunately, the SDK is not yet available via Maven (see here), so step one was to download the SDK and include it as a dependency in my application.  I use Gradle, so I dropped the JARs into a "libs" directory in the root of my app and declared a dependencies block to make sure that Gradle picked up the local JARs (the key being the "implementation" dependency on the local file tree):
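Here's a minimal sketch of that dependencies block, under the assumption that the SDK JARs (and their bundled third-party dependencies) sit in "libs":

    // build.gradle - pull in every JAR under libs/ as an implementation dependency
    dependencies {
        implementation fileTree(dir: 'libs', include: ['*.jar'])
    }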

The next step is to create some system properties that we'll need for authentication and some of our service calls.  To do this, you'll need to set up some config files locally and generate some key pairs, which can be mildly annoying at first; but once you're set up, you're good to go in the future, and you get the added bonus of being ready for the OCI CLI if you want to use it later on.  Once I had the config file and keys generated, I put my properties into a file in the app root called 'gradle.properties'.  Using this properties file and the key naming convention shown below, Gradle makes the variables available within your build script as system properties.
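Here's a sketch of what that file might look like. The "systemProp." prefix is Gradle's convention for exposing an entry as a build-script system property; the property names and values below are placeholders, not the post's exact keys:

    # gradle.properties
    systemProp.ociTenantId=ocid1.tenancy.oc1..aaaa...
    systemProp.ociUserId=ocid1.user.oc1..aaaa...
    systemProp.ociFingerprint=aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99
    systemProp.ociPrivateKeyPath=/home/me/.oci/oci_api_key.pem
    systemProp.ociCompartmentId=ocid1.compartment.oc1..aaaa...
    systemProp.ociRegion=us-phoenix-1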

Note that having the variables as system properties in your build script does not make them available within your application; to do that, you can simply pass them along via your 'run' task:
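Something along these lines, assuming the placeholder property names from the sketch above and the Gradle 'application' plugin:

    // build.gradle - forward the OCI system properties to the application's JVM
    run {
        ['ociTenantId', 'ociUserId', 'ociFingerprint',
         'ociPrivateKeyPath', 'ociCompartmentId', 'ociRegion'].each { name ->
            systemProperty name, System.getProperty(name)
        }
    }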

Next, I created a class to manage the provider and service clients.  This class only has a single client right now, but adding additional clients for other services in the future would be trivial.
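A minimal Groovy sketch of such a class. The class name matches the OciClientManager referenced below, but the property names are the placeholders from the gradle.properties sketch, not the post's exact code:

    import com.oracle.bmc.auth.SimpleAuthenticationDetailsProvider
    import com.oracle.bmc.auth.SimplePrivateKeySupplier
    import com.oracle.bmc.objectstorage.ObjectStorageClient

    class OciClientManager {
        ObjectStorageClient objectStorageClient

        OciClientManager() {
            // build an auth provider from the system properties passed in by Gradle
            def provider = SimpleAuthenticationDetailsProvider.builder()
                    .tenantId(System.getProperty('ociTenantId'))
                    .userId(System.getProperty('ociUserId'))
                    .fingerprint(System.getProperty('ociFingerprint'))
                    .privateKeySupplier(new SimplePrivateKeySupplier(
                            System.getProperty('ociPrivateKeyPath')))
                    .build()
            objectStorageClient = new ObjectStorageClient(provider)
            objectStorageClient.setRegion(System.getProperty('ociRegion'))
        }
    }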

I then created an 'ObjectService' for working with the Object Storage API.  The constructor accepts an instance of the OciClientManager that we looked at above, and sets some class variables for some things that are common to many of the SDK methods (namespace, bucket name, compartment ID, etc):
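Sketched in Groovy, with the bucket name as a placeholder:

    import com.oracle.bmc.objectstorage.ObjectStorageClient
    import com.oracle.bmc.objectstorage.requests.GetNamespaceRequest

    class ObjectService {
        ObjectStorageClient client
        String namespace
        String bucketName = 'my-demo-bucket'   // placeholder
        String compartmentId = System.getProperty('ociCompartmentId')

        ObjectService(OciClientManager manager) {
            client = manager.objectStorageClient
            // the Object Storage namespace is needed by nearly every call
            namespace = client.getNamespace(GetNamespaceRequest.builder().build()).value
        }
    }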

At this point, we're ready to interact with the SDK.  As a developer, it definitely feels like an intuitive API, following the standard request/response model that other cloud providers use in their APIs as well.  I found myself often simply guessing what the next method or property might be called, and often being right (or close enough for intellisense to guide me to the right place).  That's pretty much my benchmark for a great API: if it's intuitive and doesn't get in my way with bloated authentication schemes and such, then I'm going to love working with it.  Don't get me wrong, strong authentication and security are assuredly important, but the purpose of an SDK is to hide that complexity and expose a straightforward way to use the API.  All that said, let's look at using the Object Storage client.

We'll go rapid fire here and show how to use the client to do each of the following actions:

  1. List Buckets
  2. Get A Bucket
  3. List Objects In A Bucket
  4. Get An Object

List Buckets:
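A minimal Groovy sketch of the call, assuming it lives in the ObjectService above (client, namespace, and compartmentId are the fields set in its constructor):

    // add to ObjectService - the import goes at the top of the file
    import com.oracle.bmc.objectstorage.requests.ListBucketsRequest

    // returns a list of BucketSummary objects for the compartment
    def listBuckets() {
        client.listBuckets(ListBucketsRequest.builder()
                .namespaceName(namespace)
                .compartmentId(compartmentId)
                .build()).items
    }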


Get Bucket:
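Again as a sketch on the ObjectService:

    import com.oracle.bmc.objectstorage.requests.GetBucketRequest

    // fetches the full Bucket details for a single bucket
    def getBucket(String name) {
        client.getBucket(GetBucketRequest.builder()
                .namespaceName(namespace)
                .bucketName(name)
                .build()).bucket
    }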

List Objects:
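A sketch, using the bucketName field:

    import com.oracle.bmc.objectstorage.requests.ListObjectsRequest

    // returns the ObjectSummary list for everything in the bucket
    def listObjects() {
        client.listObjects(ListObjectsRequest.builder()
                .namespaceName(namespace)
                .bucketName(bucketName)
                .build()).listObjects.objects
    }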

Get Object:
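And a final sketch:

    import com.oracle.bmc.objectstorage.requests.GetObjectRequest

    // the returned response exposes an InputStream with the object's bytes
    def getObject(String objectName) {
        client.getObject(GetObjectRequest.builder()
                .namespaceName(namespace)
                .bucketName(bucketName)
                .objectName(objectName)
                .build())
    }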

The 'Get Object' response also exposes an InputStream containing the object, which can be written to a file.

As you can see, the Object Storage API is predictable and consistent.  In another post, we'll finally tackle the more complex issue of handling large file uploads via the SDK.

Controlling Your Cloud - Uploading Large Files To Oracle Object Storage

Wed, 2019-01-02 16:42

In my last post, we took an introductory look at working with the Oracle Cloud Infrastructure (OCI) API with the OCI Java SDK.  I mentioned that my initial motivation for digging into the SDK was to handle large file uploads to OCI Object Storage, and in this post, we'll do just that.  

As I mentioned, HTTP (Hypertext Transfer Protocol) wasn't originally meant to handle large file transfers.  Rather, file transfers were typically (and often still are) handled via FTP (File Transfer Protocol).  But web developers deal with globally distributed clients, and FTP requires server setup, custom desktop clients, different firewall rules, and separate authentication, which ultimately means large files end up getting transferred over HTTP/S.  BitTorrent can be a better solution if the circumstances allow, but distributed file sharing isn't usually the scenario web developers are dealing with.  Thankfully, advances in HTTP over the past several years have made large file transfer much easier to deal with, the main one being support for uploading a file in pieces (known as "chunked" or "multipart" upload).  You can read more about Oracle's support for multipart uploading, but to explain it in the simplest possible way: a file is broken up into several pieces ("chunks"), the pieces are uploaded (in parallel, if desired), and the file is reassembled from those pieces once all of them have been uploaded.

The process of using the Java SDK for multipart uploading involves, at a minimum, three steps.  Here are the JavaDocs for the SDK in case you're playing along at home and want more info.

  1. Initiate the multipart upload
  2. Upload the individual file parts
  3. Commit the upload

The SDK provides methods for all of the steps above, as well as a few additional operations such as listing existing multipart uploads.  Individual parts can be up to 50 GiB.  Using the ObjectClient (see the previous post), the three steps above work as follows:

1.  Call ObjectClient.createMultipartUpload(), passing an instance of CreateMultipartUploadRequest (which contains an instance of CreateMultipartUploadDetails)

To break down step 1, you're just telling the API: "Hey, I want to upload a file.  The object name is 'foo.jpg' and its content type is 'image/jpeg'.  Can you give me an identifier so I can associate different pieces of that file later on?"  And the API returns that identifier to you in the form of a CreateMultipartUploadResponse.  Here's the code:
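A Groovy sketch of that call, assuming the client, namespace, and bucketName fields from the ObjectService in the previous post:

    import com.oracle.bmc.objectstorage.model.CreateMultipartUploadDetails
    import com.oracle.bmc.objectstorage.requests.CreateMultipartUploadRequest

    // returns the uploadId that ties the individual parts together later on
    String createUpload(String objectName, String contentType) {
        def details = CreateMultipartUploadDetails.builder()
                .object(objectName)
                .contentType(contentType)
                .build()
        def response = client.createMultipartUpload(CreateMultipartUploadRequest.builder()
                .namespaceName(namespace)
                .bucketName(bucketName)
                .createMultipartUploadDetails(details)
                .build())
        response.multipartUpload.uploadId
    }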

So to create the upload, I make a call to /oci/upload-create, passing the objectName and contentType params.  I invoked it via Postman, but this could just as easily be a fetch() call in the browser.

So now we've got an upload identifier for further work (the "uploadId" value in the response).  On to step 2 of the process:

2.  Call ObjectClient.uploadPart(), passing an instance of UploadPartRequest (including the uploadId, the objectName, a sequential part number, and the file chunk), and receive an UploadPartResponse.  The response will contain an "ETag" which we'll need to save, along with the part number, to complete the upload later on.

Here's what the code looks like for step 2:
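A sketch under the same assumptions as step 1 (the chunking itself happens on the caller's side):

    import com.oracle.bmc.objectstorage.requests.UploadPartRequest

    // uploads one chunk; partNumber starts at 1 - returns the ETag to save for the commit
    String uploadPart(String objectName, String uploadId, int partNumber, byte[] chunk) {
        def response = client.uploadPart(UploadPartRequest.builder()
                .namespaceName(namespace)
                .bucketName(bucketName)
                .objectName(objectName)
                .uploadId(uploadId)
                .uploadPartNum(partNumber)
                .contentLength(chunk.length as Long)
                .uploadPartBody(new ByteArrayInputStream(chunk))
                .build())
        response.getETag()
    }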

I then invoked step 2 via Postman, once for each part of the file that I chose to upload, saving the ETag values along with each part number for use in the completion step.

Finally, step 3 is to complete the upload.

3.  Call ObjectClient.commitMultipartUpload(), passing an instance of CommitMultipartUploadRequest (which contains the object name, the uploadId, and an instance of CommitMultipartUploadDetails - itself containing an array of CommitMultipartUploadPartDetails).

Sounds a bit complicated, but it's really not.  The code tells the story here:
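A sketch of the commit, where partEtags is a map of each part number to the ETag saved in step 2:

    import com.oracle.bmc.objectstorage.model.CommitMultipartUploadDetails
    import com.oracle.bmc.objectstorage.model.CommitMultipartUploadPartDetails
    import com.oracle.bmc.objectstorage.requests.CommitMultipartUploadRequest

    def commitUpload(String objectName, String uploadId, Map<Integer, String> partEtags) {
        // pair each part number with the ETag returned when that part was uploaded
        def parts = partEtags.collect { partNum, etag ->
            CommitMultipartUploadPartDetails.builder()
                    .partNum(partNum)
                    .etag(etag)
                    .build()
        }
        client.commitMultipartUpload(CommitMultipartUploadRequest.builder()
                .namespaceName(namespace)
                .bucketName(bucketName)
                .objectName(objectName)
                .uploadId(uploadId)
                .commitMultipartUploadDetails(CommitMultipartUploadDetails.builder()
                        .partsToCommit(parts)
                        .build())
                .build())
    }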

When invoked, we get a simple result confirming the completion of the multipart upload commit!  If we head over to our bucket in Object Storage, we can see the file details for the uploaded and reassembled file.

And if we visit the object via a presigned URL (or directly, if the bucket is public), we can see the image - in this case, a picture of my dog Moses.

As I've hopefully illustrated, multipart upload with the Oracle SDK is pretty straightforward once it's broken down into the required steps.  There are a number of frontend libraries to assist you with multipart upload once you have the proper backend service in place (in my case, the file was simply broken up using the "split" command on my MacBook).

Podcast: REST or GraphQL? An Objective Comparison

Tue, 2018-12-18 23:00

Are you a RESTafarian? Or are you a GraphQL aficionado? Either way you'll want to listen to the latest Oracle Groundbreaker Podcast, as a panel of experts weighs the pros and cons of each technology.

Representational State Transfer, known to its friends as REST, has been around for nearly two decades and has a substantial following. GraphQL, on the other hand, became publicly available in 2015, and only a few weeks ago moved under the control of the GraphQL Foundation, a project of the Linux Foundation. But despite its relative newcomer status, GraphQL has gained a substantial following of its own.

So which technology is best suited for your projects? That's your call. But this discussion will help you make that decision, as the panel explores essential questions, including: 

  • What circumstances or conditions favor one over the other?
  • How do the two technologies complement each other?
  • How difficult is it for long-time REST users to make the switch to GraphQL?

This program is Oracle Groundbreakers Podcast #361. It was recorded on Wednesday December 12, 2018. Listen!


The Panelists Luis Weir Luis Weir
CTO | Oracle Practice, Capgemini
Twitter LinkedIn Oracle Groundbreaker Ambassador; Oracle ACE Director Chris Kincanon Chris Kincanon
Engineering Manager / Technical Product Owner, Spreemo
Twitter LinkedIn  Dolf Dijkstra Dolf Dijkstra
Consulting Solutions Architect | A-Team - Cloud Solutions Architect, Oracle
Twitter LinkedIn James Neate James Neate
Oracle PaaS Consultant, Capgemini
Twitter LinkedIn Additional Resources Coming Soon
  • Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov discuss DevOps, streaming, liquid software, and observability in this podcast captured during Oracle Code One 2018.
  • Database: Breaking the Golden Rules: There comes a time to question, and even break, long-established rules. This program presents a discussion of the database rules that may no longer be advantageous.
  • What's Up with Serverless? A panel discussion of where Serverless fits in the IT landscape.
Subscribe

Never miss an episode! The Oracle Groundbreakers Podcast is available via:

Announcing Oracle Cloud Infrastructure Resource Manager

Mon, 2018-12-17 14:17

We are excited to announce a new service, Oracle Cloud Infrastructure Resource Manager, that makes it easy to manage your infrastructure resources on Oracle Cloud Infrastructure. Resource Manager enables you to use infrastructure as code (IaC) to automate provisioning for infrastructure resources such as compute, networking, storage, and load balancing.

Using IaC is a DevOps practice that makes it possible to provision infrastructure quickly, reliably, and at any scale. Changes are made in code, not in the target systems. That code can be maintained in a source control system, so it’s easy to collaborate, track changes, and document and reverse deployments when required.

HashiCorp Terraform

To describe infrastructure, Resource Manager uses HashiCorp Terraform, an open source project that has become the dominant standard for describing cloud infrastructure. Oracle is making a strong commitment to Terraform and will enable all of its cloud infrastructure services to be managed through Terraform. Earlier this year we released the Terraform Provider, and we have started to submit Terraform modules for Oracle Cloud Infrastructure to the Terraform Module Registry. Now we are taking the next step by providing a managed service.

Managed Service

In addition to the provider and modules, Oracle now provides Resource Manager, a fully managed service to operate Terraform. Resource Manager integrates with Oracle Cloud Infrastructure Identity and Access Management (IAM), so you can define granular permissions for Terraform operations. It further provides state locking, gives users the ability to share state, and lets teams collaborate effectively on their Terraform deployments. Most of all, it makes operating Terraform easier and more reliable.

With Resource Manager, you create a stack before you run Terraform actions. Stacks enable you to segregate your Terraform configuration, where a single stack represents a set of Oracle Cloud Infrastructure resources that you want to create together. Each stack individually maps to a Terraform state file that you can download.

To create a stack, you choose a compartment and upload your Terraform configuration as a zip file. The zip contains all the .tf files that define the resources you want to create. You can optionally include a variables.tf file, or define your variables in key/value form in the console.

After your stack is created, you can run Terraform actions like plan, apply, and destroy against it. These Terraform actions are called jobs. You can also update the stack by uploading a new zip file, download its configuration, or delete the stack when it's no longer required.

Plan: Resource Manager parses your configuration and returns an execution plan that lists the Oracle Cloud Infrastructure resources that make up the desired end state.

Apply: Resource Manager creates your stack's resources according to the results of the plan job. After this action completes, you can see the resources that have been successfully created in the defined compartments.

Destroy: Terraform attempts to delete all the resources in the stack.

You can define permissions on your stacks and jobs through IAM policies. You can define granular permissions and let only certain users or groups perform actions like plan, apply, or destroy.

Availability

Resource Manager will become generally available in early 2019. We are currently providing access to selected customers through our Cloud Native Limited Availability Program. The currently available early version offers access to the Compute, Networking, Block Storage, Object Storage, IAM, and Load Balancing services. To learn more about Resource Manager or to request access to the technology, please register.

Building the Oracle Code One 2018 Escape Rooms

Mon, 2018-12-17 13:33

By Chris Bensen, Cloud Experience Developer at Oracle

I’ve built a lot of crazy things in my life but the “Escape Rooms” for Code One 2018 might just be one of the craziest. And funnest! The initial idea for our escape room came from Toni Epple where a Java based escape room was built for a German conference. We thought it was rather good, and escape rooms are trendy and fun so we decided to dial it up to eleven for 2018 Code One attendees. The concept was to have two escape rooms, one with a Java developer theme and one with the superhero theme of the developer keynote, and that’s when Duke’s Lab and Superhero Escape were born.

We wanted to build a demo that was different from what you normally see at a conference and make the rooms feel like real rooms. I actually built the two rooms with 2x4 construction in my driveway. Each room consisted of two eight-foot-cube sections that could be split apart for easy shipping - and shipping wasn't easy, as we only had 1/4" of clearance! Inside, the walls were faux brick to give the rooms the Brooklyn, New York, look and feel where many of the Marvel comics take place. The faux brick is a particle board product that can be purchased at your favorite local hardware store and is fire retardant, so it's a turnkey solution.


Many escape rooms contain physical puzzles, and with Code One being a conference about programming languages, it seemed fitting to infuse electronics and software into each puzzle. Each room was powered by a 24-volt, 12-amp power supply - the same power supply used for an Ultimaker 3D printer. Voltage regulators stepped this down to 12 volts, and in some cases 5 or 3.3 volts, depending on the needs. Conduit ran throughout each room, with custom 3D-printed outlets powering each device via aviation connectors, chosen because they are super secure.

The project took just over two months to build: over 100 unique 3D-printed parts were created, and four 3D printers ran nearly 24x7 to produce over 400 parts in total. Eight Arduinos and five Raspberry Pis ran the rooms, with various electronics for sensors, displays, sounds, and movement. The custom software was written using Python, Bash, C/C++, and Java.

At the heart of Duke’s Lab and the final puzzle is a wooden crate with two locks. The intention was to look like something out of a Bond film or Indiana Jones. Once you open it you are presented with two devices as seen in the photo below. I wouldn’t want to ruin the surprise but let’s just say most people that open the crate get a little heart thump as the countdown timer starts ticking when the create is opened!

At the heart of Superhero Escape are The Mighty Thor's hammer Mjölnir, Captain America's shield, and Iron Man's arc reactor. The idea was to bring these three props to life and integrate them into an escape room of super proportions. And given the number of people who solved the puzzle and exited the room with Cap's shield on one arm and Mjölnir in the other, I would say it was a resounding success!

The goal and final puzzle of Superhero Escape is to wield Mjölnir. The hammer was held to the floor of the escape room by a very powerful electromagnet. At its heart is a piece of solid 1"-thick steel, custom machined to my specifications and connected to a pipe.

The shell is one solid 3D print that took over 24 hours and an entire kilogram of filament. For those that don't know, that is an entire roll. Exactly an entire roll!

As with any project, I learned a lot. I leveraged all my knowledge of digital fabrication, traditional fabrication, electronics, programming, woodworking, and puzzles, and did things I wasn't sure were possible, especially in the timeframe we had. That's what being an Oracle Groundbreaker is all about. And for all those Groundbreakers out there: keep dreaming and learning, because you never know when you'll be asked to build something amazing that takes every bit of knowledge you have.

Announcing Oracle Functions

Tue, 2018-12-11 12:57


[First posted on the Oracle Cloud Infrastructure Blog]

At KubeCon 2018 in Seattle, Oracle announced Oracle Functions, a new cloud service that enables enterprises to build and run serverless applications in the cloud.

Oracle Functions is a serverless platform that makes it easy for developers to write and deploy code without having to worry about provisioning or managing compute and network infrastructure. Oracle Functions manages all the underlying infrastructure automatically and scales it elastically to service incoming requests.  Developers can focus on writing code that delivers business value.

Pay-per-use

Serverless functions change the economic model of cloud computing as customers are only charged for the resources used while a function is running.  There’s no charge for idle time! This is unlike the traditional approach of deploying code to a user provisioned and managed virtual machine or container that is typically running 24x7 and which must be paid for even when it’s idle.  Pay-per-use makes Oracle Functions an ideal platform for intermittent workloads or workloads with spiky usage patterns. 

Open Source

Open source has changed the way businesses build software and the same is true for Oracle. Rather than building yet another proprietary cloud functions platform, Oracle chose to invest in the Apache 2.0 licensed open source Fn Project and build Oracle Functions on Fn. With this approach, code written for Oracle Functions will run on any Fn server.  Functions can be deployed to Oracle Functions or to a customer managed Fn cluster on-prem or even on another cloud platform.  That said, the advantage of Oracle Functions is that it’s a serverless offering which eliminates the need for customers to manually manage an Fn cluster or the underlying compute infrastructure. But thanks to open source Fn, customers will always have the choice to deploy their functions to whatever platform offers the best price and performance. We’re confident that platform will be Oracle Functions.

Container Native

Unlike most other functions platforms, Oracle Functions is container native with functions packaged as Docker container images.  This approach supports a highly productive developer experience for new users while allowing power users to fully customize their function runtime environment, including installing any required native libraries.  The broad Docker ecosystem and the flexibility it offers lets developers focus on solving business problems and not on figuring out how to hack around restrictions frequently encountered on proprietary cloud function platforms. 

Because functions are deployed as Docker containers, Oracle Functions is seamlessly integrated with the Docker Registry v2-compliant Oracle Cloud Infrastructure Registry (OCIR), which is used to store function container images.  Like Oracle Functions, OCIR is also both serverless and pay-per-use: you simply build a function and push the container image to OCIR, which charges just for the resources used.

Secure

Security is the top priority for Oracle Cloud services, and Oracle Functions is no different. All access to functions deployed on Oracle Functions is controlled through Oracle Identity and Access Management (IAM), which allows both function management and function invocation privileges to be assigned to specific users and user groups.  And once deployed, functions themselves may only access resources on VCNs in their compartment to which they have been explicitly granted access.  Secure access is also the default for function container images stored in OCIR: Oracle Functions works with OCIR private registries to ensure that only authorized users are able to access and deploy function containers.  In each of these cases, Oracle Functions takes a "secure by default" approach while providing customers full control over their function assets.

Getting Started

Oracle Functions will be generally available in 2019, but we are currently providing access to selected customers through our Cloud Native Limited Availability Program. To learn more about Oracle Functions or to request access, please let us know by registering with this form.  You can also learn more about the underlying open source technology used in Oracle Functions at FnProject.io.

Announcing Oracle Cloud Native Framework at KubeCon North America 2018

Tue, 2018-12-11 12:00

This blog was originally published at https://blogs.oracle.com/cloudnative/

At KubeCon + CloudNativeCon North America 2018, Oracle has announced the Oracle Cloud Native Framework - an inclusive, sustainable, and open cloud native development solution with deployment models for public cloud, on premises, and hybrid cloud. The Oracle Cloud Native Framework is composed of the recently-announced Oracle Linux Cloud Native Environment and a rich set of new Oracle Cloud Infrastructure cloud native services including Oracle Functions, an industry-first, open serverless solution available as a managed cloud service based on the open source Fn Project.

With this announcement, Oracle is the only major cloud provider to deliver and support a unified cloud native solution across managed cloud services and on-premises software, serving public cloud (Oracle Cloud Infrastructure), hybrid cloud, and on-premises users, and supporting seamless, bi-directional portability of cloud native applications built anywhere on the framework.  Because the framework is based on open, CNCF-certified, conformant standards, it will not lock you in: applications built on the Oracle Cloud Native Framework are portable to any conformant Kubernetes environment, on any cloud or infrastructure.

Oracle Cloud Native Framework – What is It?

The Oracle Cloud Native Framework provides a supported solution of Oracle Cloud Infrastructure cloud services and Oracle Linux on-premises software based on open, community-driven CNCF projects. These are built on an open, Kubernetes foundation – among the first K8s products released and certified last year. Six new Oracle Cloud Infrastructure cloud native services are being announced as part of this solution and build on the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services.

Cloud Native at a Crossroads – Amazing Progress

We should all pause and consider how far the cloud native ecosystem has come – evidenced by the scale, excitement, and buzz around the sold-out KubeCon conference this week and the success and strong foundation that Kubernetes has delivered! We are living in a golden age for developers – a literal "First Wave" of cloud native deployment and technology - being shaped by three forces coming together and creating massive potential:

  • Culture: The DevOps culture has fundamentally changed the way we develop and deploy software and how we work together in application development teams. With almost a decade’s worth of work and metrics to support the methodologies and cultural shifts, it has resulted in many related off-shoots, alternatives, and derivatives including SRE, DevSecOps, AIOps, GitOps, and NoOps (the list will go on no doubt).

  • Code: Open source and the projects that have been battle tested and spun out of webscale organizations like Netflix, Google, Uber, Facebook, and Twitter have been democratized under the umbrella of organizations like CNCF (Cloud Native Computing Foundation). This grants the same access and opportunities to citizen developers playing or learning at home, as it does to enterprise developers in the largest of orgs.

  • Cloud: Unprecedented compute, network, and storage are available in today’s cloud – and that power continues to grow with a never-ending explosion in scale, from bare metal to GPUs and beyond. This unlocks new applications for developers in areas such as HPC apps, Big Data, AI, blockchain, and more. 

Cloud Native at a Crossroads – Critical Challenges Ahead

Despite all the progress, we are facing new challenges to reach beyond these first wave successes. Many developers and teams are being left behind as the culture changes. Open source offers thousands of new choices and options, which on the surface create more complexity than a closed, proprietary path where everything is pre-decided for the developer. The rush towards a single source cloud model has left many with cloud lock-in issues, resulting in diminished choices and rising costs – the opposite of what open source and cloud are supposed to provide.

The challenges below mirror the positive forces above and are reflected in the August 2018 CNCF survey:

  • Cultural Change for Developers: on-premises, traditional development teams are being left behind. Cultural change is slow and hard.

  • Complexity: too many choices, too hard to do yourself (maintain, administer), too much too soon?

  • Cloud Lock-in: proprietary single-source clouds can lock you in with closed APIs, services, and non-portable solutions.

The Cloud Native Second Wave – Inclusive, Sustainable, Open

What’s needed is a different approach:

  • Inclusive: can include cloud and on-prem, modern and traditional, dev and ops, startups and enterprises

  • Sustainable: managed services versus DIY, open but curated, supported, enterprise grade infrastructure

  • Open: truly open, community-driven, and not based on proprietary tech or self-serving OSS extensions

Introducing the Oracle Cloud Native Framework – What’s New?

The Oracle Cloud Native Framework spans public cloud, on-premises, and hybrid cloud deployment models – offering choice and uniquely meeting the broad deployment needs of developers. It includes Oracle Cloud Infrastructure Cloud Native Services and the Oracle Linux Cloud Native Environment. On top of the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services, a rich set of new Oracle Cloud Infrastructure cloud native services has been announced with services across provisioning, application definition and development, and observability and analysis.


  • Application Definition and Development

    • Oracle Functions: A fully managed, highly scalable, on-demand, functions-as-a-service (FaaS) platform, built on enterprise-grade Oracle Cloud Infrastructure and powered by the open source Fn Project. Multi-tenant and container native, Oracle Functions lets developers focus on writing code to meet business needs without having to manage or even address the underlying infrastructure. Users only pay for execution, not for idle time.

    • Streaming: Enables applications such as supply chain, security, and IoT to collect data from many sources and process it in real time. Streaming is a highly available, scalable, and multi-tenant platform that makes it easy to collect and manage streaming data.

  • Provisioning

    • Resource Manager: A managed Oracle Cloud Infrastructure provisioning service based on industry standard Terraform. Infrastructure-as-code is a fundamental DevOps pattern, and Resource Manager is an indispensable tool to automate configuration and increases productivity by managing infrastructure declaratively.

  • Observation and Analysis

    • Monitoring: An integrated service that reports metrics from all resources and services in Oracle Cloud Infrastructure. Monitoring provides predefined metrics and dashboards, and also supports a service API to obtain a top-down view of the health, performance, and capacity of the system. The monitoring service includes alarms to track these metrics and act when they vary or exceed defined thresholds, helping users meet service level objectives and avoid interruptions.

    • Notification Service: A scalable service that broadcasts messages to distributed components, such as email and PagerDuty. Users can easily deliver messages about Oracle Cloud Infrastructure to large numbers of subscribers through a publish-subscribe pattern.

    • Events: Based on the CNCF CloudEvents standard, Events enables users to react to changes in the state of Oracle Cloud Infrastructure resources, whether initiated by the system or by user action. Events can store information to Object Storage, or trigger Functions to take actions, Notifications to inform users, or Streaming to update external services.

Use Cases for the Oracle Cloud Native Framework: Inclusive, Sustainable, Open

Inclusive: The Oracle Cloud Native Framework includes both cloud and on-prem, supports modern and traditional applications, supports both dev and ops, can be used by startups and enterprises. As an industry, we need to create more on-ramps to the cloud native freeway – in particular by reaching out to teams and technologies and connecting cloud native to what people know and work on every day. The WebLogic Server Operator for Kubernetes is a great example of just that. It enables existing WebLogic applications to easily integrate into and leverage Kubernetes cluster management. 

As another example, the Helidon project for Java creates a microservice architecture and framework for Java apps to move more quickly to cloud native.

Many Oracle Database customers are connecting cloud native applications based on Kubernetes for new web front-ends and AI/big data processing back-ends, and the combination of the Oracle Autonomous Database and OKE creates a new model for self-driving, securing, and repairing cloud native applications. For example, using Kubernetes service broker and service catalog technology, developers can simply connect Autonomous Transaction Processing applications into OKE services on Oracle Cloud Infrastructure.


Sustainable: The Oracle Cloud Native Framework provides a set of managed cloud services and supported on-premises solutions, open and curated, and built on an enterprise grade infrastructure. New open source projects are popping up every day and the rate of change of existing projects like Kubernetes is extraordinary. While the landscape grows, the industry and vendors must face the resultant challenge of complexity as enterprises and teams can only learn, change, and adopt so fast.

A unified framework helps reduce this complexity through curation and support. Managed cloud services are the secret weapon for reducing the administration, training, and learning-curve issues enterprises have had to shoulder themselves. While a do-it-yourself approach has been their only choice until recently, managed cloud services such as OKE give developers a chance to leapfrog into cloud native without a long and arduous learning curve.

A sustainable model, built on an open, enterprise-grade infrastructure, gives enterprises a secure, performant platform from which to build real hybrid cloud deployments, including these five key hybrid cloud use cases:

  1. Development and DevOps: Dev/test in the cloud, production on-prem


  2. Application Portability and Migration: enables bi-directional cloud native application portability (on-prem to cloud, cloud to on-prem) and lift-and-shift migrations.  The Oracle MySQL Operator for Kubernetes is an extremely popular solution that simplifies portability and integration of MySQL applications into cloud native tooling.  It enables creation and management of production-ready MySQL clusters based on a simple declarative configuration format, including operational tasks such as database backups and restoring from an existing backup. The MySQL Operator simplifies running MySQL inside Kubernetes and enables further application portability and migrations.


  3. HA/DR: Disaster recovery or high availability sites in cloud, production on-prem

  4. Workload-Specific Distribution: Choose where you want to run workloads, on-prem or cloud, based on specific workload type (e.g., based on latency, regulation, new vs. legacy)

  5. Intelligent Orchestration: More advanced hybrid use cases require more sophisticated distributed application intelligence and federation – these include cloud bursting and Kubernetes federation


Open: Over the course of the last few years, development teams have typically chosen to embrace a single-source cloud model to move fast and reduce complexity - in other words, the quick and easy solution. The price they are paying now is cloud lock-in resulting from proprietary services, closed APIs, and non-portable solutions. This is the exact opposite of where we are headed as an industry - fueled by open source, CNCF-based, and community-driven technologies.


An open ecosystem enables not only a hybrid cloud world but a truly multi-cloud world – and that is the vision that drives the Oracle Cloud Native Framework!

Podcast: Inspiring Innovation and Entrepreneurism in Young People

Wed, 2018-12-05 07:37

A common thread connecting the small army of IT professionals I've met over the last 20 years is that their interest in technology developed when they were very young, and that youthful interest grew into a full-fledged career. That's truly wonderful. But what happens if a young person never has a chance to develop that interest? And what can be done to draw those young people to careers in technology? In this Oracle Groundbreakers Podcast extra you will meet someone who is dedicated to solving that very problem.

Karla Readshaw is director of development for Iridescent, a non-profit organization focused on bringing quality STEM education (science, technology, engineering, and mathematics) to young people -- particularly girls -- around the globe.

"Our end goal is to ensure that every child, with a specific focus on underrepresented groups -- women and minorities -- has the opportunity to learn, and develop curiosity, creativity and perseverance, what real leaders are made of," Karla explains in her presentation.

Iridescent, through its Technovation program, provides middle- and high-school girls with the resources to develop solutions to real problems in their local communities, "leveraging technology and engineering for social good," as Karla explains.

Over a three-month period, the girls involved in the Technovation program identify a problem within their community, design and develop a mobile app to address the issue, and then build a business around that app, all under the guidance of an industry mentor.

The results are impressive. In one example, a team of hearing-impaired girls in Brazil developed an app that teaches American Sign Language, and then built a business around it. In another, a group of high-school girls in Guadalajara, Mexico, drew on personal experience to develop an app that strengthens the relationship between Alzheimer's patients and their caregivers. And a group of San Francisco Bay Area girls created a mobile app that helps those with autism improve social skills and reduce anxiety.

Want to learn more about the Technovation program, and about how you can get involved? Just listen to this podcast. 

This program was recorded during Karla's presentation at the Women In Technology Breakfast held on October 22, 2018 as part of Oracle Code One.

Additional Resources Coming Soon
  • Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov discuss DevOps, streaming, liquid software, and observability in this podcast captured during Oracle Code One 2018.
  • GraphQL and REST: An Objective Comparison: a panel of experts weighs the pros and cons of each of these approaches in working with APIs. 
  • Database: Breaking the Golden Rules: There comes a time to question, and even break, long-established rules. This program presents a discussion of the database rules that may no longer be advantageous.
Subscribe

Never miss an episode! The Oracle Groundbreakers Podcast is available via:
