A Learning Organization


There has been a lot of talk about the importance of becoming a “learning organization”. But what does that mean, and how can it be achieved? Does an organization qualify simply because it provides a professional development stipend in its benefits package?

Providing time and resources to your employees for professional development is great. But it is only the first step in a long journey. While having a workforce that is capable of adapting to changing practices and techniques is necessary, it is not sufficient to keep you competitive in the modern software landscape. Sending your workers to conferences and buying them books will not make you a “learning organization”.

Becoming a learning organization is about continuous experimentation. The primary type of learning that you will need to be doing is not the type that can come from books or conferences or classes. The competitive knowledge that you are after is mostly hidden. It is knowledge that either does not yet exist, or exists as a trade secret locked deep within your competitors’ corridors.

The kind of knowledge that a learning organization acquires is a deep, mostly hidden knowledge. It is the knowledge that informs you of what will provide the most value to your customers. It is the knowledge that enables you to increase your customer base while keeping your existing customers. This knowledge is something that only your customers can provide to you.

Your customers may not even know that they possess this knowledge. They give up this knowledge through increased use of your platform. They give it up through increased satisfaction and NPS scores. They give you this knowledge when your UX tooling informs you that a new feature allows your customers to complete the same workflows in 28% less time. They give up this knowledge when they spend 15% more annually on your products and services or when the internal system you built increases communication between departments and reduces mistakes by 18%.

To be a learning organization you must experiment. You must experiment faster than your competitors, or you will ultimately lose.

The results of those experiments become the secret knowledge that gives you an edge in the marketplace (but only for a little while: there is always someone else experimenting). It is the foundation upon which you build the future of your company. It is how you provide radical value to your customers.


To be a learning organization you must experiment often. To experiment often you must reduce the cost of experimentation. To reduce the cost of experimentation you must reduce the size of the experiments. You must automate everything about the experiment that can be automated. You must reduce or eliminate wasted effort and unplanned work. You must constantly work to identify bottlenecks within the delivery workflow and eliminate or control them.

“The bottleneck should be the creation of good ideas”

Eric Ries and Steve Blank (quoted in Beyond the Phoenix Project)


I would argue that continuous experimentation lies at the very heart of all of agile and all of DevOps. If you could give the product team a magic wand that allowed them to instantly create and deploy new ideas, and just as quickly eliminate them if they are not successful, you would have finally hit the target that agile and DevOps principles are aiming for.

We will, of course, likely never get there. In the meantime, though, we can aim to get as close to continuous experimentation as possible. Small improvements and features deployed continuously, with strong feedback loops and telemetry, provide the foundation for an organization that is continuously learning, enabling you to tap into the arcane and prized knowledge hidden deep within the customer’s experience: the knowledge that your competitors currently lack.


Coupled Microservices

There are a few primary high-level advantages and one major drawback that occur to me when I think about making the transition from monolith to “microservices” (I use quotes here for these reasons). Some of the major advantages are logical isolation, ease of deployment and independent scaling on your infrastructure. I will go into more detail on each of these, but that is not the point of this post. My goal here is to point out how you can easily lose most of the advantages that microservices confer and be left with only the drawback(s).

Logical Isolation

Logical isolation means that you have separated one logical, cohesive “chunk” of your application into its own independent system. It reduces cognitive overhead for the developers working on that system by allowing them to focus on the function, structure and features of this service alone. This service can be independently managed by a small team without as much regard to the system as a whole. This makes these teams more autonomous, reducing the burden of the “mythical man month” problem and enabling faster parallel development of the system as a whole.

Ease of Deployment

Because each service is smaller and maintains a high level of independence from the rest of the system, services can be deployed much more easily. The communication between teams and team members that is necessary around each deployment is reduced (ideally to nearly zero). Changes can become much smaller, and smaller is significantly better in software development. Smaller changes can also be deployed more often, enabling one of the major goals of DevOps: continuous delivery. More frequent deploys significantly reduce the average amount of time between a developer introducing a bug to the codebase and the discovery of that bug. This leads to a much lower mean time to resolution, because the developers who worked on the buggy portion of the codebase did so more recently. Lower mean time to resolution leads to happier customers and happier, more productive developers who are freed up to work on more important things. All in all, ease of deployment should be a major goal of any agile team, and having many smaller, independent services helps to enable this.

Independent Scaling

Independent scaling refers to the fact that smaller services can more easily be split up and grouped according to usage and infrastructure requirements. Services that receive more usage and traffic can be given more raw horsepower to do their job. Services that are used infrequently can be given much less in the way of compute resources, or they can be grouped together. All in all, having many small services gives you a lot more flexibility in how you build your infrastructure, often enabling significant cost savings.

Drawbacks

One “drawback” to microservices is that when you transition from monolith to microservices, you move from a centralized system to a distributed system. This is usually touted as an advantage, but if you lose the other advantages, this quickly becomes a liability.

Distributed systems are inherently more complex, overall, than centralized systems. They are more difficult to reason about. Consistency of data comes to the forefront as an issue that you need to worry about. Tracking down errors becomes more difficult as your logging systems are (by default) also now distributed. In a distributed system, you need to be much more deliberate about your logging and event tracking systems to track data and events as they propagate through your system (so you can quickly isolate errors and track down bugs).
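As one illustration, here is a minimal sketch of the kind of deliberate event tracking this implies: propagating a correlation ID with every request so that log lines from different services can be stitched back together. This assumes a Node/Express service; the header name and route are just conventions chosen for the example.

const express = require('express');
const crypto = require('crypto');

const app = express();

// Reuse the caller's correlation ID if one was sent, otherwise generate one.
// Any downstream calls this service makes should forward the same ID.
app.use((req, res, next) => {
  req.correlationId = req.get('X-Correlation-ID') || crypto.randomUUID();
  res.set('X-Correlation-ID', req.correlationId);
  next();
});

app.get('/orders/:id', (req, res) => {
  // Every log line carries the correlation ID, so a single user action can be
  // traced across services once the logs are aggregated.
  console.log(JSON.stringify({
    correlationId: req.correlationId,
    msg: 'fetching order',
    orderId: req.params.id,
  }));
  res.json({ id: req.params.id });
});

app.listen(3000);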

It can also be more difficult (and expensive) to find developers with expertise in distributed systems. While developers are increasingly becoming accustomed to working in such environments and thinking in a distributed manner, it is still a relatively new way of thinking for the average company.

How To Be Left With Only The Drawbacks

When you have the advantages of logical isolation, ease of deployment and independent scaling, the issues that come with a distributed system are often worth the trade-off. In addition, they can be mitigated to some extent by deliberate effort and a thoughtful implementation of your architecture and the support systems that you provide your developers and operations people.

So how do you ensure that you gain the advantages of logical isolation, ease of deployment and independent scaling?

The simple answer is: decouple your services.

You must ensure that your services remain independent to the highest degree possible. There are many ways to achieve this, and much of the Old Wisdom™ of good software engineering and systems design can be applied here. But I want to point out one of the most important factors: versioning.

Version your services. Without versioning, you end up with a highly coupled distributed system and you lose many of the most important advantages that microservices provide. You essentially end up with a distributed monolith, the worst of all possible scenarios.
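As a concrete illustration, here is a minimal sketch of URL-based API versioning, assuming an Express-style HTTP service (the routes and payload shapes are hypothetical): the old contract stays alive for existing consumers while a breaking change ships behind a new version.

const express = require('express');
const app = express();

// v1 keeps its original contract; existing consumers are unaffected.
app.get('/v1/customers/:id', (req, res) => {
  res.json({ id: req.params.id, name: 'Ada Lovelace' }); // dummy data
});

// v2 introduces a breaking change (a restructured payload) without forcing
// every consuming team to upgrade and deploy in lockstep.
app.get('/v2/customers/:id', (req, res) => {
  res.json({
    id: req.params.id,
    name: { first: 'Ada', last: 'Lovelace' }, // dummy data
  });
});

app.listen(3000);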

If your services are not versioned, you lose logical isolation. Developers on one team have to think about how their changes will affect developers on another team. This increases their cognitive load, and developers are forced to think about the system as a whole again. Cross-team communication is now much more critical, and you lose much of the team autonomy and team parallelization that microservices are supposed to allow.

Deploys become a coordination nightmare. Teams will spend hours coordinating their deploys so as to ensure no interruption to other services. Once again, cross-team communication is increased and autonomy is decreased. Cross-team blaming is also reintroduced, as you will very often have to release different apps simultaneously. When something fails, both sides are apt to blame the other (even if politely…). If you have versioned your services, however, they can (and usually will) be released independently.

What about independent scaling? Well, the good news is that even if your services are deeply coupled, you can still scale the machine(s) they rely on independently. However, if your services are still tightly coupled, how much money will you actually save every year through independent scaling? Does this savings outweigh the cost of the added cognitive overhead (read: more developer time/developers) that your poorly designed distributed system has created?

Conclusion

The conclusion is simple: if you are building out a microservices architecture, decouple your services, or else you will likely end up with a far more costly distributed monolith. Implement good systems design and thoughtful abstractions and, by all means, version your APIs.


Gaining Visibility Into Your State Changes In Redux

Two of the most important skills to have in programming are, first, to be able to get quick visibility into what is actually going on in your program at a given point in time, and second, to be able to quickly isolate any problems that you run into. The former greatly facilitates the latter.

While recently working on a strange and infuriating issue in a React Native/Redux application, I found myself in a situation where I felt blind as to what was actually going on in my application (and why it was failing).

You see, mapStateToProps was never being called, yet my reducer was definitely being called and I was definitely returning a new state object from my reducer. I had confirmed this with console.logs. Additionally, I had already confirmed that mapStateToProps was definitely not being called at all (again, using console.logs).

After quintuple checking that everything was hooked up properly, and that my reducer was indeed returning new state (and not making the mistake of altering existing state in the reducer), I decided that whatever the error was, it was happening between the time that the reducer returned the new state and the time that mapStateToProps should have been called. In other words, it seemed like it wasn’t something in my code.

Since it is a rather rare occurrence that the bug is in the framework code and not my own code, I wanted some additional visibility into what was going on. I needed to first confirm that Redux knew the state had been updated and that it should therefore be calling mapStateToProps on all of my connected components.

It turns out, Redux has a perfect little tool for this. It is called store.subscribe().

You can add a few lines that look like this:

store = createStore( ... );

store.subscribe(() => {
  console.log("Store state changed");
  console.log(store.getState());
});

wherever you have defined your store (using createStore). Using this, you get some log output any time the store’s state is changed. Additionally, you can see exactly what the new state is.
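If you only want this logging while debugging, note that store.subscribe() returns an unsubscribe function, so the listener can be removed once it has served its purpose:

const unsubscribe = store.subscribe(() => {
  console.log("Store state changed");
  console.log(store.getState());
});

// Later, when the extra logging is no longer needed:
unsubscribe();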

I should also note that much of this visibility can also be achieved with the Redux DevTools extension (which also comes with quite a few nice additional features), but I wanted to be able to play with these events, and in so doing I gained more of an understanding of what is really going on under the hood in Redux.


Retrieving Files As Blob in React Native (and Expo)

tl;dr
Sometimes you may need to create a valid blob file from a remote file or a local file (for example, for uploading a screenshot to firebase storage). If the Blob object returned by the fetch API (e.g. blob = await fetch(`file://${local_uri}`).blob()) is returning an empty type and/or 0 bytes, then upgrading to React Native 0.59.x+ (expo sdk 33+) should fix the issue. If upgrading is not a possibility, there are other options such as creating a (hacky) blob utility function, or passing around base64 encoded data (see below).

I want to note that if you simply want to use an Image component to display the local file in the UI, you can specify the local location of the file as the source.
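For example, something like this is enough to show a local file in the UI (a minimal sketch; the local_uri variable is assumed to hold the file's path):

import { Image } from 'react-native';

// Displaying a local file directly in the UI; no Blob is needed for this case.
<Image
  source={{ uri: `file://${local_uri}` }}
  style={{ width: 200, height: 300 }}
/>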

Note also that I am not referring to the situation where your file can be packaged with your app ahead of time or where it can be directly loaded into the UI from the internet. If your file is being downloaded from the internet it can usually be loaded directly via URL. If it is something that you can package with your app, the best practice is to simply use an import or require to include the file at compile time.

However, for more complicated use cases, such as uploading the file, it can get quite a bit trickier.

Uploading files that are dynamically created in-app (such as a screenshot), for example, can be tricky. Also, fetching any file prior to React Native 0.59.x with the fetch API has issues associated with it.

If you are not using Expo, it is worth looking into rn-fetch-blob. Unfortunately, since I was using Expo, I could not use rn-fetch-blob without detaching and using ExpoKit (something I would like to avoid for a task that should be relatively simple).

In my case, uploading a dynamically created tempfile to firebase took quite a bit of effort. I would like to share a couple of solutions I came up with.

The main problem for me came down to an issue with React Native’s custom whatwg-fetch polyfill in versions prior to 0.59. RN’s custom polyfill specifically does not use a ‘blob‘ response type (blob support was added to React Native and Expo in early 2018). Therefore, fetching files with it (vs, for example, fetching JSON) caused an issue where the resulting blob would have 0 bytes and an empty string as the ‘type’. Let me demonstrate with an example using a basic expo app.

We will start by attempting to download a jpg from the internet as a blob, and then we will output the fetch Response object and the Blob object that results from calling blob() on it (note that fetch’s Response.blob() returns a promise, so we have to await or resolve it to get the actual object):

App.js:

import React from 'react';
import { Text, View } from 'react-native';

const getFile = async () => {
  const img_url = "https://picsum.photos/200/300.jpg";
  let result = await fetch(img_url);
  console.log(result);
  console.log(await result.blob());
  return result;
}

export default function App() {
  getFile();
  return (
    <View>
      <Text>Open up App.js to start working on your app!</Text>
    </View>
  );
}

Inspecting the logged output, notice that the size of the Blob object is 0 and the type is an empty string.

We can try something similar by fetching JSON and calling json() on the Response object, but this time we get the expected JSON output:

const getFile = async () => {
  const json_url = "https://jsonplaceholder.typicode.com/todos/1";
  let result = await fetch(json_url);
  console.log(result);
  console.log(await result.json());
  return result;
}

And we get exactly what we expected: a JavaScript object with the correct JSON values.

Additionally, we can upgrade to Expo SDK 33, which uses React Native 0.59, and we again get what we expect:

App.js (Expo SDK 33 / React Native 0.59):

import React from 'react';
import { Text, View } from 'react-native';

const getFile = async () => {
  const img_url = "https://picsum.photos/200/300.jpg";
  let result = await fetch(img_url);
  console.log(result);
  console.log(await result.blob());
  return result;
}

export default function App() {
  getFile();
  return (
    <View>
      <Text>Open up App.js to start working on your app!</Text>
    </View>
  );
}

Success! We have successfully fetched a file as a valid JS Blob object (note the correct type/size info). Now we can use it with the large number of JavaScript File APIs that accept Blobs.

So what is happening here?

For very specific reasons, prior to version 0.59 React Native had a custom whatwg-fetch polyfill implementation that specifically did NOT return a blob responseType by default.

However, for these reasons, that custom polyfill has become unnecessary. In version 0.59 it was altered to return a responseType of ‘blob’ by default (which is what enables the Response.blob() function to work correctly). And after 0.59 the custom polyfill was removed altogether, as it was now redundant.

The simple solution is to upgrade to React Native 0.59 or higher (or Expo SDK 33, the highest at the time of this writing).

After upgrading to RN 0.59 my resulting Blob objects have a type of ‘image/jpeg’ and a correct size value. More importantly, now they properly upload to Firebase storage as valid images without any problems.

But what if upgrading is not an option?

If you are downloading the file from the internet you can write a urlToBlob utility that uses XMLHttpRequest to grab the file as a blob:

function urlToBlob(url) {
  return new Promise((resolve, reject) => {
    var xhr = new XMLHttpRequest();
    xhr.onerror = reject;
    xhr.onreadystatechange = () => {
      if (xhr.readyState === 4) {
        resolve(xhr.response);
      }
    };
    xhr.open('GET', url);
    xhr.responseType = 'blob'; // convert type
    xhr.send();
  });
}

If you are creating a screenshot using takeSnapshotAsync (as I was) you also have the option to set the result as a ‘data-uri’ to get a base64 encoded data version of the file. This base64 data can then be turned into a blob or, in the case of Firebase, uploaded directly as a data_url encoded string.
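Here is a rough sketch of that approach, assuming Expo's takeSnapshotAsync and the Firebase JS SDK (the exact option names and the storage path are assumptions, so check them against the SDK versions you are using):

import { takeSnapshotAsync } from 'expo';
import firebase from 'firebase';

async function uploadSnapshot(viewRef) {
  // Ask for the snapshot as a base64 data URI instead of a temp file.
  const dataUri = await takeSnapshotAsync(viewRef, {
    format: 'png',
    result: 'data-uri',
  });

  // Firebase Storage can ingest a data URL string directly, so no Blob is
  // required even on older React Native versions.
  const ref = firebase.storage().ref().child('screenshots/my-screenshot.png');
  return ref.putString(dataUri, 'data_url');
}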

I found that the base64 string created by takeSnapshotAsync was exceptionally large compared to the actual image, so I chose to utilize an actual image, storing it in the local cache, grabbing it with fetch and then uploading it to firebase from there.


Scaffolding

In the talk “Inventing on Principle“, Bret Victor talks about the importance of establishing a guiding principle that defines your work.

His guiding principle is that ideas are important. He believes that it is ideas that give meaning to our lives. But ideas start off fragile — they need to be nurtured and enabled to grow. They need an environment in which to mature. In order to nurture ideas, he emphasizes the importance of creators having an immediate connection with their creations and he offers several brilliant examples of ways that we, as engineers, can establish immediacy in our creative loops.

In doing this, he demos several amazing environments that he has built for various engineering disciplines. All of these environments give immediate and useful feedback to the creator.

Some tools of this nature may exist for your editor, and you absolutely should dig up what you can find to increase the quantity of feedback and decrease the time to receiving it. However, it also occurs to me that in many cases this immediate feedback may be a non-generalizable, special-purpose kind of feedback. It could be considered ‘scaffolding’ for your project.

There are very few engineering disciplines, after all, that output a product that doesn’t need some kind of ephemeral, single-use material in its construction. Effort is often put into building something that will be thrown away when the final product is complete.

I hate wasted work. I think all programmers hate wasted work. I especially hate it. Even though I love what I do, I am always so hesitant to build something when there is a possibility that it won’t ever be useful for anything. I have a low tolerance for risk when it comes to my time.

However, I would like to identify what kinds of things I should build that could be considered “scaffolding” for my projects; those things which can’t be automated or toolified or used on every project, but which will make the development process for that project much smoother, feedback faster and development more fun. I would like to constantly look for ways to maximize immediate visibility into my software. And I would like to be more okay with throwing stuff away at the end of a project once it has served its purpose.


Retrieve Column Based On Column Name In AWK

This is a quick snippet to show how to grab a column based on its name (rather than its number) in AWK. This can be useful when the output you are processing may one day change, since it makes for a slightly more flexible / resilient script.

Let’s say we have some columnar output, in this case from df -h, that looks like this:
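(Illustrative output; your filesystems and sizes will of course differ.)

Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           798M  1.3M  797M   1% /run
/dev/sda1        98G   45G   48G  49% /
tmpfs           3.9G     0  3.9G   0% /dev/shm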

It is unlikely that df -h will change its output, but many of the tools you might use on a regular basis may be less stable.

Let’s say we want to grab the ‘Size’ column.

Rather than using df -h | awk 'NR>1 { print $2 }' we can instead match the column named ‘Size’ with the following:

df -h | awk 'NR==1 { for (i=1;i<=NF;i++) if ($i ~ /Size/) COLUMN_NUM=i } NR>1 { print $COLUMN_NUM }'

This starts by checking which row we are on (NR==1). If we are on the first row (the header row) we loop through each field until one matches “Size”. Then we record the number of this field in the COLUMN_NUM variable. This variable remains available throughout the subsequent AWK actions, so on subsequent rows we use $COLUMN_NUM to print only the Size column.


Ruby: Pass By Value or Pass By Reference?

Recently I wanted to build a method that takes an array as an argument and mutates the original array without needing to return a new one. I wanted to quickly find out online if Ruby treats arrays as a value or reference when you pass them into a method (see this wonderful illustration if you are unfamiliar with the concept).

Most of the answers involved long-winded technical descriptions, but I just wanted a quick, practical answer, so here it is:

How it works in practice:

Everything in Ruby is an object, and when that object is passed into a method it is treated, for most practical purposes, as you would expect a pass-by-reference language to behave. In other words, if you pass an array into a method and do an operation on it that mutates its values, it will alter that array everywhere. If you perform an operation on it that returns a new object (as many array methods do), it will not alter the original. So, whether or not the original object is altered will depend on what method you call.

The following uses the “<<“ method inside a function, which mutates the value of an existing array:

def do_something_to_array(arr)
  arr << rand(1..10)
end

mutate_me = []

do_something_to_array(mutate_me)
p mutate_me
# => [4]

do_something_to_array(mutate_me)
p mutate_me
# => [4, 6]

do_something_to_array(mutate_me)
p mutate_me
# => [4, 6, 3]

On the other hand, if you perform an operation that returns a new object, the original object will not be changed. Here is an example with the “+” method, which always returns a new array:

def do_something_to_array(arr)
  arr + [rand(1..10)]
end

i_wont_mutate = []

do_something_to_array(i_wont_mutate)
# => [5]
i_wont_mutate
# => []

Strings work in much the same way. Many string methods will mutate the original. If you mutate the original inside a method, it will be changed everywhere:

def do_something_to_string(str)
  str << "Append me!"
end

stringy = "I will be changed!"

do_something_to_string(stringy)
# => "I will be changed!Append me!"
stringy
# => "I will be changed!Append me!"

Now we call a method that always returns a new string, therefore the original string is not altered:

def do_something_to_string(str)
  str += "Append me!"
end

stringy = "I won't be changed!"

do_something_to_string(stringy)
# => "I won't be changed!Append me!"
stringy
# => "I won't be changed!"

Numeric values, for all practical purposes, are immutable. So they tend to act more like “pass by value” because the methods you call on a numeric value will always return a brand new object, leaving the original unaltered.

Now for a little more computer-sciencey nuance…

Is the following output what you would expect if Ruby were truly “pass by reference”?

def do_something_to_string(str)
  str << "Append me! "
  str << "Append me too! "
  str << "Append me three!"
  str = "What object am I?"
end

stringy = "What will become of me? "

do_something_to_string(stringy)
# => "What object am I?"
stringy
# => "What will become of me? Append me! Append me too! Append me three!"

What is the explanation for these confusing results? This happens because Ruby isn’t truly pass by reference. It just often acts in a similar way. At a technical level, Ruby is entirely pass-by-value. The value that gets passed into do_something_to_string is a reference to the same object that stringy points at. This is not the same as pass-by-reference. str is an entirely new variable holding a reference to the same object that stringy points at. When we use the append operator (<<), we are altering that underlying object, and therefore we alter both str and stringy. However, when we use the assignment operator (=), a new string object is created and str‘s reference is updated to point to that new object.

If Ruby were truly “pass by reference”, altering what str points to would also alter what stringy points to.

Theoretically confusing? Yes. But practically speaking it is very simple. Immutable objects like those of type Numeric will always act like “pass by value”. For other object types passed as arguments, if you call a method on that argument that mutates the object, it will mutate that object everywhere, even outside of the method. If you call a method that returns a new object, it will not affect the original object and its effects will be localized inside the method.


PubSub Vs Observer Pattern

Both PubSub and the Observer Pattern allow a central source to send out messages (these could include data, event notifications etc) that then cause/allow various other objects to update or change. There is a simple but subtle difference, however.

In the observer pattern, the publisher (subject/observable) has knowledge of the subscribers (observers). In the PubSub pattern, any service that needs the information being published can willingly subscribe to that publisher. The publisher has no knowledge of the specific subscribers, but simply publishes the message to a channel. Depending on the implementation, it does not need to be a public channel. Amazon SNS, for example, allows you to restrict the subscribers to specific servers or networks.

You can think of PubSub as being akin to a broadcast model. Think of a radio station “publishing” its content on the airwaves, with no knowledge of who will tune in. The “subscribers” are those who have chosen to tune their radio to that station.

The observer pattern is more akin to a home delivery newspaper. The publishing company must know the name and address of every subscriber ahead of time to deliver their content.

Both have advantages and disadvantages. For example, in the PubSub model your publisher is allowed to be a much more abstract service. It does not need any specific (concrete) knowledge of the other services or objects in a system. All it needs is to know how to publish its message when it is time. Everything else is left up to the other services.

The observer pattern allows more centralized control. While the subject (publisher) needs to have intimate knowledge of all observers, it also has control over who receives its messages. It can centrally decide to stop publishing updates to specific observers, or it can choose to add an observer.
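To make the difference concrete, here is a minimal JavaScript sketch of both patterns (the class and channel names are just illustrative):

// Observer pattern: the subject holds direct references to its observers.
class Newspaper {
  constructor() { this.observers = []; }
  addObserver(observer) { this.observers.push(observer); }
  removeObserver(observer) {
    this.observers = this.observers.filter(o => o !== observer);
  }
  publish(edition) {
    // The subject knows exactly who it is delivering to.
    this.observers.forEach(o => o.update(edition));
  }
}

// PubSub: publishers and subscribers only know about a shared broker/channel.
class Broker {
  constructor() { this.channels = {}; }
  subscribe(channel, handler) {
    (this.channels[channel] = this.channels[channel] || []).push(handler);
  }
  publish(channel, message) {
    // The publisher has no idea who (if anyone) is listening.
    (this.channels[channel] || []).forEach(handler => handler(message));
  }
}

const broker = new Broker();
broker.subscribe('news', msg => console.log('subscriber received:', msg));
broker.publish('news', 'evening edition');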


Dynamic Typing Vs Static Typing Vs Weak Typing Vs Strong Typing

A common misconception is that dynamic typing (as in variable types… not as in your keyboard…) is synonymous with weak typing and that static typing is synonymous with strong typing. This is an untrue assumption. Dynamic typing is the opposite of static typing, weak typing is the opposite of strong typing, and the two pairs represent entirely different concepts.

Dynamic Vs Static Typing

A statically typed language is one in which you must declare all variable types before compilation (and in a few rare cases, interpretation). Types are checked at compile time, thus allowing the compiler to detect any errors with your variable types. A dynamically typed language, on the other hand, will check your variable types at run time. Dynamically typed languages are thus able to dynamically assign variable types depending on the input. This often saves programmer time and makes for simpler code.
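For example, in a dynamically typed language like JavaScript (a minimal sketch), nothing checks the type of a variable until the offending line actually runs:

// Dynamic typing: the same variable can hold values of different types.
let answer = 42;            // a number
answer = "forty-two";       // now a string; no compiler is present to object
console.log(answer.toUpperCase()); // "FORTY-TWO"

answer = null;
// console.log(answer.toUpperCase()); // TypeError, but only at run time

In a statically typed language, reassigning answer to a value of a different type would typically be rejected before the program ever ran.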

While a statically typed language can save system resources by compiling the code ahead of time, thus eliminating the need for type-checking and resource allocation at run time, there are some memory advantages to a dynamically typed language. A dynamically typed language can (in some cases) save some memory, as the interpreter is able to assign only what is necessary to fit the data entered. In some statically typed languages, on the other hand, the programmer must predict ahead of time the maximum amount of space that some variable input (for example, from a file or database entry) will consume and reserve that space ahead of time. If a smaller amount of data enters the program, any extra memory that has been reserved for this data is wasted.

Overall, however, a program written in a statically typed language is typically more efficient than a program written in a dynamically typed language. The advantages that a dynamically typed language offers primarily have to do with making the programmer’s job faster and easier (at the expense of some extra system resources). In many circumstances in our modern world, this trade-off is well worth it.

Weak Typing Vs. Strong Typing

The first thing that should be noted when talking about a weakly typed vs strongly typed language is that there is not really a rigorous technical definition for either term; therefore, using such terms should be avoided altogether. However, I include them here for the sake of contrasting the common usage of these terms with that of dynamic/static typing.

A strongly typed language is generally regarded as a language in which variables must be explicitly converted when combining or comparing two variables of different types. In other words, 1 + “1” is an impossible operation and 1==”1″ is an impossible comparison; in general, in a strongly typed language, both of these statements would return an error.

A weakly typed language, on the other hand, will usually implicitly convert variables of different types when the programmer attempts a comparison or operation on them. Usually weakly typed languages have elaborate rules (though often consistent with many other weakly typed languages) stating how conversions occur implicitly (that is, automatically, without the programmer needing to explicitly state that a variable is to be converted). For example, the operation 1.0 + 1 will likely return a float value in a weakly typed language, whereas it would likely return an error in a language that is considered to be strongly typed, because this statement mixes a floating point number with an integer.
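JavaScript is a handy illustration of this kind of implicit conversion (a quick sketch):

// Weak typing: mixed-type operations succeed via implicit conversion rules.
console.log(1 + "1");    // "11"   (the number is coerced to a string)
console.log(1 == "1");   // true   (loose equality coerces before comparing)
console.log(1 === "1");  // false  (strict equality refuses to coerce)
console.log(1.0 + 1);    // 2      (numeric types mix freely)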

Dynamically Strong and Statically Weak

It is also a misconception that a dynamically typed language must be weakly typed and that a statically typed language must be strongly typed. Python is a good counterexample to the first scenario: Python is a dynamically typed language; however, it is also generally considered to be strongly typed. On the other hand, while both C and C++ are statically typed, they are also considered to be weakly typed; that is, there are many ways in which variable conversions occur where the programmer does not have to explicitly direct the program to make such a conversion (1.0/1, for example, will return a floating point number).

Dynamically Compiled and Statically Interpreted

Finally, I should also note that, while dynamically typed languages are generally interpreted and statically typed languages are generally compiled, this is not a requirement either. Many dynamically typed languages can be compiled (Python and Ruby, for example). However, when these languages are compiled they are still not typically as efficient as their statically typed counterparts, due to the extra overhead of runtime type checking.

While knowing the differences between these terms will not make or break you as a programmer, it is useful to understand them so you know what is going on “under the hood” with your language. It also lets you follow, and even take part in, discussions with your peers and coworkers and speak intelligently about such things.
