What is Rust? Safe, fast, and easy software development

Fast, safe, easy to write—pick any two. That’s been the state of software development for a good long time now. Languages that emphasize convenience and safety tend to be slow (like Python). Languages that emphasize performance tend to be difficult to work with and easy to blow your own feet off with (like C and C++).

Can all three of those attributes be delivered in a single language? More important, can you get the world to work with it? The Rust language, originally created by Graydon Hoare and currently sponsored by Mozilla Research, is an attempt to do just those things. (The Google Go language has similar ambitions, but Rust aims to make as few concessions to performance as possible.)

What makes Rust a better development language

Rust started as a Mozilla research project partly meant to reimplement key components of the Firefox browser. Two key reasons drove that decision: Firefox needed to make better use of modern, multicore processors, and the sheer ubiquity of web browsers means they need to be safe to use.

But those benefits are needed by all software, not just browsers, which is why Rust evolved into a language project from a browser project. Rust accomplishes its safety, speed, and ease of use through the following characteristics:

Rust satisfies the need for speed. Rust code compiles to native machine code across multiple platforms. Binaries are self-contained, with no runtime, and the generated code is meant to perform as well as comparable code written in C or C++.

Rust won’t compile programs that attempt unsafe memory usage. Most memory errors are discovered when a program is running. Rust’s syntax and language metaphors ensure that common memory-related problems in other languages—null or dangling pointers, data races, and so on—never make it into production. The compiler flags those issues and forces them to be fixed before the program ever runs.

Rust controls memory management via strict rules. Rust’s memory-management system is expressed in the language’s syntax through a metaphor called ownership. Any given value in the language can be “owned,” or held/manipulated, only by a single variable at a time.

The way ownership is transferred between variables is strictly governed by the compiler, so there are no surprises at runtime in the form of memory-allocation errors. The ownership approach also means there is no garbage-collected memory management, as in languages like Go or C#. (That also gives Rust another performance boost.) Every bit of memory in a Rust program is tracked and released automatically through the ownership metaphor.

Rust lets you live dangerously if you need to, to a point. Rust’s safeties can be partly suspended where you need to manipulate memory directly, such as dereferencing a raw pointer à la C/C++. The key word is partly, because Rust’s memory safety operations can never be completely disabled. Even then, you almost never have to take off the seatbelts for common use cases, so the end result is software that’s safer by default.

Rust is designed to be easy to use. None of Rust’s safety and integrity features add up to much if they aren’t used. That’s why Rust’s developers and community have tried to make the language as useful and welcoming as possible to newcomers.

Everything needed to produce Rust binaries comes in the same package. External compilers, like GCC, are needed only if you are compiling other components outside the Rust ecosystem (such as a C library that you’re compiling from source). Microsoft Windows users are not second-class citizens, either; the Rust tool chain is as capable there as it is on Linux and macOS.

On top of all that, Rust provides several other standard-issue items you’d expect or want:

  • Support for multiple architectures and platforms. Rust works on all three major platforms: Linux, Windows, and macOS. Others are supported beyond those three. If you want to cross-compile, or produce binaries for a different architecture or platform than the one you’re currently running, a little more work is involved, but one of Rust’s general missions is to minimize the amount of heavy lifting needed for such work. Also, although Rust works on the majority of current platforms, it’s not its creators’ goal to have Rust compile absolutely everywhere—just on whatever platforms are popular, and wherever they don’t have to make unnecessary compromises to do so.
  • Powerful language features. Few developers want to start work in a new language if they find it has fewer, or weaker, features than the ones they’re used to. Rust’s native language features compare favorably to what languages like C++ have: Macros, generics, pattern matching, and composition (via “traits”) are all first-class citizens in Rust.
  • A useful standard library. One part of Rust’s larger mission is to encourage C and C++ developers to use Rust instead of those languages whenever possible. But C and C++ users expect to have a decent standard library—they want to be able to use containers, collections, and iterators, perform string manipulations, manage processes and threading, perform network and file I/O, and so on. Rust does all that, and more, in its standard library. Because Rust is designed to be cross-platform, its standard library can contain only things that can be reliably ported across platforms. Platform-specific functions like Linux’s epoll have to be supported via functions in third-party libraries such as libc, mio, or tokio.
  • Third-party libraries, or “crates.” One measure of a language’s utility is how much can be done with it thanks to third parties. Crates.io, the official registry for Rust libraries (called “crates”), lists some ten thousand crates, all installable through Rust’s package manager, Cargo. A healthy number of them are API bindings to common libraries or frameworks, so Rust can be used as a viable language option with those frameworks. However, the Rust community does not yet supply detailed curation or ranking of crates based on their overall quality and utility, so you can’t easily tell what works well.
  • IDE tools. Again, few developers want to embrace a language with little or no support in the IDE of their choice. That’s why Rust recently introduced the Rust Language Server, which provides live feedback from the Rust compiler to an IDE such as Microsoft Visual Studio Code.

This article is shared by www.wonderscripts.com | A leading resource of inspired clone scripts. It offers hundreds of popular scripts that are used by thousands of small and medium enterprises.

AI and the continuous delivery model are the future of software development

Software might be eating the world, but most businesses are sitting out the feeding frenzy because they can’t release software fast enough to meet changing customer needs.

No matter what kind of business you’re in — technology, consumer goods, manufacturing — you’re likely driving engagement with consumers through applications across web, mobile, point-of-sale and more. The faster an organization can improve its software, the more it can build loyalty for itself and raise switching costs against the competition. According to one study, high-performing IT units with faster software releases are twice as likely to achieve their goals in customer satisfaction, profitability, market share and productivity.

Acknowledgement of this has fueled a headlong rush toward what software developers call “continuous delivery.” Whereas in the past organizations would spend months or even years perfecting a release full of new features, continuous delivery allows teams to make incremental changes to software and send them live as each improvement is made.

It’s a process most technology departments aspire to but only a fraction have achieved. According to a recent survey by Evans Data, 65 percent of organizations are using continuous delivery on at least some projects, but only 28 percent are using it for all their software. Among non-SaaS companies, that proportion is just 18 percent.

These leading organizations recognized years ago that they needed to speed up the development process and made organizational changes accordingly. Now they’re eating the competition’s lunch while everyone else is catching up.

The small percentage of organizations that are continuously delivering software lean heavily on automation to speed development and simplify handoffs between developers, QA testers, operations staff and so on. But continuous delivery isn’t an end-state — it’s a process of constant improvement, meaning even the highest performing teams can’t rest on their laurels.

So what comes next?

The future of application development depends on using artificial intelligence (AI) within the continuous delivery model. As a data scientist, I see automation and AI as being on the same spectrum, with automation referring to simpler, rule-based solutions and AI being more complex. A basic automated system might observe something relatively static that a human does over and over, record that action and then repeat it automatically. An AI engine, on the other hand, can generalize a range of actions, learn and anticipate necessary behavior to improve a system.

For example, current automation technology can make it easier to deploy and automatically choose from a set of website modifications for A/B testing. AI could not only do that faster, but also use semantic understanding of page structure to suggest its own novel changes that make the most sense for the user.

We’re at the precipice of a new world of AI-aided development that will kick software deployment speeds — and therefore a company’s ability to compete — into high gear.

“AI can improve the way we build current software,” writes Diego Lo Giudice of Forrester Research in a recent report. “It will change the way we think about applications — not programming step by step, but letting the system learn to do what it needs to do — a new paradigm shift.”

The possibilities are limited only by our creativity and the investment organizations are willing to make. As Lo Giudice notes, using AI within the development tool chain is largely “still science fiction.” But progress is afoot.

At my company, our goal is to help developers test their code more swiftly. We do that by crowdsourcing QA testers, and we use AI both to manage those workers and to rank their reputation, so we can prioritize work for clients based on those findings. Other examples are cropping up every day. Take the progress we’ve seen in AI-powered project management assistants, which prod developer teams to make better decisions based on current activities and deliverables.

The industry should expect more of this process-oriented AI with the recent release by Amazon Web Services of Lex, the natural language processing framework that powers Amazon Alexa. Lex enables developers to build conversational interfaces for their applications. It’s not a stretch to think that creative teams could use Lex within the internal tools developers use to make those applications.

But that’s just the tip of the iceberg. AI could play a bigger role in test automation, for example. And for security stress testing, Netflix made a huge contribution by releasing to open source an automated engine called Chaos Monkey, which it uses to randomly shut down application environments to see how well its systems fail over and keep things running. What if something similar were powered by an AI engine that explored stress tests more systematically? AI could also be used to configure servers, making small changes that optimize performance based on the applications and workload.

AI could also play a huge role in software documentation. The more incremental an organization gets about improving features and components, the harder it is to keep track of those changes and how they’re made. The same kind of natural language processing used by Google to automate news writing could be used to document feature change lists, API technical details and processes used by DevOps teams.

As with automation, AI won’t put developers out of a job, but it will force them to evolve their skillsets to make better use of machine learning in the development process. To get closer to that future, businesses need to invest in the data science brain trust required to support it. And they need to start collecting the data necessary for training effective AI systems.

There’s a long road to achieving these efficiencies, but smart businesses are moving now to incorporate AI in their software development. The pace of technology continues to accelerate, and consumers have come to expect the best and latest experiences a business can provide. If software is eating the world, automation and AI will be the stakes for a seat at the table.

How Much Do Software Engineers Really Earn?

If you’re a software engineer, chances are that at some point you considered Silicon Valley for your career. Even if you aren’t considering moving there, it’s a good bet that the Valley is still the first place that comes to mind when you wonder where the best software engineering jobs are.

While it’s true that Silicon Valley and the Bay Area in general offer enticing salaries—an average of $110,554 USD per year, according to Glassdoor—many engineers may not consider the effect that cost of living has on this salary.

This is the focus of a recent study by CodeMentor, which analyzed the “real earnings” of software engineers in 43 cities across the U.S. and around the world to determine the best places for them to work.

“Real earnings” in this case is the value of the salary software engineers earn given the essential costs of living associated with each city, such as rent or mortgage payments, taxes and social security.

The CodeMentor report used a “real earnings formula” to calculate average earnings for a software engineer who lives alone in the city. That formula is:

Real Earnings = Income – Taxes – Social Security – Living Costs – Rent

The biggest effect on real earnings came from taxes and rental costs, both of which vary significantly from city to city.

So which cities are the most affordable, resulting in the highest real earnings for software engineers?

Top U.S. Cities for Software Engineering Salaries

Seattle comes out as the clear winner in the report. The presence of top-tier tech companies such as Microsoft and Amazon serves to put wages on par with Silicon Valley, but the costs for rent are significantly lower – meaning more money in your bank account each month.

Also at the top of the list are Phoenix, Austin and Houston, which offer real earnings above $30,000. However, these cities are still growing their software industries, and so have relatively modest numbers of software engineering job openings.

Interestingly, New York and Washington D.C. sit at the bottom of the list, even though these two cities have the largest number of openings in the job market. This is largely due to the astronomical cost of rent — averaging as high as $3,000 per month — leaving software engineers working in these cities with real earnings in the $15,000 to $18,000 range.

International Cities with Highest Real Earnings for Software Engineers

The report also analyzes cities outside the U.S., for those who may be considering working abroad. Oslo and Tel Aviv offer the highest real earnings in this group, at an average of $28.1K and $22.9K respectively. However, they also have small, albeit growing, job markets.

Three Canadian cities also make a good showing: Toronto, Montreal and Vancouver, with real earnings between $16K and $19K USD, and job markets that are moderate and growing.

The full report is available and includes a more detailed breakdown of the calculations, methodology and additional analyses of affordability and quality of life across the 43 cities featured.

While salary – or real earnings – alone shouldn’t be the only deciding factor in choosing where you want to pursue your career in software engineering, having this information can help you plan for success.

For more tips on finding a great engineering career, check out Winning Strategies to Land That Great Engineering Job.

5 Lightweight PHP Frameworks for REST API Development

To develop a REST API in PHP quickly and easily, it’s a good idea to use a lightweight PHP framework. Developing your own from scratch in plain PHP, apart from being a pain and taking too much time, is likely to require a lot of testing and to deviate from REST standards. Ankur Kumar over at Find Nerd takes you through the top five PHP frameworks to make your life easier when creating a REST API.

First up, is Slim. This PHP microframework has a scalable, modular architecture that lets you use exactly what you need and nothing more. Even better, you can put your entire web service in a single PHP file. Features include: the ability to enable and disable debugging for the API, the ability to inspect and manipulate HTTP messages including headers and cookies, and support for dependency injection. A big plus is that its HTTP Router maps route callbacks to target HTTP request methods and URIs.

Next in line is Silex. It’s a microframework based on Symfony, and it comes in two flavors. The fat version includes Symfony components and a template engine. The slim version just has a basic routing engine and some procedures to work with other libraries. The slim version is what you want if you’re keeping it lightweight. It’s fast and has features like one-step controllers and easy-to-manage testing tools.

Third in line is Wave. It’s a popular PHP microframework based on the MVC design pattern. It comes with a view controller and special gateway for web functionality. It doesn’t, however, come with any optional libraries so it’s extremely lightweight and built for speed.

Next up is Limonade. Like Wave, it’s lightweight and easy to use. Ankur advises its use for prototyping and rapid web development. It’s not suited to larger projects, mainly because its functionality is limited and it can’t easily be integrated with other libraries or extended. On the plus side, it’s easy to learn.

Last but not least is Lumen. It’s a Laravel-based microframework that’s recommended for microservice architectures because of its blazing speed. Not surprisingly, it’s best used with Laravel, since its code can easily be dropped into a Laravel app. Out-of-the-box features include caching, validation and routing.

Want to Learn PHP? Here are Tips and Sources to Start

What is PHP?

And why should I learn to use it, and where would I begin once I decide to? To start, PHP stands for ‘PHP: Hypertext Preprocessor’, and it is one of the most common server-side languages in programming today. The reason it is so popular is that it is completely open source, which means anyone with access to the internet has free and complete access to everything PHP makes available.

There are obvious reasons why a programmer should, and would, want to learn this language on that basis alone. Not only is it a programming skill you can use and implement free of cost, but a vast number of employers look for developers who know it. With the ease with which it can be embedded into HTML, it won’t be long before you start seeing professional web development come to life in front of your very eyes.

Now that you have PHP successfully installed, we need somewhere we can actually write the code. I use Visual Studio Code. Once it is downloaded and installed, it is as simple as saving a blank notepad document, but instead of saving it with a ‘.txt’ extension at the end of the file name, use ‘.php’. Now when you right-click on that file, go to ‘Open With’ and select Visual Studio Code, it will open as a blank template for you to use. In order to write PHP code, it needs to be within PHP tags. The opening tag is ‘<?php’ and the closing tag is ‘?>’. Anything in between these tags will be read as PHP code.

Now that we have PHP installed, and a place to write our code, the next thing we need is software that will allow us to run the code so we can see our work come to life. I use XAMPP. Once it is downloaded, there are some slight modifications needed to make it run, but not that many. If you don’t alter any of the steps in the XAMPP installation process, XAMPP will be saved within your C:\ (Local Disk C:) file path on your computer. If you go there and open the XAMPP folder, you will see many different options. It is the ‘htdocs’ folder where you will want to save all your projects. For example, go inside the htdocs folder and save an empty document called ‘hello-world.php’. Now find where XAMPP is on your computer and run it. You will see the XAMPP control panel pop up, though you won’t have any messages displayed in it yet.

Hope you liked the article.

—Sanket Wankhede
